Dataset schema (per-record columns, with observed value ranges):

| Column | Type | Range / values |
|---|---|---|
| hexsha | stringlengths | 40–40 |
| size | int64 | 5–1.04M |
| ext | stringclasses | 6 values |
| lang | stringclasses | 1 value |
| max_stars_repo_path | stringlengths | 3–344 |
| max_stars_repo_name | stringlengths | 5–125 |
| max_stars_repo_head_hexsha | stringlengths | 40–78 |
| max_stars_repo_licenses | listlengths | 1–11 |
| max_stars_count | int64 | 1–368k |
| max_stars_repo_stars_event_min_datetime | stringlengths | 24–24 |
| max_stars_repo_stars_event_max_datetime | stringlengths | 24–24 |
| max_issues_repo_path | stringlengths | 3–344 |
| max_issues_repo_name | stringlengths | 5–125 |
| max_issues_repo_head_hexsha | stringlengths | 40–78 |
| max_issues_repo_licenses | listlengths | 1–11 |
| max_issues_count | int64 | 1–116k |
| max_issues_repo_issues_event_min_datetime | stringlengths | 24–24 |
| max_issues_repo_issues_event_max_datetime | stringlengths | 24–24 |
| max_forks_repo_path | stringlengths | 3–344 |
| max_forks_repo_name | stringlengths | 5–125 |
| max_forks_repo_head_hexsha | stringlengths | 40–78 |
| max_forks_repo_licenses | listlengths | 1–11 |
| max_forks_count | int64 | 1–105k |
| max_forks_repo_forks_event_min_datetime | stringlengths | 24–24 |
| max_forks_repo_forks_event_max_datetime | stringlengths | 24–24 |
| content | stringlengths | 5–1.04M |
| avg_line_length | float64 | 1.14–851k |
| max_line_length | int64 | 1–1.03M |
| alphanum_fraction | float64 | 0–1 |
| lid | stringclasses | 191 values |
| lid_prob | float64 | 0.01–1 |
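The last three numeric columns (`avg_line_length`, `max_line_length`, `alphanum_fraction`) are derived from `content`. A minimal sketch of how they can be recomputed, assuming the definitions implied by the column names; these definitions are an inference, not stated by the schema, though they are consistent with the records below (e.g. a 134-byte file with avg_line_length 14.888889 has exactly 9 lines):

```python
def content_stats(content: str) -> dict:
    """Recompute the derived per-record statistics for one `content` field."""
    lines = content.split("\n")
    total_chars = len(content)
    return {
        # total characters (newlines included) divided by line count
        "avg_line_length": total_chars / len(lines),
        # longest single line, in characters
        "max_line_length": max(len(line) for line in lines),
        # share of alphanumeric characters over all characters
        "alphanum_fraction": sum(ch.isalnum() for ch in content) / total_chars,
    }

# A tiny synthetic Markdown sample: 24 characters over 3 lines.
stats = content_stats("# Title\n\nHello world 123")
```

Note that newline characters are counted in the numerator of `avg_line_length` but never as part of a line; that convention is what makes 134 / 9 come out to the listed 14.888889.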
e5ae9e4e6ad545f3e72ff9af284107fea4d910c2
134
md
Markdown
index_1.md
intfrr/intfrr.github.io
1b0d150dea1063a2968a2fa23c3291835de6a71a
[ "MIT" ]
null
null
null
index_1.md
intfrr/intfrr.github.io
1b0d150dea1063a2968a2fa23c3291835de6a71a
[ "MIT" ]
null
null
null
index_1.md
intfrr/intfrr.github.io
1b0d150dea1063a2968a2fa23c3291835de6a71a
[ "MIT" ]
null
null
null
---
layout: page
title: Hello World!
tagline: By Kriss Rott
---
{% include JB/setup %}

This is my first post. Thanks for passing by.
14.888889
45
0.686567
eng_Latn
0.998555
e5aea6068edc1abcabb19c23fb7ca078f4bb6aca
1,905
md
Markdown
microsoft-edge/hosting/chakra-hosting/jssetobjectbeforecollectcallback-function.md
OfficeGlobal/edge-developer.pt-BR
c13392532188207eaaac183c0e9ad18db670a057
[ "CC-BY-4.0", "MIT" ]
null
null
null
microsoft-edge/hosting/chakra-hosting/jssetobjectbeforecollectcallback-function.md
OfficeGlobal/edge-developer.pt-BR
c13392532188207eaaac183c0e9ad18db670a057
[ "CC-BY-4.0", "MIT" ]
null
null
null
microsoft-edge/hosting/chakra-hosting/jssetobjectbeforecollectcallback-function.md
OfficeGlobal/edge-developer.pt-BR
c13392532188207eaaac183c0e9ad18db670a057
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
description: Define uma função de retorno de chamada que é chamada pelo tempo de execução antes da coleta de lixo de um objeto.
title: Função JsSetObjectBeforeCollectCallback | Documentos da Microsoft
ms.custom: ''
ms.date: 01/18/2017
ms.prod: microsoft-edge
ms.reviewer: ''
ms.suite: ''
ms.tgt_pltfrm: ''
ms.topic: reference
ms.assetid: ea2cbd94-d8b0-4fa9-a4a1-c75a4e338eaf
caps.latest.revision: 3
author: MSEdgeTeam
ms.author: msedgedevrel
manager: ''
ms.openlocfilehash: 77a59c6ace96809c0b232c96aa9639555e7badcd
ms.sourcegitcommit: 6860234c25a8be863b7f29a54838e78e120dbb62
ms.translationtype: MT
ms.contentlocale: pt-BR
ms.lasthandoff: 04/09/2020
ms.locfileid: "10562105"
---
# Função JsSetObjectBeforeCollectCallback

Define uma função de retorno de chamada que é chamada pelo tempo de execução antes da coleta de lixo de um objeto.

## Sintaxe

```cpp
STDAPI_(JsErrorCode) JsSetObjectBeforeCollectCallback(
    _In_ JsRef ref,
    _In_opt_ void *callbackState,
    _In_ JsObjectBeforeCollectCallback objectBeforeCollectCallback
);
```

#### Parâmetros

`ref`

O objeto para o qual registrar o retorno de chamada.

`callbackState`

Estado fornecido pelo usuário que será devolvido ao retorno de chamada.

`objectBeforeCollectCallback`

A função de retorno de chamada sendo definida. Use NULL para limpar o retorno de chamada registrado anteriormente.

## Valor de retorno

O código `JsNoError` se a operação tiver sido bem-sucedida; caso contrário, um código de falha.

## Comentários

O retorno de chamada é invocado no thread de execução do tempo de execução atual, portanto a execução é bloqueada até que o retorno de chamada seja concluído. Essa API só tem suporte no modo EdgeHTML.

## Requisitos

**Header:** jsrt.h

## Consulte também

[Referência (tempo de execução JavaScript)](../chakra-hosting/reference-javascript-runtime.md)
32.844828
161
0.76168
por_Latn
0.993386
e5aed731a72b1ea6b7d84fa40250e4f8dea9f5d5
3,787
md
Markdown
doc/plugins.md
srand/finit
393834b253dc5bc9204aa6388c5e484778713fde
[ "MIT" ]
468
2015-01-26T02:20:12.000Z
2022-03-28T08:27:41.000Z
doc/plugins.md
srand/finit
393834b253dc5bc9204aa6388c5e484778713fde
[ "MIT" ]
190
2015-02-18T12:16:53.000Z
2022-03-23T21:00:54.000Z
doc/plugins.md
srand/finit
393834b253dc5bc9204aa6388c5e484778713fde
[ "MIT" ]
59
2015-01-17T13:45:54.000Z
2022-03-04T12:53:21.000Z
Hooks & Plugins
===============

* [Plugins](#plugins)
* [Hooks](#hooks)
  * [Bootstrap Hooks](#bootstrap-hooks)
  * [Runtime Hooks](#runtime-hooks)
  * [Shutdown Hooks](#shutdown-hooks)

Finit can be extended to add general functionality in the form of I/O monitors, or hook plugins. The following sections detail existing plugins and hook points. For more information, see the plugins listed below.

Plugins
-------

For your convenience a set of *optional* plugins are available:

* *alsa-utils.so*: Restore and save ALSA sound settings on startup/shutdown. _Optional plugin._
* *bootmisc.so*: Setup necessary files and system directories for, e.g., UTMP (tracks logins at boot). This plugin is central to get a working system and runs at `HOOK_BASEFS_UP`. The `/var`, `/run`, and `/dev` file systems must be writable for this plugin to work. Note: On an embedded system both `/var` and `/run` can be `tmpfs` RAM disks and `/dev` is usually a `devtmpfs`. This must be defined in the `/etc/fstab` file and in the Linux kernel config.
* *dbus.so*: Setup and start system message bus, D-Bus, at boot. _Optional plugin._
* *hotplug.so*: Setup and start either udev or mdev hotplug daemon, if available.
* *rtc.so*: Restore and save system clock from/to RTC on boot/halt.
* *modules-load.so*: Scans /etc/modules-load.d for modules to modprobe.
* *netlink.so*: Listens to Linux kernel Netlink events for gateway and interfaces. These events are then sent to the Finit service monitor for services that may want to be SIGHUP'ed on new default route or interfaces going up/down.
* *resolvconf.so*: Setup necessary files for `resolvconf` at startup. _Optional plugin._
* *tty.so*: Watches `/dev`, using inotify, for new device nodes (TTY's) to start/stop getty consoles on them on demand. Useful when plugging in a usb2serial converter to login to your embedded device.
* *urandom.so*: Setup random seed at startup.
* *x11-common.so*: Setup necessary files for X-Window. _Optional plugin._

Usually you want to hook into the boot process once; simple hook plugins like `bootmisc.so` are great for that purpose. They are called at each hook point in the boot process, useful to insert some pre-bootstrap mechanisms, like generating configuration files, restoring HW device state, etc. Available hook points are:

Hooks
-----

### Bootstrap Hooks

* `HOOK_ROOTFS_UP`: When `finit.conf` has been read and `/` is mounted — very early
* `HOOK_BASEFS_UP`: All of `/etc/fstab` is mounted, swap is available and default init signals are setup
* `HOOK_NETWORK_UP`: System bootstrap, runlevel S, has completed and networking is up (`lo` is up and the `network` script has run)
* `HOOK_SVC_UP`: All services in the active runlevel have been launched
* `HOOK_SYSTEM_UP`: All services *and* everything in `/etc/finit.d` have been launched

### Runtime Hooks

* `HOOK_SVC_RECONF`: Called when the user has changed something in the `/etc/finit.d` directory and issued `SIGHUP`. The hook is called when all modified/removed services have been stopped. When the hook has completed, Finit continues to start all modified and new services.
* `HOOK_RUNLEVEL_CHANGE`: Called when the user has issued a runlevel change. The hook is called when services not matching the new runlevel have been stopped. When the hook has completed, Finit continues to start all services in the new runlevel.

### Shutdown Hooks

* `HOOK_SHUTDOWN`: Called at shutdown/reboot, right before all services are sent `SIGTERM`

Plugins like `tty.so` extend Finit by acting on events; they are called I/O plugins and are called from the Finit main loop when `poll()` detects an event. See the source code for `plugins/*.c` for more help and ideas.
35.392523
74
0.737523
eng_Latn
0.995811
e5afaddd796010ac330cdef97afa95cc23467db0
123
md
Markdown
README.md
TimmyChan/curriculum_mapper
127a615239850669e5d734fb5b098fb03fdfc22a
[ "MIT" ]
2
2022-03-03T02:16:49.000Z
2022-03-13T01:47:12.000Z
README.md
TimmyChan/curriculum-mapper
127a615239850669e5d734fb5b098fb03fdfc22a
[ "MIT" ]
null
null
null
README.md
TimmyChan/curriculum-mapper
127a615239850669e5d734fb5b098fb03fdfc22a
[ "MIT" ]
null
null
null
# Curriculum Mapper

![cover](https://github.com/TimmyChan/curriculummapper/blob/main/docs/Quantum_Computing.gif?raw=true)
30.75
101
0.804878
kor_Hang
0.334248
e5aff3fb370a1532d8bbfa0f8847e5eafab19a81
1,155
md
Markdown
docs/visual-basic/misc/bc30696.md
muthu67/Docs
ae188fd42d40ff7106f5e62c90d5aa042b262ff5
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/visual-basic/misc/bc30696.md
muthu67/Docs
ae188fd42d40ff7106f5e62c90d5aa042b262ff5
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/visual-basic/misc/bc30696.md
muthu67/Docs
ae188fd42d40ff7106f5e62c90d5aa042b262ff5
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: "'<membername1>' and '<membername2>' cannot overload each other because they differ only by the types of optional parameters | Microsoft Docs"
ms.date: "2015-07-20"
ms.prod: .net
ms.technology:
  - "devlang-visual-basic"
ms.topic: "article"
f1_keywords:
  - "bc30696"
  - "vbc30696"
helpviewer_keywords:
  - "BC30696"
ms.assetid: 6693a493-959e-4ff3-8c2d-52ed2d597f77
caps.latest.revision: 8
author: "stevehoag"
ms.author: "shoag"
translation.priority.ht:
  - "de-de"
  - "es-es"
  - "fr-fr"
  - "it-it"
  - "ja-jp"
  - "ko-kr"
  - "ru-ru"
  - "zh-cn"
  - "zh-tw"
translation.priority.mt:
  - "cs-cz"
  - "pl-pl"
  - "pt-br"
  - "tr-tr"
---
# '<membername1>' and '<membername2>' cannot overload each other because they differ only by the types of optional parameters

Overloaded members must have different data types for one or more required parameters.

**Error ID:** BC30696

## To correct this error

- Define at least one required parameter as a different data type.

## See Also

[Overloads](../../visual-basic/language-reference/modifiers/overloads.md)
24.0625
177
0.667532
eng_Latn
0.862125
e5b01f5150d463477167252d784ea1712cb00f7c
5,877
md
Markdown
README.md
jonwhittlestone/dephpugger
80298c5736739323bcafc4d9cd4ef043a4d223cf
[ "MIT" ]
null
null
null
README.md
jonwhittlestone/dephpugger
80298c5736739323bcafc4d9cd4ef043a4d223cf
[ "MIT" ]
null
null
null
README.md
jonwhittlestone/dephpugger
80298c5736739323bcafc4d9cd4ef043a4d223cf
[ "MIT" ]
null
null
null
[![Build Status](https://travis-ci.org/tacnoman/dephpugger.svg?branch=master)](https://travis-ci.org/tacnoman/dephpugger)
[![Code Climate](https://codeclimate.com/github/tacnoman/dephpug/badges/gpa.svg)](https://codeclimate.com/github/tacnoman/dephpug)
[![HitCount](http://hits.dwyl.io/tacnoman/dephpugger.svg)](http://hits.dwyl.io/tacnoman/dephpugger)

<img src="https://raw.githubusercontent.com/tacnoman/dephpugger/master/images/logo.png" alt="logo" title="Dephpugger logo" height="500">

# What is Dephpugger?

Dephpugger (read "depugger") is an open source lib for debugging PHP directly in the terminal, with no need to configure an IDE. Dephpugger runs the PHP built-in server via a separate command. You can use it for:

## Web applications

### Lumen, for example

![dephpugger web](https://raw.githubusercontent.com/tacnoman/dephpugger/master/images/dephpugger-web.gif)

`Image 1.0 - Screen recording of a web debug session`

## CLI scripts

![dephpugger](https://raw.githubusercontent.com/tacnoman/dephpugger/master/images/dephpugger.gif)

`Image 1.1 - Screen recording of a CLI script debug session`

## Another example

[![demo](https://asciinema.org/a/115976.png)](https://asciinema.org/a/115976?autoplay=1)

# Install

To install, run this command (using Composer):

```sh
$ composer require tacnoman/dephpugger
```

## Install globally

### On Linux or Mac OS X

Run this command:

```sh
$ composer global require tacnoman/dephpugger
```

And add this to your ~/.bash_profile:

```
export PATH=$PATH:$HOME/.composer/vendor/bin
```

Now run `source ~/.bash_profile` and you can run the commands using only `$ dephpugger`.

# Dependencies

- PHP 7.0 or later (not tested on older versions)
- Xdebug enabled
- A plugin for your browser (if you want to debug a web application)

## Plugins for

- [Chrome](https://chrome.google.com/webstore/detail/xdebug-helper/eadndfjplgieldjbigjakmdgkmoaaaoc)
- [Firefox](https://addons.mozilla.org/pt-br/firefox/addon/the-easiest-xdebug/)
- [Safari](https://github.com/benmatselby/xdebug-toggler)
- [Opera](https://addons.opera.com/addons/extensions/details/xdebug-launcher/?display=en)

You can run these commands to check your dependencies:

```sh
$ vendor/bin/dephpugger requirements
$ vendor/bin/dephpugger info # To get all values set in Xdebug
# Or globally
$ dephpugger requirements
$ dephpugger info
```

# Usage

After installation, run the two binaries in the `vendor/bin` folder.

```sh
$ php vendor/bin/dephpugger debugger # Debugger waiting for a debug session
$ php vendor/bin/dephpugger server   # Server running on port 8888
# Or globally
$ dephpugger debugger
$ dephpugger server
```

You must run these in two different tabs (a future version will let you run them in a single tab). After running these commands, add the following line to your code:

```php
<?php
# ...
xdebug_break(); # This is a breakpoint, like ipdb in Python or Byebug in Ruby
# ...
```

Now open the page in your browser (localhost:8888/[yourPage.php]). When you request the page, your terminal will stop at the breakpoint (like in image 1.0).

To debug a PHP script, you could run:

```sh
$ php vendor/bin/dephpugger cli myJob.php
# Or globally
$ dephpugger cli myJob.php
```

## Commands after run

When you stop at a breakpoint you can use these commands:

| Command | Alias | Explanation |
|---|---|---|
| next | n | Run a step over in the code |
| step | s | Run a step into in the code |
| set \<cmd>:\<value> | | Change verboseMode or lineOffset at runtime |
| continue | c | Continue the script until another breakpoint or the end of the code |
| list | l | Show the next lines in the script |
| list-previous | lp | Show the previous lines in the script |
| help | h | Show help instructions |
| $variable | | Get the value of a variable |
| $variable = 33 | | Set a variable |
| my_function() | | Call a function |
| dbgp(\<command\>) | | Run a command in dbgp |
| quit | q | Exit the debugger |

# Configuration (it's simple)

The Dephpugger project has default options such as port, host, socket port, etc. You can change these values by adding a `.dephpugger.yml` file in the project root directory. Like this:

```yml
---
debugger:
  host: mysocket.dev # default: localhost
  port: 9002 # default: 9005
  lineOffset: 10 # default: 6
  verboseMode: false # default: false
  historyFile: ~/.dephpugger_history # default: .dephpugger_history

server:
  host: myproject.dev # default: localhost
  port: 8080 # default: 8888
  path: ./public/ # default: null
  file: index.php # default: null
```

These values will replace the default configuration.

# Full documentation

To see the full documentation, [click here](http://dephpugger.com).

# How to use with phpunit, behat, codeception and others

For the usage documentation, [click here](http://dephpugger.com/Usage/Running_with_phpunit.html).

# Run tests

```sh
$ composer test
```

# Bugs?

Send me an email or open an issue: Renato Cassino - Tacnoman - \<renatocassino@gmail.com\>

[See our changelog](https://github.com/tacnoman/dephpugger/blob/master/CHANGELOG.md)
35.403614
352
0.634678
eng_Latn
0.783484
e5b13b17a709856f4ebcc279ceac0fb3b4a1000e
537
md
Markdown
README.md
BitbucksFoundation/Bitbucks-Project
0a84fefaf2cc28dced51dde72cc9a9219cca0ab1
[ "MIT" ]
null
null
null
README.md
BitbucksFoundation/Bitbucks-Project
0a84fefaf2cc28dced51dde72cc9a9219cca0ab1
[ "MIT" ]
null
null
null
README.md
BitbucksFoundation/Bitbucks-Project
0a84fefaf2cc28dced51dde72cc9a9219cca0ab1
[ "MIT" ]
null
null
null
Bitbucks is a clone of Bitcoin, with a rapidly accelerated inflation schedule. It features a 20-fold increase in basic block subsidy with minutely spacing, compensated by a halving event that occurs on a weekly basis. In short, Bitbucks is an economic experiment that features the original Bitcoin protocol on steroids- converting lengthy 4-year halving periods into weekly events. All while retaining the original maximum supply cap. Bitbucks allows us to witness a century-long inflation process condensed into the span of 6 months.
59.666667
77
0.821229
eng_Latn
0.999769
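The supply arithmetic in the Bitbucks record above is easy to sanity-check. The sketch below assumes Bitcoin's defaults for the two numbers the text leaves implicit (an original 50-coin block subsidy and the roughly 21M cap); the record itself states only the 20-fold subsidy, the minutely block spacing, and the weekly halving:

```python
BLOCKS_PER_WEEK = 7 * 24 * 60   # one block per minute
INITIAL_SUBSIDY = 20 * 50       # 20-fold Bitcoin's original 50-coin subsidy (assumption)

# Each weekly halving period emits half as much as the previous one, so the
# limiting supply is the geometric series 1 + 1/2 + 1/4 + ... = 2 times the
# first week's emission.
max_supply = 2 * BLOCKS_PER_WEEK * INITIAL_SUBSIDY   # 20,160,000
```

Under these assumed units the limit lands just below Bitcoin's 21M, in the same ballpark as the claim of retaining the original maximum supply cap; the exact figure depends on how the subsidy schedule is implemented.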
e5b23c654ea2fcca28e1a8c869fc74d442bd6d0e
762
md
Markdown
api/Outlook.RuleAction.Parent.md
seydel1847/VBA-Docs
b310fa3c5e50e323c1df4215515605ae8da0c888
[ "CC-BY-4.0", "MIT" ]
null
null
null
api/Outlook.RuleAction.Parent.md
seydel1847/VBA-Docs
b310fa3c5e50e323c1df4215515605ae8da0c888
[ "CC-BY-4.0", "MIT" ]
null
null
null
api/Outlook.RuleAction.Parent.md
seydel1847/VBA-Docs
b310fa3c5e50e323c1df4215515605ae8da0c888
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: RuleAction.Parent Property (Outlook)
keywords: vbaol11.chm2204
f1_keywords:
- vbaol11.chm2204
ms.prod: outlook
api_name:
- Outlook.RuleAction.Parent
ms.assetid: 0280f2af-2877-ba8b-14e0-50bbfee4fb0e
ms.date: 06/08/2017
localization_priority: Normal
---
# RuleAction.Parent Property (Outlook)

Returns the parent **Object** of the specified object. Read-only.

## Syntax

_expression_. `Parent`

_expression_ A variable that represents a [RuleAction](./Outlook.RuleAction.md) object.

## Remarks

The parent of the **[RuleAction](Outlook.RuleAction.md)** object is the **[RuleActions](Outlook.RuleActions.md)** object.

## See also

[RuleAction Object](Outlook.RuleAction.md)

[!include[Support and feedback](~/includes/feedback-boilerplate.md)]
20.594595
122
0.759843
yue_Hant
0.454078
e5b262c4249126d2f5fbb8abb8c83ce1bf16d89f
303
md
Markdown
tabs/about.md
bstankie/BJS-Blog-chirpy
8212ddfcbf1e4dbd463701a825da360bba5d6cc2
[ "MIT" ]
null
null
null
tabs/about.md
bstankie/BJS-Blog-chirpy
8212ddfcbf1e4dbd463701a825da360bba5d6cc2
[ "MIT" ]
null
null
null
tabs/about.md
bstankie/BJS-Blog-chirpy
8212ddfcbf1e4dbd463701a825da360bba5d6cc2
[ "MIT" ]
1
2021-05-02T09:39:11.000Z
2021-05-02T09:39:11.000Z
---
title: About

# The About page
# v2.0
# https://resume.bstankie.com
# © 2017-2019 Cotes Chung
# MIT License
---

# Brian J. Stankiewicz, Ph.D.

* [Resume](https://resume.bstankie.com)
* Insatiable Curiosity
* Servant Leader
* Blame Taker
* Credit Giver
* No BSer
* Epic sh*t doer
15.15
40
0.636964
kor_Hang
0.58982
e5b270efb6aebd9345d15bde077cd3b3b54cbeac
5,494
md
Markdown
Tools.md
cantremember/virtualizing-windows-xp
d5f7f994602cd2395c8e505f0d9c47c36ce495b8
[ "WTFPL" ]
null
null
null
Tools.md
cantremember/virtualizing-windows-xp
d5f7f994602cd2395c8e505f0d9c47c36ce495b8
[ "WTFPL" ]
null
null
null
Tools.md
cantremember/virtualizing-windows-xp
d5f7f994602cd2395c8e505f0d9c47c36ce495b8
[ "WTFPL" ]
null
null
null
# Tools

Here's a few things you'll want to have in your toolkit.

## VirtualBox

[VirtualBox](https://www.virtualbox.org/) -- v4.3.18, running on Mac OSX -- was my container of choice. Creating a 32-bit Windows XP VM takes just a few clicks, so I'll skip the details.

At various points, you may need to [download](https://www.virtualbox.org/wiki/Downloads) and install

- the Oracle VM VirtualBox **Extension Pack**, onto your host workstation
- the Oracle VM VirtualBox **Guest Additions**, via the 'Devices > Insert Guest Additions CD ...' menu option for your VM

You can reach the VirtualBox BIOS by pressing the `F12` key while your VM is booting.

If you have keyboard mapping issues in Mac OS X, [this article](http://www.micromux.com/2009/08/25/virtual-box-virtual-keys/) may be of help.

## Windows XP

> Because sticking with an [end-of-lifed](http://windows.microsoft.com/en-us/windows/lifecycle) OS is just *that much fun*.

Rip yourself an ISO of your old-school Windows XP Setup CD and you can access it in VirtualBox.

It's best to start with an OS install that already includes SP2. Apparently, there's [a virus going around](http://blog.chron.com/techblog/2008/07/average-time-to-infection-4-minutes/) ...

You'll need an XP license key for when you run the [Repair Installation](Techniques.md#xp-repair-installation) process. The [Recovery Console](Techniques.md#xp-recovery-console) provides some useful features as well.

After you've made a repaired or fresh installation of XP, be sure to [install all the updates](Techniques.md#windows-updates).

## Virtual Disks

Along the way, you'll be creating a lot of virtual disks. I chose the VMDK format for all of mine, so I just use the term 'VMDK' in these documents.

I tend to use the Fixed Size option, vs. Dynamically allocated. Either way, it has a fixed maximum size. There are [steps you can take](https://blogs.oracle.com/virtualbox/entry/how_to_compact_your_virtual) to compact your VMDK at a later point in time.

For this VM, I chose to split the file into 2Gb parts. Any `rsync`-ish backup strategy would have to replicate the *entire VMDK* if the virtual disk even got looked at funny. In this case, slicing it up made [Time Machine](https://support.apple.com/en-us/HT201250) happier.

## XP Helper

> *TL;DR* - It's a VMDK of a bootable baseline instance of [Windows XP](Tools.md#windows-xp).

I recommend that the first VM you set up be just a basic from-scratch Windows XP install with [all updates applied](Techniques.md#windows-updates).

At any time you need to format or partition something in an XP-specific way, you can just attach your 'target' [VMDK](Tools.md#virtual-disks) via IDE controller to your XP Helper, boot it up, and pop into [Disk Management](Techniques.md#disk-management) or Notepad for editing your [`boot.ini` file](Techniques.md#bootini).

## Ghost

My full-machine backup tool of choice was Norton Ghost.

I ripped an ISO of **Norton Ghost 12** -- which has been [end-of-lifed](http://community.norton.com/en/forums/important-update-regarding-norton-ghost) -- to use as the Recovery OS under [VirtualBox](Tools.md#virtualbox). It seems fully backward-compatible with V2I files from prior versions of Ghost.

You'll be using several tools that it provides

- [Norton Support Utilities](Techniques.md#norton-support-utilities), to configure your [partitions](Techniques.md#partitioning) and [`boot.ini` file](Techniques.md#bootini)
- The standard **'Recover My Computer' Wizard**, once your VMDK is partitioned.

I tried the Recovery OS on Norton Ghost 10 -- similarly [end-of-lifed](http://www.symantec.com/business/support/index?page=content&id=DOC7209) -- but I'm blocked by its DRM requirements even though I have multiple licenses.

[Lots](http://community.norton.com/en/forums/norton-ghost-100-activation-error) [of](http://community.norton.com/en/forums/norton-ghost-10-activation-fails) [folks](https://community.norton.com/en/forums/cannot-reinstall-ghost-10-error-msg-says-too-many-installations) [have](http://community.norton.com/en/forums/norton-ghost-10-activation-fails-my-newer-machine) experienced the dreaded message

> "You have exceeded the number of allowable installations for this product key."

To overcome this issue, I tried to

- [reclaim a Norton product license](https://support.norton.com/sp/en/us/norton-renewal-purchase/current/solutions/kb20100527223228EN_EndUserProfile_en_us),
- [un-install Ghost cleanly](http://www.symantec.com/business/support/index?page=content&id=TECH110583) and re-install,
- plus any number of other approaches,

*none of which* solved the problem.

Norton Ghost 12 has a little bit of DRM, but it's not nearly as nasty. Just make sure you've only activated it [on one machine](https://community.norton.com/en/forums/reinstalling-ghost-12-another-pc).

## Mounting a Virtual Disk

Tools such as [VMDK Mounter](http://www.paragon-software.com/home/vd-mounter-mac-free/) (for the Mac) allow you to mount a VMDK on your host machine and perform read / write operations as if it were a remote file system.

> Suggestions of other such tools are welcomed.

Why would you want to do this?

- You may want to use your favorite editor to modify the [`boot.ini` file](Techniques.md#bootini).
- You may want an efficient way to pull files off of your [VMDK](Tools.md#virtual-disks). Files on your VMDK might deserve a new home, such as a location on the host machine which gets shared with your Windows XP instance.
47.773913
156
0.765198
eng_Latn
0.968941
e5b2a50eabee097e6fe2c23acbd8cd81ba084d40
190
md
Markdown
business-central/includes/local-func-setup-link.md
sergeyol/dynamics365smb-docs-pr.ru-ru
161a910d9b12c730d17c42e9e2aa975e2a06056c
[ "CC-BY-4.0", "MIT" ]
2
2020-05-18T17:20:18.000Z
2021-04-20T21:13:47.000Z
business-central/includes/local-func-setup-link.md
sergeyol/dynamics365smb-docs-pr.ru-ru
161a910d9b12c730d17c42e9e2aa975e2a06056c
[ "CC-BY-4.0", "MIT" ]
1
2019-07-09T13:12:10.000Z
2019-07-09T13:16:57.000Z
business-central/includes/local-func-setup-link.md
MicrosoftDocs/dynamics365smb-docs-pr.ru-ru
d26cb29c17e34b02c638186ac9d3dc65d6595ec7
[ "CC-BY-4.0", "MIT" ]
2
2019-10-12T19:50:14.000Z
2020-10-02T09:10:10.000Z
> [!IMPORTANT]
> В зависимости от страны или региона может быть необходима дополнительная настройка. Дополнительные сведения см. в списке связанных статей в разделе [См. также](#see-also).
95
175
0.784211
rus_Cyrl
0.99578
e5b2d7e66f5d2b0ce98d0a595ace7978c9f4f2c6
627
md
Markdown
user/pages/03.tcan/_tcan-mobile-designs/project-phone-slider.md
testsecurityuser123/ci-configured-without-security
da7a5df0204ece10ff15894da65a0931fddd823a
[ "MIT" ]
4
2018-08-24T17:46:45.000Z
2018-08-24T17:46:51.000Z
user/pages/03.tcan/_tcan-mobile-designs/project-phone-slider.md
testsecurityuser123/ci-configured-without-security
da7a5df0204ece10ff15894da65a0931fddd823a
[ "MIT" ]
null
null
null
user/pages/03.tcan/_tcan-mobile-designs/project-phone-slider.md
testsecurityuser123/ci-configured-without-security
da7a5df0204ece10ff15894da65a0931fddd823a
[ "MIT" ]
null
null
null
---
title: 'Responsive Design'
project_ps_background_image: building-4-min.jpg
project_ps_slides:
    1:
        image: t50i01-tcan-homepage-mobile-v1.1-min.png
        alt: 'Twin Cities Alumni Network | Homepage'
    2:
        image: t50i01-tcan-homepage-mobile-v1.2-min.png
        alt: 'Twin Cities Alumni Network | Event Modal'
date: '06-09-2016 00:00'
---
Many of TCAN's members live an active lifestyle, so designing a simple, consistent experience across all devices was a must. For a better experience on smaller screen sizes, special care was given to the navigation, event calendar, and membership criteria slider.
44.785714
263
0.732057
eng_Latn
0.91996
e5b2e1099d7217ba79d9b09e7dfefd127c8b8d8b
1,377
md
Markdown
Extension/extension.md
MiguelAlexanderMaldonado/CSharp-AdvanceTopics
4463bdf79d72c70b9cd6aaf82b5aa6b71ae5824c
[ "Apache-2.0" ]
null
null
null
Extension/extension.md
MiguelAlexanderMaldonado/CSharp-AdvanceTopics
4463bdf79d72c70b9cd6aaf82b5aa6b71ae5824c
[ "Apache-2.0" ]
null
null
null
Extension/extension.md
MiguelAlexanderMaldonado/CSharp-AdvanceTopics
4463bdf79d72c70b9cd6aaf82b5aa6b71ae5824c
[ "Apache-2.0" ]
null
null
null
# Extension methods

[Extension methods](https://docs.microsoft.com/es-es/dotnet/csharp/programming-guide/classes-and-structs/extension-methods) enable you to "add" methods to existing types without creating a new derived type, recompiling, or otherwise modifying the original type. Extension methods are static methods, but they're called as if they were instance methods on the extended type.

## **Example**

* Shorten

```csharp
using System;
using System.Linq; // required for Enumerable.Take

namespace ExtensionMethods
{
    public static class StringExtensions
    {
        public static string Shorten(this String str, int numberOfWords)
        {
            if (numberOfWords < 0)
                throw new ArgumentOutOfRangeException("numberOfWords should be greater than or equal to 0.");
            if (numberOfWords == 0)
                return "";

            var words = str.Split(' ');
            if (words.Length <= numberOfWords)
                return str;

            return string.Join(" ", words.Take(numberOfWords)) + "...";
        }
    }
}
```

```csharp
using System;

namespace ExtensionMethods
{
    class Program
    {
        static void Main(string[] args)
        {
            string post = "This is supposed to be a very long blog post...";
            var shortenedPost = post.Shorten(5);
            Console.WriteLine(shortenedPost);
        }
    }
}
```

[Back to index](../README.md)
28.102041
373
0.604938
eng_Latn
0.946062
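Several rows in this dump carry low-confidence language IDs (the Curriculum Mapper README is tagged `kor_Hang` at 0.334, the about page `kor_Hang` at 0.58982), so a consumer would typically filter on the `lid` / `lid_prob` columns. A minimal sketch, assuming plain dict records and an arbitrary illustrative threshold of 0.9:

```python
# Hypothetical records mirroring the dataset's `lid` / `lid_prob` columns.
records = [
    {"hexsha": "aaa", "lid": "eng_Latn", "lid_prob": 0.998},
    {"hexsha": "bbb", "lid": "kor_Hang", "lid_prob": 0.334},
    {"hexsha": "ccc", "lid": "por_Latn", "lid_prob": 0.993},
]

def confident_english(rows, threshold=0.9):
    """Keep rows identified as English with probability >= threshold."""
    return [r for r in rows if r["lid"] == "eng_Latn" and r["lid_prob"] >= threshold]

kept = confident_english(records)
```

With 191 distinct `lid` values in the schema, the same predicate generalizes to any target-language allowlist.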
e5b315ebbba1ac474728373d1aa87aa573556992
29,657
md
Markdown
sccm/core/plan-design/network/pki-certificate-requirements.md
Acidburn0zzz/SCCMDocs.fr-fr
51ef830a5b64c4ad4a3148448549c28e9f9aa10d
[ "CC-BY-4.0", "MIT" ]
5
2018-11-03T23:38:27.000Z
2018-11-03T23:38:31.000Z
sccm/core/plan-design/network/pki-certificate-requirements.md
Acidburn0zzz/SCCMDocs.fr-fr
51ef830a5b64c4ad4a3148448549c28e9f9aa10d
[ "CC-BY-4.0", "MIT" ]
null
null
null
sccm/core/plan-design/network/pki-certificate-requirements.md
Acidburn0zzz/SCCMDocs.fr-fr
51ef830a5b64c4ad4a3148448549c28e9f9aa10d
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Configuration requise des certificats PKI
titleSuffix: Configuration Manager
description: Découvrez la configuration requise pour les certificats PKI dont vous pourriez avoir besoin pour System Center Configuration Manager.
ms.date: 11/20/2017
ms.prod: configuration-manager
ms.technology: configmgr-other
ms.topic: conceptual
ms.assetid: d6a73e68-57d8-4786-842b-36669541d8ff
author: aczechowski
ms.author: aaroncz
manager: dougeby
ms.openlocfilehash: f8b0cc7b298c197214553a019f32fa4c05066bc4
ms.sourcegitcommit: f9b11bb0942cd3d03d90005b1681e9a14dc052a1
ms.translationtype: HT
ms.contentlocale: fr-FR
ms.lasthandoff: 07/24/2018
ms.locfileid: "39229386"
---
# <a name="pki-certificate-requirements-for-system-center-configuration-manager"></a>Configuration requise des certificats PKI pour System Center Configuration Manager

*S’applique à : System Center Configuration Manager (Current Branch)*

Les certificats d’infrastructure à clé publique (PKI) dont vous pouvez avoir besoin pour System Center Configuration Manager sont répertoriés dans les tableaux suivants. Ces informations présupposent une connaissance élémentaire des certificats PKI. Pour obtenir des instructions pas à pas pour le déploiement, consultez [Exemple détaillé de déploiement des certificats PKI pour Configuration Manager : Autorité de certification Windows Server 2008](/sccm/core/plan-design/network/example-deployment-of-pki-certificates).

Pour plus d’informations sur les services de certificats Active Directory, consultez la documentation suivante :

- Pour Windows Server 2012 : [Vue d’ensemble des services de certificats Active Directory](http://go.microsoft.com/fwlink/p/?LinkId=286744)
- Pour Windows Server 2008 : [Services de certificats Active Directory dans Windows Server 2008](http://go.microsoft.com/fwlink/p/?LinkId=115018)

Pour plus d’informations sur l’utilisation des certificats Cryptography API : Next Generation (CNG) avec Configuration Manager, consultez [Vue d’ensemble des certificats CNG](cng-certificates-overview.md).

> [!IMPORTANT]
> System Center Configuration Manager prend en charge les certificats (SHA-2) (Secure Hash Algorithm 2). Les certificats SHA-2 apportent un avantage important en termes de sécurité. Par conséquent, nous vous recommandons ce qui suit :
> - Émettez de nouveaux certificats d’authentification serveur et client signés avec SHA-2 (qui inclut entre autres SHA-256 et SHA-512).
> - Tous les services doivent utiliser un certificat SHA-2. Par exemple, si vous achetez un certificat public pour une utilisation avec une passerelle de gestion cloud, vérifiez qu’il s’agit d’un certificat SHA-2.
>
> À compter du 14 février 2017, Windows ne fait plus confiance à certains certificats signés avec SHA-1. En général, nous vous recommandons d’émettre de nouveaux certificats d’authentification serveur et client signés avec SHA-2 (qui inclut entre autres SHA-256 et SHA-512). Nous vous recommandons aussi d’utiliser un certificat SHA-2 pour tout service Internet. Par exemple, si vous achetez un certificat public pour une utilisation avec une passerelle de gestion cloud, vérifiez qu’il s’agit d’un certificat SHA-2.
>
> Dans la plupart des cas, le passage à des certificats SHA-2 n’a pas d’impact sur les opérations.

Pour plus d’informations, consultez cet article sur l’[application Windows des certificats SHA1](http://social.technet.microsoft.com/wiki/contents/articles/32288.windows-enforcement-of-sha1-certificates.aspx).

À l’exception des certificats clients inscrits par System Center Configuration Manager sur les appareils mobiles et les ordinateurs Mac, des certificats créés automatiquement par Windows Intune pour la gestion des appareils mobiles et des certificats installés par System Center Configuration Manager sur les ordinateurs AMT, vous pouvez utiliser n’importe quelle infrastructure PKI pour créer, déployer et gérer les certificats suivants. Toutefois, lorsque vous utilisez des services de certificats Active Directory et des modèles de certificat, cette solution d’infrastructure à clé publique Microsoft peut faciliter la gestion des certificats. Utilisez la colonne **Modèle de certificat Microsoft à utiliser** des tableaux ci-dessous pour identifier le modèle de certificat qui correspond le plus aux spécifications du certificat. Seule une autorité de certification d’entreprise exécutée sur l’édition Enterprise ou Datacenter du système d’exploitation serveur, par exemple Windows Server 2008 Enterprise et Windows Server 2008 Datacenter, peut utiliser des certificats basés sur des modèles.

> [!IMPORTANT]
> Lorsque vous utilisez une autorité de certification d’entreprise et des modèles de certificats, n’utilisez pas les modèles de la version 3. Ces modèles de certificat créent des certificats incompatibles avec System Center Configuration Manager. Utilisez plutôt les modèles de la version 2 en suivant les instructions suivantes :
>
> - Pour une autorité de certification sur Windows Server 2012 : sous l’onglet **Compatibilité** des propriétés du modèle de certificat, spécifiez **Windows Server 2003** pour l’option **Autorité de certification** et **Windows XP/Server 2003** pour l’option **Destinataire du certificat**.
> - Pour une autorité de certification sur Windows Server 2008 : quand vous dupliquez un modèle de certificat, conservez la sélection par défaut de **Windows Server 2003 Enterprise** quand vous y êtes invité par la boîte de dialogue **Dupliquer le modèle**. Ne sélectionnez pas **Windows Server 2008, Enterprise Edition**.

Utilisez les sections suivantes pour afficher les spécifications du certificat.

## <a name="BKMK_PKIcertificates_for_servers"></a> Certificats PKI pour serveurs

|Composant System Center Configuration Manager|Rôle du certificat|Modèle de certificat Microsoft à utiliser|Informations spécifiques du certificat|Mode d’utilisation du certificat dans System Center Configuration Manager|
|-------------------------------------|-------------------------|-------------------------------------------|---------------------------------------------|----------------------------------------------------------|
|Systèmes de site qui exécutent Internet Information Services (IIS) et qui sont configurés pour les connexions clientes HTTPS :<br /><br /> <ul><li>Point de gestion</li><li>Point de distribution</li><li>Point de mise à jour logicielle</li><li>Point de migration d’état</li><li>Point d'inscription</li><li>Point proxy d'inscription</li><li>Point de service web du catalogue des applications</li><li>Point du site web du catalogue des applications</li><li>Point d’enregistrement de certificat</li></ul>|Authentification du serveur|**Serveur Web**|**Utilisation avancée de la clé** : la valeur de ce paramètre doit contenir **Authentification du serveur (1.3.6.1.5.5.7.3.1)**.<br /><br /> Si le système du site accepte les connexions en provenance d'Internet, Nom d'objet ou Autre nom de l'objet doit correspondre au nom de domaine Internet complet (FQDN).<br /><br /> Si le système de site accepte les connexions en provenance de l’intranet, Nom d’objet ou Autre nom de l’objet doit correspondre soit au nom de domaine complet de l’intranet (recommandé), soit au nom de
l’ordinateur, selon la configuration du système de site.<br /><br /> Si le système de site accepte les connexions entrantes Internet et intranet, il est nécessaire de spécifier le nom de domaine Internet complet et le nom de domaine complet de l'intranet (ou le nom de l'ordinateur) en les séparant par une esperluette (&.<br /><br /> **Remarque** : quand le point de mise à jour logicielle accepte des connexions client uniquement à partir d’Internet, le certificat doit contenir à la fois le nom de domaine complet (FQDN) Internet et le nom de domaine complet (FQDN) intranet.<br /><br /> L’algorithme de hachage SHA-2 est pris en charge.<br /><br /> System Center Configuration Manager ne spécifie pas de longueur de clé maximale prise en charge pour ce certificat. Pour tout problème lié à la taille de clé pour ce certificat, consultez votre PKI et la documentation IIS.|Ce certificat doit se trouver dans le magasin personnel du magasin de certificats de l'ordinateur.<br /><br /> Ce certificat de serveur web est utilisé pour authentifier ces serveurs sur le client et pour chiffrer toutes les données transférées entre le client et ces serveurs à l’aide du protocole SSL (Secure Sockets Layer).| |Point de distribution cloud|Authentification du serveur|**Serveur Web**|**Utilisation avancée de la clé** : la valeur de ce paramètre doit contenir **Authentification du serveur (1.3.6.1.5.5.7.3.1)**.<br /><br /> Le nom d'objet doit contenir un nom de service défini par le client et un nom de domaine sous forme de nom de domaine complet, servant de nom commun pour l'instance spécifique du point de distribution cloud.<br /><br /> La clé privée doit être exportable.<br /><br /> L’algorithme de hachage SHA-2 est pris en charge.<br /><br /> Longueurs de clé prises en charge : 2 048 bits.|Ce certificat de service permet d’authentifier le service du point de distribution cloud auprès des clients Configuration Manager et de chiffrer l’intégralité des données transférées entre ces clients 
par le biais du protocole SSL (Secure Sockets Layer). Ce certificat doit être exporté au format Public Key Certificate Standard (PKCS #12), et le mot de passe doit être connu, de sorte qu'il puisse être importé lors de la création d'un point de distribution cloud.<br /><br /> **Remarque :** ce certificat est utilisé conjointement avec le certificat de gestion Windows Azure. | |serveurs de système de site exécutant Microsoft SQL Server|Authentification du serveur|**Web server**|**Utilisation avancée de la clé** : la valeur de ce paramètre doit contenir **Authentification du serveur (1.3.6.1.5.5.7.3.1)**.<br /><br /> Le nom du sujet doit contenir le nom de domaine complet (FQDN) de l'intranet.<br /><br /> L’algorithme de hachage SHA-2 est pris en charge.<br /><br /> La longueur maximale de la clé est de 2 048 bits.|Ce certificat doit se trouver dans le magasin personnel du magasin de certificats de l’ordinateur. System Center Configuration Manager le copie automatiquement dans le magasin des personnes autorisées pour les serveurs de la hiérarchie System Center Configuration Manager qui peuvent avoir besoin d’établir une relation de confiance avec le serveur.<br /><br /> Ces certificats sont utilisés pour l'authentification de serveur à serveur.| |Cluster SQL Server : serveurs de système de site exécutant Microsoft SQL Server|Authentification du serveur|**Web server**|**Utilisation avancée de la clé** : la valeur de ce paramètre doit contenir **Authentification du serveur (1.3.6.1.5.5.7.3.1)**.<br /><br /> Le nom d'objet doit contenir le nom de domaine complet (FQDN) de l'intranet du cluster.<br /><br /> La clé privée doit être exportable.<br /><br /> Le certificat doit avoir une période de validité d’au moins deux ans quand vous configurez System Center Configuration Manager de sorte qu’il utilise le cluster SQL Server.<br /><br /> L’algorithme de hachage SHA-2 est pris en charge.<br /><br /> La longueur maximale de la clé est de 2 048 bits.|Après avoir demandé et 
installé ce certificat sur un nœud du cluster, exportez le certificat et importez-le dans les autres nœuds du cluster SQL Server.<br /><br /> Ce certificat doit se trouver dans le magasin personnel du magasin de certificats de l’ordinateur. System Center Configuration Manager le copie automatiquement dans le magasin des personnes autorisées pour les serveurs de la hiérarchie System Center Configuration Manager qui peuvent avoir besoin d’établir une relation de confiance avec le serveur.<br /><br /> Ces certificats sont utilisés pour l'authentification de serveur à serveur.| |Surveillance de système de site pour les rôles de système de site suivants :<br /><br /><ul><li>Point de gestion</li><li>Point de migration d’état</li></ul>|Authentification du client|**Authentification de station de travail**|**Utilisation avancée de la clé** : la valeur de ce paramètre doit contenir **Authentification du client (1.3.6.1.5.5.7.3.2)**.<br /><br /> Les ordinateurs doivent présenter une valeur unique dans les champs Nom de l'objet ou Autre nom de l'objet.<br /><br /> **Remarque :** si vous indiquez plusieurs valeurs pour Autre nom de l’objet, seule la première est utilisée.<br /><br /> L’algorithme de hachage SHA-2 est pris en charge.<br /><br /> La longueur maximale de la clé est de 2 048 bits.|Ce certificat est requis sur les serveurs de système de site répertoriés, même si le client System Center Configuration Manager n’est pas installé. 
Cette configuration permet de surveiller et de signaler au site l’intégrité de ces rôles de système de site.<br /><br /> Le certificat de ces systèmes de site doit se trouver dans le magasin personnel du magasin de certificats de l'ordinateur.| |Serveurs exécutant le module de stratégie System Center Configuration Manager avec le service de rôle du Service d’inscription de périphérique réseau.|Authentification du client|**Authentification de station de travail**|**Utilisation avancée de la clé** : la valeur de ce paramètre doit contenir **Authentification du client (1.3.6.1.5.5.7.3.2)**.<br /><br /> Il n’existe aucune exigence particulière pour l’objet du certificat ou l’autre nom de l’objet. Vous pouvez utiliser le même certificat pour plusieurs serveurs exécutant le service d’inscription de périphérique réseau.<br /><br /> Les algorithmes de hachage SHA-2 et SHA-3 sont pris en charge.<br /><br /> Longueurs de clé prises en charge : 1 024 bits et 2 048 bits.|| |Systèmes de site ayant un point de distribution installé|Authentification du client|**Authentification de station de travail**|**Utilisation avancée de la clé** : la valeur de ce paramètre doit contenir **Authentification du client (1.3.6.1.5.5.7.3.2)**.<br /><br /> Il n’existe aucune exigence particulière pour l’objet du certificat ou l’autre nom de l’objet. Vous pouvez utiliser le même certificat pour plusieurs points de distribution. 
Toutefois, nous vous recommandons d’utiliser un certificat différent pour chaque point de distribution.<br /><br /> La clé privée doit être exportable.<br /><br /> L’algorithme de hachage SHA-2 est pris en charge.<br /><br /> La longueur maximale de la clé est de 2 048 bits.|Ce certificat a deux objectifs :<br /><br /><ul><li>Il authentifie le point de distribution sur un point de gestion HTTPS avant que le point de distribution n'envoie des messages d'état.</li><li>Lorsque l’option de point de distribution **Activer la prise en charge PXE pour les clients** est activée, le certificat est envoyé aux ordinateurs. Si des séquences de tâches du processus de déploiement du système d’exploitation comportent des actions du client, comme la récupération de la stratégie client ou l’envoi d’informations d’inventaire, les ordinateurs clients peuvent se connecter à un point de gestion HTTPS pendant le déploiement du système d’exploitation.</li></ul> Ce certificat n'est utilisé que pour la durée de la procédure de déploiement du système d'exploitation et n'est pas installé sur le client. En raison de cette utilisation temporaire, le même certificat peut être utilisé pour chaque déploiement du système d'exploitation si vous ne souhaitez pas utiliser plusieurs certificats de client.<br /><br /> Le certificat doit être exporté au format Public Key Certificate Standard (PKCS #12). Le mot de passe doit être connu, de sorte qu’il puisse être importé dans les propriétés du point de distribution.<br /><br /> **Remarque :** la configuration requise pour ce certificat est la même que celle du certificat client des images de démarrage pour le déploiement de systèmes d’exploitation. 
Dans la mesure où les exigences sont les mêmes, vous pouvez utiliser le même fichier de certificat.| |Serveur de système de site exécutant le connecteur Microsoft Intune|Authentification du client|Non applicable : Intune crée automatiquement ce certificat.|La valeur **Utilisation avancée de la clé** doit contenir **Authentification du client (1.3.6.1.5.5.7.3.2)**.<br /><br /> Trois extensions personnalisées identifient de manière unique l’abonnement Intune du client.<br /><br /> La taille de la clé est de 2 048 bits et elle utilise l’algorithme de hachage SHA-1.<br /><br /> **Remarque :** vous ne pouvez pas modifier ces paramètres. ces informations sont fournies uniquement à titre d'information.|Ce certificat est demandé et installé automatiquement dans la base de données de Configuration Manager quand vous vous abonnez à Microsoft Intune. Quand vous installez le connecteur Microsoft Intune, ce certificat est ensuite installé sur le serveur de système de site qui exécute le connecteur Microsoft Intune. Il est installé dans le magasin de certificats de l’ordinateur.<br /><br /> Ce certificat est utilisé pour authentifier la hiérarchie Configuration Manager auprès de Microsoft Intune à l’aide du connecteur Microsoft Intune. Toutes les données ainsi transférées utilisent le protocole SSL (Secure Sockets Layer).| ### <a name="BKMK_PKIcertificates_for_proxyservers"></a> Serveurs web proxy pour la gestion du client basée sur Internet Si le site prend en charge la gestion des clients basés sur Internet et que vous utilisez un serveur Web proxy avec terminaison SSL (pontage) pour les connexions Internet entrantes, le serveur Web proxy exige les certificats répertoriés dans le tableau suivant. > [!NOTE] > Si vous utilisez un serveur Web proxy sans terminaison SSL (tunnel), aucun autre certificat n'est requis sur le serveur Web proxy. 
|Composant d'infrastructure réseau|Rôle du certificat|Modèle de certificat Microsoft à utiliser|Informations spécifiques du certificat|Mode d’utilisation du certificat dans System Center Configuration Manager| |--------------------------------------|-------------------------|-------------------------------------------|---------------------------------------------|----------------------------------------------------------| |Serveur Web proxy acceptant les connexions de clients sur Internet|Authentification de serveur et authentification de client|1. <br /> **Serveur Web**<br /><br /> 2. <br /> **Authentification de station de travail**|Nom de domaine complet Internet dans le champ Nom de l’objet ou Autre nom de l’objet : Si vous utilisez des modèles de certificats Microsoft, l’autre nom de l’objet n’est disponible qu’avec le modèle pour station de travail.<br /><br /> L’algorithme de hachage SHA-2 est pris en charge.|Ce certificat est utilisé pour authentifier les serveurs suivants auprès de clients Internet et pour crypter toutes les données transférées entre le client et ce serveur en utilisant le protocole SSL :<br /><br /><ul><li>Point de gestion Internet</li><li>Point de distribution basé sur Internet</li><li>point de mise à jour logicielle Internet</li></ul> L’authentification client est utilisée pour ponter les connexions client entre les clients System Center Configuration Manager et les systèmes de site basés sur Internet.| ## <a name="BKMK_PKIcertificates_for_clients"></a> Certificats PKI pour les clients |Composant System Center Configuration Manager|Rôle du certificat|Modèle de certificat Microsoft à utiliser|Informations spécifiques du certificat|Mode d’utilisation du certificat dans System Center Configuration Manager| |-------------------------------------|-------------------------|-------------------------------------------|---------------------------------------------|----------------------------------------------------------| |Ordinateurs clients 
Windows|Authentification du client|**Authentification de station de travail**|**Utilisation avancée de la clé** : la valeur de ce paramètre doit contenir **Authentification du client (1.3.6.1.5.5.7.3.2)**.<br /><br /> Les ordinateurs clients doivent présenter une valeur unique dans les champs Nom de l'objet ou Autre nom de l'objet.<br /><br /> **Remarque :** si vous indiquez plusieurs valeurs pour Autre nom de l’objet, seule la première est utilisée.<br /><br /> L’algorithme de hachage SHA-2 est pris en charge.<br /><br /> La longueur maximale de la clé est de 2 048 bits.|Par défaut, System Center Configuration Manager recherche des certificats d’ordinateur dans le magasin personnel du magasin de certificats d’ordinateur.<br /><br /> À l’exception du point de mise à jour logicielle et du point de site web du catalogue des applications, ce certificat authentifie le client auprès de serveurs de système de site qui exécutent IIS et qui sont configurés pour utiliser le protocole HTTPS.| |clients d'appareils mobiles|Authentification du client|**Session authentifiée**|**Utilisation avancée de la clé** : la valeur de ce paramètre doit contenir **Authentification du client (1.3.6.1.5.5.7.3.2)**.<br /><br /> SHA-1<br /><br /> La longueur maximale de la clé est de 2 048 bits.<br /><br /> **Remarques** :<br /><br /><ul><li>Ces certificats doivent être au format binaire codé DER X.509.</li><li>Le format X.509 codé en Base64 n'est pas pris en charge.</li></ul>|Ce certificat authentifie le client de l’appareil mobile auprès des serveurs de système de site avec lesquels il communique, tels que les points de gestion et les points de distribution.| |Images de démarrage pour le déploiement de systèmes d'exploitation|Authentification du client|**Authentification de station de travail**|**Utilisation avancée de la clé** : la valeur de ce paramètre doit contenir **Authentification du client (1.3.6.1.5.5.7.3.2)**.<br /><br /> Aucune configuration spécifique n'est requise pour le champ 
Nom de l'objet ou Autre nom de l'objet (SAN) du certificat, et vous pouvez utiliser le même certificat pour toutes les images de démarrage.<br /><br /> La clé privée doit être exportable.<br /><br /> L’algorithme de hachage SHA-2 est pris en charge.<br /><br /> La longueur maximale de la clé est de 2 048 bits.|Le certificat est utilisé si les séquences des tâches dans la procédure du déploiement du système d’exploitation comprennent des actions du client telles que la récupération de la stratégie du client ou l’envoi des données d’inventaire.<br /><br /> Ce certificat n'est utilisé que pour la durée de la procédure de déploiement du système d'exploitation et n'est pas installé sur le client. En raison de cette utilisation temporaire, le même certificat peut être utilisé pour chaque déploiement du système d'exploitation si vous ne souhaitez pas utiliser plusieurs certificats de client.<br /><br /> Ce certificat doit être exporté au format Public Key Certificate Standard (PKCS #12), et le mot de passe doit être connu, de sorte qu’il puisse être importé dans les images de démarrage de System Center Configuration Manager.<br /><br /> Ce certificat est temporaire pour la séquence de tâches et n’est pas utilisé pour installer le client. Lorsque vous avez un environnement avec HTTPS uniquement, le client doit avoir un certificat valide pour communiquer avec le site et pour que le déploiement continue. Le client peut générer automatiquement un certificat quand il est joint à Active Directory, ou vous pouvez installer un certificat client à l’aide d’une autre méthode.<br /><br /> **Remarque :** la configuration requise pour ce certificat est la même que celle du certificat du serveur pour les systèmes de site ayant un point de distribution installé. 
Dans la mesure où les exigences sont les mêmes, vous pouvez utiliser le même fichier de certificat.| |Ordinateurs clients Mac|Authentification du client|Pour l’inscription System Center Configuration Manager : **Session authentifiée**<br /><br /> Pour une installation de certificat indépendante de System Center Configuration Manager : **Authentification de station de travail**|**Utilisation avancée de la clé** : la valeur de ce paramètre doit contenir **Authentification du client (1.3.6.1.5.5.7.3.2)**.<br /><br /> Si System Center Configuration Manager crée un certificat utilisateur, la valeur Objet du certificat est renseignée automatiquement en utilisant le nom d’utilisateur de la personne qui inscrit l’ordinateur Mac.<br /><br /> Dans le cas d’une installation de certificat qui n’utilise pas l’inscription System Center Configuration Manager, mais qui déploie un certificat d’ordinateur indépendamment de System Center Configuration Manager, la valeur Objet du certificat doit être unique. Indiquez par exemple le nom de domaine complet de l'ordinateur.<br /><br /> Le champ Autre nom de l'objet n'est pas pris en charge.<br /><br /> L’algorithme de hachage SHA-2 est pris en charge.<br /><br /> La longueur maximale de la clé est de 2 048 bits.|Ce certificat authentifie l’ordinateur client Mac auprès des serveurs de système de site avec lesquels il communique, tels que les points de gestion et les points de distribution.| |Ordinateurs clients Linux et UNIX|Authentification du client|**Authentification de station de travail**|**Utilisation avancée de la clé** : la valeur de ce paramètre doit contenir **Authentification du client (1.3.6.1.5.5.7.3.2)**.<br /><br /> Le champ Autre nom de l'objet n'est pas pris en charge.<br /><br /> La clé privée doit être exportable.<br /><br /> L'algorithme de hachage SHA-2 est pris en charge si le système d'exploitation du client prend en charge SHA-2. 
Pour plus d’informations, consultez la section [À propos des systèmes d’exploitation Linux et UNIX qui ne prennent pas en charge SHA-256](../../../core/clients/deploy/plan/planning-for-client-deployment-to-linux-and-unix-computers.md#BKMK_NoSHA-256) dans la rubrique [Planification du déploiement de clients sur des ordinateurs Linux et UNIX dans System Center Configuration Manager](../../../core/clients/deploy/plan/planning-for-client-deployment-to-linux-and-unix-computers.md).<br /><br /> Longueurs de clé prises en charge : 2 048 bits.<br /><br /> **Remarque :** ces certificats doivent être au format binaire encodé DER (Distinguished Encoding Rules) X.509. Le format X.509 codé en Base64 n'est pas pris en charge.|Ce certificat authentifie l’ordinateur client Linux ou UNIX auprès des serveurs de système de site avec lesquels il communique, tels que les points de gestion et les points de distribution. Le certificat doit être exporté au format Public Key Certificate Standard (PKCS#12), et le mot de passe doit être connu, de sorte que vous puissiez l'indiquer au client lors de la spécification du certificat PKI.<br /><br /> Pour plus d’informations, consultez la section [Planification de la sécurité et les certificats pour les serveurs Linux et UNIX](../../../core/clients/deploy/plan/planning-for-client-deployment-to-linux-and-unix-computers.md#BKMK_SecurityforLnU) dans la rubrique [Planification du déploiement de clients sur des ordinateurs Linux et UNIX dans System Center Configuration Manager](../../../core/clients/deploy/plan/planning-for-client-deployment-to-linux-and-unix-computers.md).| |Certificats de l'autorité de certification (CA) racine pour les scénarios suivants :<br /><br /><ul><li>Déploiement du système d'exploitation</li><li>Inscription d'appareil mobile</li><li>Authentification du serveur RADIUS pour les ordinateurs Intel basés sur AMT</li><li> Authentification du certificat du client</li></ul>|Chaîne de certificat pour une source approuvée|Non 
applicable.|Certificat d'autorité de certification racine standard.|Le certificat d'autorité de certification racine doit être fourni lorsque les clients doivent lier les certificats du serveur qui communique à une source fiable. Cela s'applique aux scénarios suivants :<br /><br /><ul><li>Lorsque vous déployez un système d’exploitation et pendant l’exécution de séquences de tâches qui connectent l’ordinateur client à un point de gestion configuré pour utiliser le protocole HTTPS.</li><li>Quand vous inscrivez un appareil mobile qui doit être géré par System Center Configuration Manager.</li><li>Quand vous utilisez l’authentification 802.1X pour les ordinateurs AMT et que vous souhaitez spécifier un fichier pour le certificat racine du serveur RADIUS.</li></ul> De plus, le certificat d'autorité de certification racine des clients doit être fourni si les certificats du client ont été émis par une hiérarchie d'autorité de certification différente de celle ayant émis le certificat du point de gestion.| |Appareils mobiles inscrits par Microsoft Intune|Authentification du client|Non applicable : Intune crée automatiquement ce certificat.|La valeur **Utilisation avancée de la clé** doit contenir **Authentification du client (1.3.6.1.5.5.7.3.2)**.<br /><br /> Trois extensions personnalisées identifient de manière unique l’abonnement Intune du client.<br /><br /> Les utilisateurs peuvent fournir la valeur Objet du certificat lors de l'inscription. Cependant, Intune n’utilise pas cette valeur pour identifier l’appareil.<br /><br /> La taille de la clé est de 2 048 bits et elle utilise l’algorithme de hachage SHA-1.<br /><br /> **Remarque :** vous ne pouvez pas modifier ces paramètres. ces informations sont fournies uniquement à titre d'information.|Ce certificat est demandé et installé automatiquement quand les utilisateurs authentifiés inscrivent leurs appareils mobiles à l’aide de Microsoft Intune. 
Le certificat obtenu sur l’appareil réside dans le magasin de l’ordinateur et authentifie l’appareil mobile inscrit auprès d’Intune pour qu’il puisse être géré.<br /><br /> Du fait des extensions personnalisées du certificat, l’authentification se limite à l’abonnement Intune établi pour l’organisation.|
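Les exigences qui reviennent dans les tableaux ci-dessus (valeur « Utilisation avancée de la clé », longueur de clé maximale, algorithme de hachage SHA-2) se prêtent à une vérification systématique. L’esquisse Python ci-dessous est purement illustrative et ne provient pas de la documentation officielle : la fonction `verifier_exigences` et le dictionnaire passé en entrée sont hypothétiques.

```python
# OID d'« Utilisation avancée de la clé » cités dans les tableaux ci-dessus.
SERVER_AUTH = "1.3.6.1.5.5.7.3.1"  # Authentification du serveur
CLIENT_AUTH = "1.3.6.1.5.5.7.3.2"  # Authentification du client
SHA2 = {"sha256", "sha384", "sha512"}  # algorithmes de signature SHA-2

def verifier_exigences(cert, role_client=False, longueur_max=2048):
    """Compare un certificat (dict hypothétique) aux exigences décrites.

    `cert` est supposé contenir : 'eku' (liste d'OID), 'taille_cle' (en bits)
    et 'hachage' (nom de l'algorithme de signature). Retourne la liste des
    écarts constatés (vide si le certificat est conforme).
    """
    erreurs = []
    oid_requis = CLIENT_AUTH if role_client else SERVER_AUTH
    if oid_requis not in cert.get("eku", []):
        erreurs.append(f"Utilisation avancée de la clé sans l'OID {oid_requis}")
    if cert.get("taille_cle", 0) > longueur_max:
        erreurs.append(f"Clé de {cert['taille_cle']} bits > {longueur_max} bits")
    if cert.get("hachage", "").lower() not in SHA2:
        erreurs.append("Algorithme de hachage non SHA-2")
    return erreurs

# Exemple : certificat de serveur web conforme aux exigences du tableau.
cert_ok = {"eku": [SERVER_AUTH], "taille_cle": 2048, "hachage": "sha256"}
print(verifier_exigences(cert_ok))  # → []
```

Un tel contrôle ne remplace évidemment pas la validation effectuée par l’infrastructure PKI elle-même ; il ne sert qu’à illustrer les règles énoncées dans les tableaux.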
337.011364
2,273
0.768925
fra_Latn
0.964714
e5b3502439b61198d061f2dc9e1feb4e2d5ae915
151
md
Markdown
translations/ru-RU/data/reusables/repositories/sidebar-code-scanning-alerts.md
nyanthanya/Cuma_Info
d519c49504fc3818c1294f14e63ee944d2f4bd89
[ "CC-BY-4.0", "MIT" ]
17
2021-01-05T16:29:05.000Z
2022-02-26T09:08:44.000Z
translations/ru-RU/data/reusables/repositories/sidebar-code-scanning-alerts.md
nyanthanya/Cuma_Info
d519c49504fc3818c1294f14e63ee944d2f4bd89
[ "CC-BY-4.0", "MIT" ]
222
2021-04-08T20:13:34.000Z
2022-03-18T22:37:27.000Z
translations/ru-RU/data/reusables/repositories/sidebar-code-scanning-alerts.md
nyanthanya/Cuma_Info
d519c49504fc3818c1294f14e63ee944d2f4bd89
[ "CC-BY-4.0", "MIT" ]
3
2021-08-31T03:18:06.000Z
2021-10-30T17:49:09.000Z
1. In the left sidebar, click **Code scanning alerts**. !["Code scanning alerts" tab](/assets/images/help/repository/sidebar-code-scanning-alerts.png)
75.5
150
0.761589
eng_Latn
0.205119
e5b3ec87e76c538f8d811e8d2f58d377d1d665a4
1,043
md
Markdown
README.md
shmorcel/Wiki-Fetch
4173aa7ec0029ad3baa155bc8fe7124d5a7429cf
[ "MIT" ]
null
null
null
README.md
shmorcel/Wiki-Fetch
4173aa7ec0029ad3baa155bc8fe7124d5a7429cf
[ "MIT" ]
null
null
null
README.md
shmorcel/Wiki-Fetch
4173aa7ec0029ad3baa155bc8fe7124d5a7429cf
[ "MIT" ]
null
null
null
# Wiki Fetch

## Description

A Google Chrome Extension that grabs Wikipedia text and downloads it for you. It also traverses the website in either a breadth-first search or a depth-first search, check the options.

## To get started using it:

1. Clone the repository into some folder of your choosing.
2. Launch Google Chrome.
3. go to: chrome://extensions.
4. Turn on Developer mode using the switch at the top-right corner of the screen.
5. Click the "load unpacked" button at the top-left corner of the screen.
6. Browse to and select the inner-most folder before the files of the extension start to appear.

## Known issues:

1. On Windows, the downloaded file-names are not meaningful. A quick fix should remove this.
2. The depth-first search can run into a stub topic, which can stop the progression (it's not really depth-first search).

## Features:

- ANSI to ASCII conversion: é becomes e, et cetera...
- ASCII output: 1-byte per character.

## Features to add:

- More options in the option page to change how the text is processed.
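The "ANSI to ASCII conversion" feature listed above (é becomes e) is, in essence, accent stripping via Unicode decomposition. As a rough illustration of the idea — shown here in Python rather than the extension's own JavaScript, and independent of this project's actual code — the hypothetical `strip_accents` helper below decomposes each character and drops the combining marks:

```python
import unicodedata

def strip_accents(text: str) -> str:
    """Decompose accented characters (NFKD) and drop the combining
    marks, leaving plain 1-byte-per-character ASCII output."""
    decomposed = unicodedata.normalize("NFKD", text)
    return decomposed.encode("ascii", "ignore").decode("ascii")

print(strip_accents("résumé café"))  # → resume cafe
```

Note that characters with no ASCII decomposition are simply dropped by this approach, so it is lossy by design.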
43.458333
184
0.760307
eng_Latn
0.996674
e5b46872d3f8c9ead436e27c360127d0eccc55b4
3,820
md
Markdown
vendor/proveyourskillz/lol-api/readme.md
Skumb/G-Esport
300c347c5a8db58b4b5b4d3a6515497d44ea8fa8
[ "MIT" ]
7
2018-06-26T09:41:04.000Z
2018-07-23T22:19:20.000Z
vendor/proveyourskillz/lol-api/readme.md
PYDeret/G-Esport
300c347c5a8db58b4b5b4d3a6515497d44ea8fa8
[ "MIT" ]
6
2018-06-28T13:56:29.000Z
2018-07-16T08:53:38.000Z
vendor/proveyourskillz/lol-api/readme.md
Skumb/G-Esport
300c347c5a8db58b4b5b4d3a6515497d44ea8fa8
[ "MIT" ]
5
2018-06-26T09:40:22.000Z
2018-10-03T22:05:42.000Z
# League of Legends PHP API wrapper

Dead simple wrapper of Riot Games API (LoL)

## Requirements

* PHP 7.1
* works better with ext-curl installed

## Installation

`composer require proveyourskillz/lol-api`

### Creating API instance

```php
$api = new PYS\LolApi\Api(API_KEY);
```

### Laravel/Lumen service provider and Facade

#### Laravel/Lumen 5.5

ServiceProvider and Facade are registered automatically through [package discovery](https://laravel.com/docs/5.5/packages#package-discovery)

#### Laravel 5.4

In `config/app.php` add `PYS\LolApi\Laravel\ServiceProvider` as provider

#### Lumen 5.4

Register ServiceProvider according [documentation](https://lumen.laravel.com/docs/5.5/providers#registering-providers)

Optionally you can add facade to aliases `'LolApi' => PYS\LolApi\Laravel\Facade::class`

After installation you can use API through facade or inject as dependency

## Usage

You can find examples of usage in `examples` dir

### Region

You can pass region to request as [2-3 characters code](https://developer.riotgames.com/regional-endpoints.html) but better use `Region` class constants

```php
use PYS\LolApi\ApiRequest\Region;

$summoner = $api->summoner(Region::EUW, $summonerId);
```

### Summoner

There are several ways to get Summoner: by account id, summoner id or by name

```php
// You can get summoner in several ways by passing type in third argument
// Default version: summoner, you can omit it
$summonerById = $api->summoner($region, $summonerId);
$summonerByAccount = $api->makeSummoner($region, $accountId, 'account');
$summonerByName = $api->summoner($region, $name, 'name');
```

For more information see [Summoner API reference](https://developer.riotgames.com/api-methods/#summoner-v3)

### Match List

Recent

```php
$matches = $api->matchList($region, $accountId);
```

Recent via Summoner

```php
$matches = $api->summoner($region, $summonerId)->recentMatches();
```

Using query (e.g. one last match)

```php
$matches = $api->matchList(
    $region,
    $accountId,
    [
        'beginIndex' => 0,
        'endIndex' => 1,
    ]
);
```

### Match

Match by Id

```php
$match = $api->match($region, $matchId);
```

Match within Tournament

```php
$match = $api->match($region, $matchId, $tournamentId);
```

For more information see [Match API reference](https://developer.riotgames.com/api-methods/#match-v3)

### Leagues

Leagues and Positions of summoner by Summoner Id

```php
$leaguesPositions = $api->leaguePosition($region, $summonerId);
```

Leagues and Positions of summoner via Summoner object

```php
$leaguesPositions = $api
    ->summoner($region, $summonerId)
    ->leaguesPositions();
```

Leagues by Summoner Id

```php
$leagues = $api->league($region, $summonerId);
```

## Reusable requests and queries

Examples from above (e.g. match list request with query) show usage of syntax sugar methods and can be rewritten as

```php
use PYS\LolApi\ApiRequest\MatchListRequest;
use PYS\LolApi\ApiRequest\Query\MatchListQuery;

$api = new PYS\LolApi\Api($API_KEY);

$query = new MatchListQuery;
$request = new MatchListRequest($region, $accountId);
$request->setQuery($query->lastMatches(1));

$matchList = $api->make($request);
```

Query objects have fluent setters for easily setting up some properties like dates etc.

```php
// Setup query object to get last 5 matches in 24 hours
$query = new MatchListQuery;
$query
    ->fromDate(new DateTime('-24 hours'))
    ->lastMatches(5);
```

## Contributing

1. Fork it!
2. Create your feature branch: `git checkout -b my-new-feature`
3. Commit your changes: `git commit -am 'Add some feature'`
4. Push to the branch: `git push origin my-new-feature`
5. Submit a pull request :D

## History

Alpha version

## Credits

- Anton Orlov <anton@proveyourskillz.com>
- Pavel Dudko <pavel@proveyourskillz.com>

## License

MIT
23.292683
152
0.718586
eng_Latn
0.853808
e5b4e908c55e01b833155fa0acd7cb539c9175ec
1,125
md
Markdown
docs/framework/additional-apis/microsoft.sqlserver.server.smiorderproperty.item.md
emagers/docs
d024454345ea75f0f867c67cea8961724b0c0cbc
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/framework/additional-apis/microsoft.sqlserver.server.smiorderproperty.item.md
emagers/docs
d024454345ea75f0f867c67cea8961724b0c0cbc
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/framework/additional-apis/microsoft.sqlserver.server.smiorderproperty.item.md
emagers/docs
d024454345ea75f0f867c67cea8961724b0c0cbc
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: SmiOrderProperty.Item Property (Microsoft.SqlServer.Server) author: douglaslMS ms.author: douglasl ms.date: 12/20/2018 ms.technology: - "dotnet-data" api_name: - "Microsoft.SqlServer.Server.SmiOrderProperty.Item" - "Microsoft.SqlServer.Server.SmiOrderProperty.get_Item" api_location: - "System.Data.dll" api_type: - "Assembly" --- # SmiOrderProperty.Item Property Gets the column order for the entity. The assembly that contains this property has a friend relationship with SQLAccess.dll. It's intended for use by SQL Server. For other databases, use the hosting mechanism provided by that database. ## Syntax ```csharp internal SmiColumnOrder Item { get; } ``` ## Property value The column order. ## Remarks > [!WARNING] > The `SmiOrderProperty.Item` property is internal and is not meant to be used directly in your code. > > Microsoft does not support the use of this property in a production application under any circumstance. ## Requirements **Namespace:** <xref:Microsoft.SqlServer.Server> **Assembly:** System.Data (in System.Data.dll) **.NET Framework versions:** Available since 2.0.
26.162791
235
0.758222
eng_Latn
0.898004
e5b4f7f00bd1372be7bfa58650fed3740788fc44
37,862
md
Markdown
playlists/cumulative/37i9dQZF1DX0IlCGIUGBsA.md
masudissa0210/spotify-playlist-archive
a1a4a94af829378c9855040d905e04080c581acb
[ "MIT" ]
null
null
null
playlists/cumulative/37i9dQZF1DX0IlCGIUGBsA.md
masudissa0210/spotify-playlist-archive
a1a4a94af829378c9855040d905e04080c581acb
[ "MIT" ]
null
null
null
playlists/cumulative/37i9dQZF1DX0IlCGIUGBsA.md
masudissa0210/spotify-playlist-archive
a1a4a94af829378c9855040d905e04080c581acb
[ "MIT" ]
null
null
null
[pretty](/playlists/pretty/37i9dQZF1DX0IlCGIUGBsA.md) - cumulative - [plain](/playlists/plain/37i9dQZF1DX0IlCGIUGBsA) - [githistory](https://github.githistory.xyz/mackorone/spotify-playlist-archive/blob/main/playlists/plain/37i9dQZF1DX0IlCGIUGBsA) ### [Ultimate Party Classics](https://open.spotify.com/playlist/4tBOrClKcyYatqHpPJNoHl) > Get the party going with these classics from across the decades. | Title | Artist(s) | Album | Length | Added | Removed | |---|---|---|---|---|---| | [\(Your Love Keeps Lifting Me\) Higher & Higher](https://open.spotify.com/track/4TBBPZks71c60whhq0PgdP) | [Jackie Wilson](https://open.spotify.com/artist/4VnomLtKTm9Ahe1tZfmZju) | [The Ultimate Jackie Wilson](https://open.spotify.com/album/1NXxURGbIYbunQfXmChHAl) | 3:01 | 2021-12-16 | | | [9 To 5](https://open.spotify.com/track/5UWYJWoVXRpaLuzzXmHqg8) | [Dolly Parton](https://open.spotify.com/artist/32vWCbZh0xZ4o9gkz4PsEU) | [Dolly Parton Slipcase](https://open.spotify.com/album/4QrOzl1g2R19A8b2mjGHGx) | 2:42 | 2021-12-16 | | | [A Little Less Conversation \- JXL Radio Edit Remix](https://open.spotify.com/track/4l2hnfUx0esSbITQa7iJt0) | [Elvis Presley](https://open.spotify.com/artist/43ZHCT0cAZBISjO8DG9PnE), [JXL](https://open.spotify.com/artist/2GyakhIO8twSgCnUFfCzTN) | [Elvis 75 \- Good Rockin' Tonight](https://open.spotify.com/album/34EYk8vvJHCUlNrpGxepea) | 3:31 | 2021-12-16 | | | [A Little Respect](https://open.spotify.com/track/68J1DQoZvymFuaIOz3RdVK) | [Erasure](https://open.spotify.com/artist/0z5DFXmhT4ZNzWElsM7V89) | [Best of Erasure](https://open.spotify.com/album/1wZdovazzVok2gwd5Zu0Y2) | 3:33 | 2021-12-16 | | | [ABC](https://open.spotify.com/track/6cb0HzFQPN4BGADOmSzPCw) | [The Jackson 5](https://open.spotify.com/artist/2iE18Oxc8YSumAU232n4rW) | [ABC](https://open.spotify.com/album/4GuzZh2dtsOjG3sMkx52eR) | 2:54 | 2021-12-16 | 2022-01-08 | | [ABC](https://open.spotify.com/track/01gwPP2h3ajRnqiIphUtR7) | [The Jackson 5](https://open.spotify.com/artist/2iE18Oxc8YSumAU232n4rW) | 
[Anthology: Jackson 5](https://open.spotify.com/album/0EwhxzV0N61hu3S3PkB2Ku) | 2:57 | 2021-12-16 | 2022-01-06 | | [Ain't No Mountain High Enough](https://open.spotify.com/track/2H3ZUSE54pST4ubRd5FzFR) | [Marvin Gaye](https://open.spotify.com/artist/3koiLjNrgRTNbOwViDipeA), [Tammi Terrell](https://open.spotify.com/artist/75jNCko3SnEMI5gwGqrbb8) | [United](https://open.spotify.com/album/6sbZYwwQB15bt5TgkPFAdb) | 2:31 | 2021-12-16 | | | [Ain't No Stoppin' Us Now](https://open.spotify.com/track/6EOdY7I7Xm1vPP1cyaGbWZ) | [McFadden & Whitehead](https://open.spotify.com/artist/3iQM78Xg0wJnGZhgVNLPmY) | [Boogie Nights / Music From The Original Motion Picture](https://open.spotify.com/album/4HUntZg0YV0qCvRxmIhq2U) | 3:40 | 2021-12-16 | 2021-12-31 | | [Are You Ready For Love \- '79 Version Radio Edit](https://open.spotify.com/track/3YdJzolD4HFvWGioELW2pC) | [Elton John](https://open.spotify.com/artist/3PhoLpVuITZKcymswpck5b) | [Are You Ready For Love?](https://open.spotify.com/album/1l9jI5f9g2WntXhj84O8YD) | 3:30 | 2021-12-16 | | | [Bang a Gong \(Get It On\)](https://open.spotify.com/track/3u4LegCSV70aXLNiTgrxCH) | [T\. 
Rex](https://open.spotify.com/artist/3dBVyJ7JuOMt4GE9607Qin) | [Electric Warrior](https://open.spotify.com/album/4Yw5uS8at8GkWmH2gZmLY0) | 4:21 | 2021-12-16 | | | [Believe](https://open.spotify.com/track/2goLsvvODILDzeeiT4dAoR) | [Cher](https://open.spotify.com/artist/72OaDtakiy6yFqkt4TsiFt) | [Believe](https://open.spotify.com/album/0jZfbz0dNfDjPSg0hYJNth) | 3:59 | 2021-12-16 | | | [Black Magic](https://open.spotify.com/track/5y6pj7OeBFF0CVgZKhRbOG) | [Little Mix](https://open.spotify.com/artist/3e7awlrlDSwF3iM0WBjGMp) | [Get Weird \(Deluxe\)](https://open.spotify.com/album/4bzVI1FElc13HQagFR7S1W) | 3:31 | 2021-12-16 | 2022-01-01 | | [Black or White \- Single Version](https://open.spotify.com/track/2Cy7QY8HPLk925AyNAt6OG) | [Michael Jackson](https://open.spotify.com/artist/3fMbdgg4jU18AjLCKBhRSm) | [The Essential Michael Jackson](https://open.spotify.com/album/77dNyQA0z8dV33M4so4eRY) | 3:22 | 2021-12-16 | | | [Build Me up Buttercup](https://open.spotify.com/track/61ukXvkg6KwCZDiyexVDsD) | [The Foundations](https://open.spotify.com/artist/4GITZM5LCR2KcdlgEOrNLD) | [Doo Wop & Pop, Vol\. 1](https://open.spotify.com/album/2GRx6M5iaEWssVFYfzjEuL) | 2:59 | 2021-12-16 | | | [California Gurls](https://open.spotify.com/track/6tS3XVuOyu10897O3ae7bi) | [Katy Perry](https://open.spotify.com/artist/6jJ0s89eD6GaHleKKya26X), [Snoop Dogg](https://open.spotify.com/artist/7hJcb9fa4alzcOq3EaNPoG) | [Katy Perry \- Teenage Dream: The Complete Confection](https://open.spotify.com/album/5BvgP623rtvlc0HDcpzquz) | 3:54 | 2021-12-16 | 2021-12-25 | | [Call Me Maybe](https://open.spotify.com/track/20I6sIOMTCkB6w7ryavxtO) | [Carly Rae Jepsen](https://open.spotify.com/artist/6sFIWsNpZYqfjUpaCgueju) | [Kiss](https://open.spotify.com/album/6SSSF9Y6MiPdQoxqBptrR2) | 3:13 | 2021-12-16 | | | [CAN'T STOP THE FEELING! 
\- Film Version](https://open.spotify.com/track/4sQmCQUZcnBPaVm4dEUKv7) | [Justin Timberlake](https://open.spotify.com/artist/31TPClRtHm23RisEBtV3X7), [Anna Kendrick](https://open.spotify.com/artist/6xfqnpe2HnLVUaYXs2F8YS), [Gwen Stefani](https://open.spotify.com/artist/4yiQZ8tQPux8cPriYMWUFP), [James Corden](https://open.spotify.com/artist/5E17eRqSfn08FsmvNCds0P), [Zooey Deschanel](https://open.spotify.com/artist/2GEW6nJjHKAFyqnsE3TdWx), [Ron Funches](https://open.spotify.com/artist/5auLWD3XT6p3im19G2cLhP), [Caroline Hjelt](https://open.spotify.com/artist/0XF3yeiKSQF2zl5H05jfME), [Aino Jawo](https://open.spotify.com/artist/6aIcl5XVRwk32v6hc7lDyV), [Christopher Mintz\-Plasse](https://open.spotify.com/artist/32Y2h6dku6Q2wNpZjj0bHj), [Kunal Nayyar](https://open.spotify.com/artist/4po5m4plDQk01gLzTcCMfA) | [TROLLS \(Original Motion Picture Soundtrack\)](https://open.spotify.com/album/65ayND23IInUPHJKsaAqe7) | 3:57 | 2021-12-16 | | | [Car Wash](https://open.spotify.com/track/0LoZsvlBfEoNGyNX3HhYZk) | [Rose Royce](https://open.spotify.com/artist/1OxJzMLmR9l5zPLap9OxuO) | [Car Wash \(Soundtrack\)](https://open.spotify.com/album/7GMj7i2K3hh9Cl5Zdo4A73) | 5:09 | 2021-12-16 | | | [Celebration \- Single Version](https://open.spotify.com/track/6pc8xULSlsMdFB3OrqbvZ4) | [Kool & The Gang](https://open.spotify.com/artist/3VNITwohbvU5Wuy5PC6dsI) | [Celebration / Morning Star](https://open.spotify.com/album/5RyMdc2cQbURYCUDbzjR6w) | 3:35 | 2021-12-16 | | | [Come On Eileen](https://open.spotify.com/track/4jKojTEsrJtUU5rNp3gbGQ) | [Dexys Midnight Runners](https://open.spotify.com/artist/4QTVePrFu1xuGM9K0kNXkk) | [Too Rye Ay](https://open.spotify.com/album/0oiS0f0gbyZBKfdoCBgutN) | 4:33 | 2021-12-16 | | | [Come On Eileen](https://open.spotify.com/track/0EMmVUYs9ZZRHtlADB88uz) | [Dexys Midnight Runners](https://open.spotify.com/artist/4QTVePrFu1xuGM9K0kNXkk) | [Too Rye Ay](https://open.spotify.com/album/0QNluXFRHFyRVDiBHXmstK) | 4:47 | 2021-12-16 | 2022-01-09 | | [Crazy In Love 
\(feat\. Jay\-Z\)](https://open.spotify.com/track/5IVuqXILoxVWvWEPm82Jxr) | [Beyoncé](https://open.spotify.com/artist/6vWDO969PvNqNYHIOW5v0m), [JAY\-Z](https://open.spotify.com/artist/3nFkdlSjzX9mRTtwJOzDYB) | [Dangerously In Love](https://open.spotify.com/album/6oxVabMIqCMJRYN1GqR3Vf) | 3:56 | 2021-12-16 | 2021-12-29 | | [Dance Wiv Me](https://open.spotify.com/track/2Slm8U9Ql1TlzHMC7gOCec) | [Dizzee Rascal](https://open.spotify.com/artist/0gusqTJKxtU1UTmNRMHZcv), [Calvin Harris](https://open.spotify.com/artist/7CajNmpbOovFoOoasH2HaY), [Chrome](https://open.spotify.com/artist/6rRfU5PFMng3MXEU7xMwUW) | [Tongue N' Cheek \(Dirtee Deluxe Edition\)](https://open.spotify.com/album/3D8oReEDS5wROmx2DbxX0z) | 3:24 | 2021-12-16 | 2021-12-31 | | [Dancing Queen](https://open.spotify.com/track/01iyCAUm8EvOFqVWYJ3dVX) | [ABBA](https://open.spotify.com/artist/0LcJLqbBmaGUft1e9Mm8HV) | [Arrival](https://open.spotify.com/album/79ZX48114T8NH36MnOTtl7) | 3:50 | 2021-12-16 | | | [December, 1963 \(Oh, What a Night\)](https://open.spotify.com/track/7IFrc7EJpIAYRCqWEQWYHc) | [Frankie Valli & The Four Seasons](https://open.spotify.com/artist/6mcrZQmgzFGRWf7C0SObou) | [Jersey Boys: Music from the Motion Picture and Broadway Musical](https://open.spotify.com/album/7oWx4auBp2kCb54VkRCCUq) | 3:13 | 2021-12-16 | | | [Disco Inferno](https://open.spotify.com/track/3GXo1eWlT2flv4x01l5OTu) | [The Trammps](https://open.spotify.com/artist/1zgNpeHQe8GulzfVkYP2VK) | [Playlist: The Best Of The Trammps](https://open.spotify.com/album/2fbPRtAt1i5f68lZegx1iB) | 3:33 | 2021-12-16 | | | [Don't Go Breaking My Heart](https://open.spotify.com/track/5nPdMALTEd7HOjn16oNf2X) | [Elton John](https://open.spotify.com/artist/3PhoLpVuITZKcymswpck5b), [Kiki Dee](https://open.spotify.com/artist/4vjGlQWexbru6aOUCLTVir) | [Rock Of The Westies \(Remastered\)](https://open.spotify.com/album/6tKgjhjWDMVlgb3a6KoI1x) | 4:35 | 2021-12-16 | | | [Don't Leave Me This Way \- Single 
Version](https://open.spotify.com/track/76DaYoN2BXdu1dZFdw61qj) | [Thelma Houston](https://open.spotify.com/artist/3sgUnR8TF35euWEV07RPyO) | [Motown 1970s Vol\. 1](https://open.spotify.com/album/0iJiZjL6cmn1wxTc7OG0et) | 3:38 | 2021-12-16 | 2022-01-09 | | [Don't Stop Believin'](https://open.spotify.com/track/4bHsxqR3GMrXTxEPLuK5ue) | [Journey](https://open.spotify.com/artist/0rvjqX7ttXeg3mTy8Xscbt) | [Escape](https://open.spotify.com/album/43wpzak9OmQfrjyksuGwp0) | 4:10 | 2021-12-16 | | | [Don't Stop Me Now \- Remastered 2011](https://open.spotify.com/track/5T8EDUDqKcs6OSOwEsfqG7) | [Queen](https://open.spotify.com/artist/1dfeR4HaWDbWqFHLkxsg1d) | [Jazz \(2011 Remaster\)](https://open.spotify.com/album/2yuTRGIackbcReLUXOYBqU) | 3:29 | 2021-12-16 | 2022-01-09 | | [Don't Stop Me Now \- Remastered 2011](https://open.spotify.com/track/7hQJA50XrCWABAu5v6QZ4i) | [Queen](https://open.spotify.com/artist/1dfeR4HaWDbWqFHLkxsg1d) | [Jazz \(Deluxe Remastered Version\)](https://open.spotify.com/album/21HMAUrbbYSj9NiPPlGumy) | 3:29 | 2021-12-16 | | | [Don't Stop The Music](https://open.spotify.com/track/6tXjP6xgPJ7Xr1igrO6bOE) | [Rihanna](https://open.spotify.com/artist/5pKCCKE2ajJHZ9KAiaK11H) | [Good Girl Gone Bad: Reloaded](https://open.spotify.com/album/1G2ZRwcqN6warfakvcPgEs) | 4:27 | 2021-12-16 | 2021-12-25 | | [Don't You Want Me](https://open.spotify.com/track/3L7RtEcu1Hw3OXrpnthngx) | [The Human League](https://open.spotify.com/artist/1aX2dmV8XoHYCOQRxjPESG) | [Dare!](https://open.spotify.com/album/3ls7tE9D2SIvjTmRuEtsQY) | 3:56 | 2021-12-16 | | | [Dynamite](https://open.spotify.com/track/2CEgGE6aESpnmtfiZwYlbV) | [Taio Cruz](https://open.spotify.com/artist/6MF9fzBmfXghAz953czmBC) | [The Rokstarr Hits Collection](https://open.spotify.com/album/0eGvq1J5Ke7VlLLOYIlY4k) | 3:22 | 2021-12-16 | 2021-12-27 | | [Everybody \(Backstreet's Back\) \- Radio Edit](https://open.spotify.com/track/1di1BEgJYzPvXUuinsYJGP) | [Backstreet 
Boys](https://open.spotify.com/artist/5rSXSAkZ67PYJSvpUpkOr7) | [Backstreet's Back](https://open.spotify.com/album/2U9ONknz1iFEK9drEKLx8v) | 3:45 | 2021-12-16 | 2021-12-29 | | [Everyday People](https://open.spotify.com/track/4ZVZBc5xvMyV3WzWktn8i7) | [Sly & The Family Stone](https://open.spotify.com/artist/5m8H6zSadhu1j9Yi04VLqD) | [Stand](https://open.spotify.com/album/7iwS1r6JHYJe9xpPjzmWqD) | 2:21 | 2021-12-16 | 2022-01-09 | | [Everywhere](https://open.spotify.com/track/6i8ecOsx4J2Px1maiqzqoG) | [Fleetwood Mac](https://open.spotify.com/artist/08GQAI4eElDnROBrJRGE0X) | [Tango In The Night](https://open.spotify.com/album/1W5YP0TlKjFtb2UZJThLpV) | 3:42 | 2021-12-16 | | | [Feel So Close \- Radio Edit](https://open.spotify.com/track/1gihuPhrLraKYrJMAEONyc) | [Calvin Harris](https://open.spotify.com/artist/7CajNmpbOovFoOoasH2HaY) | [18 Months](https://open.spotify.com/album/7w19PFbxAjwZ7UVNp9z0uT) | 3:26 | 2021-12-16 | 2021-12-24 | | [Feel the Love \(feat\. John Newman\)](https://open.spotify.com/track/0k73nWaD6RPx2sHFEkGPcn) | [Rudimental](https://open.spotify.com/artist/4WN5naL3ofxrVBgFpguzKo), [John Newman](https://open.spotify.com/artist/34v5MVKeQnIo0CWYMbbrPf) | [Home](https://open.spotify.com/album/2AOpbitJNMvKhSbsi2YD4F) | 4:05 | 2021-12-16 | 2022-01-03 | | [Footloose](https://open.spotify.com/track/6ijK0byhkfjqMQcLrzSIbl) | [Kenny Loggins](https://open.spotify.com/artist/3Y3xIwWyq5wnNHPp5gPjOW) | [Clasicos Del Cine](https://open.spotify.com/album/1VWbR5cktfUGFnNGL1WdnM) | 3:39 | 2021-12-16 | 2022-01-09 | | [Footloose \- From "Footloose" Soundtrack](https://open.spotify.com/track/2vz1CsL5WBsbpBcwgboTAw) | [Kenny Loggins](https://open.spotify.com/artist/3Y3xIwWyq5wnNHPp5gPjOW) | [Footloose \(15th Anniversary Collectors' Edition\)](https://open.spotify.com/album/4FZ9s0pelFSliPWhVEWRcC) | 3:46 | 2021-12-16 | | | [Forget You](https://open.spotify.com/track/6zKMARmU2Ue50cucKnIUR6) | [CeeLo Green](https://open.spotify.com/artist/5nLYd9ST4Cnwy6NHaCxbj8) | [The Lady 
Killer \(The Platinum Edition\)](https://open.spotify.com/album/3MnOoZGbCDAjLf8jOImSbg) | 3:42 | 2021-12-16 | | | [Gimme! Gimme! Gimme! \(A Man After Midnight\)](https://open.spotify.com/track/7r5bS08R8d0jZuDZutVeHQ) | [ABBA](https://open.spotify.com/artist/0LcJLqbBmaGUft1e9Mm8HV) | [Voulez\-Vous](https://open.spotify.com/album/366R23DbfxqjZo6AUBrQjv) | 4:54 | 2021-12-16 | 2022-01-08 | | [Girls Just Want to Have Fun](https://open.spotify.com/track/7sMGwiS4vOMcz86ZY3vKYM) | [Cyndi Lauper](https://open.spotify.com/artist/2BTZIqw0ntH9MvilQ3ewNY) | [The Best Of The 80's](https://open.spotify.com/album/1WNKfSND9D7t30sBPDo1gr) | 3:48 | 2021-12-16 | | | [Give It Up](https://open.spotify.com/track/40QPoAAKKMwPms6I6FHJqy) | [KC & The Sunshine Band](https://open.spotify.com/artist/3mQBpAOMWYqAZyxtyeo4Lo) | [The Very Best of KC & the Sunshine Band](https://open.spotify.com/album/7swznakopP5J1aSOzCsalv) | 4:05 | 2021-12-16 | | | [Gold](https://open.spotify.com/track/0UYit2SshaoGD45cp21LEI) | [Spandau Ballet](https://open.spotify.com/artist/2urZrEdsq72kx0UzfYN8Yv) | [Spandau Ballet ''The Story'' The Very Best of \(Deluxe\)](https://open.spotify.com/album/08X2lSUAtVIoWTpG1G8Niu) | 3:51 | 2021-12-16 | | | [Good Vibrations \- Remastered 2001](https://open.spotify.com/track/5t9KYe0Fhd5cW6UYT4qP8f) | [The Beach Boys](https://open.spotify.com/artist/3oDbviiivRWhXwIE8hxkVV) | [Smiley Smile \(Remastered\)](https://open.spotify.com/album/37rNuexqEXWeSIOiJtn3A9) | 3:39 | 2021-12-16 | | | [Groove Is in the Heart](https://open.spotify.com/track/2He3NOyqtLNE3RQPpeDdSb) | [Deee\-Lite](https://open.spotify.com/artist/4eQJIXFEujzhTVVS1gIfu5) | [World Clique](https://open.spotify.com/album/4sTAgYLZy5zwqR3kT1g0oh) | 3:51 | 2021-12-16 | | | [Happy \- From "Despicable Me 2"](https://open.spotify.com/track/5b88tNINg4Q4nrRbrCXUmg) | [Pharrell Williams](https://open.spotify.com/artist/2RdwBSPQiwcmiDo9kixcl8) | [G I R L](https://open.spotify.com/album/2lkQd5T32QHDOfFkEIPJKz) | 3:52 | 2021-12-16 | 
2022-01-03 | | [He's the Greatest Dancer](https://open.spotify.com/track/5MuNxNox3zTanAFIO5KcTl) | [Sister Sledge](https://open.spotify.com/artist/6gkWznnJkdkwRPVcmnrays) | [Atlantic 60th: On The Dance Floor Vol\. 2](https://open.spotify.com/album/7liGswqymvHdcDREn3FQDz) | 6:15 | 2021-12-16 | 2021-12-30 | | [He's the Greatest Dancer](https://open.spotify.com/track/5WwRKYnVy9dekqXAGPbAvU) | [Sister Sledge](https://open.spotify.com/artist/6gkWznnJkdkwRPVcmnrays) | [We Are Family](https://open.spotify.com/album/4GSidaoqyGNwaG5mNKmuLT) | 6:15 | 2021-12-16 | 2022-01-01 | | [He's the Greatest Dancer \- 2006 Remaster](https://open.spotify.com/track/35xPnZxqS9epbCaw8MOxFr) | [Sister Sledge](https://open.spotify.com/artist/6gkWznnJkdkwRPVcmnrays) | [Definitive Groove: Sister Sledge](https://open.spotify.com/album/6Wx25BBKR9mTNnndloRRWs) | 6:14 | 2021-12-16 | 2022-01-09 | | [Hey Ya!](https://open.spotify.com/track/3AszgPDZd9q0DpDFt4HFBy) | [Outkast](https://open.spotify.com/artist/1G9G7WwrXka3Z1r7aIDjI7) | [The Way You Move / Hey Ya!](https://open.spotify.com/album/7etFl9qeqjD0luC40JAg8l) | 3:59 | 2021-12-16 | 2021-12-29 | | [Higher Love \- Single Version](https://open.spotify.com/track/7yDmjiDuIlGTaWgmvSK9FJ) | [Steve Winwood](https://open.spotify.com/artist/5gxynDEKwNDgxGJmJjZyte) | [80's Pop Number 1's](https://open.spotify.com/album/3bl2ynZYqr4l37MQ2Wense) | 4:12 | 2021-12-16 | | | [Hold My Hand](https://open.spotify.com/track/58jx3tTuDuzHysC77c0AQd) | [Jess Glynne](https://open.spotify.com/artist/4ScCswdRlyA23odg9thgIO) | [I Cry When I Laugh](https://open.spotify.com/album/7BEPVoBcHuTLWpcdj8FhM8) | 3:47 | 2021-12-16 | | | [I Believe in a Thing Called Love](https://open.spotify.com/track/756CJtQRFSxEx9jV4P9hpA) | [The Darkness](https://open.spotify.com/artist/5r1bdqzhgRoHC3YcCV6N5a) | [Permission to Land](https://open.spotify.com/album/6vW9ZDllNv87WHXS3XTjlM) | 3:37 | 2021-12-16 | | | [I Don't Feel Like Dancin' \- Radio 
Edit](https://open.spotify.com/track/1qEHgdFqUxFebMPk8s2HLY) | [Scissor Sisters](https://open.spotify.com/artist/3Y10boYzeuFCJ4Qgp53w6o) | [I Don't Feel Like Dancin'](https://open.spotify.com/album/6LPpLYrjQYKtMfDUT1qjOz) | 4:08 | 2021-12-16 | | | [I Gotta Feeling](https://open.spotify.com/track/1u0aIMrEBvFkRtgcg264gW) | [Black Eyed Peas](https://open.spotify.com/artist/1yxSLGMDHlW21z4YXirZDS) | [The Beginning & The Best Of The E.N.D\. \(International Mega\-Deluxe Version\)](https://open.spotify.com/album/5GJayigLJNxEvuDoCq0wVz) | 4:49 | 2021-12-16 | 2021-12-31 | | [I Gotta Feeling \- Edit](https://open.spotify.com/track/4vL7vh5gE925teuKC3d5wb) | [Black Eyed Peas](https://open.spotify.com/artist/1yxSLGMDHlW21z4YXirZDS) | [I Gotta Feeling](https://open.spotify.com/album/1WYWdMDRVwgW87AhuBusMY) | 4:05 | 2021-12-16 | 2022-01-01 | | [I Wanna Dance with Somebody \(Who Loves Me\)](https://open.spotify.com/track/2k1np6GRFvKjgjYfo2g39B) | [Whitney Houston](https://open.spotify.com/artist/6XpaIBNiVzIetEPCWDvAFP) | [Whitney](https://open.spotify.com/album/6JDG7sbzX4uwQFFo1CHDLi) | 4:51 | 2021-12-16 | | | [I Want You Back](https://open.spotify.com/track/5LN9F6c1Okx7Yrwd6GO8tu) | [The Jackson 5](https://open.spotify.com/artist/2iE18Oxc8YSumAU232n4rW) | [The Ultimate Collection: Jackson 5](https://open.spotify.com/album/2vC3VCReQ7aXIhuHmgc78u) | 2:59 | 2021-12-16 | 2022-01-09 | | [I'm a Believer \- 2006 Remaster](https://open.spotify.com/track/3G7tRC24Uh09Hmp1KZ7LQ2) | [The Monkees](https://open.spotify.com/artist/320EPCSEezHt1rtbfwH6Ck) | [More of The Monkees \(Deluxe Edition\)](https://open.spotify.com/album/50zHjIiTOZM232gnWvOydX) | 2:47 | 2021-12-16 | | | [I'm Coming Out](https://open.spotify.com/track/66eYZZfUgBCulq7tsiBbXB) | [Diana Ross](https://open.spotify.com/artist/3MdG05syQeRYPPcClLaUGl) | [Gold \- '80s Soul](https://open.spotify.com/album/30zetsnd9mENuaA47D6wcr) | 5:20 | 2021-12-16 | 2022-01-09 | | [I'm Coming 
Out](https://open.spotify.com/track/0ew27xRdxSexrWbODuLfeE) | [Diana Ross](https://open.spotify.com/artist/3MdG05syQeRYPPcClLaUGl) | [Diana](https://open.spotify.com/album/0Gy4phN6Xwx4uyskE7Wkls) | 5:25 | 2021-12-16 | | | [I'm Every Woman](https://open.spotify.com/track/1oFiPGBafH9Woo9AMwgBSl) | [Chaka Khan](https://open.spotify.com/artist/6mQfAAqZGBzIfrmlZCeaYT) | [Chaka](https://open.spotify.com/album/2lvaLIoEg3hwL2dybu6zTC) | 4:09 | 2021-12-16 | | | [I'm So Excited](https://open.spotify.com/track/2u8MGAiS2hBVE7GZzTZLQI) | [The Pointer Sisters](https://open.spotify.com/artist/2kreKea2n96dXjcyAU9j5N) | [Best Of](https://open.spotify.com/album/4ZcnXch8ZI9zlizDVTea1X) | 3:49 | 2021-12-16 | 2022-01-09 | | [Into the Groove](https://open.spotify.com/track/2m0M7YqCy4lXfedh18qd8N) | [Madonna](https://open.spotify.com/artist/6tbjWDEIzxoDsBA1FuhfPW) | [Celebration \(double disc version\)](https://open.spotify.com/album/43lok9zd7BW5CoYkXZs7S0) | 4:45 | 2021-12-16 | | | [It's Raining Men \- Single Version](https://open.spotify.com/track/5kErKPSPv8EgdZ04R3el1K) | [The Weather Girls](https://open.spotify.com/artist/19xz1vcuKNjniGEftTOSSH) | [It's Raining Men](https://open.spotify.com/album/6KrLqH25xsKvJwyiSRis5A) | 3:32 | 2021-12-16 | | | [Jessie's Girl](https://open.spotify.com/track/1sgfjgLRCqSfndpofU8T2C) | [Rick Springfield](https://open.spotify.com/artist/6IFXsrXBpwbIqtOUOiAa3p) | [Playlist: The Very Best Of Rick Springfield](https://open.spotify.com/album/50XTekofXct0JnY8DmqXdk) | 3:14 | 2021-12-16 | 2022-01-04 | | [Just Dance](https://open.spotify.com/track/1dzQoRqT5ucxXVaAhTcT0J) | [Lady Gaga](https://open.spotify.com/artist/1HY2Jd0NmPuamShAr6KMms), [Colby O'Donis](https://open.spotify.com/artist/7fObcBw9VM3x7ntWKCYl0z) | [The Fame](https://open.spotify.com/album/1qwlxZTNLe1jq3b0iidlue) | 4:01 | 2021-12-16 | 2021-12-31 | | [Karma Chameleon](https://open.spotify.com/track/48O0GrGJWml3DzHhC5sJ7a) | [Culture Club](https://open.spotify.com/artist/6kz53iCdBSqhQCZ21CoLcc) | 
[At Worst...The Best Of Boy George And Culture Club](https://open.spotify.com/album/7gdwk8zdee8ghIq94Z9ck3) | 4:01 | 2021-12-16 | | | [Keep On Movin'](https://open.spotify.com/track/0mrU1w2UMIZnR2I6oguwGz) | [Five](https://open.spotify.com/artist/6rEzedK7cKWjeQWdAYvWVG), [Steve Mac](https://open.spotify.com/artist/4HQPu8xlD0YTKmUhCsty3a) | [Invincible](https://open.spotify.com/album/72qAXkZ8keSUHe55hhEVQG) | 3:17 | 2021-12-16 | | | [Kiss](https://open.spotify.com/track/7ttxAFobCmQKOJtyw2IKfJ) | [Prince](https://open.spotify.com/artist/5a2EaR3hamoenG9rDuVn8j) | [The Hits 2](https://open.spotify.com/album/2DjypBL7mpKmOCHL2MdrVx) | 3:45 | 2021-12-16 | 2022-01-02 | | [Kiss](https://open.spotify.com/track/62LJFaYihsdVrrkgUOJC05) | [Prince](https://open.spotify.com/artist/5a2EaR3hamoenG9rDuVn8j) | [Parade \- Music from the Motion Picture Under the Cherry Moon](https://open.spotify.com/album/54DjkEN3wdCQgfCTZ9WjdB) | 3:46 | 2021-12-16 | 2022-01-09 | | [Let's Dance \- 1999 Remaster](https://open.spotify.com/track/0F0MA0ns8oXwGw66B2BSXm) | [David Bowie](https://open.spotify.com/artist/0oSGxfWSnnOXhD2fKuz2Gy) | [Let's Dance \(1999 Remaster\)](https://open.spotify.com/album/37KYBt1Lzn4eJ4KoCFZcnR) | 7:37 | 2021-12-16 | | | [Let's Dance \- 2002 Remaster](https://open.spotify.com/track/3NWUDziFW8uFfcYNXmrRNH) | [David Bowie](https://open.spotify.com/artist/0oSGxfWSnnOXhD2fKuz2Gy) | [Best of Bowie](https://open.spotify.com/album/1jdQFC3s8PZUc5i7vovZTv) | 4:05 | 2021-12-16 | | | [Livin' On A Prayer](https://open.spotify.com/track/2tyW1uBUnYMKAFEfKDKi9B) | [Bon Jovi](https://open.spotify.com/artist/58lV9VcRSjABbAbfWS6skp) | [Slippery When Wet](https://open.spotify.com/album/3gORsZp3xSbkN1ymRNonp1) | 4:10 | 2021-12-16 | 2022-01-09 | | [Long Train Runnin' \- 2006 Remaster](https://open.spotify.com/track/27YtiJUAxOWlc4BossxNea) | [The Doobie Brothers](https://open.spotify.com/artist/39T6qqI0jDtSWWioX8eGJz) | [Best of The Doobies](https://open.spotify.com/album/32xyhzHlGGsDvs1E7qihRA) 
| 3:26 | 2021-12-16 | 2022-01-06 | | [Love Train](https://open.spotify.com/track/3rCNLeHuSDsEPJshmo5OuR) | [The O'Jays](https://open.spotify.com/artist/38h03gA85YYPeDPd9ER9rT) | [Dead Presidents Vol\. 1/Music From The Motion Picture](https://open.spotify.com/album/09yaGU6AaYLHAONWqhWmMk) | 2:58 | 2021-12-16 | | | [More Than A Woman \(2007 Remastered Saturday Night Fever LP Version\)](https://open.spotify.com/track/0vDeTWoAooavG2zcCKHmI4) | [Bee Gees](https://open.spotify.com/artist/1LZEQNv7sE11VDY3SdxQeN) | [Saturday Night Fever \[The Original Movie Soundtrack\]](https://open.spotify.com/album/0taUwU7qjtc9lvwmd7FKac) | 3:17 | 2021-12-16 | 2021-12-28 | | [Moves Like Jagger \- Studio Recording From "The Voice" Performance](https://open.spotify.com/track/7pYfyrMNPn3wtoCyqcTVoI) | [Maroon 5](https://open.spotify.com/artist/04gDigrS5kc9YWfZHwBETP), [Christina Aguilera](https://open.spotify.com/artist/1l7ZsJRRS8wlW3WfJfPfNS) | [Hands All Over \(Revised International Standard version\)](https://open.spotify.com/album/1snrPQMoTrBsKl73wzSxbn) | 3:21 | 2021-12-16 | 2021-12-27 | | [Mr\. 
Brightside](https://open.spotify.com/track/3n3Ppam7vgaVa1iaRUc9Lp) | [The Killers](https://open.spotify.com/artist/0C0XlULifJtAgn6ZNCW2eu) | [Hot Fuss](https://open.spotify.com/album/4OHNH3sDzIxnmUADXzv2kT) | 3:42 | 2021-12-16 | | | [Murder On The Dancefloor](https://open.spotify.com/track/2Za2mUwmQoSxWPscaY2vxl) | [Sophie Ellis\-Bextor](https://open.spotify.com/artist/2cBh5lVMg222FFuRU7EfDE) | [Read My Lips](https://open.spotify.com/album/0Mf0uNttnZvaQOKiECOBSn) | 3:50 | 2021-12-16 | | | [My Girl](https://open.spotify.com/track/6RrXd9Hph4hYR4bf3dbM6H) | [The Temptations](https://open.spotify.com/artist/3RwQ26hR2tJtA8F9p2n7jG) | [The Temptations Sing Smokey](https://open.spotify.com/album/5H2rffw6MNKGWVKBNWNA4S) | 2:45 | 2021-12-16 | | | [Oh, Pretty Woman](https://open.spotify.com/track/52HAHV1j93s5B8GoTNI7DJ) | [Roy Orbison](https://open.spotify.com/artist/0JDkhL4rjiPNEp92jAgJnS) | [The Essential Roy Orbison](https://open.spotify.com/album/48CvRZSBT0FbOHKLFfHy0n) | 2:56 | 2021-12-16 | 2022-01-09 | | [Play That Funky Music](https://open.spotify.com/track/5uuJruktM9fMdN9Va0DUMl) | [Wild Cherry](https://open.spotify.com/artist/4apX9tIeHb85yPyy4F6FJG) | [Wild Cherry](https://open.spotify.com/album/27ompw8zlrCkWMacS21ysX) | 5:00 | 2021-12-16 | | | [Push The Button](https://open.spotify.com/track/4KktZd9BGHZjW3sK03O4zo) | [Sugababes](https://open.spotify.com/artist/7rZNSLWMjTbwdLNskFbzFf) | [Overloaded](https://open.spotify.com/album/1s9BRhmwSGLc71q1j5JY5e) | 3:38 | 2021-12-16 | | | [Raspberry Beret](https://open.spotify.com/track/5jSz894ljfWE0IcHBSM39i) | [Prince](https://open.spotify.com/artist/5a2EaR3hamoenG9rDuVn8j) | [Around the World in a Day](https://open.spotify.com/album/5FbrTPPlaNSOsChhKUZxcu) | 3:35 | 2021-12-16 | | | [Rehab](https://open.spotify.com/track/4osg3vT6sXv6wNxm9Z6ucQ) | [Amy Winehouse](https://open.spotify.com/artist/6Q192DXotxtaysaqNPy5yR) | [Back To Black](https://open.spotify.com/album/6GJCGWfI95aeRsdtVB52vc) | 3:33 | 2021-12-16 | | | 
| [Respect](https://open.spotify.com/track/3XIobZuLDpWea6Ig36ItRw) | [Aretha Franklin](https://open.spotify.com/artist/7nwUJBm0HE4ZxD3f5cy5ok) | [The Very Best Of Aretha Franklin \- The 60's](https://open.spotify.com/album/1R11sCrAEB6FkvuWBia8cT) | 2:25 | 2021-12-16 | |
| [Respect \- 2003 Remaster](https://open.spotify.com/track/5AoTuHE5P5bvC7BBppYnja) | [Aretha Franklin](https://open.spotify.com/artist/7nwUJBm0HE4ZxD3f5cy5ok) | [Atlantic 60th: Soul, Sweat And Strut](https://open.spotify.com/album/1LBWNRMsbEWb17KmDD4jfD) | 2:22 | 2021-12-16 | 2022-01-08 |
| [River Deep \- Mountain High](https://open.spotify.com/track/19jo0UT2vqD4pNVfIqTy4R) | [Ike & Tina Turner](https://open.spotify.com/artist/1ZikppG9dPedbIgMfnfx8k) | [Tina!](https://open.spotify.com/album/6FkWiSUX7YAdxOlHPrIzMj) | 4:04 | 2021-12-16 | |
| [Rolling in the Deep](https://open.spotify.com/track/1eq1wUnLVLg4pdEfx9kajC) | [Adele](https://open.spotify.com/artist/4dpARuHxo51G3z768sgnrY) | [21](https://open.spotify.com/album/7n3QJc7TBOxXtlYh4Ssll8) | 3:48 | 2021-12-16 | |
| [September](https://open.spotify.com/track/6IWkdrBY2O0vfQdfEKGUdF) | [Earth, Wind & Fire](https://open.spotify.com/artist/4QQgXkCYTt3BlENzhyNETg) | [The Best Of Earth, Wind & Fire Vol.1](https://open.spotify.com/album/3IdGf5JgMYUrO9gBaD0Oqw) | 3:35 | 2021-12-16 | 2022-01-04 |
| [September](https://open.spotify.com/track/06rPFgQsKS607RhYIqCmGq) | [Earth, Wind & Fire](https://open.spotify.com/artist/4QQgXkCYTt3BlENzhyNETg) | [Now, Then & Forever](https://open.spotify.com/album/1mzir7lA3b0Zsig6wYCQKN) | 3:36 | 2021-12-16 | 2022-01-09 |
| [Shout](https://open.spotify.com/track/317IZhdCGvInrl3vcmxOlq) | [Lulu](https://open.spotify.com/artist/0jYKX08u1XxmHrl5TdM2QZ) | [The EP Collection](https://open.spotify.com/album/1bCXn9wG3R40gzR5hwRWmP) | 2:52 | 2021-12-16 | |
| [Shut Up and Dance](https://open.spotify.com/track/0kzw2tRyuL9rzipi5ntlIy) | [WALK THE MOON](https://open.spotify.com/artist/6DIS6PRrLS3wbnZsf7vYic) | [TALKING IS HARD \(Expanded Edition\)](https://open.spotify.com/album/2bVVESepVYULITlO6mtmoy) | 3:19 | 2021-12-16 | |
| [Sir Duke](https://open.spotify.com/track/3dTNNizDOulTTV5uwtEGct) | [Stevie Wonder](https://open.spotify.com/artist/7guDJrEfX3qb6FEbdPA5qi) | [Original Musiquarium](https://open.spotify.com/album/6QnqUBcQocB0U3nl8eBVjm) | 3:51 | 2021-12-16 | |
| [Sir Duke](https://open.spotify.com/track/2udw7RDkldLFIPG9WYdVtT) | [Stevie Wonder](https://open.spotify.com/artist/7guDJrEfX3qb6FEbdPA5qi) | [Songs In The Key Of Life \(Reissue\)](https://open.spotify.com/album/0BBWJ3L9fhkmNLdt4zs4fu) | 3:52 | 2021-12-16 | 2022-01-03 |
| [Sound Of The Underground](https://open.spotify.com/track/0SKjqIViHaXWhmaKuJbMrq) | [Girls Aloud](https://open.spotify.com/artist/12EtLdLfJ41vUOoVzPZIUy) | [Sound Of The Underground](https://open.spotify.com/album/5lruCC2nlwy21JwWLpjrrS) | 3:41 | 2021-12-16 | |
| [Stayin' Alive \- 2007 Remastered Version Saturday Night Fever](https://open.spotify.com/track/5cP52DlDN9yryuZVQDg3iq) | [Bee Gees](https://open.spotify.com/artist/1LZEQNv7sE11VDY3SdxQeN) | [Saturday Night Fever \[The Original Movie Soundtrack\]](https://open.spotify.com/album/0taUwU7qjtc9lvwmd7FKac) | 4:45 | 2021-12-16 | 2022-01-05 |
| [Superstition](https://open.spotify.com/track/4dwrL3Z5U2RZ6MZiKE2PgL) | [Stevie Wonder](https://open.spotify.com/artist/7guDJrEfX3qb6FEbdPA5qi) | [Number 1's](https://open.spotify.com/album/5x7vXXWapy8cUmdSuwpUy1) | 4:04 | 2021-12-16 | 2022-01-02 |
| [Superstition \- Single Version](https://open.spotify.com/track/300RfAPZ57B0y6YYj9n6DN) | [Stevie Wonder](https://open.spotify.com/artist/7guDJrEfX3qb6FEbdPA5qi) | [Number Ones](https://open.spotify.com/album/4Gnhm7AGwlXf0UxC2yxJtz) | 4:04 | 2021-12-16 | 2022-01-09 |
| [Suspicious Minds](https://open.spotify.com/track/1OtWwtGFPXVhdAVKZHwrNF) | [Elvis Presley](https://open.spotify.com/artist/43ZHCT0cAZBISjO8DG9PnE) | [Back In Memphis](https://open.spotify.com/album/38lhaWsw8PImY1pIIlKyDJ) | 4:23 | 2021-12-16 | |
| [Tainted Love](https://open.spotify.com/track/0cGG2EouYCEEC3xfa0tDFV) | [Soft Cell](https://open.spotify.com/artist/6aq8T2RcspxVOGgMrTzjWc) | [Non\-Stop Erotic Cabaret](https://open.spotify.com/album/3KFWViJ1wIHAdOVLFTVzjD) | 2:33 | 2021-12-16 | 2022-01-07 |
| [Take A Chance On Me](https://open.spotify.com/track/4bykJp7dORR4GoLCZiQbU0) | [ABBA](https://open.spotify.com/artist/0LcJLqbBmaGUft1e9Mm8HV) | [The Album](https://open.spotify.com/album/5rHIYv9tgmZjvvMOMhho2x) | 4:03 | 2021-12-16 | |
| [Teenage Dirtbag](https://open.spotify.com/track/25FTMokYEbEWHEdss5JLZS) | [Wheatus](https://open.spotify.com/artist/4mYFgEjpQT4IKOrjOUKyXu) | [Wheatus](https://open.spotify.com/album/3xmKWmqJFoXS22tePO3mgd) | 4:01 | 2021-12-16 | 2022-01-08 |
| [The Best \- Edit](https://open.spotify.com/track/2W2ieVidLIx9TDvxu0ZT6F) | [Tina Turner](https://open.spotify.com/artist/1zuJe6b1roixEKMOtyrEak) | [Simply the Best](https://open.spotify.com/album/1ZFC0iOKUp4M16eHXVaeG4) | 4:13 | 2021-12-16 | |
| [The Locomotion](https://open.spotify.com/track/53UXqZaarqFPAFEy7dT1m1) | [Little Eva](https://open.spotify.com/artist/4S76LQXJD6N2uPcLhKejG8) | [Playlist: The Best Of Little Eva](https://open.spotify.com/album/4hiDQjQltfdWHqAdiQqe7n) | 2:19 | 2021-12-16 | |
| [The Lovecats](https://open.spotify.com/track/2gOaGuy7ZlfVDSnTfPkxpH) | [The Cure](https://open.spotify.com/artist/7bu3H8JO7d0UbMoVzbo70s) | [Greatest Hits](https://open.spotify.com/album/6NII6OnGzd07nDw5o7Secq) | 3:40 | 2021-12-16 | 2022-01-09 |
| [Think](https://open.spotify.com/track/4yQw7FR9lcvL6RHtegbJBh) | [Aretha Franklin](https://open.spotify.com/artist/7nwUJBm0HE4ZxD3f5cy5ok) | [Aretha Now](https://open.spotify.com/album/55HZ2ectg1mMTEKDqIq3kC) | 2:19 | 2021-12-16 | |
| [Toxic](https://open.spotify.com/track/6I9VzXrHxO9rA9A5euc8Ak) | [Britney Spears](https://open.spotify.com/artist/26dSoYclwsYLMAKD3tpOr4) | [In The Zone](https://open.spotify.com/album/0z7pVBGOD7HCIB7S8eLkLI) | 3:18 | 2021-12-16 | 2021-12-24 |
| [Twist And Shout \- Remastered](https://open.spotify.com/track/4Z1fbYp0HuxLBje4MOZcSD) | [The Beatles](https://open.spotify.com/artist/3WrFJ7ztbogyGnTHbHJFl2) | [Please Please Me \(Remastered\)](https://open.spotify.com/album/7gDXyW16byCQOgK965BRzn) | 2:35 | 2021-12-16 | |
| [Under Pressure \- Remastered 2011](https://open.spotify.com/track/2fuCquhmrzHpu5xcA1ci9x) | [Queen](https://open.spotify.com/artist/1dfeR4HaWDbWqFHLkxsg1d), [David Bowie](https://open.spotify.com/artist/0oSGxfWSnnOXhD2fKuz2Gy) | [Hot Space \(2011 Remaster\)](https://open.spotify.com/album/6reTSIf5MoBco62rk8T7Q1) | 4:08 | 2021-12-16 | |
| [Uptown Girl](https://open.spotify.com/track/3CSpzkoL1XgDBZ1q9aDCUV) | [Billy Joel](https://open.spotify.com/artist/6zFYqv1mOsgBRQbae3JJ9e) | [The Essential Billy Joel](https://open.spotify.com/album/7r36rel1M4gyBavfcJP6Yz) | 3:14 | 2021-12-16 | |
| [Valerie \(feat\. Amy Winehouse\) \- Version Revisited](https://open.spotify.com/track/1UxisKljksGRNwJBPp4Tri) | [Mark Ronson](https://open.spotify.com/artist/3hv9jJF3adDNsBSIQDqcjp), [Amy Winehouse](https://open.spotify.com/artist/6Q192DXotxtaysaqNPy5yR) | [Version Digital Edition](https://open.spotify.com/album/1nojrwBYMmq5jY1gJYtywa) | 3:39 | 2021-12-16 | |
| [Wake Me Up Before You Go\-Go](https://open.spotify.com/track/6ZEAXknmx2mrO3KgcDNpFI) | [Wham!](https://open.spotify.com/artist/5lpH0xAS4fVfLkACg9DAuM) | [80s 100 Hits](https://open.spotify.com/album/0pvhletDH7CphbKErUtPCF) | 3:43 | 2021-12-16 | |
| [Wake Me Up Before You Go\-Go](https://open.spotify.com/track/5XsMz0YfEaHZE0MTb1aujs) | [Wham!](https://open.spotify.com/artist/5lpH0xAS4fVfLkACg9DAuM) | [Make It Big](https://open.spotify.com/album/0CpBTGH3Eewlbw35IclPdm) | 3:50 | 2021-12-16 | 2022-01-09 |
| [Walk Of Life](https://open.spotify.com/track/4tyl9OMKMG8F2L0RUYQMH3) | [Dire Straits](https://open.spotify.com/artist/0WwSkZ7LtFUFjGjMZBMt6T) | [Brothers In Arms \(Remastered\)](https://open.spotify.com/album/1NF8WUbdC632SIwixiWrLh) | 4:09 | 2021-12-16 | |
| [Walking On Sunshine](https://open.spotify.com/track/05wIrZSwuaVWhcv5FfqeH0) | [Katrina & The Waves](https://open.spotify.com/artist/2TzHIUhVpeeDxyJPpQfnV3) | [Katrina & The Waves](https://open.spotify.com/album/1UQG78YJjaBySRMh0A8Uw7) | 3:58 | 2021-12-16 | |
| [Wannabe](https://open.spotify.com/track/1Je1IMUlBXcx1Fz0WE7oPT) | [Spice Girls](https://open.spotify.com/artist/0uq5PttqEjj3IH1bzwcrXF) | [Spice](https://open.spotify.com/album/3x2jF7blR6bFHtk4MccsyJ) | 2:53 | 2021-12-16 | |
| [We Are Family](https://open.spotify.com/track/5IKLwqBQG6KU6MP2zP80Nu) | [Sister Sledge](https://open.spotify.com/artist/6gkWznnJkdkwRPVcmnrays) | [We Are Family](https://open.spotify.com/album/4GSidaoqyGNwaG5mNKmuLT) | 3:36 | 2021-12-16 | |
| [What Makes You Beautiful](https://open.spotify.com/track/1QxlG2lGedOmKgSMEDcR8D) | [One Direction](https://open.spotify.com/artist/4AK6F7OLvEQ5QYCBNiQWHq) | [What Makes You Beautiful](https://open.spotify.com/album/3GkCmaJqWkEvlUU3gOqdSe) | 3:18 | 2021-12-16 | 2022-01-01 |
| [When Love Takes Over \(feat\. Kelly Rowland\)](https://open.spotify.com/track/1hRFVIy9As8OVRk8B7CrD5) | [David Guetta](https://open.spotify.com/artist/1Cs0zKBU1kc0i8ypK3B9ai) | [One More Love](https://open.spotify.com/album/5DJc5qCdB5pPrDO97LXjeW) | 3:11 | 2021-12-16 | 2022-01-01 |
| [Wouldn't It Be Nice \- 2000 Remaster](https://open.spotify.com/track/2Gy7qnDwt8Z3MNxqat4CsK) | [The Beach Boys](https://open.spotify.com/artist/3oDbviiivRWhXwIE8hxkVV) | [Pet Sounds \(Original Mono & Stereo Mix Versions\)](https://open.spotify.com/album/6GphKx2QAPRoVGWE9D7ou8) | 2:33 | 2021-12-16 | |
| [Y.M.C.A.](https://open.spotify.com/track/4YOJFyjqh8eAcbKFfv88mV) | [Village People](https://open.spotify.com/artist/0dCKce6tJJdHvlWnDMwzPW) | [Cruisin'](https://open.spotify.com/album/3kdp1PnxkKlshMP3qG2CUG) | 4:46 | 2021-12-16 | 2021-12-27 |
| [YMCA \- Original Version 1978](https://open.spotify.com/track/54OR1VDpfkBuOY5zZjhZAY) | [Village People](https://open.spotify.com/artist/0dCKce6tJJdHvlWnDMwzPW) | [YMCA](https://open.spotify.com/album/3I3YSq7ArvlwK3l49pq4oE) | 4:46 | 2021-12-16 | |
| [You Can Call Me Al](https://open.spotify.com/track/0qxYx4F3vm1AOnfux6dDxP) | [Paul Simon](https://open.spotify.com/artist/2CvCyf1gEVhI0mX6aFXmVI) | [Graceland \(25th Anniversary Deluxe Edition\)](https://open.spotify.com/album/6WgGWYw6XXQyLTsWt7tXky) | 4:40 | 2021-12-16 | |
| [You Can't Hurry Love](https://open.spotify.com/track/7bDxiHrazDoD3sOt9JjSDD) | [The Supremes](https://open.spotify.com/artist/57bUPid8xztkieZfS7OlEV) | [Gold](https://open.spotify.com/album/1UzB92nXoERL1yoTjiJ9SU) | 2:45 | 2021-12-16 | 2022-01-06 |
| [You Can't Hurry Love](https://open.spotify.com/track/69Qa7czzqraPWZgxpQN405) | [The Supremes](https://open.spotify.com/artist/57bUPid8xztkieZfS7OlEV) | [Favorites](https://open.spotify.com/album/5u4oZQY6eYzH7ZpydPoUN3) | 2:45 | 2021-12-16 | |
| [You Make My Dreams \(Come True\)](https://open.spotify.com/track/1ZBXN4gljtuHvPLqsEQ1VM) | [Daryl Hall & John Oates](https://open.spotify.com/artist/77tT1kLj6mCWtFNqiOmP9H) | [Hall & Oates](https://open.spotify.com/album/6Qk6OnOuvXwdDHLO6koIHQ) | 3:03 | 2021-12-16 | |
| [You Sexy Thing](https://open.spotify.com/track/714hERk9U1W8FMYkoC83CO) | [Hot Chocolate](https://open.spotify.com/artist/72VzFto8DYvKHocaHYNWSi) | [Hot Chocolate](https://open.spotify.com/album/10oMdAuUD0Tcc4BowCWUni) | 4:04 | 2021-12-16 | |
| [You Shook Me All Night Long](https://open.spotify.com/track/6yl8Es1tCYD9WdSkeVLFw4) | [AC/DC](https://open.spotify.com/artist/711MCceyCBcFnzjGY4Q7Un) | [Who Made Who](https://open.spotify.com/album/07EFoHHspqSwsmkbnWaB4A) | 3:30 | 2021-12-16 | 2022-01-08 |
| [Young Hearts Run Free](https://open.spotify.com/track/3MFa9idQuY4iJLWsZl3tIQ) | [Candi Staton](https://open.spotify.com/artist/3S34Unhn5yRcaH5K9aU5Et) | [Young Hearts Run Free \(US Internet Release\)](https://open.spotify.com/album/39ntuIhbcC8rsmRV2qXkmZ) | 4:08 | 2021-12-16 | |

\*This playlist was first scraped on 2021-12-21. Prior content cannot be recovered.
---
title: "SoftLayer_User_Customer_OpenIdConnect"
description: ""
date: "2018-02-12"
layout: "service"
tags:
    - "service"
    - "sldn"
    - "User"
classes:
    - "SoftLayer_User_Customer_OpenIdConnect"
---

# SoftLayer_User_Customer_OpenIdConnect

<div id='service-datatype'>
<ul id='sldn-reference-tabs'>
    <li id='service'> <a href='/reference/services/SoftLayer_User_Customer_OpenIdConnect' >Service</a></li>
    <li id='datatype'> <a href='/reference/datatypes/SoftLayer_User_Customer_OpenIdConnect' >Datatype</a></li>
</ul>
</div>

## Description

<div id="properties" class="content service-content">

## Methods

<div class="view-filters">
    <div class="clearfix">
        <div class="search-input-box">
            <input placeholder="Method Filter" onkeyup="titleSearch(inputId='edit-combine', divId='method-div', elementClass='method-row')" type="text" id="edit-combine" value="" size="30" maxlength="128" class="form-text">
        </div>
    </div>
</div>

<div id="method-div">

<div class="method-row">

#### [acknowledgeSupportPolicy](/reference/services/SoftLayer_User_Customer_OpenIdConnect/acknowledgeSupportPolicy)

</div>

<div class="method-row">

#### [activateOpenIdConnectUser](/reference/services/SoftLayer_User_Customer_OpenIdConnect/activateOpenIdConnectUser)

Completes invitation process for an OIDC user initiated by the
</div>

<div class="method-row">

#### [addApiAuthenticationKey](/reference/services/SoftLayer_User_Customer_OpenIdConnect/addApiAuthenticationKey)

Create a user's API authentication key.
</div>

<div class="method-row">

#### [addBulkDedicatedHostAccess](/reference/services/SoftLayer_User_Customer_OpenIdConnect/addBulkDedicatedHostAccess)

Grant access to the user for one or more dedicated host devices.
</div>

<div class="method-row">

#### [addBulkHardwareAccess](/reference/services/SoftLayer_User_Customer_OpenIdConnect/addBulkHardwareAccess)

Add multiple hardware to a portal user's hardware access list.
</div>

<div class="method-row">

#### [addBulkPortalPermission](/reference/services/SoftLayer_User_Customer_OpenIdConnect/addBulkPortalPermission)

Add multiple permissions to a portal user's permission set.
</div>

<div class="method-row">

#### [addBulkRoles](/reference/services/SoftLayer_User_Customer_OpenIdConnect/addBulkRoles)

</div>

<div class="method-row">

#### [addBulkVirtualGuestAccess](/reference/services/SoftLayer_User_Customer_OpenIdConnect/addBulkVirtualGuestAccess)

Add multiple CloudLayer Computing Instances to a portal user's access list.
</div>

<div class="method-row">

#### [addDedicatedHostAccess](/reference/services/SoftLayer_User_Customer_OpenIdConnect/addDedicatedHostAccess)

Grant access to the user for a single dedicated host device.
</div>

<div class="method-row">

#### [addExternalBinding](/reference/services/SoftLayer_User_Customer_OpenIdConnect/addExternalBinding)

</div>

<div class="method-row">

#### [addHardwareAccess](/reference/services/SoftLayer_User_Customer_OpenIdConnect/addHardwareAccess)

Add hardware to a portal user's hardware access list.
</div>

<div class="method-row">

#### [addNotificationSubscriber](/reference/services/SoftLayer_User_Customer_OpenIdConnect/addNotificationSubscriber)

Create a notification subscription record for the user.
</div>

<div class="method-row">

#### [addPortalPermission](/reference/services/SoftLayer_User_Customer_OpenIdConnect/addPortalPermission)

Add a permission to a portal user's permission set.
</div>

<div class="method-row">

#### [addRole](/reference/services/SoftLayer_User_Customer_OpenIdConnect/addRole)

</div>

<div class="method-row">

#### [addVirtualGuestAccess](/reference/services/SoftLayer_User_Customer_OpenIdConnect/addVirtualGuestAccess)

Add a CloudLayer Computing Instance to a portal user's access list.
</div>

<div class="method-row">

#### [assignNewParentId](/reference/services/SoftLayer_User_Customer_OpenIdConnect/assignNewParentId)

Assign a different parent to this user.
</div>

<div class="method-row">

#### [changePreference](/reference/services/SoftLayer_User_Customer_OpenIdConnect/changePreference)

Change preference values for the current user
</div>

<div class="method-row">

#### [checkExternalAuthenticationStatus](/reference/services/SoftLayer_User_Customer_OpenIdConnect/checkExternalAuthenticationStatus)

Checks if an external authentication is complete or not
</div>

<div class="method-row">

#### [checkPhoneFactorAuthenticationForPasswordSet](/reference/services/SoftLayer_User_Customer_OpenIdConnect/checkPhoneFactorAuthenticationForPasswordSet)

Check the status of an outstanding Phone Factor Authentication for Password Set
</div>

<div class="method-row">

#### [completeInvitationAfterLogin](/reference/services/SoftLayer_User_Customer_OpenIdConnect/completeInvitationAfterLogin)

Completes invitation processing after logging on an existing OpenIdConnect user identity and returns an access token
</div>

<div class="method-row">

#### [createNotificationSubscriber](/reference/services/SoftLayer_User_Customer_OpenIdConnect/createNotificationSubscriber)

Create a new subscriber for a given resource.
</div>

<div class="method-row">

#### [createObject](/reference/services/SoftLayer_User_Customer_OpenIdConnect/createObject)

Create a new user record.
</div>

<div class="method-row">

#### [createOpenIdConnectUserAndCompleteInvitation](/reference/services/SoftLayer_User_Customer_OpenIdConnect/createOpenIdConnectUserAndCompleteInvitation)

Completes invitation processing when a new OpenIdConnect user must be created.
</div>

<div class="method-row">

#### [createSubscriberDeliveryMethods](/reference/services/SoftLayer_User_Customer_OpenIdConnect/createSubscriberDeliveryMethods)

Create delivery methods for the subscriber.
</div>

<div class="method-row">

#### [deactivateNotificationSubscriber](/reference/services/SoftLayer_User_Customer_OpenIdConnect/deactivateNotificationSubscriber)

Delete a subscriber for a given resource.
</div>

<div class="method-row">

#### [declineInvitation](/reference/services/SoftLayer_User_Customer_OpenIdConnect/declineInvitation)

Sets a customer invitation as declined.
</div>

<div class="method-row">

#### [editObject](/reference/services/SoftLayer_User_Customer_OpenIdConnect/editObject)

Update a user's information.
</div>

<div class="method-row">

#### [editObjects](/reference/services/SoftLayer_User_Customer_OpenIdConnect/editObjects)

Update a collection of users' information
</div>

<div class="method-row">

#### [findUserPreference](/reference/services/SoftLayer_User_Customer_OpenIdConnect/findUserPreference)

</div>

<div class="method-row">

#### [getAccount](/reference/services/SoftLayer_User_Customer_OpenIdConnect/getAccount)

Retrieve the customer account that a user belongs to.
</div>

<div class="method-row">

#### [getActions](/reference/services/SoftLayer_User_Customer_OpenIdConnect/getActions)

</div>

<div class="method-row">

#### [getActiveExternalAuthenticationVendors](/reference/services/SoftLayer_User_Customer_OpenIdConnect/getActiveExternalAuthenticationVendors)

Get a list of active external authentication vendors for a SoftLayer user.
</div>

<div class="method-row">

#### [getAdditionalEmails](/reference/services/SoftLayer_User_Customer_OpenIdConnect/getAdditionalEmails)

Retrieve a portal user's additional email addresses. These email addresses are contacted when updates are made to support tickets.
</div>

<div class="method-row">

#### [getAgentImpersonationToken](/reference/services/SoftLayer_User_Customer_OpenIdConnect/getAgentImpersonationToken)

</div>

<div class="method-row">

#### [getAllowedDedicatedHostIds](/reference/services/SoftLayer_User_Customer_OpenIdConnect/getAllowedDedicatedHostIds)

</div>

<div class="method-row">

#### [getAllowedHardwareIds](/reference/services/SoftLayer_User_Customer_OpenIdConnect/getAllowedHardwareIds)

</div>

<div class="method-row">

#### [getAllowedVirtualGuestIds](/reference/services/SoftLayer_User_Customer_OpenIdConnect/getAllowedVirtualGuestIds)

</div>

<div class="method-row">

#### [getApiAuthenticationKeys](/reference/services/SoftLayer_User_Customer_OpenIdConnect/getApiAuthenticationKeys)

Retrieve a portal user's API Authentication keys. There is a max limit of one API key per user.
</div>

<div class="method-row">

#### [getAuthenticationToken](/reference/services/SoftLayer_User_Customer_OpenIdConnect/getAuthenticationToken)

</div>

<div class="method-row">

#### [getChildUsers](/reference/services/SoftLayer_User_Customer_OpenIdConnect/getChildUsers)

Retrieve a portal user's child users. Some portal users may not have child users.
</div>

<div class="method-row">

#### [getClosedTickets](/reference/services/SoftLayer_User_Customer_OpenIdConnect/getClosedTickets)

Retrieve a user's associated closed tickets.
</div>

<div class="method-row">

#### [getDedicatedHosts](/reference/services/SoftLayer_User_Customer_OpenIdConnect/getDedicatedHosts)

Retrieve the dedicated hosts to which the user has been granted access.
</div>

<div class="method-row">

#### [getDefaultAccount](/reference/services/SoftLayer_User_Customer_OpenIdConnect/getDefaultAccount)

Retrieve the default account for the OpenIdConnect identity that is linked to the current active SoftLayer user identity.
</div>

<div class="method-row">

#### [getExternalBindings](/reference/services/SoftLayer_User_Customer_OpenIdConnect/getExternalBindings)

Retrieve the external authentication bindings that link an external identifier to a SoftLayer user.
</div>

<div class="method-row">

#### [getHardware](/reference/services/SoftLayer_User_Customer_OpenIdConnect/getHardware)

Retrieve a portal user's accessible hardware. These permissions control which hardware a user has access to in the SoftLayer customer portal.
</div>

<div class="method-row">

#### [getHardwareCount](/reference/services/SoftLayer_User_Customer_OpenIdConnect/getHardwareCount)

Retrieve the current number of servers a portal user has access to.
</div>

<div class="method-row">

#### [getHardwareNotifications](/reference/services/SoftLayer_User_Customer_OpenIdConnect/getHardwareNotifications)

Retrieve hardware notifications associated with this user. A hardware notification links a user to a piece of hardware, and that user will be notified if any monitors on that hardware fail, if the monitors have a status of 'Notify User'.
</div>

<div class="method-row">

#### [getHasAcknowledgedSupportPolicyFlag](/reference/services/SoftLayer_User_Customer_OpenIdConnect/getHasAcknowledgedSupportPolicyFlag)

Retrieve whether or not a user has acknowledged the support policy.
</div>

<div class="method-row">

#### [getHasFullDedicatedHostAccessFlag](/reference/services/SoftLayer_User_Customer_OpenIdConnect/getHasFullDedicatedHostAccessFlag)

Retrieve permission granting the user access to all Dedicated Host devices on the account.
</div>

<div class="method-row">

#### [getHasFullHardwareAccessFlag](/reference/services/SoftLayer_User_Customer_OpenIdConnect/getHasFullHardwareAccessFlag)

Retrieve whether or not a portal user has access to all hardware on their account.
</div>

<div class="method-row">

#### [getHasFullVirtualGuestAccessFlag](/reference/services/SoftLayer_User_Customer_OpenIdConnect/getHasFullVirtualGuestAccessFlag)

Retrieve whether or not a portal user has access to all virtual guests on their account.
</div>

<div class="method-row">

#### [getIbmIdLink](/reference/services/SoftLayer_User_Customer_OpenIdConnect/getIbmIdLink)

Retrieve the link relating the Customer instance to an IBMid. A Customer instance may or may not have an IBMid link.
</div>

<div class="method-row">

#### [getImpersonationToken](/reference/services/SoftLayer_User_Customer_OpenIdConnect/getImpersonationToken)

</div>

<div class="method-row">

#### [getLayoutProfiles](/reference/services/SoftLayer_User_Customer_OpenIdConnect/getLayoutProfiles)

Retrieve the definition of the layout profile.
</div>

<div class="method-row">

#### [getLocale](/reference/services/SoftLayer_User_Customer_OpenIdConnect/getLocale)

Retrieve a user's locale. Locale holds the user's language and region information.
</div>

<div class="method-row">

#### [getLoginAccountInfoOpenIdConnect](/reference/services/SoftLayer_User_Customer_OpenIdConnect/getLoginAccountInfoOpenIdConnect)

Get account for an active user logging into the SoftLayer customer portal
</div>

<div class="method-row">

#### [getLoginAttempts](/reference/services/SoftLayer_User_Customer_OpenIdConnect/getLoginAttempts)

Retrieve a user's attempts to log into the SoftLayer customer portal.
</div>

<div class="method-row">

#### [getLoginToken](/reference/services/SoftLayer_User_Customer_OpenIdConnect/getLoginToken)

Authenticate a user for the SoftLayer customer portal
</div>

<div class="method-row">

#### [getMappedAccounts](/reference/services/SoftLayer_User_Customer_OpenIdConnect/getMappedAccounts)

Retrieve a list of all active accounts that belong to this customer.
</div>

<div class="method-row">

#### [getMobileDevices](/reference/services/SoftLayer_User_Customer_OpenIdConnect/getMobileDevices)

Retrieve a portal user's associated mobile device profiles.
</div>

<div class="method-row">

#### [getNotificationSubscribers](/reference/services/SoftLayer_User_Customer_OpenIdConnect/getNotificationSubscribers)

Retrieve notification subscription records for the user.
</div>

<div class="method-row">

#### [getObject](/reference/services/SoftLayer_User_Customer_OpenIdConnect/getObject)

Retrieve a SoftLayer_User_Customer_OpenIdConnect record.
</div>

<div class="method-row">

#### [getOpenIdConnectMigrationState](/reference/services/SoftLayer_User_Customer_OpenIdConnect/getOpenIdConnectMigrationState)

Get the OpenId migration state
</div>

<div class="method-row">

#### [getOpenIdRegistrationInfoFromCode](/reference/services/SoftLayer_User_Customer_OpenIdConnect/getOpenIdRegistrationInfoFromCode)

Get OpenId User Registration details from the provided email code
</div>

<div class="method-row">

#### [getOpenTickets](/reference/services/SoftLayer_User_Customer_OpenIdConnect/getOpenTickets)

Retrieve a user's associated open tickets.
</div>

<div class="method-row">

#### [getOverrides](/reference/services/SoftLayer_User_Customer_OpenIdConnect/getOverrides)

Retrieve a portal user's VPN accessible subnets.
</div>

<div class="method-row">

#### [getParent](/reference/services/SoftLayer_User_Customer_OpenIdConnect/getParent)

Retrieve a portal user's parent user. If a SoftLayer_User_Customer has a null parentId property then it doesn't have a parent user.
</div>

<div class="method-row">

#### [getPasswordRequirements](/reference/services/SoftLayer_User_Customer_OpenIdConnect/getPasswordRequirements)

</div>

<div class="method-row">

#### [getPermissions](/reference/services/SoftLayer_User_Customer_OpenIdConnect/getPermissions)

Retrieve a portal user's permissions.
These permissions control that user's access to functions within the SoftLayer customer portal and API.
</div>

<div class="method-row">

#### [getPortalLoginToken](/reference/services/SoftLayer_User_Customer_OpenIdConnect/getPortalLoginToken)

Authenticate a user for the SoftLayer customer portal
</div>

<div class="method-row deprecated">

#### [getPortalLoginTokenOpenIdConnect](/reference/services/SoftLayer_User_Customer_OpenIdConnect/getPortalLoginTokenOpenIdConnect)

Authenticate a user for the SoftLayer customer portal via an openIdConnect provider. <span class="deprecation-label">Deprecated </span>
</div>

<div class="method-row">

#### [getPreference](/reference/services/SoftLayer_User_Customer_OpenIdConnect/getPreference)

Get a preference value for the current user
</div>

<div class="method-row">

#### [getPreferenceTypes](/reference/services/SoftLayer_User_Customer_OpenIdConnect/getPreferenceTypes)

Get all available preference types
</div>

<div class="method-row">

#### [getPreferences](/reference/services/SoftLayer_User_Customer_OpenIdConnect/getPreferences)

Retrieve the user's preference records; each contains a single user preference of a specific preference type.
</div>

<div class="method-row">

#### [getRequirementsForPasswordSet](/reference/services/SoftLayer_User_Customer_OpenIdConnect/getRequirementsForPasswordSet)

Retrieve the authentication requirements for a user when attempting
</div>

<div class="method-row">

#### [getRoles](/reference/services/SoftLayer_User_Customer_OpenIdConnect/getRoles)

</div>

<div class="method-row">

#### [getSalesforceUserLink](/reference/services/SoftLayer_User_Customer_OpenIdConnect/getSalesforceUserLink)

Retrieve [DEPRECATED]
</div>

<div class="method-row">

#### [getSecurityAnswers](/reference/services/SoftLayer_User_Customer_OpenIdConnect/getSecurityAnswers)

Retrieve a portal user's security question answers. Some portal users may not have security answers or may not be configured to require answering a security question on login.
</div>

<div class="method-row">

#### [getSubscribers](/reference/services/SoftLayer_User_Customer_OpenIdConnect/getSubscribers)

Retrieve a user's notification subscription records.
</div>

<div class="method-row">

#### [getSuccessfulLogins](/reference/services/SoftLayer_User_Customer_OpenIdConnect/getSuccessfulLogins)

Retrieve a user's successful attempts to log into the SoftLayer customer portal.
</div>

<div class="method-row">

#### [getSupportPolicyAcknowledgementRequiredFlag](/reference/services/SoftLayer_User_Customer_OpenIdConnect/getSupportPolicyAcknowledgementRequiredFlag)

Retrieve whether or not a user is required to acknowledge the support policy for portal access.
</div>

<div class="method-row">

#### [getSupportPolicyDocument](/reference/services/SoftLayer_User_Customer_OpenIdConnect/getSupportPolicyDocument)

</div>

<div class="method-row">

#### [getSupportPolicyName](/reference/services/SoftLayer_User_Customer_OpenIdConnect/getSupportPolicyName)

</div>

<div class="method-row">

#### [getSupportedLocales](/reference/services/SoftLayer_User_Customer_OpenIdConnect/getSupportedLocales)

Returns all supported locales for the current user
</div>

<div class="method-row">

#### [getSurveyRequiredFlag](/reference/services/SoftLayer_User_Customer_OpenIdConnect/getSurveyRequiredFlag)

Retrieve whether or not a user must take a brief survey the next time they log into the SoftLayer customer portal.
</div>

<div class="method-row">

#### [getSurveys](/reference/services/SoftLayer_User_Customer_OpenIdConnect/getSurveys)

Retrieve the surveys that a user has taken in the SoftLayer customer portal.
</div>

<div class="method-row">

#### [getTickets](/reference/services/SoftLayer_User_Customer_OpenIdConnect/getTickets)

Retrieve a user's associated tickets.
</div>

<div class="method-row">

#### [getTimezone](/reference/services/SoftLayer_User_Customer_OpenIdConnect/getTimezone)

Retrieve a portal user's time zone.
</div>

<div class="method-row">

#### [getUnsuccessfulLogins](/reference/services/SoftLayer_User_Customer_OpenIdConnect/getUnsuccessfulLogins)

Retrieve a user's unsuccessful attempts to log into the SoftLayer customer portal.
</div>

<div class="method-row">

#### [getUserForUnifiedInvitation](/reference/services/SoftLayer_User_Customer_OpenIdConnect/getUserForUnifiedInvitation)

Get the IMS User Object for the provided OpenIdConnect User ID, or (Optional) IBMid Unique Identifier.
</div>

<div class="method-row">

#### [getUserIdForPasswordSet](/reference/services/SoftLayer_User_Customer_OpenIdConnect/getUserIdForPasswordSet)

Retrieve a user id using a password request key
</div>

<div class="method-row">

#### [getUserLinks](/reference/services/SoftLayer_User_Customer_OpenIdConnect/getUserLinks)

Retrieve user customer link with IBMid and IAMid.
</div>

<div class="method-row">

#### [getUserPreferences](/reference/services/SoftLayer_User_Customer_OpenIdConnect/getUserPreferences)

</div>

<div class="method-row">

#### [getUserStatus](/reference/services/SoftLayer_User_Customer_OpenIdConnect/getUserStatus)

Retrieve a portal user's status, which controls overall access to the SoftLayer customer portal and VPN access to the private network.
</div>

<div class="method-row">

#### [getVirtualGuestCount](/reference/services/SoftLayer_User_Customer_OpenIdConnect/getVirtualGuestCount)

Retrieve the current number of CloudLayer Computing Instances a portal user has access to.
</div>

<div class="method-row">

#### [getVirtualGuests](/reference/services/SoftLayer_User_Customer_OpenIdConnect/getVirtualGuests)

Retrieve a portal user's accessible CloudLayer Computing Instances. These permissions control which CloudLayer Computing Instances a user has access to in the SoftLayer customer portal.
</div>

<div class="method-row">

#### [inTerminalStatus](/reference/services/SoftLayer_User_Customer_OpenIdConnect/inTerminalStatus)

</div>

<div class="method-row">

#### [initiateExternalAuthentication](/reference/services/SoftLayer_User_Customer_OpenIdConnect/initiateExternalAuthentication)

Initiates an external authentication using the given authentication container.
</div>

<div class="method-row">

#### [initiatePortalPasswordChange](/reference/services/SoftLayer_User_Customer_OpenIdConnect/initiatePortalPasswordChange)

Request email to allow user to change their password
</div>

<div class="method-row">

#### [initiatePortalPasswordChangeByBrandAgent](/reference/services/SoftLayer_User_Customer_OpenIdConnect/initiatePortalPasswordChangeByBrandAgent)

Allows a Brand Agent to request password reset email to be sent to
</div>

<div class="method-row">

#### [inviteUserToLinkOpenIdConnect](/reference/services/SoftLayer_User_Customer_OpenIdConnect/inviteUserToLinkOpenIdConnect)

Send email invitation to a user to join a SoftLayer account and authenticate with OpenIdConnect.
</div>

<div class="method-row deprecated">

#### [isMasterUser](/reference/services/SoftLayer_User_Customer_OpenIdConnect/isMasterUser)

Determine if a portal user is a master user. <span class="deprecation-label">Deprecated </span>
</div>

<div class="method-row">

#### [isValidPortalPassword](/reference/services/SoftLayer_User_Customer_OpenIdConnect/isValidPortalPassword)

Determine if a string is a user's portal password.
</div>

<div class="method-row">

#### [performExternalAuthentication](/reference/services/SoftLayer_User_Customer_OpenIdConnect/performExternalAuthentication)

Perform an external authentication using the given authentication container.
</div>

<div class="method-row">

#### [processPasswordSetRequest](/reference/services/SoftLayer_User_Customer_OpenIdConnect/processPasswordSetRequest)

Set the password for a user who has a valid password request key
</div>

<div class="method-row">

#### [removeAllDedicatedHostAccessForThisUser](/reference/services/SoftLayer_User_Customer_OpenIdConnect/removeAllDedicatedHostAccessForThisUser)

Revoke access to all dedicated hosts on the account for this user.
</div>

<div class="method-row">

#### [removeAllHardwareAccessForThisUser](/reference/services/SoftLayer_User_Customer_OpenIdConnect/removeAllHardwareAccessForThisUser)

Remove all hardware from a portal user's hardware access list.
</div>

<div class="method-row">

#### [removeAllVirtualAccessForThisUser](/reference/services/SoftLayer_User_Customer_OpenIdConnect/removeAllVirtualAccessForThisUser)

Remove all cloud computing instances from a portal user's instance access list.
</div>

<div class="method-row">

#### [removeApiAuthenticationKey](/reference/services/SoftLayer_User_Customer_OpenIdConnect/removeApiAuthenticationKey)

Remove a user's API authentication key.
</div>

<div class="method-row">

#### [removeBulkDedicatedHostAccess](/reference/services/SoftLayer_User_Customer_OpenIdConnect/removeBulkDedicatedHostAccess)

Revoke access for the user for one or more dedicated host devices.
</div>

<div class="method-row">

#### [removeBulkHardwareAccess](/reference/services/SoftLayer_User_Customer_OpenIdConnect/removeBulkHardwareAccess)

Remove multiple hardware from a portal user's hardware access list.
</div>

<div class="method-row">

#### [removeBulkPortalPermission](/reference/services/SoftLayer_User_Customer_OpenIdConnect/removeBulkPortalPermission)

Remove multiple permissions from a portal user's permission set.
</div> <div class="method-row"> #### [removeBulkRoles](/reference/services/SoftLayer_User_Customer_OpenIdConnect/removeBulkRoles) </div> <div class="method-row"> #### [removeBulkVirtualGuestAccess](/reference/services/SoftLayer_User_Customer_OpenIdConnect/removeBulkVirtualGuestAccess) Remove multiple CloudLayer Computing Instances from a portal user's access list. </div> <div class="method-row"> #### [removeDedicatedHostAccess](/reference/services/SoftLayer_User_Customer_OpenIdConnect/removeDedicatedHostAccess) Revoke access for the user to a single dedicated hosts device. </div> <div class="method-row"> #### [removeExternalBinding](/reference/services/SoftLayer_User_Customer_OpenIdConnect/removeExternalBinding) Remove an external binding from this user. </div> <div class="method-row"> #### [removeHardwareAccess](/reference/services/SoftLayer_User_Customer_OpenIdConnect/removeHardwareAccess) Remove hardware from a portal user's hardware access list. </div> <div class="method-row"> #### [removePortalPermission](/reference/services/SoftLayer_User_Customer_OpenIdConnect/removePortalPermission) Remove a permission from a portal user's permission set. </div> <div class="method-row"> #### [removeRole](/reference/services/SoftLayer_User_Customer_OpenIdConnect/removeRole) </div> <div class="method-row"> #### [removeSecurityAnswers](/reference/services/SoftLayer_User_Customer_OpenIdConnect/removeSecurityAnswers) </div> <div class="method-row"> #### [removeVirtualGuestAccess](/reference/services/SoftLayer_User_Customer_OpenIdConnect/removeVirtualGuestAccess) Remove a CloudLayer Computing Instance from a portal user's access list. 
</div> <div class="method-row"> #### [resetOpenIdConnectLink](/reference/services/SoftLayer_User_Customer_OpenIdConnect/resetOpenIdConnectLink) Change the link of a user for OpenIdConnect managed accounts, provided the </div> <div class="method-row"> #### [resetOpenIdConnectLinkUnifiedUserManagementMode](/reference/services/SoftLayer_User_Customer_OpenIdConnect/resetOpenIdConnectLinkUnifiedUserManagementMode) Change the link of a master user for OpenIdConnect managed accounts, </div> <div class="method-row"> #### [samlAuthenticate](/reference/services/SoftLayer_User_Customer_OpenIdConnect/samlAuthenticate) </div> <div class="method-row"> #### [samlBeginAuthentication](/reference/services/SoftLayer_User_Customer_OpenIdConnect/samlBeginAuthentication) </div> <div class="method-row"> #### [samlBeginLogout](/reference/services/SoftLayer_User_Customer_OpenIdConnect/samlBeginLogout) </div> <div class="method-row"> #### [samlLogout](/reference/services/SoftLayer_User_Customer_OpenIdConnect/samlLogout) </div> <div class="method-row"> #### [selfPasswordChange](/reference/services/SoftLayer_User_Customer_OpenIdConnect/selfPasswordChange) </div> <div class="method-row"> #### [setDefaultAccount](/reference/services/SoftLayer_User_Customer_OpenIdConnect/setDefaultAccount) Sets the default account for the OpenIdConnect identity that is linked to the current SoftLayer user identity. </div> <div class="method-row"> #### [silentlyMigrateUserOpenIdConnect](/reference/services/SoftLayer_User_Customer_OpenIdConnect/silentlyMigrateUserOpenIdConnect) This api is used to migrate a user to IBMid without sending an invitation. </div> <div class="method-row"> #### [updateNotificationSubscriber](/reference/services/SoftLayer_User_Customer_OpenIdConnect/updateNotificationSubscriber) Update the active status for a notification subscription. 
</div> <div class="method-row"> #### [updateSecurityAnswers](/reference/services/SoftLayer_User_Customer_OpenIdConnect/updateSecurityAnswers) Update portal login security questions and answers. </div> <div class="method-row"> #### [updateSubscriberDeliveryMethod](/reference/services/SoftLayer_User_Customer_OpenIdConnect/updateSubscriberDeliveryMethod) Update a delivery method for the subscriber. </div> <div class="method-row"> #### [updateVpnPassword](/reference/services/SoftLayer_User_Customer_OpenIdConnect/updateVpnPassword) Update a user's VPN password </div> <div class="method-row"> #### [updateVpnUser](/reference/services/SoftLayer_User_Customer_OpenIdConnect/updateVpnUser) Creates or updates a user's VPN access privileges. </div> <div class="method-row"> #### [validateAuthenticationToken](/reference/services/SoftLayer_User_Customer_OpenIdConnect/validateAuthenticationToken) </div> </div> </div>
28.766932
237
0.801399
eng_Latn
0.435242
e5b59e8ce0e3517a6dd7c36d9bdefe384eda444e
885
md
Markdown
Windows/InstallService/README.md
colindcli/COLINBLOG
b28158910930d5692f78af8669ea1aea1186b780
[ "MIT" ]
5
2017-08-04T02:49:22.000Z
2021-02-22T20:34:04.000Z
Windows/InstallService/README.md
colindcli/CodeG
b28158910930d5692f78af8669ea1aea1186b780
[ "MIT" ]
35
2017-08-04T06:43:08.000Z
2019-09-09T07:02:36.000Z
Windows/InstallService/README.md
colindcli/COLINBLOG
b28158910930d5692f78af8669ea1aea1186b780
[ "MIT" ]
1
2020-06-20T15:15:35.000Z
2020-06-20T15:15:35.000Z
## exe安装为服务

- [doc](README.png) / [doc](https://blog.csdn.net/github_39376455/article/details/78882600)
- 下载安装文件:[InstallService](http://dl.fzxgj.top/Files/Server/InstallService.zip)
- 放置到目录如:C:\InstallService
- 管理员运行cmd:C:\InstallService\instsrv.exe IdeaRegisterServer C:\InstallService\srvany.exe
- 开始 - 运行(或按下键盘上的Windows+R)输入regedit,点击确定或按回车,可以打开注册表编辑器。
- 然后进入注册表在HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services下找到刚刚注册的服务名IdeaRegisterServer,在IdeaRegisterServer新建一个项,名称为“Parameters”
- 单击选中它然后在右侧的窗口新建一个字符串值名称为“Application”,将其值设置为你真正要作为服务运行的程序的路径,例如我的路径为“C:\InstallService\IntelliJIDEALicenseServer_windows_amd64.exe”。
- 然后可以再建立一个AppDirectory指定程序运行的初始目录C:\InstallService

## 注册服务软件

- [添加删除服务工具](http://dl.fzxgj.top/Files/Windows/%E6%9C%8D%E5%8A%A1%E7%AE%A1%E7%90%86%E5%B7%A5%E5%85%B7.zip)

## bat转exe

- [bat转exe](http://dl.fzxgj.top/Files/Windows/bat%E8%BD%ACexe.zip)
31.607143
133
0.79548
yue_Hant
0.934298
e5b5c71a75bbc1e8bbd76d964f0e9c8b4b103122
4,813
md
Markdown
README.md
DeKaDeNcE/WoWDiscord
0245cf9497b59da35eb1d066029add7cf67ed5fe
[ "MIT" ]
null
null
null
README.md
DeKaDeNcE/WoWDiscord
0245cf9497b59da35eb1d066029add7cf67ed5fe
[ "MIT" ]
null
null
null
README.md
DeKaDeNcE/WoWDiscord
0245cf9497b59da35eb1d066029add7cf67ed5fe
[ "MIT" ]
null
null
null
## <sub><img loading="lazy" width="38" height="38" alt="" src="https://wowdb.dekadence.ro/assets/images/logos/dekadence/dekadence-logo.svg" /></sub> WowBot [![Contribute](https://img.shields.io/badge/contributions-welcome-brightgreen.svg)](https://github.com/DeKaDeNcE/WoWBot/pulls) [![Discord](https://img.shields.io/discord/577080623227863040.svg?logo=discord)](https://discord.gg/uNX4SX4) [![GitHub last commit](https://img.shields.io/github/last-commit/DeKaDeNcE/WoWBot.svg)](#-wowbot-----) [![GitHub repo size](https://img.shields.io/github/repo-size/DeKaDeNcE/WoWBot.svg)](#-wowbot-----) --- WoWBot is a smart bot doing smart stuff. --- ## 💬 <sub>Join us on Discord</sub> <sub><img width="18" height="18" alt="" src="https://wowdb.dekadence.ro/assets/images/discord-48.png" /></sub> [Discord](https://discord.gg/uNX4SX4) ## 👀 <sub>View a Live Demo</sub> [![This is currently under active development, and you must understand that some features are not finished.](https://wowdb.dekadence.ro/assets/images/under-development.svg)](#-view-a-live-demo) * <sub><img loading="lazy" width="18" height="18" alt="" src="https://wowdb.dekadence.ro/assets/images/logos/dekadence/dekadence-logo.svg" /></sub> [https://wowbot.dekadence.ro/](https://wowbot.dekadence.ro/) ## 🚀 <sub>Getting started</sub> <details> <summary>Click to reveal</summary> --- Install the dependencies... ### `npm install` ...then start the bot: ### `npm start` Enjoy! 
--- </details> ## 📚 <sub>Libraries used</sub> <details> <summary>Click to reveal</summary> --- | Name | Website | Repository | License | | :--- | :--- | :--- | :--- | | chalk | - | [github.com/chalk/chalk](https://github.com/chalk/chalk) | [MIT](https://github.com/chalk/chalk/blob/master/license) | | Discord.js | [discord.js.org](https://discord.js.org/) | [github.com/discordjs/discord.js](https://github.com/discordjs/discord.js) | [Apache](https://github.com/discordjs/discord.js/blob/master/LICENSE) | | mysql | - | [github.com/mysqljs/mysql](https://github.com/mysqljs/mysql) | [MIT](https://github.com/mysqljs/mysql/blob/master/License) | | rivescript-js | [rivescript.com](https://www.rivescript.com/) | [github.com/aichaos/rivescript-js](https://github.com/aichaos/rivescript-js) | [MIT](https://github.com/aichaos/rivescript-js/blob/master/LICENSE) | | ssh2 | - | [github.com/mscdex/ssh2](https://github.com/mscdex/ssh2) | [MIT](https://github.com/mscdex/ssh2/blob/master/LICENSE) | | telnet-client | - | [github.com/mkozjak/node-telnet-client](https://github.com/mkozjak/node-telnet-client) | [LGPLv3](https://github.com/mkozjak/node-telnet-client/blob/master/LICENSE) | | tunnel-ssh | - | [github.com/agebrock/tunnel-ssh](https://github.com/agebrock/tunnel-ssh) | [MIT](https://github.com/agebrock/tunnel-ssh/blob/master/LICENSE) | --- </details> ## 📝 <sub>License</sub> <details> <summary>Click to reveal</summary> --- MIT License Copyright © 2020 ÐeKaÐeNcE Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or 
substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. --- </details>
59.419753
593
0.599003
yue_Hant
0.542202
e5b5f3fd12b76d8d2cf7b5f4608090ae7bef9002
763
md
Markdown
learn-pr/data-ai-cert/build-a-faq-chat-bot-with-qna-maker-and-azure-bot-service/includes/4-exercise-publish-your-knowledge-base.md
OpenLocalizationTestOrg/learn-pr
cd97037f22993f37c5c4da22a029cce056897142
[ "CC-BY-4.0", "MIT" ]
4
2019-10-17T06:12:55.000Z
2020-10-07T20:55:31.000Z
learn-pr/data-ai-cert/build-a-faq-chat-bot-with-qna-maker-and-azure-bot-service/includes/4-exercise-publish-your-knowledge-base.md
OpenLocalizationTestOrg/learn-pr
cd97037f22993f37c5c4da22a029cce056897142
[ "CC-BY-4.0", "MIT" ]
null
null
null
learn-pr/data-ai-cert/build-a-faq-chat-bot-with-qna-maker-and-azure-bot-service/includes/4-exercise-publish-your-knowledge-base.md
OpenLocalizationTestOrg/learn-pr
cd97037f22993f37c5c4da22a029cce056897142
[ "CC-BY-4.0", "MIT" ]
3
2019-10-17T06:13:32.000Z
2022-01-27T10:27:49.000Z
Now that you've created a QnA knowledge base, it's time to publish it so you can access it from a client application. 1. On the QnA Maker Knowledge base page, where you were testing in the previous exercise, select **PUBLISH** in the menu at the top of the page. 1. Read the message on the next page. It indicates that your KB will move from test to production. It also points out that your KB will be available as an endpoint that you can use in apps and bots. 1. Select **Publish**. 1. After a short time, a success message will appear (if no errors occur). 1. Note the URL information that appears. You can use the information provided to test the KB with Postman or curl. If you need to, you can select **Edit Service** to go back to the KB and make edits.
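As a sketch of what testing with Postman or curl boils down to, the pieces of a `generateAnswer` request can be assembled like this in Python (the host, knowledge base ID, and endpoint key below are placeholders, not values from a real publish page):

```python
import json

def build_generate_answer_request(host, kb_id, endpoint_key, question):
    """Assemble the URL, headers, and body for a generateAnswer call.

    The host, knowledge base ID, and endpoint key come from the publish
    success page; the values passed below are placeholders.
    """
    url = f"{host}/knowledgebases/{kb_id}/generateAnswer"
    headers = {
        "Authorization": f"EndpointKey {endpoint_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"question": question})
    return url, headers, body

url, headers, body = build_generate_answer_request(
    "https://contoso-qna.azurewebsites.net/qnamaker",  # placeholder host
    "00000000-0000-0000-0000-000000000000",            # placeholder KB id
    "11111111-1111-1111-1111-111111111111",            # placeholder key
    "How do I publish my knowledge base?",
)
print(url)
```

Passing these three pieces to any HTTP client (or pasting them into Postman) reproduces the curl test described above.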
84.777778
199
0.760157
eng_Latn
0.999897
e5b6c95fd8516a12db80afcd3a84eb02570b68f3
2,399
md
Markdown
README.md
davidread/matrix-booking
ccd0ad6e4c0c7703b91f1fa4211333ce8d50d7c6
[ "MIT" ]
null
null
null
README.md
davidread/matrix-booking
ccd0ad6e4c0c7703b91f1fa4211333ce8d50d7c6
[ "MIT" ]
null
null
null
README.md
davidread/matrix-booking
ccd0ad6e4c0c7703b91f1fa4211333ce8d50d7c6
[ "MIT" ]
null
null
null
# Matrix Booking client

Python client for interacting with the Matrix Booking API. Unofficial.

TODO: compare this client with this other one, and bring in any advantages it has: https://github.com/moj-analytical-services/matrixbooking/blob/master/api_requests.py

## Getting started

First create your config file with your Matrix Booking username and password:

```bash
cp matrix.ini.template matrix.ini
# now edit matrix.ini to insert your matrix username & password
```

Now in Python you can make calls:

```python
from pprint import pprint

from matrix import MatrixClient

# initializing the client logs you in
matrix = MatrixClient()

# get a list of my bookings
bookings = matrix.get_my_bookings()
print(bookings)

# check availability for a particular resource
availability = matrix.get('availability', 'bc=1&f=2019-11-13T11:00&include=locations&include=bookingSettings&include=timeslots&l=718321&status=available')
pprint(availability.json())

# or you can use a full URL
all_bookings = matrix.get_with_full_url('https://app.matrixbooking.com/api/v1/user/current/bookings?include=locations&include=visit&include=facilities&include=extras&include=bookingSettings&include=layouts')
pprint(all_bookings)
```

## API calls

API documentation: https://developers.matrixbooking.com/#introduction

You can also work out the syntax by trying an action manually on the Matrix website and looking at the XHR request it creates.

## Auth

This client uses username/password, as you would if logging into the website, which seems fine for low-key personal use.

Alternatively, with the blessing of the administrators, you could [get an API key](https://developers.matrixbooking.com/#authentication). Support for this could be added to this client with a line or two of code.

## Warning

This client is intended for good purposes - e.g. analysis of booking patterns to make the best use of resources. However it can be abused for less good purposes. Please don't.
Danger areas:

* Getting an unfair advantage when booking resources
* Use of personal data

Relevant rules:

* corporate codes of conduct
* ethical data science

Remember you are using your own credentials, so your actions are logged against you.

The Matrix Booking API is not officially documented, and whilst it's clear how to use it and there are no technical barriers, if there is abuse then we can expect the company to apply barriers to API use. So don't.
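Aside: the long query strings in the usage examples above can be assembled programmatically. A small illustrative sketch (only the base URL and parameter names are taken from the examples above; the helper itself is not part of this client's API):

```python
from urllib.parse import urlencode

# Base URL as it appears in the full-URL example above
BASE_URL = "https://app.matrixbooking.com/api/v1"

def build_url(path, **params):
    """Build a full API URL from a path and query parameters."""
    url = f"{BASE_URL}/{path.lstrip('/')}"
    if params:
        # doseq=True expands list values into repeated keys,
        # e.g. include=["locations", "visit"] -> include=locations&include=visit
        url += "?" + urlencode(params, doseq=True)
    return url

print(build_url("availability", bc=1, l=718321, status="available"))
print(build_url("user/current/bookings", include=["locations", "visit"]))
```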
35.279412
212
0.789496
eng_Latn
0.994316
e5b6cd455ff61459aaa3ac144b3684c8101c9104
62
md
Markdown
README.md
RC-Diztl/pure-bootstrap
21401d9a59af76921a74fe880a08ffcd0437af99
[ "MIT" ]
6
2016-08-13T12:12:41.000Z
2022-03-15T23:29:29.000Z
README.md
corndogcomputers/pure-bootstrap
517b128cf532028f31c82c8f1c933d2c812f9586
[ "MIT" ]
1
2021-12-24T04:59:50.000Z
2021-12-24T05:00:14.000Z
README.md
corndogcomputers/pure-bootstrap
517b128cf532028f31c82c8f1c933d2c812f9586
[ "MIT" ]
4
2016-05-10T21:04:48.000Z
2022-01-21T08:43:37.000Z
pure-bootstrap
==============

Pure Bootstrap WordPress Theme
12.4
30
0.645161
kor_Hang
0.254844
e5b777121d1ccaf29e948d8be602f671cf78fed1
9,282
md
Markdown
_posts/2017-10-30-documentando-a-historia-com-sphinx.md
afucher/jtemporal.github.io
81c0d3dbaddcdffd99c4be06a0597b375f02e009
[ "MIT" ]
1
2019-12-13T12:54:38.000Z
2019-12-13T12:54:38.000Z
_posts/2017-10-30-documentando-a-historia-com-sphinx.md
afucher/jtemporal.github.io
81c0d3dbaddcdffd99c4be06a0597b375f02e009
[ "MIT" ]
null
null
null
_posts/2017-10-30-documentando-a-historia-com-sphinx.md
afucher/jtemporal.github.io
81c0d3dbaddcdffd99c4be06a0597b375f02e009
[ "MIT" ]
null
null
null
--- title: "Documentando a história com Sphinx" layout: post image: "/images/tutorial.png" date: '2017-10-30 10:00:00' tags: - tutorial - python - sphinx - readthedocs - read the docs - documentação - português comments: true --- Sabe aquelas documentações bonitas de bibliotecas que você encontra por aí? Por exemplo, a documentação do [Bottery](https://docs.bottery.io/en/latest/) ou a do [Flask](http://flask.pocoo.org/docs/0.12/)? Todas são construídas com uma biblioteca Python chamada [Sphinx](http://www.sphinx-doc.org/en/stable/#). Sphinx foi criada para gerar a própria documentação do Python e hoje é muito utilizada por facilitar a construção automatizada de documentações de bibliotecas. Uma coisa legal que dá para fazer é utilizar Sphinx para construir um histórico de acontecimentos como é feito no [Manual do Big Kahuna da Python Brasil](http://manual-do-big-kahuna.readthedocs.io/en/latest/). Com o objetivo de fazer o mesmo tipo de histórico com eventos regionais comecei um [repositório para a PythonSudeste](https://github.com/pythonsudeste/pythonsudeste_documentacao) e agora também um para PyConAmazônia que, graças ao Nilo Menezes, [possui um _post mortem_ da organização completo disponibilizado para a comunidade](https://www.dropbox.com/s/tr83g5j5amdkyxt/Pycon%20Amaz%C3%B4nia%202017%20-%20Memorial%20da%20Organiza%C3%A7%C3%A3o%20do%20Evento.pdf?dl=0). Vou narrar aqui os passos que fiz na esperança que mais organizadores de eventos regionais possam disponibilizar o mesmo tipo de levantamento histórico de seus eventos. 
## Tudo começa com repositório git

Comecei adicionando dois arquivos:

- `requirements.txt`: esse contém as bibliotecas que vamos usar para gerar a documentação

<center>
<script src="https://gist.github.com/jtemporal/7e6a99f4245407367dc07740b04f925e.js"></script>
<small>
<i>requirements.txt</i>
</small>
</center>

- `README.md`: esse contém informações de como rodar o projeto

O projeto Sphinx que vamos usar aqui roda num ambiente virtual com Python 3. Eu particularmente gosto de usar [`virtualenv`](https://virtualenv.pypa.io/en/stable/) para criar meu ambiente:

``` console
$ virtualenv .env
$ source .env/bin/activate # pode variar dependendo do seu sistema
```

E para instalar as dependências:

``` console
(.env) $ pip install -r requirements.txt
```

Depois de instaladas, começamos rodando o quickstart do Sphinx:

``` console
(.env) $ sphinx-quickstart
```

Esse _quickstart_ vai te fazer inúmeras perguntas de como montar a sua documentação. Eu [copiei o _output_ em um Gist](https://gist.github.com/jtemporal/e30f156e6444ca20fe07f65e0c6215bf) para que você possa estudar o que responder para cada uma das perguntas e também para que veja as respostas que eu dei. Em sua maioria, eu segui com o padrão já oferecido pelo Sphinx.
Alguns pontos importantes dessas perguntas: - Rodando o `sphinx-quickstart` você precisará responder coisas como “Qual o nome do projeto?”, “Qual o nome do autor?” e “Qual a língua do conteúdo?”, então é importante responder com calma cada uma das perguntas ;) - Em algum momento desse questionário é tomada a decisão sobre separar o _build_ e o _source_ e aqui a dica é: Se for colocar no [ReadTheDocs](https://readthedocs.org/) **não** precisa _commitar_ o _build_ \o/ (mas eu vou falar disso melhor num outro post) Ao final desse longo questionário que acabamos de responder, você vai encontrar uma estrutura pronta para ser usada: <center> <script src="https://gist.github.com/jtemporal/e4ef18051ec0d627678ad658826dc362.js"></script> </center> ### build/ Inicialmente vazia, mas quando rodarmos o comando de construção do site lá vai ficar cheio de coisa ;P ### source/ É lá que vamos colocar todos nossos arquivos que vão virar páginas do nosso projeto. ### conf.py As respostas que demos durante o _quickstart_ ficam armazenadas dentro desse arquivo de configurações e é ele que o sphinx usa para gerar os arquivos `.html` a partir dos arquivos de texto. Aqui note que a extensão preferida do Sphinx é `.rst` de _reStructuredText_ e desconfio que escolheram `.rst` por ser uma forma de escrita baseada em identação. 👀 ### index.rst É a partir do `index.rst` que o Sphinx vai construir o `index.html` da sua documentação. Se você abrir o `index.rst` no GitHub você vai ver que ele é relativamente simples: <center> <script src="https://gist.github.com/jtemporal/39028b49f8c0b851b4bfccf2b4a149fc.js"></script> </center> Essa é a “versão de fábrica” do `index.rst` que é criada ao rodar o _quickstart_. Com ela já é possível rodar um _build_ inicial do site. Para fazer o _build_ usamos o `make`, ele se encarrega de buscar no diretório do projeto os arquivos `.rst` e “traduzi-los” para `.html`. 
Vejamos:

``` console
(.env) $ make html
```

Se rodar sem erros, o resultado na tela deve ser parecido com isso:

<center>
<script src="https://gist.github.com/jtemporal/123389890312d764ec16bcea64e06178.js"></script>
</center>

_Mas Jessica, pra quê build se você mesma disse lá em cima que o ReadTheDocs não precisa dele?_

É verdade, mas o jeito mais fácil de verificar o resultado do seu trabalho é localmente e para isso você precisa ter as páginas `.html`. Outra coisa que você vai precisar é uma forma de visualizar essas páginas, claro que você pode apenas abrir os arquivos `.html` no seu navegador favorito, mas outra opção é iniciar um _server_ (servidor).

Servidores foram adicionados como parte [built-in do módulo `http` no Python 3](https://docs.python.org/3/library/http.server.html#module-http.server) e são muito úteis em casos como esse. Para iniciar um processo servidor basta rodar:

``` console
(.env) $ python -m "http.server"
```

E usando o navegador acessar o caminho `localhost:8000`. Rodando o processo a partir do root do projeto como fizemos você deve ver uma listagem de todos os arquivos e diretórios no seu navegador:

<img src="https://i.imgur.com/cLzKN77.png" style="max-width: 60%;">

E aí é só seguir pelo caminho até a pasta onde estão os `.html`:

<img src="https://i.imgur.com/1XNPT8Q.png" style="max-width: 60%;">

Ao clicar em `html/`, por conter um arquivo `index.html`, seu navegador irá mostrar o resultado do `build` que fica mais ou menos assim usando o `index.rst` gerado de fábrica:

<center>
<img src="https://i.imgur.com/X0VyLbU.png">
<small>
<i>Resultado do primeiro build com o index.rst de fábrica</i>
</small>
</center>

## Introdução de conteúdo \o/

Todos esses passos até agora foram para preparar nosso projeto para chegar na parte que realmente queremos.
Vamos começar criando uma página de conteúdo chamada `pyconamazonia2017.rst` apenas com um título e criar a conexão entre ela e nosso `index.rst`, veja:

<center>
<script src="https://gist.github.com/jtemporal/8d6a0aea5efe3dd251e4787b876863df.js"></script>
</center>

<center>
<script src="https://gist.github.com/jtemporal/b604f5ea85b0240cf2466a91b3726e23.js"></script>
</center>

Ao rodar o _build_ teremos o seguinte resultado:

<center>
<img src="https://i.imgur.com/nA3IG1u.png">
<small>
<i>resultado do build para pyconamazonia.rst</i>
</small>
</center>

<center>
<img src="https://i.imgur.com/7ReRbwJ.png">
<small>
<i>resultado do build para index.rst</i>
</small>
</center>

Mudando um pouco o conteúdo desse `index.rst` para colocar uma capa por exemplo temos:

<center>
<script src="https://gist.github.com/jtemporal/5d026f71e9bad58e1ce064551cf49615.js"></script>
<small>
<i>index.rst com link para imagem de capa</i>
</small>
</center>

Nota: a imagem não renderizou aqui pois o link só faz sentido dentro do projeto já que este possui uma pasta que contém a imagem.

Depois do _build_ a página fica assim:

![renderizando o projeto com a capa](https://i.imgur.com/skq9ygN.png)

A partir daí é só continuar preenchendo e conectando novas páginas do jeito que achar mais interessante ;)

---

Depois de tudo isso você pode achar que o tema padrão para as páginas não tá legal e querer mudar. O Sphinx possui vários temas embutidos mas aqui vamos usar o tema do ReadTheDocs.
Primeiro começamos adicionando ele ao `requirements.txt`: <center> <script src="https://gist.github.com/jtemporal/32648f3777c33ff2feb8961c49be9173.js"></script> <small> <i>requirements.txt com o tema do ReadTheDocs</i> </small> </center> E depois instalando da seguinte forma: ``` console (.env) $ pip install -r requirements.txt ``` E agora fica só faltando alterar o valor da variável que indica o tema no arquivo de configuração (`conf.py`) para o tema do ReadTheDocs: ``` console html_theme = ‘sphinx_rtd_theme’ ``` E ao fazer novo _build_, a sua página inicial vai ter essa carinha: ![projeto renderizado com o tema do Read The Docs](https://i.imgur.com/fVXB8YJ.png) --- ## Considerações O projeto do memorial da PyCon Amazônia foi estruturado para receber **o maior número de contribuições possíveis**. Se quiser [corre lá no GitHub pra ver como tá sendo isso](https://github.com/pythonbrasil/pycon-amazonia-memorial) 🎉 E uma dica sobre `.rst` é [usar esse cheatsheet aqui](https://github.com/ralsina/rst-cheatsheet/blob/master/rst-cheatsheet.rst) quando não souber a forma de fazer algo em restructured text 🙃 Agora, a parte de deixar tudo isso online fica pro próximo post, por hoje é isso pessoal ;) ## Agradecimentos Marco Rougeth e Silvia Benza pelas revisões.
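P.S.: para referência rápida, um esboço mínimo de como essas configurações ficam reunidas num `conf.py` (o nome do projeto e o autor abaixo são hipotéticos, apenas para ilustrar):

```python
# conf.py -- esboço mínimo ilustrativo (projeto e autor hipotéticos)
project = 'Memorial PyCon Amazônia'
author = 'Comunidade Python Brasil'
language = 'pt_BR'

extensions = []
templates_path = ['_templates']
exclude_patterns = []

# tema do ReadTheDocs, como configurado na seção acima
html_theme = 'sphinx_rtd_theme'
html_static_path = ['_static']
```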
47.845361
468
0.767615
por_Latn
0.998914
e5b8051785fcd20fb76b6a7e926f9298059f5763
25
md
Markdown
README.md
hfu/kasumi
33fcb2678c586edcd537f7b2466c92a0fd986836
[ "CC0-1.0" ]
null
null
null
README.md
hfu/kasumi
33fcb2678c586edcd537f7b2466c92a0fd986836
[ "CC0-1.0" ]
null
null
null
README.md
hfu/kasumi
33fcb2678c586edcd537f7b2466c92a0fd986836
[ "CC0-1.0" ]
null
null
null
# kasumi Example GeoJSON
8.333333
15
0.8
eng_Latn
0.419348
e5b82f2e88fed99406a2a98b76a312767d489995
7,792
md
Markdown
_posts/2022-01-02-redux-saga.md
Seongil-Shin/Seongil-Shin.github.io
505ad61cefa9767d10c92862c8cf5cd91f6f2912
[ "MIT" ]
null
null
null
_posts/2022-01-02-redux-saga.md
Seongil-Shin/Seongil-Shin.github.io
505ad61cefa9767d10c92862c8cf5cd91f6f2912
[ "MIT" ]
null
null
null
_posts/2022-01-02-redux-saga.md
Seongil-Shin/Seongil-Shin.github.io
505ad61cefa9767d10c92862c8cf5cd91f6f2912
[ "MIT" ]
null
null
null
---
title: redux-saga concepts
author: 신성일
date: 2022-01-02 18:19:26 +0900
categories: [study, react]
tags: [react]
---

## Declarative Effects

```javascript
import { takeEvery } from "redux-saga/effects";
import Api from "./path/to/api";

function* watchFetchProducts() {
  yield takeEvery("PRODUCTS_REQUESTED", fetchProducts);
}

function* fetchProducts() {
  const products = yield Api.fetch("/products");
  console.log(products);
}
```

```javascript
const iterator = fetchProducts()
assert.deepEqual(iterator.next().value, ??)
```

1. iterator 에는 제너레이터 객체가 반환됨.
2. next() 를 하면, Api.fetch 가 실행되면서 Promise 객체가 반환됨.

이때 위 fetchProducts()를 테스트하기 위해선, API를 호출해야하는데, 실제로 호출할 순 없으므로 다른 방법을 통해 API를 호출하지 않고 테스트를 진행해야한다.

### effects 사용

```javascript
import { call } from "redux-saga/effects";

function* fetchProducts() {
  const products = yield call(Api.fetch, "/products");
  // ...
}
```

```javascript
import { call } from "redux-saga/effects";
import Api from "...";

const iterator = fetchProducts();

// expects a call instruction
assert.deepEqual(
  iterator.next().value,
  call(Api.fetch, "/products"),
  "fetchProducts should yield an Effect call(Api.fetch, './products')"
);
```

- 위에서와는 달리, iterator.next() 는 call 함수를 반환하게 됨.
- 실제 API를 호출하지 않으므로, 테스트하기가 훨씬 용이해진다.

### api 호출 effects

```javascript
yield call([obj, obj.method], arg1, arg2, ...); // 주어진 object의 주어진 method를 호출

yield call(fn, arg1, arg2, ...);
```

```javascript
yield apply(obj, obj.method, [arg1, arg2, ...]) // call과 같고 arguments 형식만 반대
```

```javascript
yield cps(fn, arg1, arg2, ...)
```

- node 스타일 함수를 호출할때 사용 ((error, result) => ())

## Dispatching Actions

제너레이터 함수 안에서, dispatch를 하기 위해 아래와 같이 직접 dispatch를 호출할 수 있다.

```javascript
function* fetchProducts(dispatch) {
  const products = yield call(Api.fetch, "/products");
  dispatch({ type: "PRODUCTS_RECEIVED", products });
}
```

하지만 이는 위에서와 같이, 테스트를 하기 위해선 dispatch 함수를 mock 해야할 필요가 있다. 따라서 redux-saga에서는 아래와 같이 "put" effect를 통해 dispatch를 할 수 있다.
```javascript
import { call, put } from "redux-saga/effects";

// ...

function* fetchProducts() {
  const products = yield call(Api.fetch, "/products");
  // create and yield a dispatch Effect
  yield put({ type: "PRODUCTS_RECEIVED", products });
}
```

## Error handling

사가에서는 아래와 같은 방식으로 error handling 가능

```javascript
import Api from "./path/to/api";
import { call, put } from "redux-saga/effects";

// ...

function* fetchProducts() {
  try {
    const products = yield call(Api.fetch, "/products");
    yield put({ type: "PRODUCTS_RECEIVED", products });
  } catch (error) {
    yield put({ type: "PRODUCTS_REQUEST_FAILED", error });
  }
}
```

아래와 같은 형태로도 가능하다

```javascript
import Api from "./path/to/api";
import { call, put } from "redux-saga/effects";

function fetchProductsApi() {
  return Api.fetch("/products")
    .then((response) => ({ response }))
    .catch((error) => ({ error }));
}

function* fetchProducts() {
  const { response, error } = yield call(fetchProductsApi);
  if (response) yield put({ type: "PRODUCTS_RECEIVED", products: response });
  else yield put({ type: "PRODUCTS_REQUEST_FAILED", error });
}
```

## Saga helpers

- takeEvery : 모든 dispatch 액션을 수용함
- takeLatest : 가장 나중에 들어온 액션만 수용함
  - 이전에 들어와 수행중인 task는 중단되고, 나중에 들어온 task 가 수행되는 방식

```javascript
import { takeEvery, takeLatest } from "redux-saga/effects";

function* watchFetchData() {
  yield takeEvery("FETCH_REQUESTED", fetchData);
}

function* watchFetchData() {
  yield takeLatest("FETCH_REQUESTED", fetchData);
}
```

## channel

아래와 같이 take/fork 를 통해 여러 요청을 동시적으로 수행할 수 있다.

```javascript
import { take, fork, ... } from 'redux-saga/effects'

function* watchRequests() {
  while (true) {
    const {payload} = yield take('REQUEST')
    yield fork(handleRequest, payload)
  }
}

function* handleRequest(payload) { ... }
```

이때 REQUEST 요청을 순차적으로 첫번째부터 수행하고 싶을때는, queue를 만들어서 하나가 끝나면 다음 것을 꺼내어 수행하는 방식으로 하면 된다.

이때 redux-saga에서는 actionChannel(pattern)이라는 effect를 사용하여 구현할 수 있다. actionChannel은 saga가 API call 등으로 막혀있을때도 메세지를 받아 큐에 넣을 수 있다.
```javascript
import { take, actionChannel, call, ... } from 'redux-saga/effects'

function* watchRequests() {
  // 1- Create a channel for request actions
  const requestChan = yield actionChannel('REQUEST')
  while (true) {
    // 2- take from the channel
    const {payload} = yield take(requestChan)
    // 3- Note that we're using a blocking call
    yield call(handleRequest, payload)
  }
}

function* handleRequest(payload) { ... }
```

## composing sagas

watch에서 call을 yield하면 saga는 call이 끝날때까지 기다린다.

```javascript
function* fetchPosts() {
  yield put(actions.requestPosts());
  const products = yield call(fetchApi, "/products");
  yield put(actions.receivePosts(products));
}

function* watchFetch() {
  while (yield take("FETCH_POSTS")) {
    yield call(fetchPosts); // waits for the fetchPosts task to terminate
  }
}
```

다음과 같이 all 이펙트를 사용하여 여러 제너레이터를 동시에 수행하고, 모두 끝날때까지 기다릴 수도 있다.

```javascript
function* mainSaga(getState) {
  const results = yield all([call(task1), call(task2), ...])
  yield put(showResults(results))
}
```

## concurrency

takeEvery와 takeLatest가 어떻게 동시성을 제어하는가?

### takeEvery

```javascript
import { fork, take } from "redux-saga/effects";

const takeEvery = (pattern, saga, ...args) =>
  fork(function* () {
    while (true) {
      const action = yield take(pattern);
      yield fork(saga, ...args.concat(action));
    }
  });
```

해당하는 pattern이 들어오면 fork를 통해 분리하여 처리한다.

### takeLatest

```javascript
import { cancel, fork, take } from "redux-saga/effects";

const takeLatest = (pattern, saga, ...args) =>
  fork(function* () {
    let lastTask;
    while (true) {
      const action = yield take(pattern);
      if (lastTask) {
        yield cancel(lastTask); // cancel is no-op if the task has already terminated
      }
      lastTask = yield fork(saga, ...args.concat(action));
    }
  });
```

새로운 것이 들어오면 이전에 들어온 것을 취소하고, 새로운 것을 수행한다.

## Fork model

redux-saga에서는 다음 두 이펙트를 통해 fork를 다룬다

- fork : attached fork를 만듦
- spawn : detached fork를 만듦

### attached fork

Saga는 자기 안의 모든 명령어가 수행되고, 모든 attached forks가 종료되면 종료된다.
```javascript
import { fork, call, put, delay } from "redux-saga/effects";
import api from "./somewhere/api"; // app specific
import { receiveData } from "./somewhere/actions"; // app specific

function* fetchAll() {
  const task1 = yield fork(fetchResource, "users");
  const task2 = yield fork(fetchResource, "comments");
  yield delay(1000);
}

function* fetchResource(resource) {
  const { data } = yield call(api.fetch, resource);
  yield put(receiveData(data));
}

function* main() {
  yield call(fetchAll);
}
```

따라서 위 call(fetchAll)은 delay(1000)이 끝나고, fork 된 task1, task2 가 끝나야 종료된다.

이는 아래와 같이 parallel effect와 같은 동작을 한다.

```javascript
function* fetchAll() {
  yield all([
    call(fetchResource, "users"), // task1
    call(fetchResource, "comments"), // task2
    delay(1000),
  ]);
}
```

이때 위 코드는 3개의 child effect 중 하나가 실패하면, Error가 넘어가면서 3개의 child가 전부 실패하고(완료되지 않은 작업들만), Error는 call로 fetchAll을 호출한 함수로 throw된다. 이는 fork를 이용한 모델에서도 똑같이 동작하며 아래와 같이 main에서 try/catch를 통해 에러를 받을 수 있다.

```javascript
//... imports

function* fetchAll() {
  const task1 = yield fork(fetchResource, "users");
  const task2 = yield fork(fetchResource, "comments");
  yield delay(1000);
}

function* fetchResource(resource) {
  const { data } = yield call(api.fetch, resource);
  yield put(receiveData(data));
}

function* main() {
  try {
    yield call(fetchAll);
  } catch (e) {
    // handle fetchAll errors
  }
}
```

### Cancellation

Saga를 cancel하면, 아래 것들이 cancel된다

- saga가 현재하고 있는 effect부터 cancel됨
- 모든 attached fork 가 cancel됨

### Detached forks

detached fork는 그 자체로 하나의 수행환경이다. 따라서, 부모는 detached fork의 종료를 기다리지 않고, 에러는 상위 부모로 올라가지 않는다. 또 cancel을 해도 detached fork는 영향을 받지 않는다.
22.136364
126
0.676206
kor_Hang
0.990906
e5b84a902667b09cb5c1a7d69580327685d7c117
151
md
Markdown
README.md
RondineleG/CursoEFCore
3798018274f3834eed782fc6b3cfe0fa4acd5670
[ "MIT" ]
null
null
null
README.md
RondineleG/CursoEFCore
3798018274f3834eed782fc6b3cfe0fa4acd5670
[ "MIT" ]
null
null
null
README.md
RondineleG/CursoEFCore
3798018274f3834eed782fc6b3cfe0fa4acd5670
[ "MIT" ]
null
null
null
# Entity Framework Core Course

Using Entity Framework Core to map your entities, and the fluent API to configure and map properties and types.
37.75
58
0.821192
por_Latn
0.93923
e5b84c91548f32ed039a0a50ac4b50f5b36ae42b
9,513
md
Markdown
docs/2014/database-engine/install-windows/rename-a-computer-that-hosts-a-stand-alone-instance-of-sql-server.md
CeciAc/sql-docs.fr-fr
0488ed00d9a3c5c0a3b1601a143c0a43692ca758
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/2014/database-engine/install-windows/rename-a-computer-that-hosts-a-stand-alone-instance-of-sql-server.md
CeciAc/sql-docs.fr-fr
0488ed00d9a3c5c0a3b1601a143c0a43692ca758
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/2014/database-engine/install-windows/rename-a-computer-that-hosts-a-stand-alone-instance-of-sql-server.md
CeciAc/sql-docs.fr-fr
0488ed00d9a3c5c0a3b1601a143c0a43692ca758
[ "CC-BY-4.0", "MIT" ]
1
2020-03-04T05:50:54.000Z
2020-03-04T05:50:54.000Z
---
title: Rename a computer that hosts a stand-alone instance of SQL Server | Microsoft Docs
ms.custom: ''
ms.date: 06/13/2017
ms.prod: sql-server-2014
ms.reviewer: ''
ms.technology: install
ms.topic: conceptual
helpviewer_keywords:
- remote login errors [SQL Server]
- standalone computer names [SQL Server]
- names [SQL Server], standalone instances of SQL Server
- renaming standalone instances of SQL Server
- sysservers system table
- removing remote logins
- deleting remote logins
- dropping remote logins
ms.assetid: bbaf1445-b8a2-4ebf-babe-17d8cf20b037
author: MashaMSFT
ms.author: mathoma
manager: craigg
ms.openlocfilehash: 1bd9e18d1dfe7226d043a7c8c968999da680da08
ms.sourcegitcommit: 3026c22b7fba19059a769ea5f367c4f51efaf286
ms.translationtype: MT
ms.contentlocale: fr-FR
ms.lasthandoff: 06/15/2019
ms.locfileid: "62775006"
---
# <a name="rename-a-computer-that-hosts-a-stand-alone-instance-of-sql-server"></a>Rename a computer that hosts a stand-alone instance of SQL Server

When you change the name of the computer that runs [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)], the new name is recognized when [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] starts. You do not have to run Setup again to reset the computer name. Instead, use the following procedure to update the system metadata that is stored in sys.servers and reported by the system function @@SERVERNAME. Update the system metadata to reflect computer name changes for remote connections and for applications that use @@SERVERNAME or that query the server name from sys.servers.

The following procedures do not let you rename an instance of [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)]. They can only be used to rename the part of the instance name that corresponds to the computer name.
For example, you can change a computer named MB1 that hosts an instance of [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] named Instance1 to another name, such as MB2. However, the instance part of the name, Instance1, remains unchanged. In this example, the \\\\*ComputerName*\\*InstanceName* of \\\MB1\Instance1 would be changed to \\\MB2\Instance1.

**Before you begin**

Before you begin the renaming process, review the following information:

- When an instance of [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] is part of a [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] failover cluster, the process for renaming the computer differs from the process for a computer that hosts a stand-alone instance.

- [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] does not support renaming computers that are involved in replication, except when you use log shipping together with replication. The secondary computer in a log shipping configuration can be renamed if the primary computer is permanently lost. For more information, see [Log Shipping and Replication &#40;SQL Server&#41;](../log-shipping/log-shipping-and-replication-sql-server.md).

- When you rename a computer that is configured to use [!INCLUDE[ssRSnoversion](../../includes/ssrsnoversion-md.md)], [!INCLUDE[ssRSnoversion](../../includes/ssrsnoversion-md.md)] might not be available after the computer name change. For more information, see [Rename a Report Server Computer](../../reporting-services/report-server/rename-a-report-server-computer.md).

- When you rename a computer that is configured to use database mirroring, you must turn off database mirroring before the renaming operation.
Then re-establish database mirroring with the new computer name. Database mirroring metadata will not be updated automatically to reflect the new computer name. Use the following steps to update the system metadata:

- Users who connect to [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] through a Windows group that uses a hard-coded reference to the computer name might not be able to connect to [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)]. This can happen after the rename if the Windows group specifies the old computer name. To ensure that such Windows groups keep [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] connectivity after the renaming operation, update the Windows group to specify the new computer name.

You can connect to [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] by using the new computer name after you have restarted [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)]. To ensure that @@SERVERNAME returns the updated name of the local server instance, you must manually run the procedure below that applies to your scenario. The procedure to use depends on whether you are updating a computer that hosts a default instance or a named instance of [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)].
### <a name="to-rename-a-computer-that-hosts-a-stand-alone-instance-of-includessnoversionincludesssnoversion-mdmd"></a>To rename a computer that hosts a stand-alone instance of [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)]

- For a renamed computer that hosts a default instance of [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)], run the following procedures:

  ```
  sp_dropserver <old_name>;
  GO
  sp_addserver <new_name>, local;
  GO
  ```

  Restart the instance of [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)].

- For a renamed computer that hosts a named instance of [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)], run the following procedures:

  ```
  sp_dropserver <old_name\instancename>;
  GO
  sp_addserver <new_name\instancename>, local;
  GO
  ```

  Restart the instance of [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)].

## <a name="after-the-renaming-operation"></a>After the renaming operation

After the computer has been restarted, all connections that used the old computer name must connect by using the new name.

#### <a name="to-verify-that-the-renaming-operation-has-completed-successfully"></a>To verify that the renaming operation has completed successfully

- Select information from @@SERVERNAME or sys.servers. The @@SERVERNAME function returns the new name, and the sys.servers table shows the new name. The following example shows the use of @@SERVERNAME.
```
SELECT @@SERVERNAME AS 'Server Name';
```

## <a name="additional-considerations"></a>Additional considerations

**Remote logins**

- If the computer has any remote logins open against it, running **sp_dropserver** might generate an error similar to the following:

  `Server: Msg 15190, Level 16, State 1, Procedure sp_dropserver, Line 44 There are still remote logins for the server 'SERVER1'.`

  To resolve the error, you must drop the remote logins for this server.

#### <a name="to-drop-remote-logins"></a>To drop remote logins

- For a default instance, run the following procedure:

  ```
  sp_dropremotelogin old_name;
  GO
  ```

- For a named instance, run the following procedure:

  ```
  sp_dropremotelogin old_name\instancename;
  GO
  ```

**Linked server configurations**

- Linked server configurations are affected by the computer renaming operation. Use `sp_addlinkedserver` or `sp_setnetname` to update computer name references. For more information, see [sp_addlinkedserver &#40;Transact-SQL&#41;](/sql/relational-databases/system-stored-procedures/sp-addlinkedserver-transact-sql) or [sp_setnetname &#40;Transact-SQL&#41;](/sql/relational-databases/system-stored-procedures/sp-setnetname-transact-sql).

**Client alias names**

- Client aliases that use named pipes are affected by the computer renaming operation. For example, if an alias "PROD_SRVR" was created to point at SRVR1 and uses the named pipes protocol, the pipe name looks like `\\SRVR1\pipe\sql\query`. After the computer is renamed, the path of the named pipe is no longer valid. For more information about named pipes, see [Creating a Valid Connection String Using Named Pipes](https://go.microsoft.com/fwlink/?LinkId=111063).
## <a name="see-also"></a>See Also

[Install SQL Server 2014](../../database-engine/install-windows/install-sql-server.md)
82.008621
746
0.75791
fra_Latn
0.94956
e5b8852b08cbdec3fecdf6292e630ec86980a2a9
218
md
Markdown
0274.h-index/README.md
RequireSun/road-of-leetcode
4a33d0b5a409109456035e82cd9e0b7409ebc472
[ "Apache-2.0" ]
null
null
null
0274.h-index/README.md
RequireSun/road-of-leetcode
4a33d0b5a409109456035e82cd9e0b7409ebc472
[ "Apache-2.0" ]
6
2021-03-09T22:25:52.000Z
2022-02-26T19:54:00.000Z
0274.h-index/README.md
RequireSun/road-of-leetcode
4a33d0b5a409109456035e82cd9e0b7409ebc472
[ "Apache-2.0" ]
1
2019-11-08T03:11:53.000Z
2019-11-08T03:11:53.000Z
# 0274. H-Index

The problem statement is hard to parse. What it asks is: given an array of non-negative integers, find the largest number h such that at least h of the numbers in the array are greater than or equal to h.

## Solution 1 ([map.js](./map.js))

Build a map and traverse the whole array, using the citation count as the key and the number of papers with that citation count as the value.

Then traverse backwards from the highest citation count, accumulating the paper count; the first time the accumulated paper count is greater than or equal to the citation count, return that citation count as the result.

![Result](assets/map.png)
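A self-contained sketch of the counting approach described above (the repository's map.js may differ in details):

```javascript
// Counting approach for the H-index: bucket papers by citation count,
// then scan from high to low accumulating the paper count.
function hIndex(citations) {
  const n = citations.length;
  // counts[c] = number of papers with citation count c; counts above n clamp
  // to n, since h can never exceed the number of papers.
  const counts = new Array(n + 1).fill(0);
  for (const c of citations) {
    counts[Math.min(c, n)] += 1;
  }
  // Walk backwards, accumulating how many papers have >= h citations.
  let papers = 0;
  for (let h = n; h >= 0; h--) {
    papers += counts[h];
    if (papers >= h) return h; // first h with at least h papers cited >= h times
  }
  return 0;
}

console.log(hIndex([3, 0, 6, 1, 5])); // 3 (the LeetCode sample input)
```

This runs in O(n) time and space, avoiding the sort that the straightforward solution needs.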
15.571429
56
0.683486
yue_Hant
0.413107
e5b88ea9d1f5910f82e606c0ca228c9a1509d006
437
md
Markdown
EffectiveJava/catalog/Chapter8.md
msldxy/NoteOfSomeBooks
b2093ee1bd5ad6e3a47076ad7ff77ff5002baffd
[ "MIT" ]
null
null
null
EffectiveJava/catalog/Chapter8.md
msldxy/NoteOfSomeBooks
b2093ee1bd5ad6e3a47076ad7ff77ff5002baffd
[ "MIT" ]
null
null
null
EffectiveJava/catalog/Chapter8.md
msldxy/NoteOfSomeBooks
b2093ee1bd5ad6e3a47076ad7ff77ff5002baffd
[ "MIT" ]
null
null
null
# Chapter8 Methods

focus on: usability, robustness and flexibility

## Item49 Check parameters for validity

## Item50 Make defensive copies when needed

## Item51 Design method signatures carefully

## Item52 Use overloading judiciously

## Item53 Use varargs judiciously

## Item54 Return empty collections or arrays, not nulls

## Item55 Return optionals judiciously

## Item56 Write doc comments for all exposed API elements
16.185185
57
0.773455
eng_Latn
0.971854
e5b8cbe81beff26bd517a5ea2799aef7422fab75
1,458
md
Markdown
th/02.8.md
CFFPhil/build-web-application-with-golang
606abd586a7270d0e48762cf0454ba0fac330698
[ "BSD-3-Clause" ]
39,872
2015-01-01T09:09:00.000Z
2022-03-31T15:36:24.000Z
th/02.8.md
CFFPhil/build-web-application-with-golang
606abd586a7270d0e48762cf0454ba0fac330698
[ "BSD-3-Clause" ]
272
2015-01-03T14:03:30.000Z
2022-02-09T03:06:20.000Z
th/02.8.md
CFFPhil/build-web-application-with-golang
606abd586a7270d0e48762cf0454ba0fac330698
[ "BSD-3-Clause" ]
11,462
2015-01-01T05:28:30.000Z
2022-03-31T08:24:28.000Z
# 2.8 Summary

In this chapter, we mainly introduced the 25 Go keywords. Let's review what they are and what they do.

```Go
break    default      func    interface    select
case     defer        go      map          struct
chan     else         goto    package      switch
const    fallthrough  if      range        type
continue for          import  return       var
```

- `var` and `const` are used to define variables and constants.
- `package` and `import` are for package use.
- `func` is used to define functions and methods.
- `return` is used to return values in functions or methods.
- `defer` is used to define defer functions.
- `go` is used to start a new goroutine.
- `select` is used to switch over multiple channels for communication.
- `interface` is used to define interfaces.
- `struct` is used to define special customized types.
- `break`, `case`, `continue`, `for`, `fallthrough`, `else`, `if`, `switch`, `goto` and `default` were introduced in section 2.3.
- `chan` is the type of channel for communication among goroutines.
- `type` is used to define customized types.
- `map` is used to define map which is similar to hash tables in other languages.
- `range` is used for reading data from `slice`, `map` and `channel`.

If you understand how to use these 25 keywords, you've learned a lot of Go already.

## Links

- [Directory](preface.md)
- Previous section: [Concurrency](02.7.md)
- Next chapter: [Web foundation](03.0.md)
44.181818
129
0.685185
eng_Latn
0.999331
e5b95fdcc515d352b83cbda38cb931b11f054326
5,239
md
Markdown
docs/architecture/grpc-for-wcf-developers/protobuf-messages.md
zabereznikova/docs.de-de
5f18370cd709e5f6208aaf5cf371f161df422563
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/architecture/grpc-for-wcf-developers/protobuf-messages.md
zabereznikova/docs.de-de
5f18370cd709e5f6208aaf5cf371f161df422563
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/architecture/grpc-for-wcf-developers/protobuf-messages.md
zabereznikova/docs.de-de
5f18370cd709e5f6208aaf5cf371f161df422563
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Protobuf messages - gRPC for WCF developers
description: Learn how Protobuf messages are defined in the IDL and generated in C#.
ms.date: 12/15/2020
ms.openlocfilehash: c1f2a3071d45dcbe4b98d747f19fed508bad102f
ms.sourcegitcommit: 655f8a16c488567dfa696fc0b293b34d3c81e3df
ms.translationtype: MT
ms.contentlocale: de-DE
ms.lasthandoff: 01/06/2021
ms.locfileid: "97938103"
---
# <a name="protobuf-messages"></a>Protobuf messages

This section explains how Protocol Buffers (Protobuf) messages are declared in `.proto` files. It covers the fundamental concepts of field numbers and types, and it looks at the C# code that the `protoc` compiler generates. The rest of the section then goes into more detail about how different data types are represented in Protobuf.

## <a name="declaring-a-message"></a>Declaring a message

In Windows Communication Foundation (WCF), a `Stock` class for a stock market trading application might be defined like the following example:

```csharp
namespace TraderSys
{
    [DataContract]
    public class Stock
    {
        [DataMember]
        public int Id { get; set;}
        [DataMember]
        public string Symbol { get; set;}
        [DataMember]
        public string DisplayName { get; set;}
        [DataMember]
        public int MarketId { get; set; }
    }
}
```

To implement the equivalent class in Protobuf, you must declare it in the `.proto` file. The `protoc` compiler then generates the .NET class as part of the build process.

```protobuf
syntax = "proto3";

option csharp_namespace = "TraderSys";

message Stock {

    int32 id = 1;
    string symbol = 2;
    string display_name = 3;
    int32 market_id = 4;

}
```

The first line declares the syntax version being used. Version 3 of the language was released in 2016. It's the version recommended for gRPC services.

The `option csharp_namespace` line specifies the namespace to be used for the generated C# types.
This option is ignored when the `.proto` file is compiled for other languages. Protobuf files frequently contain language-specific options for several languages.

The `Stock` message definition specifies four fields. Each has a type, a name, and a field number.

## <a name="field-numbers"></a>Field numbers

Field numbers are an important part of Protobuf. They're used to identify fields in the binary encoded data, which means they can't change from version to version of your service. The advantage is that backward and forward compatibility become possible. Clients and services ignore field numbers they don't know about, as long as missing values are handled.

In the binary format, the field number is combined with a type identifier. Field numbers from 1 to 15 can be encoded together with their type as a single byte. Numbers from 16 to 2,047 take 2 bytes. You can go higher if you need more than 2,047 fields on a message for any reason. The single-byte identifiers for field numbers 1 to 15 offer better performance, so you should use them for the most basic, frequently used fields.

## <a name="types"></a>Types

The type declarations use Protobuf's native scalar data types, which are discussed in more detail in [the next section](protobuf-data-types.md). The rest of this chapter covers Protobuf's built-in types and shows how they relate to common .NET types.

> [!NOTE]
> Protobuf doesn't support a `decimal` type, so `double` is used instead. For applications that require full decimal precision, refer to the [section on decimals](protobuf-data-types.md#decimals) in the next part of this chapter.
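The one-byte claim for field numbers 1 to 15 is easy to check: a Protobuf field key is `(field number << 3) | wire type`, varint-encoded. A small JavaScript sketch, independent of any Protobuf library:

```javascript
// Encode a non-negative integer as a Protobuf base-128 varint.
function encodeVarint(value) {
  const out = [];
  do {
    let byte = value & 0x7f;
    value >>>= 7;
    if (value) byte |= 0x80; // continuation bit: more bytes follow
    out.push(byte);
  } while (value);
  return out;
}

// A field's key is (fieldNumber << 3) | wireType, varint-encoded.
function fieldKey(fieldNumber, wireType) {
  return encodeVarint((fieldNumber << 3) | wireType);
}

console.log(fieldKey(15, 0).length);   // 1 byte: field numbers 1-15
console.log(fieldKey(16, 0).length);   // 2 bytes: field numbers 16-2047
console.log(fieldKey(2047, 5).length); // still 2 bytes
```

Because three bits of the first byte hold the wire type, only field numbers up to 15 (four remaining payload bits) fit in a single byte, which is exactly the performance boundary described above.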
## <a name="the-generated-code"></a>The generated code

When you build your application, Protobuf creates classes for each of your messages, mapping its native types to C# types. The generated `Stock` type would have the following signature:

```csharp
public class Stock
{
    public int Id { get; set; }
    public string Symbol { get; set; }
    public string DisplayName { get; set; }
    public int MarketId { get; set; }
}
```

The actual code that's generated is far more complicated than this. The reason is that each class contains all the code necessary to serialize and deserialize itself to the binary wire format.

### <a name="property-names"></a>Property names

Note that the Protobuf compiler applied `PascalCase` to the property names, although they were `snake_case` in the `.proto` file. The [Protobuf style guide](https://developers.google.com/protocol-buffers/docs/style) recommends using `snake_case` in your message definitions so that the code generation for other platforms produces the expected case for their conventions.

>[!div class="step-by-step"]
>[Previous](protocol-buffers.md)
>[Next](protobuf-data-types.md)
52.919192
482
0.778202
deu_Latn
0.995605
e5ba094c7eb16697b0fb1609e980a0f8eeedd63c
4,415
md
Markdown
_wiki/_appnotes/modularity-maturity-model.md
timothyjward/v2archive.osgi.enroute.site
4b9fde94f26f4ec27e69e8be3e0c1fd5a14d12f4
[ "Apache-2.0" ]
null
null
null
_wiki/_appnotes/modularity-maturity-model.md
timothyjward/v2archive.osgi.enroute.site
4b9fde94f26f4ec27e69e8be3e0c1fd5a14d12f4
[ "Apache-2.0" ]
null
null
null
_wiki/_appnotes/modularity-maturity-model.md
timothyjward/v2archive.osgi.enroute.site
4b9fde94f26f4ec27e69e8be3e0c1fd5a14d12f4
[ "Apache-2.0" ]
2
2018-07-06T13:34:19.000Z
2021-11-06T10:23:41.000Z
---
title: Modularity Maturity Model (MMM)
summary: The <em>Modularity Maturity Model</em> was proposed by Dr Graham Charters at the OSGi Community Event 2011, as a way of describing how far down the modularity path an organisation or project is.
---

The <em>Modularity Maturity Model</em> was proposed by Dr Graham Charters at the OSGi Community Event 2011, as a way of describing how far down the modularity path an organisation or project is. It is named for the [Capability Maturity Model](http://en.wikipedia.org/wiki/Capability_Maturity_Model), which allows organisations or projects to measure their improvements on a software development process.

As a work in progress, these levels may change over time. Once it has reached stability this paragraph will be removed and a link to the normative specification will be added.

Note that these terms are largely OSGi agnostic and that the model can be applied to any modularity approach. It is also intended as a guide rather than prescriptive.

Level 1: Ad Hoc
---------------

At the Ad-hoc level, there isn't any formal modularity. A flat class path is used, with a bunch of classes and no, or limited, structure. Projects may use 'library JARs' to access functionality, but this typically results in a monolithic application.

### Benefits

The main benefit of this is the low cost and low barrier to entry.

Level 2: Modules
----------------

Modules are explicitly versioned and have formal module identities, instead of merely classes (or JARs of classes). In particular, and a key point of this level, dependencies are expressed against the module identity (including version) rather than the units themselves. Maven, Ivy, RPM and OSGi are all examples of systems where dependencies are managed at the versioned identity level instead of at the JAR level.
### Benefits

- Decouple module from artefact
- Clearer view of module assembly
- Enables version awareness through build, development and operations
- Enables module categories

Level 3: Modularity
-------------------

Module identity is not the same as modularity.

`"(Desirable) property of a system, such that individual components can be examined, `
`modified and maintained independently of the remainder of the system. `
`Objective is that changes in one part of a system should not lead to unexpected behaviour in other parts" `
`- www.maths.bath.ac.uk/~jap/MATH0015/glossary.html (correct link unknown; may be a reference to Jianjun Hu's 2004 PhD)`

Modules are declared via module contracts, not via artefacts. The requirements can take the general form of capabilities and requirements, but could be a specific realisation (e.g. a particular package). The private parts of the modules are an implementation detail. At this level, dependency resolution comes first and module identity is of lesser importance.

### Benefits

- Fine-grained impact awareness (for bug fixes, implementation or client breaking changes)
- System structure awareness
- Client/provider independence
- Requirement-based dependency checking

Level 4: Loose coupling
-----------------------

There is a separation of interface from implementation; implementations are not acquired via factories or constructors. This provides services-based module collaboration (seen in OSGi, but also present in some other frameworks like Spring or JNDI, which hide the construction of the services from the user of those services). In addition, the dependencies must be semantically versioned.

### Benefits

- Implements a client/provider independence

Level 5: Devolution
-------------------

At the devolved level, artefact ownership is devolved to modularity-aware repositories. These may support collaboration or governance for accessing the assets in relation to the services and capabilities required.
### Benefits

- Greater awareness of existing modules
- Reduced duplication and increased quality
- Collaboration and empowerment
- Quality and operational control

Level 6: Dynamism
-----------------

Provides a dynamic module life-cycle, which allows modules to participate in (or initiate) life-cycle events. There is operational support for module addition/removal/replacement.

### Benefits

- No brittle ordering dependencies
- Ability to dynamically update
- Can allow fixes to be hot-deployed and capabilities to be extended without needing to restart the system
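As a toy illustration of the Level 6 dynamic life-cycle (a plain JavaScript sketch; real frameworks such as OSGi add classloading, dependency resolution and life-cycle events), modules can be added, hot-replaced and removed without restarting the host:

```javascript
// Toy module registry sketch -- illustrative only, not an OSGi implementation.
const registry = new Map();

function install(name, module) {
  registry.set(name, module); // add or hot-replace a module
}

function uninstall(name) {
  registry.delete(name);
}

function invoke(name, ...args) {
  const mod = registry.get(name);
  if (!mod) throw new Error(`no module: ${name}`);
  return mod(...args);
}

install('greet', (who) => `hello, ${who}`);
console.log(invoke('greet', 'world')); // hello, world

install('greet', (who) => `hi, ${who}`); // hot-replace without a restart
console.log(invoke('greet', 'world')); // hi, world

uninstall('greet');
console.log(registry.has('greet')); // false
```

The point is the life-cycle, not the lookup: callers resolve the module at invocation time, so a replacement or removal takes effect immediately without a host restart.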
35.604839
202
0.771461
eng_Latn
0.999121
e5ba5a4c5f37805ad8539c214b6e62711d2f2c7e
12,641
md
Markdown
master/getting-started/openstack/installation/fuel.md
Parkhyom/calico
70ec1bdf3b9ed48614da200b4477d922b72bd268
[ "Apache-2.0" ]
1
2020-06-23T07:15:51.000Z
2020-06-23T07:15:51.000Z
master/getting-started/openstack/installation/fuel.md
Parkhyom/calico
70ec1bdf3b9ed48614da200b4477d922b72bd268
[ "Apache-2.0" ]
null
null
null
master/getting-started/openstack/installation/fuel.md
Parkhyom/calico
70ec1bdf3b9ed48614da200b4477d922b72bd268
[ "Apache-2.0" ]
null
null
null
---
title: Integration with Fuel
---

{{site.prodname}} plugins are available for Fuel 6.1 and 7.0, and work is in progress for Fuel 9. Fuel plugin code for {{site.prodname}} is at [http://git.openstack.org/cgit/openstack/fuel-plugin-calico](http://git.openstack.org/cgit/openstack/fuel-plugin-calico).

## Fuel 7.0

The plugin for Fuel 7.0 is currently undergoing final review and certification; you can find the plugin code at git.openstack.org, and its documentation in pending changes on review.openstack.org:

- Code: <https://git.openstack.org/cgit/openstack/fuel-plugin-calico/log/?h=7.0>
- User Guide: <https://review.openstack.org/#/c/281239/>
- Test Plan and Report: <https://review.openstack.org/#/c/282362/>

## Fuel 6.1

The rest of this document describes our integration of {{site.prodname}} with Mirantis Fuel 6.1. It is presented in sections covering the following aspects of our integration work.

- Objective: The system that we are aiming to deploy.
- Cluster Deployment: The procedure to follow to deploy such a system.
- {{site.prodname}} Demonstration: What we recommend for demonstrating {{site.prodname}} networking function.
- Detailed Observations: Some further detailed observations about the elements of the deployed system.

### Objective

We will deploy an OpenStack cluster with a controller node and *n* compute nodes, with {{site.prodname}} providing the network connectivity between VMs that are launched on the compute nodes.

The key components on the controller node will be:

- the standard OpenStack components for a controller, including the Neutron service
- the {{site.prodname}}/OpenStack Plugin, as a Neutron/ML2 mechanism driver
- a BIRD instance, acting as BGP route reflector for the compute nodes.

The key components on each compute node will be:

- the standard OpenStack components for a compute node
- the {{site.prodname}} Felix agent
- a BIRD instance, running BGP for that compute node.
IP connectivity between VMs that are launched within the cluster will be established by the {{site.prodname}} components, according to the security groups and rules that are configured for those VMs through OpenStack.

Cluster Deployment
------------------

The procedure for deploying such a cluster consists of the following steps.

- Prepare a Fuel master (aka admin) node in the usual way.
- Install the {{site.prodname}} plugin for Fuel on the master node.
- Deploy an OpenStack cluster in the usual way, using the Fuel web UI.

The following subsections flesh out these steps.

### Prepare a Fuel master node

Follow Mirantis's instructions for preparing a Fuel master node, from the [Fuel 6.1 User Guide](https://docs.mirantis.com/openstack/fuel/fuel-6.1/user-guide.html#download-and-install-fuel). You will need to download a Fuel 6.1 ISO image from the [Mirantis website](https://www.mirantis.com/products/mirantis-openstack-software/).

### Install the {{site.prodname}} plugin for Fuel on the master node

The {{site.prodname}} plugin has been certified by Mirantis and is available for download from the [Fuel Plugin Catalog](https://www.mirantis.com/products/openstack-drivers-and-plugins/fuel-plugins/). Alternatively, you can build a copy of the plugin yourself, following the instructions on the plugin's [GitHub](https://github.com/openstack/fuel-plugin-calico) page.

However you obtain a copy of the {{site.prodname}} plugin, you will need to copy it onto the master node and install it with:

    fuel plugins --install calico-fuel-plugin-<version>.noarch.rpm

You can check that the plugin was successfully installed using:

    fuel plugins --list

Deploy an OpenStack cluster
---------------------------

Use the Fuel web UI to deploy an OpenStack cluster in the [usual way](https://docs.mirantis.com/openstack/fuel/fuel-6.1/user-guide.html#create-a-new-openstack-environment), with the following guidelines.
- Create a new OpenStack environment, selecting:
  - Juno on Ubuntu Trusty (14.x)
  - "Neutron with VLAN segmentation" as the networking setup
- Under the settings tab, make sure the following options are checked:
  - "Assign public network to all nodes"
  - "Use Calico Virtual Networking"
- Network settings as advised in the following subsections.
- Add nodes (for meaningful testing, you will need at least two compute nodes in addition to the controller).
- Deploy changes.

### Public

Fuel assigns a 'public' IP address, from the range that you specify here, to each node that it deploys. It also creates an OpenStack network with this subnet, and uses that for allocating floating IPs. Therefore these IP addresses exist to allow access from within the cluster to the outside world, and vice versa, and should probably be routable from the wider network where the cluster is deployed.

For the purposes of this document we'll use the 172.18.203.0/24 range of public addresses; feel free to change this to match your own local network.

- IP Range: 172.18.203.40 - 172.18.203.49
- CIDR: 172.18.203.0/24
- Use VLAN tagging: No
- Gateway: 172.18.203.1
- Floating IP ranges: 172.18.203.50 - 172.18.203.59

By default, Fuel associates the public IP address with the second NIC (i.e. `eth1`) on each node.

### Management

Fuel assigns a 'management' IP address, from the range that you specify here, to each node that it deploys. These are the addresses that the nodes *within* the cluster use to communicate with each other. For example, nova-compute on each compute node communicates with the Neutron server on the controller node by using the controller node's management address.

- CIDR: 192.168.0.0/24
- Use VLAN tagging: Yes, 101

By default, Fuel associates the management IP address with the first NIC (i.e. `eth0`) on each node.
With {{site.prodname}} networking, in addition:

- BGP sessions are established, between BIRD instances on the compute nodes and on the route reflector, using these management IP addresses
- Data between VMs on different compute nodes is routed using these management IP addresses, which means that it flows via the compute nodes' `eth0` interfaces.

### Storage

Storage networking is not needed for a simple OpenStack cluster. We left the following settings as shown, and addresses from the specified range are assigned, but are not used in practice.

- CIDR: 192.168.1.0/24
- Use VLAN tagging: Yes, 102

### Neutron L2 Configuration

Neutron L2 Configuration is not needed in a {{site.prodname}} system, but we have left the following settings as shown, as we have not yet had time to simplify the web UI for {{site.prodname}} networking.

- VLAN ID range: 1000 - 1030
- Base MAC address: fa:16:3e:00:00:00

### Neutron L3 Configuration

Neutron L3 Configuration is not needed in a {{site.prodname}} system, but we have left the following settings as shown, as we have not yet had time to simplify the web UI for {{site.prodname}} networking.

- Internal network CIDR: 192.168.111.0/24
- Internal network gateway: 192.168.111.1
- DNS servers: 8.8.4.4, 8.8.8.8

Check BGP connectivity on the controller
----------------------------------------

Once the deployment is complete, you may wish to verify that the route reflector running on the controller node has established BGP sessions to all of the compute nodes. To do this, log onto the controller node and run:

    birdc show protocols all

{{site.prodname}} Demonstration
--------------------

To demonstrate {{site.prodname}} networking, please run through the following steps.

In the OpenStack web UI, under Project, Network, Networks, create a network and subnet from which instance IP addresses will be allocated. We use the following values.

- Name: 'demo'
- IP subnet: 10.65.0.0/24
- Gateway: 10.65.0.1
- DHCP-enabled: Yes.
Under Project, Compute, Access & Security, create two new security groups.
For each security group, select 'Manage Rules' and add two new rules:

- Allow incoming ICMP (ping) traffic only if it originates from other
  instances in this security group:
  - Rule: ALL ICMP
  - Direction: Ingress
  - Remote: Security Group
  - Security Group: Current Group
  - Ether Type: IPv4
- Enable SSH onto instances in this security group:
  - Rule: SSH
  - Remote: CIDR
  - CIDR: 0.0.0.0/0

Under Project, Instances, launch a batch of VMs -- enough of them to ensure
that there will be at least one VM on each compute node -- with the following
details.

- Flavor: m1.tiny
- Boot from image: TestVM
- Under the Access & Security tab, select one of your new security groups
  (split your instances roughly 50:50 between the two security groups).
- Under the Networking tab, drag 'demo' into the 'Selected Networks' box.

Under Admin, Instances, verify that:

- the requested number of VMs (aka instances) has been launched
- they are distributed roughly evenly across the available compute hosts
- they have each been assigned an IP address from the range that you
  configured above (e.g. 10.65.0.0/24)
- they reach Active status within about a minute.

Log on to one of the VMs, e.g. by clicking on one of the instances and then
on its Console tab, and use 'ping' to verify that connectivity is as expected
from the security group configuration, i.e. that you can ping the IP
addresses of all of the other VMs in the same security group, but you cannot
ping the VMs in the other security group.

Note that whilst the VMs should be able to reach each other (security group
configuration permitting), they are not expected to have external
connectivity unless appropriate routing has been set up:

- For outbound access, you need to ensure that your VMs can send traffic to
  your border gateway router (typically this will be the case, because
  usually your compute hosts will be able to do so). The border gateway can
  then perform SNAT.
- For inbound connections, you need to assign a publicly routable IP address
  to your VM - that is, attach it to a network with a public IP address. You
  will also need to make sure that your border router (and any intermediate
  routers between the border router and the compute host) can route to that
  address too. The simplest way to do that is to peer the border router with
  the route reflector on the controller.

Detailed Observations
---------------------

This section records some more detailed notes about the state of the cluster
that results from following the above procedure. Reading this section should
not be required in order to demonstrate or understand OpenStack and
{{site.prodname}} function, but it may be useful as a reference if a newly
deployed system does not appear to be behaving correctly.

### Elements required for {{site.prodname}} function

This subsection records elements that *are* required for {{site.prodname}}
function, and that we have observed to be configured and operating correctly
in the cluster.

On the controller:

- The BIRD BGP route reflector has established sessions to all the compute
  nodes.
- The Neutron service is running and has initialized the {{site.prodname}}
  ML2 mechanism driver.

On each compute node:

- The {{site.prodname}} Felix agent is correctly configured, and running.
- There is an established BGP session to the route reflector on the
  controller.

### Elements not required for {{site.prodname}} function, but benign

This subsection records elements that are *not* required for
{{site.prodname}} function, but that we have observed to be operating in the
cluster. These all result from the fact that the procedure first deploys a
traditional Neutron/ML2/OVS cluster, and then modifies that to use
{{site.prodname}} instead of OVS, but does not clean up all of the
OVS-related elements. We believe that all of these elements are benign, in
that they don't obstruct or fundamentally change the {{site.prodname}}
networking behavior.
However it would be better to remove them so as to clarify the overall
picture, and maybe to improve networking performance. We plan to continue
working on this.

On the controller:

- Various Neutron agents are running that {{site.prodname}} does not require.
  - neutron-metadata-agent
  - neutron-dhcp-agent
  - neutron-openvswitch-agent
  - neutron-l3-agent

On each compute node:

- Two Neutron agents are running that {{site.prodname}} does not require.
  - neutron-metadata-agent
  - neutron-openvswitch-agent
- There is a complex set of OVS bridges present, that {{site.prodname}} does
  not require.
---
title: Migrate Azure HDInsight 3.6 Apache Storm to HDInsight 4.0 Apache Spark
description: The differences between Apache Storm and Spark Streaming or Spark Structured Streaming, and a recommended workflow for migrating Apache Storm workloads
author: hrasheed-msft
ms.author: hrasheed
ms.reviewer: jasonh
ms.service: hdinsight
ms.topic: conceptual
ms.date: 01/16/2019
---
# <a name="migrate-azure-hdinsight-36-apache-storm-to-hdinsight-40-apache-spark"></a>Migrate Azure HDInsight 3.6 Apache Storm to HDInsight 4.0 Apache Spark

This document describes how to migrate Apache Storm workloads on HDInsight 3.6 to HDInsight 4.0. HDInsight 4.0 doesn't support the Apache Storm cluster type, so you will need to move to another streaming data platform. Two suitable options are Apache Spark Streaming and Spark Structured Streaming. This document describes the differences between these platforms and recommends a workflow for migrating Apache Storm workloads.

## <a name="storm-migration-paths-in-hdinsight"></a>Storm migration paths in HDInsight

If you want to migrate away from Apache Storm on HDInsight 3.6, you have several options:

* Spark Streaming on HDInsight 4.0
* Spark Structured Streaming on HDInsight 4.0
* Azure Stream Analytics

This document provides guidance for migrating from Apache Storm to Spark Streaming and to Spark Structured Streaming.

> [!div class="mx-imgBorder"]
> ![HDInsight Storm migration path](./media/migrate-storm-to-spark/storm-migration-path.png)

## <a name="comparison-between-apache-storm-and-spark-streaming-spark-structured-streaming"></a>Comparison between Apache Storm and Spark Streaming, Spark Structured Streaming

Apache Storm can provide different levels of guaranteed message processing. For example, a basic Storm application can guarantee at-least-once processing, while [Trident](https://storm.apache.org/releases/current/Trident-API-Overview.html) can guarantee exactly-once processing. Spark Streaming and Spark Structured Streaming guarantee that every input event is processed exactly once, even if a node failure occurs. Storm has a model that processes each event individually; you can also use the micro-batch model with Trident. Spark Streaming and Spark Structured Streaming provide a micro-batch processing model.

| |Storm |Spark Streaming | Spark Structured Streaming|
|---|---|---|---|
|**Event processing guarantee**|At least once <br> Exactly once (Trident) |[Exactly once](https://spark.apache.org/docs/latest/streaming-programming-guide.html)|[Exactly once](https://spark.apache.org/docs/latest/structured-streaming-programming-guide.html)|
|**Processing model**|Real time <br> Micro batch (Trident) |Micro batch |Micro batch |
|**Event time support**|[Yes](https://storm.apache.org/releases/2.0.0/Windowing.html)|No|[Yes](https://spark.apache.org/docs/latest/structured-streaming-programming-guide.html)|
|**Languages**|Java, etc.|Scala, Java, Python|Python, R, Scala, Java, SQL|

### <a name="spark-streaming-vs-spark-structured-streaming"></a>Spark Streaming vs. Spark Structured Streaming

Spark Structured Streaming is replacing Spark Streaming (DStreams). Structured Streaming will continue to receive enhancements and maintenance, while DStreams is in maintenance mode only. Structured Streaming does not support as many sources and sinks out of the box as DStreams, so evaluate your requirements carefully to choose the appropriate Spark stream-processing option.

## <a name="streaming-single-event-processing-vs-micro-batch-processing"></a>Streaming (single event) processing vs. micro-batch processing

Storm provides a model that processes each event individually: all incoming records are processed as soon as they arrive. Spark Streaming applications must wait a fraction of a second to collect each micro-batch of events before that batch is handed off for processing. In contrast, an event-driven application processes each event immediately. Spark Streaming latency is typically under a few seconds. The benefits of the micro-batch approach are more efficient data processing and simpler aggregate calculations.

> [!div class="mx-imgBorder"]
> ![streaming and micro-batch processing](./media/migrate-storm-to-spark/streaming-and-micro-batch-processing.png)

## <a name="storm-architecture-and-components"></a>Storm architecture and components

Storm topologies are composed of multiple components that are arranged in a directed acyclic graph (DAG). Data flows between the components in the graph. Each component consumes one or more streams, and can optionally emit one or more streams.

|Component |Description |
|---|---|
|Spout|Brings data into a topology. Spouts emit one or more streams into the topology.|
|Bolt|Consumes streams emitted from spouts or other bolts. Bolts can optionally emit streams into the topology as well. Bolts are also responsible for writing data to external services or storage, such as HDFS, Kafka, or HBase.|

> [!div class="mx-imgBorder"]
> ![interaction of storm components](./media/migrate-storm-to-spark/apache-storm-components.png)

Storm consists of the following three daemons, which keep the Storm cluster functioning.

|Daemon |Description |
|---|---|
|Nimbus|Similar to Hadoop JobTracker: responsible for distributing code around the cluster, assigning tasks to machines, and monitoring for failures.|
|Zookeeper|Used for cluster coordination.|
|Supervisor|Listens for work assigned to its machine, and starts and stops worker processes as directed by Nimbus. Each worker process executes a subset of a topology. The user's application logic (spouts and bolts) runs here.|

> [!div class="mx-imgBorder"]
> ![nimbus, zookeeper, supervisor daemons](./media/migrate-storm-to-spark/nimbus-zookeeper-supervisor.png)

## <a name="spark-streaming-architecture-and-components"></a>Spark Streaming architecture and components

The following steps summarize how components work together in Spark Streaming (DStreams) and Spark Structured Streaming:

* When Spark Streaming starts, the driver launches the task in the executors.
* The executors receive a stream from a streaming data source.
* When an executor receives data streams, it splits the stream into blocks and keeps them in memory.
* Blocks of data are replicated to other executors.
* The processed data is then stored in the target data store.
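The micro-batch cutting described in the steps above can be illustrated with a small, framework-free sketch -- this is a plain-Python simulation of the batching idea, not Spark API code: timestamped events are grouped into fixed-interval buckets, and each bucket is then processed as a unit.

```python
from collections import defaultdict

def micro_batches(events, interval):
    """Group (timestamp, value) events into fixed-width time buckets,
    mimicking how a streaming engine cuts the stream into micro-batches."""
    batches = defaultdict(list)
    for ts, value in events:
        batches[int(ts // interval)].append(value)
    # Emit batches in time order, as the engine would.
    return [batches[k] for k in sorted(batches)]

# Events arriving over ~3 seconds; batch interval of 1 second.
events = [(0.2, "a"), (0.9, "b"), (1.1, "c"), (2.5, "d"), (2.8, "e")]
for i, batch in enumerate(micro_batches(events, 1.0)):
    print(f"batch {i}: {batch}")
# batch 0: ['a', 'b']
# batch 1: ['c']
# batch 2: ['d', 'e']
```

A single-event engine such as Storm would instead invoke the processing logic once per tuple as it arrives; the trade-off is per-event latency versus the per-batch efficiency and simpler aggregation shown here.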
> [!div class="mx-imgBorder"]
> ![spark streaming path to output](./media/migrate-storm-to-spark/spark-streaming-to-output.png)

## <a name="spark-streaming-dstream-workflow"></a>Spark Streaming (DStream) workflow

As each batch interval elapses, a new RDD is produced that contains all the data from that interval. The continuous sets of RDDs are collected into a DStream. For example, if the batch interval is one second long, your DStream emits a batch every second containing one RDD with all the data ingested during that second. When processing the DStream, the temperature event appears in one of these batches. A Spark Streaming application processes the batches that contain the events and ultimately acts on the data stored in each RDD.

> [!div class="mx-imgBorder"]
> ![spark streaming processing batches](./media/migrate-storm-to-spark/spark-streaming-batches.png)

For details on the different transformations available with Spark Streaming, see [Transformations on DStreams](https://spark.apache.org/docs/latest/streaming-programming-guide.html#transformations-on-dstreams).

## <a name="spark-structured-streaming"></a>Spark Structured Streaming

Spark Structured Streaming represents a stream of data as a table that is unbounded in depth. The table continues to grow as new data arrives. This input table is continuously processed by a long-running query, and the results are sent to an output table.

In Structured Streaming, data arriving at the system is immediately ingested into an input table. You write queries (using the DataFrame and Dataset APIs) that perform operations against this input table. The query output yields a *results table*, which contains the results of your query. You can draw data from the results table for an external data store, such as a relational database.

The timing of when data is processed from the input table is controlled by the trigger interval. By default, the trigger interval is zero, so Structured Streaming tries to process the data as soon as it arrives. In practice, this means that as soon as Structured Streaming is done processing the previous query run, it starts another processing run against any newly received data. You can configure the trigger to run at an interval, so that the streaming data is processed in time-based batches.

> [!div class="mx-imgBorder"]
> ![processing of data in structured streaming](./media/migrate-storm-to-spark/structured-streaming-data-processing.png)

> [!div class="mx-imgBorder"]
> ![programming model for structured streaming](./media/migrate-storm-to-spark/structured-streaming-model.png)

## <a name="general-migration-flow"></a>General migration flow

The recommended migration flow from Storm to Spark assumes the following initial architecture:

* Kafka is used as the streaming data source.
* Kafka and Storm are deployed on the same virtual network.
* The data processed by Storm is written to a data sink, such as Azure Storage or Azure Data Lake Storage Gen2.

> [!div class="mx-imgBorder"]
> ![diagram of presumed current environment](./media/migrate-storm-to-spark/presumed-current-environment.png)

To migrate your application from Storm to one of the Spark streaming APIs, do the following:

1. **Deploy a new cluster.** Deploy a new HDInsight 4.0 Spark cluster on the same virtual network, then deploy your Spark Streaming or Spark Structured Streaming application on it and test it thoroughly.

    > [!div class="mx-imgBorder"]
    > ![new spark deployment in HDInsight](./media/migrate-storm-to-spark/new-spark-deployment.png)

1. **Stop consuming on the old Storm cluster.** In the existing Storm cluster, stop consuming data from the streaming data source and wait for the data to finish writing to the target sink.

    > [!div class="mx-imgBorder"]
    > ![stop consuming on current cluster](./media/migrate-storm-to-spark/stop-consuming-current-cluster.png)

1. **Start consuming on the new Spark cluster.** Start streaming data from the newly deployed HDInsight 4.0 Spark cluster. At this point, consumption takes over from the current Kafka offset.

    > [!div class="mx-imgBorder"]
    > ![start consuming on new cluster](./media/migrate-storm-to-spark/start-consuming-new-cluster.png)

1. **Remove the old cluster as needed.** Once the switch is complete and working properly, remove the old HDInsight 3.6 Storm cluster as required.

    > [!div class="mx-imgBorder"]
    > ![remove old HDInsight clusters as needed](./media/migrate-storm-to-spark/remove-old-clusters1.png)

## <a name="next-steps"></a>Next steps

For more information about Storm, Spark Streaming, and Spark Structured Streaming, see the following documents:

* [Overview of Apache Spark Streaming](../spark/apache-spark-streaming-overview.md)
* [Overview of Apache Spark Structured Streaming](../spark/apache-spark-structured-streaming-overview.md)
# Query feature

1. Lets developers quickly search data in databases across all environments
2. Supports LIMIT rewriting
3. Supports user- or group-level permission configuration
4. Supports database- or table-level permission assignment
5. Supports security auditing

#### Configure the read-only account

`vim yasql/config.py`

```python
# Used by developers to query databases
# This account only needs read-only privileges:
#   create user 'yasql_ro'@'%' identified by '1234.com'
#   grant select on *.* to 'yasql_ro'@'%';
# Change the username and password to your own; do not use the defaults
QUERY_USER = {
    'user': 'yasql_ro',
    'password': '1234.com'
}
```

Change the user and password in the configuration file to your own values, save, and restart the service with supervisorctl.

#### Create the read-only account

Create the account configured above on every target database you want to query, and grant it read-only (SELECT) privileges.

> Make sure the current host can reach the remote databases.
> Log location: yasql/logs/celery.log

#### Configure hosts

Log in to the admin backend: {your domain_name}/admin/, then go to: SQL ticket configuration -> DB host configuration -> Add DB host configuration -> enter the remote database instance details.

> Purpose: SQL query

#### Configure the scheduled collection task

Log in to the admin backend: {your domain_name}/admin/, then go to: PERIODIC TASKS -> Periodic tasks -> Add PERIODIC TASK.

Add a new task:

- name: DMS-同步库表信息
- Task (registered): sqlquery.tasks.sqlquery_sync_schemas_tables
- Enabled: checked
- Interval Schedule: 10 minutes (choose as you like; a minute-level interval such as 10 minutes is recommended, and the more cluster hosts you have, the larger it should be)

Click Save.

#### Trigger the collection task manually

If you configured an interval of 10 minutes, the sync runs every 10 minutes. To trigger collection right away, you can run the task by hand.

Log in to the admin backend: {your domain_name}/admin/, then go to: PERIODIC TASKS -> Periodic tasks.

Find the task name you just added, tick the checkbox in front of it, click: Action, choose: Run Selected tasks, and click: Go.

#### Check the sync result

After the scheduled task runs, it automatically collects the relevant information into a table, which you can inspect in the backend.

Log in to the admin backend: {your domain_name}/admin/, then go to: SQLQUERY -> DB query tables.

If the schema and table information of your target databases is there, the sync succeeded.

If not, check the log for problems such as an unreachable network or a wrong password. Log: logs/celery.log
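The LIMIT rewriting mentioned in the feature list is not shown in this document. As an illustration only -- the function name and the 100-row cap below are assumptions, not YaSQL's actual implementation -- a minimal rewriter might append a cap to SELECT statements that lack one, and clamp an existing LIMIT that exceeds it:

```python
import re

def enforce_limit(sql, max_rows=100):
    """Append a LIMIT clause to a SELECT statement that has none,
    and clamp an existing trailing LIMIT that exceeds the cap.
    (Hypothetical helper; not YaSQL's real rewriter.)"""
    stmt = sql.strip().rstrip(";")
    if not stmt.lower().startswith("select"):
        return stmt
    m = re.search(r"\blimit\s+(\d+)\s*$", stmt, re.IGNORECASE)
    if m is None:
        return f"{stmt} LIMIT {max_rows}"
    if int(m.group(1)) > max_rows:
        return stmt[:m.start()] + f"LIMIT {max_rows}"
    return stmt

print(enforce_limit("SELECT * FROM users"))             # SELECT * FROM users LIMIT 100
print(enforce_limit("SELECT * FROM users LIMIT 5000"))  # SELECT * FROM users LIMIT 100
```

A production rewriter would need a real SQL parser to handle subqueries, comments, and `LIMIT offset, count` forms; the regex here only covers the simple trailing-LIMIT case.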
---
title: MySQL_backup
layout: post
category: storage
author: 夏泽民
---

The three most commonly used MySQL backup tools are mysqldump, Xtrabackup (the innobackupex tool), and LVM snapshots (lvm-snapshot).

https://github.com/zonezoen/MySQL_backup

Full backup

    /usr/bin/mysqldump -uroot -p123456 --lock-all-tables --flush-logs test > /home/backup.sql

This takes a full backup of the test database. The MySQL username is root and the password is 123456; the backup file path is /home (you can of course change this path as you like) and the backup file name is backup.sql.

The --flush-logs option starts a new log file to record subsequent logs; the --lock-all-tables option locks all databases.

<!-- more -->

Restoring a full backup

    mysql -h localhost -uroot -p123456 < bakdup.sql

or

    mysql> source /path/backup/bakdup.sql

Scheduled backups

Enter the following command to open the crontab editor:

    crontab -e

Add a line like the following, which runs the backup script once a minute:

    * * * * * sh /usr/your/path/mysqlBackup.sh

Every five minutes:

    */5 * * * * sh /usr/your/path/mysqlBackup.sh

Every hour:

    0 * * * * sh /usr/your/path/mysqlBackup.sh

Every day:

    0 0 * * * sh /usr/your/path/mysqlBackup.sh

Every week:

    0 0 * * 0 sh /usr/your/path/mysqlBackup.sh

Every month:

    0 0 1 * * sh /usr/your/path/mysqlBackup.sh

Every year:

    0 0 1 1 * sh /usr/your/path/mysqlBackup.sh

Incremental backup

Before taking incremental backups, check the configuration file to see whether log_bin is enabled, because incremental backup relies on the binary log. In the MySQL command line, enter:

    show variables like '%log_bin%';

Output like the following means it is not yet enabled:

    mysql> show variables like '%log_bin%';
    +---------------------------------+-------+
    | Variable_name                   | Value |
    +---------------------------------+-------+
    | log_bin                         | OFF   |
    | log_bin_basename                |       |
    | log_bin_index                   |       |
    | log_bin_trust_function_creators | OFF   |
    | log_bin_use_v1_row_events       | OFF   |
    | sql_log_bin                     | ON    |
    +---------------------------------+-------+

Edit the MySQL configuration (vim /etc/mysql/mysql.conf.d/mysqld.cnf) to the following:

    # Copyright (c) 2014, 2016, Oracle and/or its affiliates. All rights reserved.
    #
    # This program is free software; you can redistribute it and/or modify
    # it under the terms of the GNU General Public License as published by
    # the Free Software Foundation; version 2 of the License.
    #
    # This program is distributed in the hope that it will be useful,
    # but WITHOUT ANY WARRANTY; without even the implied warranty of
    # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
    # GNU General Public License for more details.
    #
    # You should have received a copy of the GNU General Public License
    # along with this program; if not, write to the Free Software
    # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
    #
    # The MySQL Server configuration file.
    #
    # For explanations see
    # http://dev.mysql.com/doc/mysql/en/server-system-variables.html

    [mysqld]
    pid-file    = /var/run/mysqld/mysqld.pid
    socket      = /var/run/mysqld/mysqld.sock
    datadir     = /var/lib/mysql
    #log-error  = /var/log/mysql/error.log
    # By default we only accept connections from localhost
    #bind-address   = 127.0.0.1
    # Disabling symbolic-links is recommended to prevent assorted security risks
    symbolic-links=0
    # binlog settings -- the key to enabling incremental backup
    log-bin=/var/lib/mysql/mysql-bin
    server-id=123454

After the change, restart the MySQL service and enter:

    show variables like '%log_bin%';

The status is now:

    mysql> show variables like '%log_bin%';
    +---------------------------------+--------------------------------+
    | Variable_name                   | Value                          |
    +---------------------------------+--------------------------------+
    | log_bin                         | ON                             |
    | log_bin_basename                | /var/lib/mysql/mysql-bin       |
    | log_bin_index                   | /var/lib/mysql/mysql-bin.index |
    | log_bin_trust_function_creators | OFF                            |
    | log_bin_use_v1_row_events       | OFF                            |
    | sql_log_bin                     | ON                             |
    +---------------------------------+--------------------------------+

With the preparation done, we can start on incremental backup itself.

Check which mysql-bin.000* log file is currently in use:

    show master status;

    mysql> show master status;
    +------------------+----------+--------------+------------------+-------------------+
    | File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
    +------------------+----------+--------------+------------------+-------------------+
    | mysql-bin.000015 |      610 |              |                  |                   |
    +------------------+----------+--------------+------------------+-------------------+

The file currently being written is mysql-bin.000015.

The database currently contains the following data:

    mysql> select * from users;
    +-------+------+----+
    | name  | sex  | id |
    +-------+------+----+
    | zone  | 0    | 1  |
    | zone1 | 1    | 2  |
    | zone2 | 0    | 3  |
    +-------+------+----+

Insert a row:

    insert into `zone`.`users` ( `name`, `sex`, `id`) values ( 'zone3', '0', '4');

Check the result:

    mysql> select * from users;
    +-------+------+----+
    | name  | sex  | id |
    +-------+------+----+
    | zone  | 0    | 1  |
    | zone1 | 1    | 2  |
    | zone2 | 0    | 3  |
    | zone3 | 0    | 4  |
    +-------+------+----+

Now run the following command to switch to a new log file:

    mysqladmin -uroot -p123456 flush-logs

The log file rolls from mysql-bin.000015 to mysql-bin.000016, and mysql-bin.000015 now holds the log of the insert we just ran. The effect:

    mysql> show master status;
    +------------------+----------+--------------+------------------+-------------------+
    | File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
    +------------------+----------+--------------+------------------+-------------------+
    | mysql-bin.000016 |      154 |              |                  |                   |
    +------------------+----------+--------------+------------------+-------------------+

At this point the incremental backup is in fact complete.

Restoring an incremental backup

Now delete the row we just inserted:

    delete from `zone`.`users` where `id`='4'

    mysql> select * from users;
    +-------+------+----+
    | name  | sex  | id |
    +-------+------+----+
    | zone  | 0    | 1  |
    | zone1 | 1    | 2  |
    | zone2 | 0    | 3  |
    +-------+------+----+

Now for the key step -- restore the data from mysql-bin.000015:

    mysqlbinlog /var/lib/mysql/mysql-bin.000015 | mysql -uroot -p123456 zone;

This command specifies the mysql-bin file to restore from, the username root, the password 123456, and the database zone. The effect:

    mysql> select * from users;
    +-------+------+----+
    | name  | sex  | id |
    +-------+------+----+
    | zone  | 0    | 1  |
    | zone1 | 1    | 2  |
    | zone2 | 0    | 3  |
    | zone3 | 0    | 4  |
    +-------+------+----+

That is the whole incremental backup workflow. As a script, it looks like this:

    #!/bin/bash
    # Create the following directories before first use
    backupDir=/usr/local/work/backup/daily
    # Target directory for copies of mysql-bin.00000* during incremental backup; create it by hand in advance
    mysqlDir=/var/lib/mysql
    # MySQL data directory
    logFile=/usr/local/work/backup/bak.log
    BinFile=/var/lib/mysql/mysql-bin.index
    # Path of MySQL's index file, in the data directory

    # Produce a new mysql-bin.00000* file
    mysqladmin -uroot -p123456 flush-logs

    # wc -l counts lines
    # awk reads the file line by line, splits each line on the default whitespace separator, and processes the pieces
    Counter=`wc -l $BinFile | awk '{print $1}'`
    NextNum=0

    # This for loop compares $Counter and $NextNum to decide whether a file exists or is the latest one
    for file in `cat $BinFile`
    do
        base=`basename $file`
        # basename extracts the mysql-bin.00000* file name, stripping the leading ./
        echo $base
        NextNum=`expr $NextNum + 1`
        if [ $NextNum -eq $Counter ]
        then
            echo $base skip! >> $logFile
        else
            dest=$backupDir/$base
            # test -e checks whether the target file exists; if it does, write "exist!" to $logFile
            if(test -e $dest)
            then
                echo $base exist! >> $logFile
            else
                cp $mysqlDir/$base $backupDir
                echo $base copying >> $logFile
            fi
        fi
    done
    echo `date +"%Y-%m-%d %H:%M:%S"` Bakup succ! >> $logFile
    #NODE_ENV=$backUpFolder@$backUpFileName /root/node/v8.11.3/bin/node /usr/local/upload.js

I. The binary log (binlog) is usually a key resource for backups, so before discussing backup schemes, a summary of the binlog.

1. What the binlog records
1) Any operation that changes the MySQL server.
2) Replication depends on this log.
3) Slave servers complete master-slave replication by replaying the master's binary log, saving it in the relay log before execution.
4) Slave servers can usually disable the binary log to improve performance.

2. How binlog files appear on disk
1) By default the installation directory contains binary files such as mysql-bin.00001, mysql-bin.00002 (the file name follows the setting after the log-bin parameter in my.cnf).
2) mysql-bin.index records the list of binary files managed by MySQL.
3) If you need to delete binary logs, never delete the binary files directly, or MySQL's bookkeeping will become inconsistent.

3. MySQL commands for inspecting binlog files

1) SHOW MASTER STATUS; shows the binary file currently in use

    MariaDB [(none)]> SHOW MASTER STATUS ;
    +------------------+----------+--------------+------------------+
    | File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
    +------------------+----------+--------------+------------------+
    | mysql-bin.000003 |      245 |              |                  |
    +------------------+----------+--------------+------------------+

2) FLUSH LOGS; rolls the binary log manually

    MariaDB [(none)]> FLUSH LOGS;
    MariaDB [(none)]> SHOW MASTER STATUS ;
    +------------------+----------+--------------+------------------+
    | File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
    +------------------+----------+--------------+------------------+
    | mysql-bin.000004 |      245 |              |                  |
    +------------------+----------+--------------+------------------+

After the roll, MySQL creates a new log, mysql-bin.000004.

3) SHOW BINARY LOGS shows the binary log files that have been used

    MariaDB [(none)]> SHOW BINARY LOGS ;
    +------------------+-----------+
    | Log_name         | File_size |
    +------------------+-----------+
    | mysql-bin.000001 |     30373 |
    | mysql-bin.000002 |   1038814 |
    | mysql-bin.000003 |       288 |
    | mysql-bin.000004 |       245 |
    +------------------+-----------+

4) SHOW BINLOG EVENTS views a binary file as a table

Command format: SHOW BINLOG EVENTS [IN 'log_name'] [FROM pos] [LIMIT [offset,] row_count]

    MariaDB [(none)]> SHOW BINLOG EVENTS IN 'mysql-bin.000001' \G;
    *************************** 99. row ***************************
       Log_name: mysql-bin.000001
            Pos: 30225
     Event_type: Query
      Server_id: 1
    End_log_pos: 30354
           Info: use `mysql`; DROP TEMPORARY TABLE `tmp_proxies_priv` /* generated by server */

4. mysqlbinlog, the tool for reading MySQL binary files

Command format: mysqlbinlog [options] log-files

There are four range options:

    --start-datetime
    --stop-datetime
    --start-position
    --stop-position

    [root@test-huanqiu ~]# mysqlbinlog --start-position 30225 --stop-position 30254 mysql-bin.000001

An excerpt of the result:

    # at 30225
    #151130 12:43:35 server id 1 end_log_pos 30354 Query thread_id=1 exec_time=0 error_code=0
    use `mysql`/*!*/;
    SET TIMESTAMP=1448858615/*!*/;
    SET @@session.pseudo_thread_id=1/*!*/

Using the second line of the excerpt, the binlog content breaks down as:
1) Timestamp: 151130 12:43:35
2) Server ID: server id 1 -- marks the server that produced the log, mainly used in dual-master setups where each server is the other's slave, to ensure the binary logs are not replicated in a loop
3) Record type: Query
4) Thread number: thread_id=1
5) Difference between the statement's timestamp and the time it was written to the binary log: exec_time=0
6) Event content
7) Event position: #at 30225
8) Error code: error_code=0
9) Event end position: end_log_pos, i.e. where the next event starts

5. Binary log formats

Defined by binlog_format={statement|row|mixed}:
1) statement: statement-based, recording the statements that produced the data. The drawback is that a statement whose values come from a function may produce different results at different execution times, for example: INSERT INTO t1 VALUE (CURRENT_DATE());
2) row: row-based. The drawback is that the volume of data can sometimes be large.
3) mixed: mixed mode; MySQL decides by itself when to use statement mode and when to use row mode.

6. Summary of binlog-related parameters
1) log_bin = {ON|OFF}; it can also be a file path, customizing the binlog file name ("log_bin=" and "log-bin=" both work). It mainly controls where the binlog is stored globally and whether the binlog feature is enabled. For example, with log_bin=mysql-bin or log-bin=mysql-bin, the binlog is placed in the same directory as the MySQL data by default.
2) log_bin_trust_function_creators: controls whether stored function creators are trusted to create functions that are safe for binary logging
3) sql_log_bin = {ON|OFF}: whether to disable the binlog at the session level; if disabled, operations in the current session are not recorded
4) sync_binlog: whether to sync transactional operations to the binary log immediately
5) binlog_format = {statement|row|mixed}: the binary log format, covered above
6) max_binlog_cache_size: size of the binary log buffer, used only for transactional statements
7) max_binlog_stmt_cache_size: size of the statement buffer shared by transactional and non-transactional statements
8) max_binlog_size: size limit of a binary log file; the log rolls over when the limit is exceeded
9) Deleting binary logs; command format: PURGE { BINARY | MASTER } LOGS { TO 'log_name' | BEFORE datetime_expr }

    MariaDB> PURGE BINARY LOGS TO 'mysql-bin.010';
    MariaDB> PURGE BINARY LOGS BEFORE '2016-11-02 22:46:26';

Advice: never put the binary log on the same device as the data files; you can stream the binlog to a remote device in real time so that data can be recovered after a machine failure.

II. Binlog backup and restore

1. Why back up:
(1) Disaster recovery
(2) Auditing -- what the database looked like at some point in the past
(3) Testing

2. The purpose of backups:
(1) To restore data
(2) After backing up, to run restore tests periodically

3. Backup types:

(1) By whether the server stays online during the backup
1) Cold backup: the server is offline; neither reads nor writes can proceed
2) Warm backup: a global shared lock is applied; reads only, no writes
3) Hot backup: the database stays online; reads and writes continue as normal

(2) By the data set backed up
1) Full backup
2) Partial backup

(3) By the backup interface
1) Physical backup: copy the data files directly and archive them.
Characteristics: no extra tools needed, just archiving commands, but poor cross-platform portability; suitable when the data exceeds a few dozen GB.
2) Logical backup: extract the data and save it as SQL scripts.
Characteristics: editable with a text editor; easy to import by replaying the SQL statements; but logical restores are slow, the files take more space, floating-point precision cannot be guaranteed, and indexes must be rebuilt after the restore.

(4) By whether the whole data set or only changes are backed up
1) Full backup
2) Incremental backup: backs up only the data changed since the last backup, which saves space; each backup is small and fast, but restores are slow
3) Differential backup: backs up the data changed since the last full backup, so it grows over time but is easier to restore from; backups are larger and slower, restores faster. For a very large database, consider a master-slave setup and back up the slave.

(5) Factors to weigh in a backup strategy:
backup method, backup timing, backup cost, lock duration, backup length, performance overhead, restore cost, restore time, and how much data loss you can tolerate

(6) What to back up:
1) The data in the database
2) Configuration files
3) Code inside MySQL: stored procedures, stored functions, triggers
4) OS-related configuration, such as the backup-strategy scripts in crontab
5) In a replication setup: the replication-related information
6) The binary log files, backed up regularly -- if the binary files are ever found to be damaged, take a full backup immediately

(7) The three most commonly used MySQL backup tools:
1) mysqldump: usually for backing up small data sets
   InnoDB: hot or warm backup; MyISAM, Aria: warm backup
   single-threaded, so backup and restore are slow
2) Xtrabackup (usually via the innobackupex tool): for backing up large MySQL data sets
   InnoDB hot backup with incremental support; MyISAM warm backup, full backups only, no incrementals
   a physical backup, hence fast
3) lvm-snapshot: close to a hot backup, because the global lock is requested only for as long as it takes to create the snapshot, and released once the snapshot exists;
   use cp, tar, etc. for the physical copy;
   backup and restore are fast;
   incremental backup is hard to achieve, and acquiring the global lock can take a while, especially on a busy server

Beyond these, a few other backup tools:
--> mysqldumper: a multi-threaded mysqldump
--> SELECT clause INTO OUTFILE '/path/to/somefile' and LOAD DATA INFILE '/path/from/somefile':
    a partial backup tool that backs up only the table data, not the schema definitions;
    a logical backup tool, faster than mysqldump because it skips the table format information
--> mysqlhotcopy: close to a cold backup, essentially unused

Basic use of mysqldump

1. mysqldump [OPTIONS] database [tables...]

When restoring, the database must already exist; create it by hand if it does not.

--all-databases: back up all databases
--databases db1 db2 ...: back up the listed databases; with this option you do not need to create the databases by hand when restoring. Equivalent to -B db1 db2 db3 ...
--lock-all-tables: lock all tables before backing up; a warm backup for MyISAM, InnoDB and Aria
--lock-table: lock only the table being backed up; not recommended, because if other tables are modified, the backed-up tables end up out of sync with them
--single-transaction: a hot backup for the InnoDB storage engine. It starts one very large transaction; MVCC guarantees a consistent view of the tables inside the transaction. It locks automatically, so do not also pass --lock-table.

Backing up code:
--events: back up event scheduler code
--routines: back up stored procedures and stored functions
--triggers: back up triggers

Rolling the log at backup time:
--flush-logs: roll the log before the backup, after the lock is acquired, so that everything after the backup point can be replayed later

Replication position marker (for master-slave setups; in effect it marks a point in time):
--master-data=[0|1|2]
0: do not record
1: record as a CHANGE MASTER statement
2: record as a commented-out CHANGE MASTER statement

2. The rough flow of a mysqldump backup:
1) Request locks: --lock-all-tables, or --single-transaction for an InnoDB hot backup
2) Roll the log: --flush-logs
3) Select the databases to back up: --databases
4) Record the binary log file and position: --master-data=

    FLUSH TABLES WITH READ LOCK;

3. Restore:
The restore does not need to be written to the binary log. Recommendation: disable the binary log and disconnect other users first.

4. Backup strategy based on mysqldump:
Backup: mysqldump plus the binary log files ("mysqldump >").
A full backup every Sunday, rolling the log at the same time as the backup; backups of the binary log from Monday to Saturday.
Restore: "mysql <", or run "source backup.sql;" directly in the MySQL database. If there are many SQL statements, put them in a file with a .sql suffix and run "source file.sql;" in MySQL.
A full restore is the full backup plus all events in the binary log files up to the desired moment.

5. Worked example: see "Mysql备份系列(2)--mysqldump备份(全量+增量)方案操作记录".

lvm-snapshot: backup based on LVM snapshots

1. About snapshots:
1) The transaction log and the data files must be on the same volume.
2) A freshly created snapshot volume contains no data of its own; all data comes from the original volume.
3) Once data on the original volume is modified, the modified data is copied into the snapshot volume; reads are then served partly from the snapshot volume and partly from the original volume.
4) If, while the snapshot is in use, the amount of modified data exceeds the snapshot volume's capacity, the snapshot volume fails.
5) The snapshot volume is not itself a backup; it only provides a time-consistent directory to copy from.

2. Snapshot-based backup is almost a hot backup:
1) Before creating the snapshot volume, request MySQL's global lock; release it once the snapshot has been created.
2) With the InnoDB engine, after FLUSH TABLES some data may still live in the transaction log rather than in the data files, so a restore needs both the transaction log and the data files.
After the lock is released, the transaction log contents are synced into the data files, so the backup is not exactly the state at the moment the lock was released: some unfinished transactions have since completed, yet appear rolled back in the backup because they were incomplete at snapshot time. The binary log is therefore needed to roll forward a little.

3. Points to note for snapshot-based backup:
1) The transaction log and the data files must be on the same volume.
2) Before creating the snapshot volume, request MySQL's global lock; release it after the snapshot is created.
3) Once the global lock is acquired, roll the log and record the binary log file and position (by hand).

4. Why is snapshot-based MySQL backup good?

For the following reasons:
1) Almost a hot backup. In most cases you can run the backup while the application keeps running; no shutdown, just a read-only (or near read-only) restriction.
2) Supports all storage engines backed by local disk: MyISAM, InnoDB, BDB, and also Solid, PrimeXT, and Falcon.
3) Fast backup -- it only copies files in binary format, unbeatable for speed.
4) Low overhead -- it is just a file copy, so the impact on the server is minimal.
5) Easy to preserve integrity. Want to compress the backups? Ship them to tape, FTP, or network backup software? Trivial, because you only have to copy files.
6) Fast restore: as fast as a standard MySQL crash recovery or copying the data back, possibly faster -- and it will get faster still.
7) Free -- no extra commercial software is needed, just the InnoDB hot backup tooling to run the backup.

Drawbacks of snapshot backups:
1) Snapshot support is required -- obviously.
2) Superuser (root) access is required. In some organizations, DBAs and system administrators sit in different departments with different privileges.
3) Downtime is unpredictable. This method is usually called a hot backup, but nobody can guarantee it is -- FLUSH TABLES WITH READ LOCK may take a very long time to complete.
4) Data on multiple volumes is a problem. If your logs live on a separate device, or your database spans multiple volumes, this gets awkward, because you cannot get a consistent snapshot of the whole database -- though some systems can take multi-volume snapshots automatically.

5. The rough backup and restore steps

Backup:

1) Request the global lock and roll the log:

    mysql> FLUSH TABLES WITH READ LOCK;
    mysql> FLUSH LOGS;

2) Record the binary log file and position (by hand):

    [root@test-huanqiu ~]# mysql -e 'show master status' > /path/to/orignal_volume

3) Create the snapshot volume:

    [root@test-huanqiu ~]# lvcreate -L -s -n -p r /path/to/some_lv

4) Release the global lock
5) Mount the snapshot volume and back it up
6) After the backup, remove the snapshot volume

Restore:
1) Keep the binary logs safe; extract all events after the backup into an SQL script
2) Restore the data, fix permissions and ownership, and start MySQL
3) Do the point-in-time recovery
4) In production, after any major restore, take a full backup immediately

Backup and restore example:

Environment: a volume group test_vg has been created in advance, with a logical volume mylv1 holding the MySQL data, mounted at /data/mysqldata.

Backup example:

1. Create a dedicated backup user and grant FLUSH LOGS and LOCK TABLES privileges:

    MariaDB > GRANT RELOAD,LOCK TABLES,SUPER ON *.* TO 'lvm'@'192.168.1.%' IDENTIFIED BY 'lvm';
    MariaDB > FLUSH PRIVILEGES;

2. Record the backup point:

    [root@test-huanqiu ~]# mysql -ulvm -h192.168.1.10 -plvm -e 'SHOW MASTER STATUS' > /tmp/backup_point.txt

3. Create and mount the snapshot volume:

    [root@test-huanqiu ~]# lvcreate -L 1G -s -n lvmbackup -p r /dev/test_vg/mylv1
    [root@test-huanqiu ~]# mount -t ext4 /dev/test_vg/lvmbackup /mnt/

4. Release the lock:

    [root@test-huanqiu ~]# mysql -ulvm -h192.168.1.10 -plvm -e 'UNLOCK TABLES'

Do some simulated writes:

    MariaDB [test]> create database testdb2

5. Copy the files:

    [root@test-huanqiu ~]# cp /data/mysqldata /tmp/backup_mysqldata -r

6. When the backup is done, unmount and remove the snapshot volume:

    [root@test-huanqiu ~]# umount /mnt
    [root@test-huanqiu ~]# lvremove /dev/test_vg/lvmbackup

Restore example: suppose the whole MySQL server crashes and the directories are all deleted

1.
数据文件复制回源目录 [root@test-huanqiu ~]# cp -r /tmp/backup_mysqldata/* /data/mysqldata/ MariaDB [test]> show databases ; +--------------------+ | Database | +--------------------+ | information_schema | | hellodb | | mysql | | mysqldata | | openstack | | performance_schema | | test | +--------------------+ 此时还没有testdb2, 因为这个是备份之后创建的,因此需要通过之前记录的二进制日志 2. 查看之前记录的记录点。向后还原 [root@test-huanqiu ~]# cat /tmp/backup_point.txt FilePositionBinlog_Do_DBBinlog_Ignore_DB mysql-bin.000001245 [root@test-huanqiu ~]# mysqlbinlog /data/binlog/mysql-bin.000001 --start-position 245 > tmp.sql MariaDB [test]> source /data/mysqldata/tmp.sql MariaDB [test]> show databases ; +--------------------+ | Database | +--------------------+ | information_schema | | hellodb | | mysql | | mysqldata | | openstack | | performance_schema | | test | | testdb2 | +--------------------+ 8 rows in set (0.00 sec) testdb2 已经被还原回来。 具体实例说明,参考:Mysql备份系列(4)--lvm-snapshot备份mysql数据(全量+增量)操作记录 使用Xtrabackup进行MySQL备份: 参考:Mysql备份系列(3)--innobackupex备份mysql大数据(全量+增量)操作记录 -------------------------------------------------------------------------------------- 关于备份和恢复的几点经验之谈 备份注意: 1. 将数据和备份放在不同的磁盘设备上;异机或异地备份存储较为理想; 2. 备份的数据应该周期性地进行还原测试; 3. 每次灾难恢复后都应该立即做一次完全备份; 4. 针对不同规模或级别的数据量,要定制好备份策略; 5. 二进制日志应该跟数据文件在不同磁盘上,并周期性地备份好二进制日志文件; 从备份中恢复应该遵循步骤: 1. 停止MySQL服务器; 2. 记录服务器的配置和文件权限; 3. 将数据从备份移到MySQL数据目录;其执行方式依赖于工具; 4. 改变配置和文件权限; 5. 以限制访问模式重启服务器;mysqld的--skip-networking选项可跳过网络功能; 方法:编辑my.cnf配置文件,添加如下项: skip-networking socket=/tmp/mysql-recovery.sock 6. 载入逻辑备份(如果有);而后检查和重放二进制日志; 7. 检查已经还原的数据; 8. 重新以完全访问模式重启服务器; 注释前面第5步中在my.cnf中添加的选项,并重启; 在日常运维工作中,对mysql数据库的备份是万分重要的,以防在数据库表丢失或损坏情况出现,可以及时恢复数据。 线上数据库备份场景: 每周日执行一次全量备份,然后每天下午1点执行MySQLdump增量备份. 
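The Sunday-full / daily-incremental schedule just described can be written down as a crontab sketch. This is a config fragment, not the original setup: the paths, times and the PASS placeholder are illustrative only.

```shell
# Full dump every Sunday at 13:00; --flush-logs starts a fresh binlog,
# so the week's binlogs line up with the full backup.
# (in crontab, % must be escaped as \%)
0 13 * * 0  mysqldump --single-transaction --flush-logs --master-data=2 -uroot -pPASS --all-databases > /backup/full_$(date +\%F).sql
# Monday-Saturday at 13:00: rotate the binlog, then archive the closed files.
0 13 * * 1-6  mysqladmin -uroot -pPASS flush-logs && cp /opt/Data/MySQL-bin.0* /backup/binlog/
```

The only invariant worth keeping from the scheme is that every rotation boundary (flush-logs) coincides with a backup, so any restore is "latest full dump + binlogs since".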
Here is the scheme in detail:
1. Configuring for mysqldump incremental backups
The precondition for incremental backups is that MySQL has binary logging enabled; add to my.cnf:
log-bin=/opt/Data/MySQL-bin
The string after "log-bin=" sets where the logs are written; it is generally recommended to put them on a different disk from the MySQL data directory.
-----------------------------------------------------------------------------------
mysqldump > exports data
mysql < imports data (or use the source command to import; switch to the target database first)
One detail to note: suppose mysqldump exports one database into a.sql, and mysql then imports it into a new, empty database.
If the new database name differs from the old one, the old name inside a.sql must be changed to the new name
before the mysql command can import the data (with the source command no edit to a.sql is needed).
-----------------------------------------------------------------------------------
2. mysqldump incremental backup
Assume the full backup runs on Sunday at 13:00; as written this suits the MyISAM storage engine.
[root@test-huanqiu ~]# mysqldump --lock-all-tables --flush-logs --master-data=2 -u root -p test > backup_sunday_1_PM.sql
For InnoDB, replace --lock-all-tables with --single-transaction.
--flush-logs closes the current log and starts a new log file;
the --master-data=2 option records the name of the new log file in the dumped SQL,
for later reference during a restore — the dump will contain, for example:
CHANGE MASTER TO MASTER_LOG_FILE='MySQL-bin.000002', MASTER_LOG_POS=106;
3. Further notes on mysqldump incremental backups:
Adding --delete-master-logs makes mysqldump purge the older logs to free space. But if the server is configured as a replication master, deleting the binary logs with mysqldump --delete-master-logs is dangerous, because a slave may not yet have fully processed their contents. In that situation PURGE MASTER LOGS is the safer choice.
Each day, run mysqladmin flush-logs on schedule to start a new log and close the previous one, then back up the finished log file — in the example above, starting with MySQL-bin.000002 in the data directory, ...
1. Restore the full backup
mysql -u root -p < backup_sunday_1_PM.sql
2. Restore the increments
mysqlbinlog MySQL-bin.000002 … | mysql -u root -p
Note that this replay is itself written to the binary log; with large volumes of data, consider disabling binary logging first.
--compatible=name
Tells mysqldump which database, or which older MySQL server version, the dump should be compatible with. Values include ansi, MySQL323, MySQL40, postgresql, oracle, mssql, db2, maxdb, no_key_options, no_tables_options, no_field_options, etc.; separate multiple values with commas. It does not guarantee full compatibility — only best effort.
--complete-insert, -c
Dump the data as complete INSERT statements that include the column names. Such statements can run into the max_allowed_packet limit and fail on import, so use this option with care; I would not recommend it.
--default-character-set=charset
The character set to use for the dump. If the tables are not in the default latin1 character set, this option must be given, otherwise the re-imported data will be garbled.
--disable-keys
Tells mysqldump to wrap the INSERT statements with /*!40000 ALTER TABLE table DISABLE KEYS */; and /*!40000 ALTER TABLE table ENABLE KEYS */;, which speeds up inserts greatly because the indexes are rebuilt only after all rows are in. MyISAM tables only.
--extended-insert = true|false
Multi-row INSERT syntax. It is enabled by default; if you do not want it, set it to false (i.e. use --skip-extended-insert).
--hex-blob
Dump binary string columns in hexadecimal. Mandatory whenever there is binary data. Affects the BINARY, VARBINARY and BLOB column types.
--lock-all-tables, -x
Lock all tables in all databases before the dump starts, to guarantee consistency. This is a global read lock, and it automatically turns off the --single-transaction and --lock-tables options.
--lock-tables
Like --lock-all-tables, but locks only the tables currently being dumped rather than every table in every database. Suitable for MyISAM tables only; for InnoDB tables use --single-transaction instead.
--no-create-info, -t
Dump only the data, without CREATE TABLE statements.
--no-data, -d
Dump no data at all — only the table structure.
mysqldump --no-data --databases mydatabase1 mydatabase2 mydatabase3 > test.dump
backs up only the structure; --databases names the databases on the host to dump.
--opt
A shortcut equivalent to adding --add-drop-tables --add-locking --create-option --disable-keys --extended-insert --lock-tables --quick --set-charset. It makes both the dump and the re-import fast. Enabled by default; disable with --skip-opt. Note that without --quick or --opt, mysqldump holds the entire result set in memory, which can be a problem when dumping large databases.
--quick, -q
Useful for dumping big tables: it forces mysqldump to stream rows from the server as they arrive instead of buffering the whole result set in memory.
--routines, -R
Dump stored procedures and user-defined functions.
--single-transaction
Issues a BEGIN before dumping. BEGIN blocks no application and guarantees a consistent view of the database for the dump. It only works for transactional tables such as InnoDB and BDB. Mutually exclusive with --lock-tables, since LOCK TABLES implicitly commits any pending transaction. Combine it with --quick for large tables.
--triggers
Dump triggers as well. Enabled by default; disable with --skip-triggers.
Cross-host backup
The following copies sourceDb on host1 to targetDb on host2, provided targetDb has already been created on host2:
-C enables compression for the transfer between the hosts
mysqldump --host=host1 --opt sourceDb | mysql --host=host2 -C targetDb
Scheduled backups with Linux cron
For example, to dump all databases on a host at 01:30 every day and gzip the dump:
30 1 * * * mysqldump -u root -pPASSWORD --all-databases | gzip > /mnt/disk2/database_`date '+%m-%d-%Y'`.sql.gz
A complete shell-script example backing up the database opspc:
[root@test-huanqiu ~]# vim /root/backup.sh
#!/bin/bash
echo "Begin backup mysql database"
mysqldump -u root -ppassword opspc > /home/backup/mysqlbackup-`date +%Y-%m-%d`.sql
echo "Your database backup successfully completed"
[root@test-huanqiu ~]# crontab -e
30 1 * * * /bin/bash -x /root/backup.sh > /dev/null 2>&1
mysqldump full backup + mysqlbinlog binary-log incremental backup
1) Restoring from a mysqldump file loses every update made after the backup point, so it must be combined with mysqlbinlog incremental backups of the binary log.
First make sure binary logging is enabled; my.cnf must contain:
[mysqld]
log-bin=mysql-bin
2) The mysqldump command must include --flush-logs so that a new binary log file is started:
mysqldump --single-transaction --flush-logs --master-data=2 > backup.sql
where --master-data=[0|1|2]
0: do not record
1: record as a CHANGE MASTER statement
2: record as a commented-out CHANGE MASTER statement
For concrete mysqldump full+incremental procedures, see these two write-ups:
Recovering data after an accidental database deletion
The MySQL binlog explained, and using it to recover data
--------------------------------------------------------------------------
Below are the mysqldump full and incremental backup scripts I have used myself.
Scenario:
1) The incremental backup runs Monday through Saturday at 03:00 and copies mysql-bin.00000* to a target directory;
2) The full backup runs every Sunday at 03:00, exports all databases with mysqldump, and deletes the mysql-bin.00000* files left over from the previous week; every backup run is logged to bak.log.
Implementation:
1) Full backup script (assuming the MySQL password is 123456; mind the command paths in the script):
[root@test-huanqiu ~]# vim /root/Mysql-FullyBak.sh
#!/bin/bash
# Program
# use mysqldump to Fully backup mysql data per week!
# History
# Path
BakDir=/home/mysql/backup
LogFile=/home/mysql/backup/bak.log
Date=`date +%Y%m%d`
Begin=`date +"%Y年%m月%d日 %H:%M:%S"`
cd $BakDir
DumpFile=$Date.sql
GZDumpFile=$Date.sql.tgz
/usr/local/mysql/bin/mysqldump -uroot -p123456 --quick --events --all-databases --flush-logs --delete-master-logs --single-transaction > $DumpFile
/bin/tar -zvcf $GZDumpFile $DumpFile
/bin/rm $DumpFile
Last=`date +"%Y年%m月%d日 %H:%M:%S"`
echo 开始:$Begin 结束:$Last $GZDumpFile succ >> $LogFile
cd $BakDir/daily
/bin/rm -f *
2) Incremental backup script (the MySQL data directory here is /home/mysql/data; adjust to your own setup):
[root@test-huanqiu ~]# vim /root/Mysql-DailyBak.sh
#!/bin/bash
# Program
# use cp to backup mysql data everyday!
# History
# Path
BakDir=/home/mysql/backup/daily    # target directory for the copied mysql-bin.00000* files; create it by hand beforehand
BinDir=/home/mysql/data            # the MySQL data directory
LogFile=/home/mysql/backup/bak.log
BinFile=/home/mysql/data/mysql-bin.index    # path of the binlog index file, kept in the data directory
/usr/local/mysql/bin/mysqladmin -uroot -p123456 flush-logs    # starts a fresh mysql-bin.00000* file
Counter=`wc -l $BinFile |awk '{print $1}'`
NextNum=0
# this for loop compares $Counter and $NextNum to decide whether a file already exists or is the current (newest) one
for file in `cat $BinFile`
do
base=`basename $file`    # basename strips the leading ./ from names like ./mysql-bin.000005
NextNum=`expr $NextNum + 1`
if [ $NextNum -eq $Counter ]
then
echo $base skip! >> $LogFile
else
dest=$BakDir/$base
if(test -e $dest)    # test -e checks whether the target file already exists; if so, log "exist!" to $LogFile
then
echo $base exist! >> $LogFile
else
cp $BinDir/$base $BakDir
echo $base copying >> $LogFile
fi
fi
done
echo `date +"%Y年%m月%d日 %H:%M:%S"` $Next Bakup succ! >> $LogFile
3) crontab entries running the backup scripts — the incremental script on weekdays, the full script on Sunday:
[root@test-huanqiu ~]# crontab -e
# full backup every Sunday at 03:00
0 3 * * 0 /bin/bash -x /root/Mysql-FullyBak.sh >/dev/null 2>&1
# incremental backup Monday through Saturday at 03:00
0 3 * * 1-6 /bin/bash -x /root/Mysql-DailyBak.sh >/dev/null 2>&1
4) Run both scripts by hand to check the result
[root@test-huanqiu backup]# pwd
/home/mysql/backup
[root@test-huanqiu backup]# mkdir daily
[root@test-huanqiu backup]# ll
total 4
drwxr-xr-x. 2 root root 4096 Nov 29 11:29 daily
[root@test-huanqiu backup]# ll daily/
total 0
First the incremental script:
[root@test-huanqiu backup]# sh /root/Mysql-DailyBak.sh
[root@test-huanqiu backup]# ll
total 8
-rw-r--r--. 1 root root 121 Nov 29 11:29 bak.log
drwxr-xr-x. 2 root root 4096 Nov 29 11:29 daily
[root@test-huanqiu backup]# ll daily/
total 8
-rw-r-----. 1 root root 152 Nov 29 11:29 mysql-binlog.000030
-rw-r-----. 1 root root 152 Nov 29 11:29 mysql-binlog.000031
[root@test-huanqiu backup]# cat bak.log
mysql-binlog.000030 copying
mysql-binlog.000031 copying
mysql-binlog.000032 skip!
2016年11月29日 11:29:32 Bakup succ!
Then the full backup script:
[root@test-huanqiu backup]# sh /root/Mysql-FullyBak.sh
20161129.sql
[root@test-huanqiu backup]# ll
total 152
-rw-r--r--. 1 root root 145742 Nov 29 11:30 20161129.sql.tgz
-rw-r--r--. 1 root root 211 Nov 29 11:30 bak.log
drwxr-xr-x. 2 root root 4096 Nov 29 11:30 daily
[root@test-huanqiu backup]# ll daily/
total 0
[root@test-huanqiu backup]# cat bak.log
mysql-binlog.000030 copying
mysql-binlog.000031 copying
mysql-binlog.000032 skip!
2016年11月29日 11:29:32 Bakup succ!
开始:2016年11月29日 11:30:38 结束:2016年11月29日 11:30:38 20161129.sql.tgz succ
In day-to-day operations, backing up and restoring large datasets has always been the hard part. The traditional tool for MySQL backup and restore is mysqldump; here I recommend another one, innobackupex. Both can hot-back-up MySQL: for InnoDB, mysqldump's single-transaction option opens a transaction and relies on InnoDB's MVCC, so no tables are locked. mysqldump makes logical backups — the output is SQL statements — so backup and restore are slow, but easy to reason about. Once the data exceeds roughly 10 GB, a mysqldump export becomes painfully slow, and innobackupex is far faster. What follows uses it for full and incremental backups; it records my own hands-on practice, so corrections are welcome.
I. What innobackupex is
Xtrabackup is open-source software developed by Percona, with its driver script written in Perl, that backs up and restores MySQL very quickly and supports online hot backup (reads and writes continue during the backup). It drives the xtrabackup and tar4ibd tools to implement the less performance-critical tasks and the backup logic, and can be considered an open-source replacement for the InnoDB hot-backup tool ibbackup.
Xtrabackup ships two tools:
1) xtrabackup: backs up only InnoDB and XtraDB tables; it cannot back up other table types.
2) innobackupex: a Perl wrapper around xtrabackup that can handle MyISAM (with table locks) and InnoDB, and mixed-engine backups. Its main purpose is backing up InnoDB and MyISAM tables together, though MyISAM requires a read lock. It also adds useful options — slave-info, for example, records the information a restored backup needs in order to act as a slave, which makes it easy to rebuild a slave from a backup.
innobackupex is more capable than xtrabackup: it integrates xtrabackup with extra functionality, supporting not only full backup/restore but also time-based incremental backup and restore, and it covers both InnoDB and MyISAM.
What Xtrabackup can do:
1) Online (hot) backup of a whole instance's InnoDB and XtraDB tables
2) Incremental backups on top of the last full xtrabackup backup (InnoDB only)
3) Streaming backups, which can be written directly to a remote machine (useful when local disk is short)
MySQL's own tools do not support true incremental backup — binary-log replay is point-in-time recovery, not an incremental backup. Xtrabackup supports incremental backups of InnoDB, and works like this:
1) A full backup is taken first, recording the checkpoint LSN (Log Sequence Number) at that moment.
2) During an incremental backup, each page in the tablespaces is compared: if its LSN is newer than the LSN recorded by the previous backup, the page is copied, and the current checkpoint LSN is recorded. Concretely, xtrabackup finds and records the last checkpoint ("last checkpoint LSN") in the logfile, then starts copying the InnoDB logfile from that LSN into xtrabackup_logfile; next it copies all the .ibd data files; only after all data files are copied does it stop copying the logfile. Because the logfile records every data modification, even data files modified during the backup can be made consistent at restore time by replaying xtrabackup_logfile.
How innobackupex backs MySQL up:
innobackupex first calls xtrabackup to back up the InnoDB data files; when xtrabackup finishes, innobackupex sees the xtrabackup_suspended file, then executes FLUSH TABLES WITH READ LOCK and backs up the remaining files.
How innobackupex restores:
innobackupex first reads my.cnf and checks that the directories behind the variables datadir, innodb_data_home_dir, innodb_data_file_path and innodb_log_group_home_dir exist; once confirmed, it copies the MyISAM tables and indexes first, then the InnoDB tables, indexes and logs.
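The from_lsn/to_lsn bookkeeping described above can be verified by hand against the xtrabackup_checkpoints file each backup directory contains. A minimal runnable sketch — the two checkpoint files are fabricated here to mirror that format, and no MySQL or xtrabackup is involved:

```shell
#!/bin/sh
# An incremental chains onto its base only if the incremental's from_lsn
# equals the base backup's to_lsn. Fabricate two checkpoints files and check.
dir=$(mktemp -d)
printf 'backup_type = full-prepared\nfrom_lsn = 0\nto_lsn = 1631561\n' > "$dir/full_checkpoints"
printf 'backup_type = incremental\nfrom_lsn = 1631561\nto_lsn = 1631776\n' > "$dir/inc_checkpoints"

# extract the value from a "key = value" line
lsn() { awk -v k="$1" '$1 == k { print $3 }' "$2"; }

result="chain BROKEN"
if [ "$(lsn to_lsn "$dir/full_checkpoints")" = "$(lsn from_lsn "$dir/inc_checkpoints")" ]; then
    result="chain OK"
fi
echo "$result"
rm -rf "$dir"
```

Run against real backup directories (pointing the two paths at the full and incremental xtrabackup_checkpoints files), the same comparison tells you whether an incremental can actually be applied to a given base.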
------------------------------------------------------------------------------------------------------------------------------------------
The innobackupex backup and restore mechanics in more detail:
(1) How a backup works
If no mode is given at startup, innobackupex defaults to backup mode.
By default the script starts xtrabackup with the --suspend-at-end option, and xtrabackup begins copying the InnoDB data files. When xtrabackup finishes, innobackupex notices the xtrabackup_suspended_2 file it created, executes FLUSH TABLES WITH READ LOCK — a read lock on every table — and then starts copying the other file types.
If --ibbackup is not given, innobackupex tries to determine which xtrabackup binary to use on its own. The logic: first it checks whether the backup directory contains an xtrabackup_binary file and, if so, picks the binary named there. Otherwise it tries to connect to the database server and choose the binary from the server version. If no connection can be made, xtrabackup fails and the binary must be specified by hand.
Once the binary is determined, the script verifies that a connection to the server can be established: connect, run a query, disconnect. If all is well, xtrabackup is started as a child process.
FLUSH TABLES WITH READ LOCK exists to back up MyISAM and other non-InnoDB tables; it runs after xtrabackup has already backed up the InnoDB data and log files. After it, files of the types .frm, .MRG, .MYD, .MYI, .TRG, .TRN, .ARM, .ARZ, .CSM, .CSV, .par and .opt are backed up.
When all of those are copied, innobackupex resumes xtrabackup and waits for it to back up the transaction-log records generated while the above was happening. Then the tables are unlocked, the slave is restarted, and the server connection is closed. Finally the script deletes the xtrabackup_suspended_2 file, letting the xtrabackup process exit.
(2) How a restore works
To restore a backup, innobackupex is started with the --copy-back option.
It first reads the variables datadir, innodb_data_home_dir, innodb_data_file_path and innodb_log_group_home_dir from my.cnf and checks that those directories exist.
Then it copies the MyISAM tables, index files and the other file types (.frm, .MRG, .MYD, .MYI, .TRG, .TRN, .ARM, .ARZ, .CSM, .CSV, par and .opt files), next the InnoDB table data files, and finally the log files. The copies preserve file attributes; before starting MySQL on the restored files you may need to change their owner (e.g. from the user who ran the copy to the mysql user).
---------------------------------------------------------------------------------------------------------------------------------------------
II. Setting up the innobackupex backup environment for MySQL
1) Build Xtrabackup from source; download the tarball into /usr/local/src
Source download
[root@test-huanqiu ~]# cd /usr/local/src
Install the build dependencies first
[root@test-huanqiu src]# yum -y install cmake gcc gcc-c++ libaio libaio-devel automake autoconf bzr bison libtool zlib-devel libgcrypt-devel libcurl-devel crypt* libgcrypt* python-sphinx openssl imake libxml2-devel expat-devel ncurses-devel vim-common
libgpg-error-devel libidn-devel perl-DBI perl-DBD-MySQL perl-Time-HiRes perl-IO-Socket-SSL [root@test-huanqiu src]# wget http://www.percona.com/downloads/XtraBackup/XtraBackup-2.1.9/source/percona-xtrabackup-2.1.9.tar.gz [root@test-huanqiu src]# tar -zvxf percona-xtrabackup-2.1.9.tar.gz [root@test-huanqiu src]# cd percona-xtrabackup-2.1.9 [root@test-huanqiu percona-xtrabackup-2.1.9]# ./utils/build.sh //执行该安装脚本,会出现下面信息 Build an xtrabackup binary against the specified InnoDB flavor. Usage: build.sh CODEBASE where CODEBASE can be one of the following values or aliases: innodb51 | plugin build against InnoDB plugin in MySQL 5.1 innodb55 | 5.5 build against InnoDB in MySQL 5.5 innodb56 | 5.6,xtradb56, build against InnoDB in MySQL 5.6 | mariadb100,galera56 xtradb51 | xtradb,mariadb51 build against Percona Server with XtraDB 5.1 | mariadb52,mariadb53 xtradb55 | galera55,mariadb55 build against Percona Server with XtraDB 5.5 根据上面提示和你使用的存储引擎及版本,选择相应的参数即可。因为我用的是MySQL 5.6版本,所以执行如下语句安装: [root@test-huanqiu percona-xtrabackup-2.1.9]# ./utils/build.sh innodb56 以上语句执行成功后,表示安装完成。 最后,把生成的二进制文件拷贝到一个自定义目录下(本例中为/home/mysql/admin/bin/percona-xtrabackup-2.1.9),并把该目录放到环境变量PATH中。 [root@test-huanqiu percona-xtrabackup-2.1.9]# mkdir -p /home/mysql/admin/bin/percona-xtrabackup-2.1.9/ [root@test-huanqiu percona-xtrabackup-2.1.9]# cp ./innobackupex /home/mysql/admin/bin/percona-xtrabackup-2.1.9/ [root@test-huanqiu percona-xtrabackup-2.1.9]# cp ./src/xtrabackup_56 ./src/xbstream /home/mysql/admin/bin/percona-xtrabackup-2.1.9/ [root@test-huanqiu percona-xtrabackup-2.1.9]# vim /etc/profile ....... 
export PATH=$PATH:/home/mysql/admin/bin/percona-xtrabackup-2.1.9/ [root@test-huanqiu percona-xtrabackup-2.1.9]# source /etc/profile 测试下innobackupex是否正常使用 [root@test-huanqiu percona-xtrabackup-2.1.9]# innobackupex --help -------------------------------------------------------------------------------------------------------------------------------------------- 可能报错1 Can't locate Time/HiRes.pm in @INC (@INC contains: /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at /home/mysql/admin/bin/percona-xtrabackup-2.1.9/innobackupex line 23. BEGIN failed--compilation aborted at /home/mysql/admin/bin/percona-xtrabackup-2.1.9/innobackupex line 23. 解决方案: .pm实际上是Perl的包,只需安装perl-Time-HiRes即可: [root@test-huanqiu percona-xtrabackup-2.1.9]# yum install -y perl-Time-HiRes 可能报错2 Can't locate DBI.pm in @INC (@INC contains: /usr/lib64/perl5/site_perl/5.8.8/x86_64-linux-thread-multi /usr/lib/perl5/site_perl/5.8.8 /usr/lib/perl5/site_perl /usr/lib64/perl5/vendor_perl/5.8.8/x86_64-linux-thread-multi /usr/lib/perl5/vendor_perl/5.8.8 /usr/lib/perl5/vendor_perl /usr/lib64/perl5/5.8.8/x86_64-linux-thread-multi /usr/lib/perl5/5.8.8 .) at /usr/local/webserver/mysql5.1.57/bin/mysqlhotcopy line 25. BEGIN failed--compilation aborted at /usr/local/webserver/mysql5.1.57/bin/mysqlhotcopy line 25. 报错原因:系统没有按安装DBI组件。 DBI(Database Interface)是perl连接数据库的接口。其是perl连接数据库的最优秀方法,他支持包括Orcal,Sybase,mysql,db2等绝大多数的数据库。 解决办法: 安装DBI组件(Can't locate DBI.pm in @INC-mysql接口) 或者单独装DBI、Data-ShowTable、DBD-mysql 三个组件 [root@test-huanqiu percona-xtrabackup-2.1.9]# yum -y install perl-DBD-MySQL 接着使用innobackupex命令测试是否正常 [root@test-huanqiu percona-xtrabackup-2.1.9]# innobackupex --help Options: --apply-log Prepare a backup in BACKUP-DIR by applying the transaction log file named "xtrabackup_logfile" located in the same directory. Also, create new transaction logs. The InnoDB configuration is read from the file "backup-my.cnf". 
--compact Create a compact backup with all secondary index pages omitted. This option is passed directly to xtrabackup. See xtrabackup documentation for details. --compress This option instructs xtrabackup to compress backup copies of InnoDB data files. It is passed directly to the xtrabackup child process. Try 'xtrabackup --help' for more details. ............ ------------------------------------------------------------------------------------------------------------------------------------------------------------- 2)全量备份和恢复 ---------------->全量备份操作<---------------- 执行下面语句进行全备: mysql的安装目录是/usr/local/mysql mysql的配置文件路径/usr/local/mysql/my.cnf mysql的密码是123456 全量备份后的数据存放目录是/backup/mysql/data [root@test-huanqiu ~]# mkdir -p /backup/mysql/data [root@test-huanqiu ~]# innobackupex --defaults-file=/usr/local/mysql/my.cnf --user=root --password=123456 /backup/mysql/data InnoDB Backup Utility v1.5.1-xtrabackup; Copyright 2003, 2009 Innobase Oy and Percona LLC and/or its affiliates 2009-2013. All Rights Reserved. ................... 161201 00:07:15 innobackupex: Connecting to MySQL server with DSN 'dbi:mysql:;mysql_read_default_file=/usr/local/mysql/my.cnf;mysql_read_default_group=xtrabackup' as 'root' (using password: YES). 161201 00:07:15 innobackupex: Connected to MySQL server 161201 00:07:15 innobackupex: Executing a version check against the server... 161201 00:07:15 innobackupex: Done. .................. 161201 00:07:19 innobackupex: Connection to database server closed 161201 00:07:19 innobackupex: completed OK! 出现上面的信息,表示备份已经ok。 上面执行的备份语句会将mysql数据文件(即由my.cnf里的变量datadir指定)拷贝至备份目录下(/backup/mysql/data) 注意:如果不指定--defaults-file,默认值为/etc/my.cnf。 备份成功后,将在备份目录下创建一个时间戳目录(本例创建的目录为/backup/mysql/data/2016-12-01_00-07-15),在该目录下存放备份文件。 查看备份数据: [root@test-huanqiu ~]# ll /backup/mysql/data total 4 drwxr-xr-x. 6 root root 4096 Dec 1 00:07 2016-12-01_00-07-15 [root@test-huanqiu ~]# ll /backup/mysql/data/2016-12-01_00-07-15/ total 12324 -rw-r--r--. 
1 root root 357 Dec 1 00:07 backup-my.cnf drwx------. 2 root root 4096 Dec 1 00:07 huanqiu -rw-r-----. 1 root root 12582912 Dec 1 00:07 ibdata1 drwx------. 2 root root 4096 Dec 1 00:07 mysql drwxr-xr-x. 2 root root 4096 Dec 1 00:07 performance_schema drwxr-xr-x. 2 root root 4096 Dec 1 00:07 test -rw-r--r--. 1 root root 13 Dec 1 00:07 xtrabackup_binary -rw-r--r--. 1 root root 24 Dec 1 00:07 xtrabackup_binlog_info -rw-r-----. 1 root root 89 Dec 1 00:07 xtrabackup_checkpoints -rw-r-----. 1 root root 2560 Dec 1 00:07 xtrabackup_logfile ---------------------------------------------------------------------------------------------------------------------------------- 可能报错1: 161130 05:56:48 innobackupex: Connecting to MySQL server with DSN 'dbi:mysql:;mysql_read_default_file=/usr/local/mysql/my.cnf;mysql_read_default_group=xtrabackup' as 'root' (using password: YES). innobackupex: Error: Failed to connect to MySQL server as DBD::mysql module is not installed at /home/mysql/admin/bin/percona-xtrabackup-2.1.9/innobackupex line 2956. 解决办法: [root@test-huanqiu ~]# yum -y install perl-DBD-MySQL.x86_64 ...... Package perl-DBD-MySQL-4.013-3.el6.x86_64 already installed and latest version //发现本机已经安装了 [root@test-huanqiu ~]# rpm -qa|grep perl-DBD-MySQL perl-DBD-MySQL-4.013-3.el6.x86_64 发现本机已经安装了最新版的perl-DBD-MYSQL了,但是仍然报出上面的错误!! 
莫慌~~继续下面的操作进行问题的解决 查看mysql.so依赖的lib库 [root@test-huanqiu ~]# ldd /usr/lib64/perl5/auto/DBD/mysql/mysql.so linux-vdso.so.1 => (0x00007ffd291fc000) libmysqlclient.so.16 => not found //这一项为通过检查,缺失libmysqlclient.so.16库导致 libz.so.1 => /lib64/libz.so.1 (0x00007f78ff9de000) libcrypt.so.1 => /lib64/libcrypt.so.1 (0x00007f78ff7a7000) libnsl.so.1 => /lib64/libnsl.so.1 (0x00007f78ff58e000) libm.so.6 => /lib64/libm.so.6 (0x00007f78ff309000) libssl.so.10 => /usr/lib64/libssl.so.10 (0x00007f78ff09d000) libcrypto.so.10 => /usr/lib64/libcrypto.so.10 (0x00007f78fecb9000) libc.so.6 => /lib64/libc.so.6 (0x00007f78fe924000) libfreebl3.so => /lib64/libfreebl3.so (0x00007f78fe721000) libgssapi_krb5.so.2 => /lib64/libgssapi_krb5.so.2 (0x00007f78fe4dd000) libkrb5.so.3 => /lib64/libkrb5.so.3 (0x00007f78fe1f5000) libcom_err.so.2 => /lib64/libcom_err.so.2 (0x00007f78fdff1000) libk5crypto.so.3 => /lib64/libk5crypto.so.3 (0x00007f78fddc5000) libdl.so.2 => /lib64/libdl.so.2 (0x00007f78fdbc0000) /lib64/ld-linux-x86-64.so.2 (0x00007f78ffe1d000) libkrb5support.so.0 => /lib64/libkrb5support.so.0 (0x00007f78fd9b5000) libkeyutils.so.1 => /lib64/libkeyutils.so.1 (0x00007f78fd7b2000) libresolv.so.2 => /lib64/libresolv.so.2 (0x00007f78fd597000) libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f78fd37a000) libselinux.so.1 => /lib64/libselinux.so.1 (0x00007f78fd15a000) 以上结果说明缺少libmysqlclient.so.16这个二进制包,找个官方原版的mysql的libmysqlclient.so.16替换了即可! [root@test-huanqiu~]# find / -name libmysqlclient.so.16 //查看本机并没有libmysqlclient.so.16库文件 查看mysql/lib下的libmysqlclinet.so库文件 [root@test-huanqiu~]# ll /usr/local/mysql/lib/ total 234596 -rw-r--r--. 1 mysql mysql 19520800 Nov 29 12:27 libmysqlclient.a lrwxrwxrwx. 1 mysql mysql 16 Nov 29 12:34 libmysqlclient_r.a -> libmysqlclient.a lrwxrwxrwx. 1 mysql mysql 17 Nov 29 12:34 libmysqlclient_r.so -> libmysqlclient.so lrwxrwxrwx. 1 mysql mysql 20 Nov 29 12:34 libmysqlclient_r.so.18 -> libmysqlclient.so.18 lrwxrwxrwx. 
1 mysql mysql 24 Nov 29 12:34 libmysqlclient_r.so.18.1.0 -> libmysqlclient.so.18.1.0 lrwxrwxrwx. 1 mysql mysql 20 Nov 29 12:34 libmysqlclient.so -> libmysqlclient.so.18 lrwxrwxrwx. 1 mysql mysql 24 Nov 29 12:34 libmysqlclient.so.18 -> libmysqlclient.so.18.1.0 -rwxr-xr-x. 1 mysql mysql 8858235 Nov 29 12:27 libmysqlclient.so.18.1.0 -rw-r--r--. 1 mysql mysql 211822074 Nov 29 12:34 libmysqld.a -rw-r--r--. 1 mysql mysql 14270 Nov 29 12:27 libmysqlservices.a drwxr-xr-x. 3 mysql mysql 4096 Nov 29 12:34 plugin 将mysql/lib/libmysqlclient.so.18.1.0库文件拷贝到/lib64下,拷贝后命名为libmysqlclient.so.16 [root@test-huanqiu~]# cp /usr/local/mysql/lib/libmysqlclient.so.18.1.0 /lib64/libmysqlclient.so.16 [root@test-huanqiu~]# cat /etc/ld.so.conf include ld.so.conf.d/*.conf /usr/local/mysql/lib/ /lib64/ [root@test-huanqiu~]# ldconfig 最后卸载perl-DBD-MySQL,并重新安装perl-DBD-MySQL [root@test-huanqiu~]# rpm -qa|grep perl-DBD-MySQL perl-DBD-MySQL-4.013-3.el6.x86_64 [root@test-huanqiu~]# rpm -e --nodeps perl-DBD-MySQL [root@test-huanqiu~]# rpm -qa|grep perl-DBD-MySQL [root@test-huanqiu~]# yum -y install perl-DBD-MySQL 待重新安装后,再次重新检查mysql.so依赖的lib库,发现已经都通过了 [root@test-huanqiu~]# ldd /usr/lib64/perl5/auto/DBD/mysql/mysql.so linux-vdso.so.1 => (0x00007ffe3669b000) libmysqlclient.so.16 => /usr/lib64/mysql/libmysqlclient.so.16 (0x00007f4af5c25000) libz.so.1 => /lib64/libz.so.1 (0x00007f4af5a0f000) libcrypt.so.1 => /lib64/libcrypt.so.1 (0x00007f4af57d7000) libnsl.so.1 => /lib64/libnsl.so.1 (0x00007f4af55be000) libm.so.6 => /lib64/libm.so.6 (0x00007f4af533a000) libssl.so.10 => /usr/lib64/libssl.so.10 (0x00007f4af50cd000) libcrypto.so.10 => /usr/lib64/libcrypto.so.10 (0x00007f4af4ce9000) libc.so.6 => /lib64/libc.so.6 (0x00007f4af4955000) libfreebl3.so => /lib64/libfreebl3.so (0x00007f4af4751000) libgssapi_krb5.so.2 => /lib64/libgssapi_krb5.so.2 (0x00007f4af450d000) libkrb5.so.3 => /lib64/libkrb5.so.3 (0x00007f4af4226000) libcom_err.so.2 => /lib64/libcom_err.so.2 (0x00007f4af4021000) libk5crypto.so.3 => 
/lib64/libk5crypto.so.3 (0x00007f4af3df5000) libdl.so.2 => /lib64/libdl.so.2 (0x00007f4af3bf1000) /lib64/ld-linux-x86-64.so.2 (0x00007f4af61d1000) libkrb5support.so.0 => /lib64/libkrb5support.so.0 (0x00007f4af39e5000) libkeyutils.so.1 => /lib64/libkeyutils.so.1 (0x00007f4af37e2000) libresolv.so.2 => /lib64/libresolv.so.2 (0x00007f4af35c8000) libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f4af33aa000) libselinux.so.1 => /lib64/libselinux.so.1 (0x00007f4af318b000) 可能报错2 sh: xtrabackup_56: command not found innobackupex: Error: no 'mysqld' group in MySQL options at /home/mysql/admin/bin/percona-xtrabackup-2.1.9/innobackupex line 4350. 有可能是percona-xtrabackup编译安装后,在编译目录的src下存在xtrabackup_innodb56,只需要其更名为xtrabackup_56,然后拷贝到上面的/home/mysql/admin/bin/percona-xtrabackup-2.1.9/下即可! ---------------------------------------------------------------------------------------------------------------------------------- 还可以在远程进行全量备份,命令如下: [root@test-huanqiu ~]# innobackupex --defaults-file=/usr/local/mysql/my.cnf --user=root --password=123456 --host=127.0.0.1 --parallel=2 --throttle=200 /backup/mysql/data 2>/backup/mysql/data/bak.log 1>/backup/mysql/data/`data +%Y-%m-%d_%H-%M%S` 参数解释: --user=root 备份操作用户名,一般都是root用户 --password=root123 数据库密码 --host=127.0.0.1 主机ip,本地可以不加(适用于远程备份)。注意要提前在mysql中授予连接的权限,最好备份前先测试用命令中的用户名、密码和host能否正常连接mysql。 --parallel=2 --throttle=200 并行个数,根据主机配置选择合适的,默认是1个,多个可以加快备份速度。 /backup/mysql/data 备份存放的目录 2>/backup/mysql/data/bak.log 备份日志,将备份过程中的输出信息重定向到bak.log 这种备份跟上面相比,备份成功后,不会自动在备份目录下创建一个时间戳目录,需要如上命令中自己定义。 [root@test-huanqiu ~]# cd /backup/mysql/data/ [root@test-huanqiu data]# ll drwxr-xr-x. 6 root root 4096 Dec 1 03:18 2016-12-01_03-18-37 -rw-r--r--. 
1 root root 5148 Dec 1 03:18 bak.log [root@test-huanqiu data]# cat bak.log //备份信息都记录在这个日志里,如果备份失败,可以到这里日志里查询 ---------------------------------------------------------------------------------------------------------------------------- ---------------->全量备份后的恢复操作<---------------- 比如在上面进行全量备份后,由于误操作将数据库中的huanqiu库删除了。 [root@test-huanqiu ~]# mysql -p123456 ....... mysql> show databases; +--------------------+ | Database | +--------------------+ | information_schema | | huanqiu | | mysql | | performance_schema | | test | +--------------------+ 5 rows in set (0.00 sec) mysql> use huanqiu; Reading table information for completion of table and column names You can turn off this feature to get a quicker startup with -A Database changed mysql> show tables; +-------------------------+ | Tables_in_huanqiu | +-------------------------+ | card_agent_file | | product_sale_management | +-------------------------+ 2 rows in set (0.00 sec) mysql> drop database huanqiu; Query OK, 2 rows affected (0.12 sec) mysql> show databases; +--------------------+ | Database | +--------------------+ | information_schema | | mysql | | performance_schema | | test | +--------------------+ 4 rows in set (0.00 sec) 现在进行恢复数据操作 注意:恢复之前 1)要先关闭数据库 2)要删除数据文件和日志文件(也可以mv移到别的地方,只要确保清空mysql数据存放目录就行) [root@test-huanqiu ~]# ps -ef|grep mysql root 2442 21929 0 00:25 pts/2 00:00:00 grep mysql root 28279 1 0 Nov29 ? 00:00:00 /bin/sh /usr/local/mysql//bin/mysqld_safe --datadir=/data/mysql/data --pid-file=/data/mysql/data/mysql.pid mysql 29059 28279 0 Nov29 ? 00:09:07 /usr/local/mysql/bin/mysqld --basedir=/usr/local/mysql/ --datadir=/data/mysql/data --plugin-dir=/usr/local/mysql//lib/plugin --user=mysql --log-error=/data/mysql/data/mysql-error.log --pid-file=/data/mysql/data/mysql.pid --socket=/usr/local/mysql/var/mysql.sock --port=3306 由上面可查出mysql的数据和日志存放目录是/data/mysql/data [root@test-huanqiu ~]# /etc/init.d/mysql stop Shutting down MySQL.. SUCCESS! 
[root@test-huanqiu ~]# rm -rf /data/mysql/data/* [root@test-huanqiu ~]# ls /data/mysql/data [root@test-huanqiu ~]# 查看备份数据 [root@[root@test-huanqiu ~]# ls /backup/mysql/data/ 2016-12-01_00-07-15 恢复数据 [root@[root@test-huanqiu ~]# innobackupex --defaults-file=/usr/local/mysql/my.cnf --user=root --password=123456 --use-memory=4G --apply-log /backup/mysql/data/2016-12-01_00-07-15 [root@[root@test-huanqiu ~]# innobackupex --defaults-file=/usr/local/mysql/my.cnf --user=root --password=123456 --copy-back /backup/mysql/data/2016-12-01_00-07-15 ........ innobackupex: Copying '/backup/mysql/data/2016-12-01_00-07-15/ib_logfile2' to '/data/mysql/data/ib_logfile2' innobackupex: Copying '/backup/mysql/data/2016-12-01_00-07-15/ib_logfile0' to '/data/mysql/data/ib_logfile0' innobackupex: Finished copying back files. 161201 00:31:33 innobackupex: completed OK! 出现上面的信息,说明数据恢复成功了!! 从上面的恢复操作可以看出,执行恢复分为两个步骤: 1)第一步恢复步骤是应用日志(apply-log),为了加快速度,一般建议设置--use-memory(如果系统内存充足,可以使用加大内存进行备份 ),这个步骤完成之后,目录/backup/mysql/data/2016-12-01_00-07-15下的备份文件已经准备就绪。 2)第二步恢复步骤是拷贝文件(copy-back),即把备份文件拷贝至原数据目录下。 恢复完成之后,一定要记得检查数据目录的所有者和权限是否正确。 [root@test-huanqiu ~]# ll /data/mysql/data/ total 110608 drwxr-xr-x. 2 root root 4096 Dec 1 00:31 huanqiu -rw-r--r--. 1 root root 12582912 Dec 1 00:31 ibdata1 -rw-r--r--. 1 root root 33554432 Dec 1 00:31 ib_logfile0 -rw-r--r--. 1 root root 33554432 Dec 1 00:31 ib_logfile1 -rw-r--r--. 1 root root 33554432 Dec 1 00:31 ib_logfile2 drwxr-xr-x. 2 root root 4096 Dec 1 00:31 mysql drwxr-xr-x. 2 root root 4096 Dec 1 00:31 performance_schema drwxr-xr-x. 2 root root 4096 Dec 1 00:31 test [root@test-huanqiu ~]# chown -R mysql.mysql /data/mysql/data/ //将数据目录的权限修改为mysql:mysql [root@test-huanqiu ~]# ll /data/mysql/data/ total 110608 drwxr-xr-x. 2 mysql mysql 4096 Dec 1 00:31 huanqiu -rw-r--r--. 1 mysql mysql 12582912 Dec 1 00:31 ibdata1 -rw-r--r--. 1 mysql mysql 33554432 Dec 1 00:31 ib_logfile0 -rw-r--r--. 1 mysql mysql 33554432 Dec 1 00:31 ib_logfile1 -rw-r--r--. 
1 mysql mysql 33554432 Dec 1 00:31 ib_logfile2 drwxr-xr-x. 2 mysql mysql 4096 Dec 1 00:31 mysql drwxr-xr-x. 2 mysql mysql 4096 Dec 1 00:31 performance_schema drwxr-xr-x. 2 mysql mysql 4096 Dec 1 00:31 test ------------------------------------------------------------------------------------------------------------------------------------------- 可能报错: sh: xtrabackup: command not found innobackupex: Error: no 'mysqld' group in MySQL options at /home/mysql/admin/bin/percona-xtrabackup-2.1.9/innobackupex line 4350. 解决:将xtrabackup_56复制成xtrabackup即可 [root@test-huanqiu percona-xtrabackup-2.1.9]# ls innobackupex xbstream xtrabackup_56 [root@test-huanqiu percona-xtrabackup-2.1.9]# cp xtrabackup_56 xtrabackup [root@test-huanqiu percona-xtrabackup-2.1.9]# ls innobackupex xbstream xtrabackup xtrabackup_56 ------------------------------------------------------------------------------------------------------------------------------------------- 最后,启动mysql,查看数据是否恢复回来了 [root@test-huanqiu ~]# /etc/init.d/mysql start Starting MySQL.. SUCCESS! [root@test-huanqiu ~]# mysql -p123456 ........ 
mysql> show databases; +--------------------+ | Database | +--------------------+ | information_schema | | huanqiu | | mysql | | performance_schema | | test | +--------------------+ 5 rows in set (0.00 sec) mysql> use huanqiu; Reading table information for completion of table and column names You can turn off this feature to get a quicker startup with -A Database changed mysql> show tables; +-------------------------+ | Tables_in_huanqiu | +-------------------------+ | card_agent_file | | product_sale_management | +-------------------------+ 2 rows in set (0.00 sec) mysql> 3)增量备份和恢复 ---------------->增量备份操作<---------------- 特别注意: innobackupex 增量备份仅针对InnoDB这类支持事务的引擎,对于MyISAM等引擎,则仍然是全备。 增量备份需要基于全量备份 先假设我们已经有了一个全量备份(如上面的/backup/mysql/data/2016-12-01_00-07-15),我们需要在该全量备份的基础上做第一次增量备份。 [root@test-huanqiu ~]# innobackupex --defaults-file=/usr/local/mysql/my.cnf --user=root --password=123456 --incremental-basedir=/backup/mysql/data/2016-12-01_00-07-15 --incremental /backup/mysql/data 其中: --incremental-basedir 指向全量备份目录 --incremental 指向增量备份的目录 上面语句执行成功之后,会在--incremental执行的目录下创建一个时间戳子目录(本例中为:/backup/mysql/data/2016-12-01_01-12-22),在该目录下存放着增量备份的所有文件。 [root@test-huanqiu ~]# ll /backup/mysql/data/ total 8 drwxr-xr-x. 6 root root 4096 Dec 1 00:27 2016-12-01_00-07-15 //全量备份目录 drwxr-xr-x. 6 root root 4096 Dec 1 01:12 2016-12-01_01-12-22 //增量备份目录 在备份目录下,有一个文件xtrabackup_checkpoints记录着备份信息,其中可以查出 1)全量备份的信息如下: [root@test-huanqiu 2016-12-01_00-07-15]# pwd /backup/mysql/data/2016-12-01_00-07-15 [root@test-huanqiu 2016-12-01_00-07-15]# cat xtrabackup_checkpoints backup_type = full-prepared from_lsn = 0 to_lsn = 1631561 last_lsn = 1631561 compact = 0 2)基于以上全量备份的增量备份的信息如下: [root@test-huanqiu 2016-12-01_01-12-22]# pwd /backup/mysql/data/2016-12-01_01-12-22 [root@test-huanqiu 2016-12-01_01-12-22]# cat xtrabackup_checkpoints backup_type = incremental from_lsn = 1631561 to_lsn = 1631776 last_lsn = 1631776 compact = 0 从上面可以看出,增量备份的from_lsn正好等于全备的to_lsn。 那么,我们是否可以在增量备份的基础上再做增量备份呢? 
答案是肯定的,只要把--incremental-basedir指向上一次增量备份的目录即可,如下所示:
[root@test-huanqiu ~]# innobackupex --defaults-file=/usr/local/mysql/my.cnf --user=root --password=123456 --incremental-basedir=/backup/mysql/data/2016-12-01_01-12-22 --incremental /backup/mysql/data

[root@test-huanqiu data]# ll
total 12
drwxr-xr-x. 6 root root 4096 Dec 1 00:27 2016-12-01_00-07-15    //全量备份目录
drwxr-xr-x. 6 root root 4096 Dec 1 01:12 2016-12-01_01-12-22    //增量备份目录1
drwxr-xr-x. 6 root root 4096 Dec 1 01:23 2016-12-01_01-23-23    //增量备份目录2

它的xtrabackup_checkpoints记录着备份信息如下:
[root@test-huanqiu 2016-12-01_01-23-23]# pwd
/backup/mysql/data/2016-12-01_01-23-23
[root@test-huanqiu 2016-12-01_01-23-23]# cat xtrabackup_checkpoints
backup_type = incremental
from_lsn = 1631776
to_lsn = 1638220
last_lsn = 1638220
compact = 0

可以看到,第二次增量备份的from_lsn是从上一次增量备份的to_lsn开始的。

---------------->增量备份后的恢复操作<----------------
增量备份的恢复要比全量备份复杂很多,增量备份与全量备份有着一些不同,尤其要注意的是:
1)需要在每个备份(包括完全和各个增量备份)上,将已经提交的事务进行"重放"。"重放"之后,所有的备份数据将合并到完全备份上。
2)基于所有的备份将未提交的事务进行"回滚"。注意:回滚只能放在最后一步进行,因为有的事务在第一次备份时尚未提交,但在后面的增量备份中已经成功提交,如果提前回滚就会丢掉这部分已提交的数据。

第一步是在所有备份目录下重做已提交的日志(注意备份目录路径要用全路径)
1)innobackupex --apply-log --redo-only BASE-DIR
2)innobackupex --apply-log --redo-only BASE-DIR --incremental-dir=INCREMENTAL-DIR-1
3)innobackupex --apply-log BASE-DIR --incremental-dir=INCREMENTAL-DIR-2

其中:
BASE-DIR 是指全量备份的目录
INCREMENTAL-DIR-1 是指第一次增量备份的目录
INCREMENTAL-DIR-2 是指第二次增量备份的目录,以此类推。

这里要注意的是:
1)最后一步的增量备份并没有--redo-only选项!此时会回滚未提交的事务,进行崩溃恢复过程
2)可以使用--use_memory提高性能。

以上语句执行成功之后,最终数据都合并在BASE-DIR(即全量备份目录)下。其实增量恢复就是把各增量目录下的数据整合到全量备份目录下,然后再基于全量备份目录做一次完整还原。

第一步完成之后,我们开始下面关键的第二步,即拷贝文件,进行全部还原!注意:必须先停止mysql数据库,然后清空数据库目录(这里是指/data/mysql/data)下的文件。
4)innobackupex --copy-back BASE-DIR

同样地,拷贝结束之后,记得检查下数据目录(这里指/data/mysql/data)的权限是否正确(修改成mysql:mysql),然后再重启mysql。

接下来进行案例说明:
假设我们已经有了一个全量备份2016-12-01_00-07-15
删除在上面测试创建的两个增量备份
[root@test-huanqiu ~]# cd /backup/mysql/data/
[root@test-huanqiu data]# ll
total 12
drwxr-xr-x. 6 root root 4096 Dec 1 00:27 2016-12-01_00-07-15
drwxr-xr-x. 
6 root root 4096 Dec 1 01:12 2016-12-01_01-12-22 drwxr-xr-x. 6 root root 4096 Dec 1 01:23 2016-12-01_01-23-23 [root@test-huanqiu data]# rm -rf 2016-12-01_01-12-22/ [root@test-huanqiu data]# rm -rf 2016-12-01_01-23-23/ [root@test-huanqiu data]# ll total 4 drwxr-xr-x. 6 root root 4096 Dec 1 00:27 2016-12-01_00-07-15 假设在全量备份后,mysql数据库中又有新数据写入 [root@test-huanqiu ~]# mysql -p123456 ......... mysql> create database ceshi; Query OK, 1 row affected (0.00 sec) mysql> use ceshi; Database changed mysql> create table test1( -> id int3, -> name varchar(20) -> ); Query OK, 0 rows affected (0.07 sec) mysql> insert into test1 values(1,"wangshibo"); Query OK, 1 row affected, 1 warning (0.03 sec) mysql> select * from test1; +------+-----------+ | id | name | +------+-----------+ | 1 | wangshibo | +------+-----------+ 1 row in set (0.00 sec) mysql> show databases; +--------------------+ | Database | +--------------------+ | information_schema | | ceshi | | huanqiu | | mysql | | performance_schema | | test | +--------------------+ 6 rows in set (0.00 sec) mysql> 然后进行一次增量备份: [root@test-huanqiu ~]# innobackupex --defaults-file=/usr/local/mysql/my.cnf --user=root --password=123456 --incremental-basedir=/backup/mysql/data/2016-12-01_00-07-15 --incremental /backup/mysql/data [root@test-huanqiu ~]# ll /backup/mysql/data/ total 8 drwxr-xr-x. 6 root root 4096 Dec 1 00:27 2016-12-01_00-07-15 //全量备份目录 drwxr-xr-x. 
7 root root 4096 Dec 1 03:41 2016-12-01_03-41-41 //增量备份目录 接着再在mysql数据库中写入新数据 mysql> insert into test1 values(2,"guohuihui"); Query OK, 1 row affected, 1 warning (0.00 sec) mysql> insert into test1 values(3,"wuxiang"); Query OK, 1 row affected, 1 warning (0.00 sec) mysql> insert into test1 values(4,"liumengnan"); Query OK, 1 row affected, 1 warning (0.01 sec) mysql> select * from test1; +------+------------+ | id | name | +------+------------+ | 1 | wangshibo | | 2 | guohuihui | | 3 | wuxiang | | 4 | liumengnan | +------+------------+ 4 rows in set (0.00 sec) 接着在增量的基础上再进行一次增量备份 [root@test-huanqiu ~]# innobackupex --defaults-file=/usr/local/mysql/my.cnf --user=root --password=123456 --incremental-basedir=/backup/mysql/data/2016-12-01_03-41-41 --incremental /backup/mysql/data [root@test-huanqiu ~]# ll /backup/mysql/data/ total 12 drwxr-xr-x. 6 root root 4096 Dec 1 00:27 2016-12-01_00-07-15 //全量备份目录 drwxr-xr-x. 7 root root 4096 Dec 1 02:24 2016-12-01_02-24-11 //增量备份目录1 drwxr-xr-x. 7 root root 4096 Dec 1 03:42 2016-12-01_03-42-43 //增量备份目录2 现在删除数据库huanqiu、ceshi mysql> show databases; +--------------------+ | Database | +--------------------+ | information_schema | | ceshi | | huanqiu | | mysql | | performance_schema | | test | +--------------------+ 6 rows in set (0.00 sec) mysql> drop database huanqiu; Query OK, 2 rows affected (0.02 sec) mysql> drop database ceshi; Query OK, 1 row affected (0.01 sec) mysql> show databases; +--------------------+ | Database | +--------------------+ | information_schema | | mysql | | performance_schema | | test | +--------------------+ 4 rows in set (0.00 sec) mysql> 接下来就开始进行数据恢复操作: 先恢复应用日志(注意最后一个不需要加--redo-only参数) [root@test-huanqiu ~]# innobackupex --defaults-file=/usr/local/mysql/my.cnf --user=root --password=123456 --apply-log --redo-only /backup/mysql/data/2016-12-01_00-07-15 [root@test-huanqiu ~]# innobackupex --defaults-file=/usr/local/mysql/my.cnf --user=root --password=123456 --apply-log --redo-only 
/backup/mysql/data/2016-12-01_00-07-15 --incremental-dir=/backup/mysql/data/2016-12-01_02-24-11 [root@test-huanqiu ~]# innobackupex --defaults-file=/usr/local/mysql/my.cnf --user=root --password=123456 --apply-log /backup/mysql/data/2016-12-01_00-07-15 --incremental-dir=/backup/mysql/data/2016-12-01_03-42-43 到此,恢复数据工作还没有结束!还有最重要的一个环节,就是把增量目录下的数据整合到全量备份目录下,然后再进行一次全量还原。 停止mysql数据库,并清空数据目录 [root@test-huanqiu ~]# /etc/init.d/mysql stop [root@test-huanqiu ~]# rm -rf /data/mysql/data/* 最后拷贝文件,并验证数据目录的权限 [root@test-huanqiu ~]# innobackupex --defaults-file=/usr/local/mysql/my.cnf --user=root --password=123456 --copy-back /backup/mysql/data/2016-12-01_00-07-15 [root@test-huanqiu ~]# chown -R mysql.mysql /data/mysql/data/* [root@test-huanqiu ~]# /etc/init.d/mysql start 最后,检查下数据是否恢复 [root@test-huanqiu ~]# mysql -p123456 ........ mysql> show databases; +--------------------+ | Database | +--------------------+ | information_schema | | ceshi | | huanqiu | | mysql | | performance_schema | | test | +--------------------+ 6 rows in set (0.00 sec) mysql> select * from ceshi.test1; +------+------------+ | id | name | +------+------------+ | 1 | wangshibo | | 2 | guohuihui | | 3 | wuxiang | | 4 | liumengnan | +------+------------+ 4 rows in set (0.00 sec) 另外注意: 上面在做备份的时候,将备份目录和增量目录都放在了同一个目录路径下,其实推荐放在不同的路径下,方便管理!比如: /backup/mysql/data/full 存放全量备份目录 /backup/mysql/data/daily1 存放第一次增量备份目录 /backup/mysql/data/daily2 存放第二次增量目录 以此类推 在恢复的时候,注意命令中的路径要跟对! 
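按上面 full/daily1/daily2 的目录布局,apply-log 的先后顺序(除最后一个增量外都要加 --redo-only)可以用一个只打印命令、不真正执行的小脚本来生成和自检。下面是一个示意,print_restore_cmds 为本文自拟的函数名:

```shell
#!/bin/sh
# 示意:给定全量目录和按时间先后排列的增量目录,打印正确的恢复命令顺序
# 只打印、不执行;print_restore_cmds 为自拟函数名
print_restore_cmds() {
    base="$1"; shift
    echo "innobackupex --apply-log --redo-only $base"
    last=""
    for d in "$@"; do last="$d"; done
    for d in "$@"; do
        if [ "$d" = "$last" ]; then
            # 最后一个增量不加 --redo-only,让未提交事务在此时回滚
            echo "innobackupex --apply-log $base --incremental-dir=$d"
        else
            echo "innobackupex --apply-log --redo-only $base --incremental-dir=$d"
        fi
    done
    echo "innobackupex --copy-back $base"
}

print_restore_cmds /backup/mysql/data/full /backup/mysql/data/daily1 /backup/mysql/data/daily2
```

真正执行前把打印出来的命令逐条核对一遍,可以避免把 --redo-only 加到最后一个增量上这类常见错误。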
-----------------------------------------------------------------------------------------------------
innobackupex 常用参数说明
--defaults-file 同xtrabackup的--defaults-file参数
--apply-log 对xtrabackup的--prepare参数的封装
--copy-back 做数据恢复时将备份数据文件拷贝到MySQL服务器的datadir;
--remote-host=HOSTNAME 通过ssh将备份数据存储到远程服务器上;
--stream=[tar] 备份文件输出格式,tar时使用tar4ibd,该文件可在XtraBackup binary文件中获得。如果备份时有指定--stream=tar,则tar4ibd文件所处目录一定要在$PATH中(因为使用的是tar4ibd去压缩,在XtraBackup的binary包中可获得该文件)。
在使用参数stream=tar备份的时候,你的xtrabackup_logfile可能会临时放在/tmp目录下,如果你备份的时候并发写入较大的话,xtrabackup_logfile可能会很大(5G+),很可能会撑满你的/tmp目录,可以通过参数--tmpdir指定目录来解决这个问题。
--tmpdir=DIRECTORY 当有指定--remote-host或--stream时,事务日志临时存储的目录,默认采用MySQL配置文件中所指定的临时目录tmpdir
--redo-only 与--apply-log组合使用,强制备份日志时只redo,跳过rollback。这在做增量备份时非常必要。
--use-memory=# 该参数在prepare的时候使用,控制prepare时innodb实例使用的内存量
--throttle=IOS 同xtrabackup的--throttle参数
--sleep= 是给ibbackup使用的,指定每备份1M数据,过程停止拷贝多少毫秒,也是为了在备份时尽量减小对正常业务的影响,具体可以查看ibbackup的手册;
--compress[=LEVEL] 对备份数据进行压缩,仅支持ibbackup,xtrabackup还没有实现;
--include=REGEXP 对xtrabackup参数--tables的封装,也支持ibbackup。备份包含的库表,例如:--include="test.*",意思是要备份test库中所有的表。如果需要全备份,则省略这个参数;如果需要备份test库下的2个表:test1和test2,则写成:--include="test.test1|test.test2"。也可以使用通配符,如:--include="test.test*"。
--databases=LIST 列出需要备份的databases,如果没有指定该参数,所有包含MyISAM和InnoDB表的database都会被备份;
--uncompress 解压备份的数据文件,支持ibbackup,xtrabackup还没有实现该功能;
--slave-info 备份从库,加上--slave-info后备份目录下会多生成一个xtrabackup_slave_info文件,这里会保存主日志文件以及偏移,文件内容类似于:CHANGE MASTER TO MASTER_LOG_FILE='', MASTER_LOG_POS=0
--socket=SOCKET 指定mysql.sock所在位置,以便备份进程登录mysql.

三、innobackupex全量、增量备份脚本

可以根据自己线上数据库情况,编写全量和增量备份脚本,然后结合crontab设置计划执行。
比如:每周日的1:00进行全量备份,每周1-6的1:00进行增量备份。
还可以在脚本里编写邮件通知信息(可以用mail或sendemail)
==================================================================
在使用xtrabackup对mysql执行备份操作的时候,出现下面的报错:
.....................
xtrabackup: innodb_log_file_size = 50331648
InnoDB: Error: log file ./ib_logfile0 is of different size 33554432 bytes
InnoDB: than specified in the .cnf file 50331648 bytes!

解决办法:
可以计算一下33554432的大小,33554432/1024/1024=32
查看my.cnf配置文件的innodb_log_file_size参数配置:
innodb_log_file_size = 32M
需要调整这个文件的大小
再计算一下50331648的大小,50331648/1024/1024=48
修改my.cnf配置文件的下面一行参数值:
innodb_log_file_size = 48M
然后重启mysql

============下面是曾经使用过的一个mysql通过innobackupex进行增量备份脚本===========
[root@mysql-node ~]# cat /data/backup/script/incremental-backup-mysql.sh
#!/bin/sh
#########################################################################
## Description: Mysql增量备份脚本
## File Name: incremental-backup-mysql.sh
## Author: wangshibo
## mail: wangshibo@kevin.com
## Created Time: 2018年1月11日 14:17:09
##########################################################################
today=`date +%Y%m%d`
datetime=`date +%Y%m%d-%H-%M-%S`
config=/etc/my.cnf
basePath=/data/backup
logfilePath=$basePath/logs
logfile=$logfilePath/incr_$datetime.log
USER=mybak
PASSWD=Mysql@!@#1988check
dataBases="activiti batchdb core scf_v2 midfax asset bc_asset"
pid=`ps -ef | grep -v "grep" |grep -i innobackupex|awk '{print $2}'|head -n 1`
if [ -z $pid ]
then
echo " start incremental backup database " >> $logfile
OneMonthAgo=`date -d "1 month ago" +%Y%m%d`
path=$basePath/incr_$datetime
mkdir -p $path
last_backup=`cat $logfilePath/last_backup_sucess.log| head -1`
echo " last backup is ===> " $last_backup >> $logfile
sudo /data/backup/script/percona-xtrabackup-2.4.2-Linux-x86_64/bin/innobackupex --defaults-file=$config --user=$USER --password=$PASSWD --compress --compress-threads=2 --compress-chunk-size=64K --slave-info --safe-slave-backup --host=localhost --incremental $path 
--incremental-basedir=$last_backup --databases="${dataBases}" --no-timestamp >> $logfile 2>&1 sudo chown app.app $path -R ret=`tail -n 2 $logfile |grep "completed OK"|wc -l` if [ "$ret" = 1 ] ; then echo 'delete expired backup ' $basePath/incr_$OneMonthAgo* >> $logfile rm -rf $basePath/incr_$OneMonthAgo* rm -f $logfilePath/incr_$OneMonthAgo*.log echo $path > $logfilePath/last_backup_sucess.log else echo 'backup failure ,no delete expired backup' >> $logfile fi else echo "****** innobackupex in backup database ****** " >> $logfile fi 增量备份文件放在了本机的/data/backup目录下,再编写一个rsync脚本同步到远程备份机上(192.168.10.130): [root@mysql-node ~]# cat /data/rsync.sh #!/bin/bash datetime=`date +%Y%m%d-%H-%M-%S` logfile=/data/rsync.log echo "$datetime Rsync backup mysql start " >> $logfile sudo rsync -e "ssh -p6666" -avpgolr /data/backup bigtree@192.168.10.130:/data/backup_data/bigtree/DB_bak/10.0.40.52/ >> $logfile 2>&1 ret=`tail -n 1 $logfile |grep "total size"|wc -l` if [ "$ret" = 1 ] ; then echo "$datetime Rsync backup mysql finish " >> $logfile else echo "$datetime Rsync backup failure ,pls sendmail" >> $logfile fi 结合crontab进行定时任务执行(每4个小时执行一次) [root@mysql-node ~]# crontab -e 1 4,8,12,16,20,23 * * * /data/backup/script/incremental-backup-mysql.sh > /dev/null 2>&1 10 0,4,8,12,16,20,23 * * * /data/rsync.sh > /dev/null 2>&1 顺便看一下本机/data/backup目录下面的增量备份数据 [root@mysql-node ~]# ll /data/backup 总用量 786364 drwxr-xr-x 2 app app 4096 7月 31 01:01 2018-07-31 drwxr-xr-x 2 app app 4096 8月 1 01:01 2018-08-01 drwxr-xr-x 14 app app 4096 7月 31 00:01 full_20180731-00-01-01 drwxr-xr-x 14 app app 4096 8月 1 00:01 full_20180801-00-01-01 drwxr-xr-x 9 app app 4096 7月 31 04:01 incr_20180731-04-01-01 drwxr-xr-x 9 app app 4096 7月 31 08:01 incr_20180731-08-01-01 drwxr-xr-x 9 app app 4096 7月 31 12:01 incr_20180731-12-01-01 drwxr-xr-x 9 app app 4096 7月 31 16:01 incr_20180731-16-01-01 drwxr-xr-x 9 app app 4096 7月 31 20:01 incr_20180731-20-01-01 drwxr-xr-x 9 app app 4096 7月 31 23:01 incr_20180731-23-01-01 drwxr-xr-x 9 app 
app 4096 8月 1 04:01 incr_20180801-04-01-01 drwxr-xr-x 9 app app 4096 8月 1 08:01 incr_20180801-08-01-01 drwxr-xr-x 9 app app 4096 8月 1 12:01 incr_20180801-12-01-01 drwxr-xr-x 9 app app 4096 8月 1 16:01 incr_20180801-16-01-01 drwxr-xr-x 9 app app 4096 8月 1 20:01 incr_20180801-20-01-01 drwxr-xr-x 9 app app 4096 8月 1 23:01 incr_20180801-23-01-01 drwxrwxr-x 2 app app 20480 8月 9 08:01 logs drwxrwxr-x 3 app app 4096 7月 12 17:43 script lvm-snapshot:基于LVM快照的备份 1.关于快照: 1)事务日志跟数据文件必须在同一个卷上; 2)刚刚创立的快照卷,里面没有任何数据,所有数据均来源于原卷 3)一旦原卷数据发生修改,修改的数据将复制到快照卷中,此时访问数据一部分来自于快照卷,一部分来自于原卷 4)当快照使用过程中,如果修改的数据量大于快照卷容量,则会导致快照卷崩溃。 5)快照卷本身不是备份,只是提供一个时间一致性的访问目录。 2.基于快照备份几乎为热备: 1)创建快照卷之前,要请求MySQL的全局锁;在快照创建完成之后释放锁; 2)如果是Inoodb引擎, 当flush tables 后会有一部分保存在事务日志中,却不在文件中。 因此恢复时候,需要事务日志和数据文件 但释放锁以后,事务日志的内容会同步数据文件中,因此备份内容并不绝对是锁释放时刻的内容,由于有些为完成的事务已经完成,但在备份数据中因为没完成而回滚。 因此需要借助二进制日志往后走一段 3.基于快照备份注意事项: 1)事务日志跟数据文件必须在同一个卷上; 2)创建快照卷之前,要请求MySQL的全局锁;在快照创建完成之后释放锁; 3)请求全局锁完成之后,做一次日志滚动;做二进制日志文件及位置标记(手动进行); 4.为什么基于MySQL快照的备份很好? 原因如下几点: 1)几乎是热备 在大多数情况下,可以在应用程序仍在运行的时候执行备份。无需关机,只需设置为只读或者类似只读的限制。 2)支持所有基于本地磁盘的存储引擎 它支持MyISAM, Innodb, BDB,还支持 Solid, PrimeXT 和 Falcon。 3)快速备份 只需拷贝二进制格式的文件,在速度方面无以匹敌。 4)低开销 只是文件拷贝,因此对服务器的开销很细微。 5)容易保持完整性 想要压缩备份文件吗?把它们备份到磁带上,FTP或者网络备份软件 -- 十分简单,因为只需要拷贝文件即可。 6)快速恢复 恢复的时间和标准的MySQL崩溃恢复或数据拷贝回去那么快,甚至可能更快,将来会更快。 7)免费 无需额外的商业软件,只需Innodb热备工具来执行备份。 快照备份mysql的缺点: 1)需要兼容快照 -- 这是明显的。 2)需要超级用户(root) 在某些组织,DBA和系统管理员来自不同部门不同的人,因此权限各不一样。 3)停工时间无法预计,这个方法通常指热备,但是谁也无法预料到底是不是热备 -- FLUSH TABLES WITH READ LOCK 可能会需要执行很长时间才能完成。 4)多卷上的数据问题 如果你把日志放在独立的设备上或者你的数据库分布在多个卷上,这就比较麻烦了,因为无法得到全部数据库的一致性快照。不过有些系统可能能自动做到多卷快照。 下面即是使用lvm-snapshot快照方式备份mysql的操作记录,仅依据本人实验中使用而述. 操作记录: 如下环境,本机是在openstack上开的云主机,在openstack上创建一个30G的云硬盘挂载到本机,然后制作lvm逻辑卷。 一、准备LVM卷,并将mysql数据恢复(或者说迁移)到LVM卷上: 1) 创建一个分区或保存到另一块硬盘上面 2) 创建PV、VG、LVM 3) 格式化 LV0 4) 挂载LV到临时目录 5) 确认服务处于stop状态 6) 将数据迁移到LV0 7) 重新挂载LV0到mysql数据库的主目录/data/mysql/data 8) 审核权限并启动服务 [root@test-huanqiu ~]# fdisk -l ......... 
Disk /dev/vdc: 32.2 GB, 32212254720 bytes 16 heads, 63 sectors/track, 62415 cylinders Units = cylinders of 1008 * 512 = 516096 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 [root@test-huanqiu ~]# fdisk /dev/vdc //依次输入p->n->p->1->回车->回车->w ......... Command (m for help): p Disk /dev/vdc: 32.2 GB, 32212254720 bytes 16 heads, 63 sectors/track, 62415 cylinders Units = cylinders of 1008 * 512 = 516096 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x343250e4 Device Boot Start End Blocks Id System Command (m for help): n Command action e extended p primary partition (1-4) p Partition number (1-4): 1 First cylinder (1-62415, default 1): Using default value 1 Last cylinder, +cylinders or +size{K,M,G} (1-62415, default 62415): Using default value 62415 Command (m for help): w The partition table has been altered! Calling ioctl() to re-read partition table. Syncing disks. [root@test-huanqiu ~]# fdisk /dev/vdc WARNING: DOS-compatible mode is deprecated. It's strongly recommended to switch off the mode (command 'c') and change display units to sectors (command 'u'). Command (m for help): p Disk /dev/vdc: 32.2 GB, 32212254720 bytes 16 heads, 63 sectors/track, 62415 cylinders Units = cylinders of 1008 * 512 = 516096 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x343250e4 Device Boot Start End Blocks Id System /dev/vdc1 1 62415 31457128+ 5 Extended Command (m for help): [root@test-huanqiu ~]# pvcreate /dev/vdc1 Device /dev/vdc1 not found (or ignored by filtering). [root@test-huanqiu ~]# vgcreate vg0 /dev/vdc1 Volume group "vg0" successfully created [root@test-huanqiu ~]# lvcreate -L +3G -n lv0 vg0 Logical volume "lv0" created. 
[root@test-huanqiu ~]# mkfs.ext4 /dev/vg0/lv0 [root@test-huanqiu ~]# mkdir /var/lv0/ [root@test-huanqiu ~]# mount /dev/vg0/lv0 /var/lv0/ [root@test-huanqiu ~]# df -h Filesystem Size Used Avail Use% Mounted on /dev/mapper/VolGroup00-LogVol00 8.1G 6.0G 1.7G 79% / tmpfs 1.9G 0 1.9G 0% /dev/shm /dev/vda1 190M 37M 143M 21% /boot /dev/mapper/vg0-lv0 2.9G 4.5M 2.8G 1% /var/lv0 [root@test-huanqiu ~]# lvs LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert LogVol00 VolGroup00 -wi-ao---- 8.28g LogVol01 VolGroup00 -wi-ao---- 1.50g lv0 vg0 -wi-a----- 3.00g ---------------------------------------------------------------------------------------------------- 如果要想删除这个lvs,操作如下: [root@test-huanqiu ~]# umount /data/mysql/data/ //先卸载掉这个lvs的挂载关系 [root@test-huanqiu ~]# lvremove /dev/vg0/lv0 [root@test-huanqiu ~]# vgremove vg0 [root@test-huanqiu ~]# pvremove /dev/vdc1 [root@test-huanqiu ~]# lvs LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert LogVol00 VolGroup00 -wi-ao---- 8.28g LogVol01 VolGroup00 -wi-ao---- 1.50g ---------------------------------------------------------------------------------------------------- mysql的数据目录是/data/mysql/data,密码是123456 [root@test-huanqiu ~]# ps -ef|grep mysql mysql 2066 1286 0 07:33 ? 00:00:06 /usr/local/mysql/bin/mysqld --basedir=/usr/local/mysql/ --datadir=/data/mysql/data --plugin-dir=/usr/local/mysql//lib/plugin --user=mysql --log-error=/data/mysql/data/mysql-error.log --pid-file=/data/mysql/data/mysql.pid --socket=/usr/local/mysql/var/mysql.sock --port=3306 root 2523 2471 0 07:55 pts/1 00:00:00 grep mysql [root@test-huanqiu ~]# /etc/init.d/mysql stop Shutting down MySQL.... SUCCESS! [root@test-huanqiu ~]# cd /data/mysql/data/ [root@test-huanqiu data]# tar -cf - . 
| tar xf - -C /var/lv0/ [root@test-huanqiu data]# umount /var/lv0/ [root@test-huanqiu data]# mount /dev/vg0/lv0 /data/mysql/data [root@test-huanqiu data]# df -h Filesystem Size Used Avail Use% Mounted on /dev/mapper/VolGroup00-LogVol00 8.1G 6.0G 1.7G 79% / tmpfs 1.9G 0 1.9G 0% /dev/shm /dev/vda1 190M 37M 143M 21% /boot /dev/mapper/vg0-lv0 2.9G 164M 2.6G 6% /data/mysql/data 删除挂载后产生的lost+found目录 [root@test-huanqiu data]# rm -rf lost+found [root@test-huanqiu data]# ll -d /data/mysql/data [root@test-huanqiu data]# ll -Z /data/mysql/data [root@test-huanqiu data]# ll -Zd /data/mysql/data 需要注意的是: 当SElinux功能开启情况下,mysql数据库重启会失败,所以必须执行下面命令,恢复SElinux安全上下文. [root@test-huanqiu data]# restorecon -R /data/mysql/data/ [root@test-huanqiu data]# /etc/init.d/mysql start Starting MySQL... SUCCESS! 二、备份: (生产环境下一般都是整个数据库备份) 1)锁表 2)查看position号并记录,便于后期恢复 3)创建snapshot快照 4)解表 5)挂载snapshot 6)拷贝snapshot数据,进行备份。备份整个数据库之前,要关闭mysql服务(保护ibdata1文件) 7)移除快照 设置此变量为1,让每个事件尽可能同步到二进制日志文件里,以消耗IO来尽可能确保数据一致性。 mysql> SET GLOBAL sync_binlog=1; 查看二进制日志和position,以备后续进行binlog日志恢复增量数据(记住这个position节点记录,对后面的增量数据备份很重要) mysql> SHOW MASTER STATUS; +------------------+----------+--------------+------------------+-------------------+ | File | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set | +------------------+----------+--------------+------------------+-------------------+ | mysql-bin.000004 | 1434 | | | | +------------------+----------+--------------+------------------+-------------------+ 1 row in set (0.00 sec) 创建存放binlog日志的position节点记录的目录 所有的position节点记录都放在这同一个binlog.pos文件下(后面就使用>>符号追加到这个文件下) [root@test-huanqiu ~]# mkdir /backup/mysql/binlog [root@test-huanqiu ~]# mysql -p123456 -e "SHOW MASTER STATUS;" > /backup/mysql/binlog/binlog.pos [root@test-huanqiu snap1]# cat /backup/mysql/binlog/binlog.pos File Position Binlog_Do_DB Binlog_Ignore_DB Executed_Gtid_Set mysql-bin.000004 1434 刷新日志,产生新的binlog日志,保证日志信息不会再写入到上面的mysql-bin.000004日志内。 mysql> FLUSH LOGS; 全局读锁,读锁请求到后不要关闭此mysql交互界面 mysql> FLUSH TABLES 
WITH READ LOCK;
在innodb表中,即使是请求到了读锁,但InnoDB在后台依然可能会有事务在进行读写操作,
可用"mysql> SHOW ENGINE INNODB STATUS;"查看后台进程的状态,等没有写请求后再做备份。

创建快照,以只读的方式(--permission r)创建一个2G大小的快照卷snap1
-s:相当于--snapshot
[root@test-huanqiu ~]# mkdir /var/snap1
[root@test-huanqiu ~]# lvcreate -s -L 2G -n snap1 /dev/vg0/lv0 --permission r
Logical volume "snap1" created.

查看快照卷的详情(快照卷也是LV):
[root@test-huanqiu ~]# lvdisplay

解除锁定
回到锁定表的mysql交互式界面,解锁:
mysql> UNLOCK TABLES;

此参数可以根据服务器磁盘IO的负载来调整
mysql> SET GLOBAL sync_binlog=0;

[root@test-huanqiu ~]# mount /dev/vg0/snap1 /var/snap1    //挂载快照卷
[root@test-huanqiu snap1]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00 8.1G 5.8G 1.9G 76% /
tmpfs 1.9G 0 1.9G 0% /dev/shm
/dev/vda1 190M 37M 143M 21% /boot
/dev/mapper/vg0-lv0 2.9G 115M 2.7G 5% /data/mysql/data
/dev/mapper/vg0-snap1 2.9G 115M 2.7G 5% /var/snap1

[root@test-huanqiu ~]# cd /var/snap1/ && ll /var/snap1
total 0
[root@test-huanqiu snap1]# mkdir -p /backup/mysql/data/    //创建备份目录

对本机的数据库进行备份,备份整个数据库。
mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
| test |
+--------------------+
4 rows in set (0.01 sec)

mysql> create database beijing;
Query OK, 1 row affected (0.00 sec)

mysql> use beijing;
Database changed
mysql> create table people(id int(5),name varchar(20));
Query OK, 0 rows affected (0.03 sec)

mysql> insert into people values("1","wangshibo");
Query OK, 1 row affected (0.00 sec)

mysql> insert into people values("2","guohuihui");
Query OK, 1 row affected (0.01 sec)

mysql> insert into people values("3","wuxiang");
Query OK, 1 row affected (0.01 sec)

mysql> select * from people;
+------+-----------+
| id | name |
+------+-----------+
| 1 | wangshibo |
| 2 | guohuihui |
| 3 | wuxiang |
+------+-----------+
3 rows in set (0.00 sec)

mysql> show databases; 
+--------------------+ | Database | +--------------------+ | information_schema | | beijing | | mysql | | performance_schema | | test | +--------------------+ 5 rows in set (0.01 sec) -------------------------------------------------------------------------------------------------------------------------- 需要注意的是: innodb表,一般会打开独立表空间模式(innodb_file_per_table)。 由于InnoDB默认会将所有的数据库InnoDB引擎的表数据存储在一个共享空间中:ibdata1文件。 增删数据库的时候,ibdata1文件不会自动收缩,这对单个或部分数据库的备份也将成为问题(如果不是整个数据库备份的情况下,ibdata1文件就不能备份,否则会影响全部数据库的数据)。 所以若是对单个数据库或部分数据库进行快照备份: 1)若是直接误删除mysql数据目录下备份库目录,可以直接将快照备份数据解压就能恢复 2)若是使用drop或delete误删除的数据,那么在使用快照备份数据恢复时,就会出问题!因为单库备份时ibdata1文件不能单独备份,恢复时会导致这个文件损坏! 所以正确的做法是: 要对整个数据库进行备份,并且一定要在mysql服务关闭的情况下(这样是为了保护ibdata1文件)。 因为mysql是采用缓冲方式来将数据写入到ibdata1文件中的,这正是fflush()函数存在的理由。当mysql在运行时,对ibdata1进行拷贝肯定会导致ibdata1文件中的数据出错,这样在数据恢复时,也就肯定会出现“ERROR 1146 (42S02): Table '****' doesn't exist“的报错! 在对启用innodb引擎的mysql数据库进行迁移的时候也是同理: 在对innodb数据库进行数据迁移的时候,即将msyql(innodb引擎)服务从一台服务器迁移到另一台服务器时,在对数据库目录进行整体拷贝的时候(当然就包括了对ibdata1文件拷贝),一定要在关闭对方mysql服务的情况下进行拷贝! 
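上面"先停服务再整体拷贝"的要求,可以在脚本里加一道保险:拷贝前先确认 mysqld 确实没有在运行,否则直接拒绝。下面是一个极简示意,mysqld_running 和 safe_copy_datadir 均为本文自拟的函数名,检测方式(ps+grep)也只是一种假定做法:

```shell
#!/bin/sh
# 示意:拷贝包含 ibdata1 的数据目录前,先确认 mysqld 已停止
# mysqld_running / safe_copy_datadir 均为自拟函数名
mysqld_running() {
    # [m]ysqld 写法避免 grep 匹配到自身
    ps -ef | grep '[m]ysqld' > /dev/null
}

safe_copy_datadir() {
    src="$1"; dst="$2"
    if mysqld_running; then
        echo "refuse: mysqld is still running, stop it first" >&2
        return 1
    fi
    # cp -a 保留属主与权限,ibdata1 和各库目录一并拷贝
    cp -a "$src" "$dst"
}

# 用临时目录演示(并非真实的 mysql 数据目录)
rm -rf /tmp/dd_src /tmp/dd_dst
mkdir -p /tmp/dd_src
touch /tmp/dd_src/ibdata1
safe_copy_datadir /tmp/dd_src /tmp/dd_dst && echo "copy done"
```

拷贝完成后照例还要 chown 回 mysql:mysql,再启动服务。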
ibdata1用来储存文件的数据,而库名的文件夹里面的那些表文件只是结构而已,由于新版的mysql默认试innodb,所以ibdata1文件默认就存在了,少了这个文件有的数据表就会出错。要知道:数据库目录下的.frm文件是数据库中很多的表的结构描述文件;而ibdata1文件才是数据库的真实数据存放文件。 -------------------------------------------innodb_file_per_table参数说明------------------------------------------ 线上环境的话,一般都建议打开这个独立表空间模式。 因为ibdata1文件会不断的增大,不会减少,无法向OS回收空间,容易导致线上出现过大的共享表空间文件,致使当前空间爆满。 并且ibdata1文件大到一定程序会影响insert、update的速度;并且 另外如果删表频繁的话,共享表空间产生的碎片会比较多。打开独立表空间,方便进行innodb表的碎片整理 使用MyISAM表引擎的数据库会分别创建三个文件:表结构、表索引、表数据空间。 可以将某个数据库目录直接迁移到其他数据库也可以正常工作。 然而当使用InnoDB的时候,一切都变了。 InnoDB默认会将所有的数据库InnoDB引擎的表数据存储在一个共享空间中:ibdata1文件。 增删数据库的时候,ibdata1文件不会自动收缩,单个数据库的备份也将成为问题。 通常只能将数据使用mysqldump 导出,然后再导入解决这个问题。 在MySQL的配置文件[mysqld]部分,增加innodb_file_per_table参数。 可以修改InnoDB为独立表空间模式,每个数据库的每个表都会生成一个数据空间。 它的优点: 1)每个表都有自已独立的表空间。 2)每个表的数据和索引都会存在自已的表空间中。 3)可以实现单表在不同的数据库中移动。 4)空间可以回收(除drop table操作处,表空不能自已回收) Drop table操作自动回收表空间,如果对于统计分析或是日值表,删除大量数据后可以通过:alter table TableName engine=innodb;回缩不用的空间。 对于使innodb-plugin的Innodb使用turncate table也会使空间收缩。 对于使用独立表空间的表,不管怎么删除,表空间的碎片不会太严重的影响性能,而且还有机会处理。 它的缺点: 单表增加过大,如超过100个G。 结论: 共享表空间在Insert操作上少有优势。其它都没独立表空间表现好。当启用独立表空间时,请合理调整一下:innodb_open_files。 InnoDB Hot Backup(冷备)的表空间cp不会面对很多无用的copy了。而且利用innodb hot backup及表空间的管理命令可以实。 1)innodb_file_per_table设置.设置为1,表示打开了独立的表空间模式。 如果设置为0,表示关闭独立表空间模式,开启方法如下: 在my.cnf中[mysqld]下设置 innodb_file_per_table=1 2)查看是否开启: mysql> show variables like "%per_table%"; +-----------------------+-------+ | Variable_name | Value | +-----------------------+-------+ | innodb_file_per_table | ON | +-----------------------+-------+ 1 row in set (0.00 sec) 3)关闭独享表空间 innodb_file_per_table=0关闭独立的表空间 mysql> show variables like ‘%per_table%’; -------------------------------------------innodb_file_per_table参数说明------------------------------------------ -------------------------------------------------------------------------------------------------------------------------- 备份前,一定要关闭mysql数据库!因为里面会涉及到ibdata1文件备份,不关闭mysql的话,ibdata1文件备份后会损坏,从而导致恢复数据失败! 
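把前面从锁表、记position、建快照,到打包、释放快照的完整流程串起来,可以先用一个只打印命令、不真正执行的清单脚本自检一遍顺序(尤其是 UNLOCK TABLES 必须在 lvcreate 之后、tar 必须在停库之后)。下面是一个示意,snapshot_backup_plan 为本文自拟的函数名,mysql -e 的写法也只是示意,锁表/解锁实际要在同一个 mysql 会话中进行:

```shell
#!/bin/sh
# 示意:按本文顺序打印 LVM 快照备份的命令清单(只打印,不执行)
# snapshot_backup_plan 为自拟函数名
snapshot_backup_plan() {
    vg_lv="$1"; mnt="$2"; dest="$3"
    cat <<EOF
mysql -e "FLUSH TABLES WITH READ LOCK;"
mysql -e "SHOW MASTER STATUS;" > /backup/mysql/binlog/binlog.pos
lvcreate -s -L 2G -n snap1 $vg_lv --permission r
mysql -e "UNLOCK TABLES;"
mount /dev/vg0/snap1 $mnt
/etc/init.d/mysql stop
tar -zcf $dest/\$(date +%Y-%m-%d)dbbackup.tar.gz -C $mnt .
/etc/init.d/mysql start
umount $mnt
lvremove -f /dev/vg0/snap1
EOF
}

snapshot_backup_plan /dev/vg0/lv0 /var/snap1 /backup/mysql/data
```

注意清单里停库打包这一步遵循的是本文作者的做法(保护ibdata1);锁表只覆盖到快照创建完成为止,所以对业务的阻塞时间很短。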
[root@test-huanqiu snap1]# /etc/init.d/mysql stop
Shutting down MySQL.... SUCCESS!
[root@test-huanqiu data]# lsof -i:3306
[root@test-huanqiu data]#

现在备份整个数据库
[root@test-huanqiu snap1]# tar -zvcf /backup/mysql/data/`date +%Y-%m-%d`dbbackup.tar.gz ./
[root@test-huanqiu snap1]# ll /backup/mysql/data/
total 384
-rw-r--r--. 1 root root 392328 Dec 5 22:15 2016-12-05dbbackup.tar.gz

释放快照卷,每次备份之后,应该删除快照,减少IO操作
先卸载,再删除
[root@test-huanqiu ~]# umount /var/snap1/
[root@test-huanqiu ~]# df -h    //确认上面的挂载关系已经没了
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00 8.1G 5.8G 1.9G 76% /
tmpfs 1.9G 0 1.9G 0% /dev/shm
/dev/vda1 190M 37M 143M 21% /boot
/dev/mapper/vg0-lv0 2.9G 115M 2.7G 5% /data/mysql/data
[root@test-huanqiu ~]# lvremove /dev/vg0/snap1
Do you really want to remove active logical volume snap1? [y/n]: y
Logical volume "snap1" successfully removed

数据被快照备份后,可以启动数据库
[root@test-huanqiu ~]# /etc/init.d/mysql start
Starting MySQL.. SUCCESS!
[root@test-huanqiu ~]# lsof -i:3306
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
mysqld 15943 mysql 16u IPv4 93348 0t0 TCP *:mysql (LISTEN)
[root@test-huanqiu ~]#

现在再进行新的数据写入:
mysql> use beijing;
Database changed
mysql> insert into people values("4","liumengnan");
Query OK, 1 row affected (0.02 sec)

mysql> insert into people values("5","zhangjuanjuan");
Query OK, 1 row affected (0.00 sec)

mysql> select * from people;
+------+---------------+
| id | name |
+------+---------------+
| 1 | wangshibo |
| 2 | guohuihui |
| 3 | wuxiang |
| 4 | liumengnan |
| 5 | zhangjuanjuan |
+------+---------------+
5 rows in set (0.00 sec)

mysql> create table heihei(name varchar(20),age varchar(20));
Query OK, 0 rows affected (0.02 sec)

mysql> insert into heihei values("jiujiujiu","nan");
Query OK, 1 row affected (0.00 sec)

mysql> select * from heihei;
+-----------+------+
| name | age | 
+-----------+------+
| jiujiujiu | nan |
+-----------+------+
1 row in set (0.00 sec)

mysql> create database shanghai;
Query OK, 1 row affected (0.01 sec)

mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| beijing |
| mysql |
| performance_schema |
| shanghai |
| test |
+--------------------+
6 rows in set (0.00 sec)

假设一不小心误操作删除beijing和shanghai库
mysql> drop database beijing;
Query OK, 2 rows affected (0.03 sec)

mysql> drop database shanghai;
Query OK, 0 rows affected (0.00 sec)

mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
| test |
+--------------------+
4 rows in set (0.00 sec)

莫慌!接下来就说下数据恢复操作~~

三、恢复流程如下:
1)由于涉及到增量数据备份,所以提前将最近一次的binlog日志从mysql数据目录复制到别的路径下
2)在mysql数据库中执行flush logs命令,产生新的binlog日志,让日志信息写入到新的这个binlog日志中
3)关闭数据库,一定要关闭
4)删除数据目录下的文件
5)快照数据拷贝回来,position节点记录回放
6)增量数据就利用mysqlbinlog命令将上面提前拷贝的binlog日志文件导出为sql文件,并剔除其中的drop语句,然后进行恢复。
7)重启数据库

先将最新一次的binlog日志备份到别处,用作增量数据备份。
比如mysql-bin.000006是最新一次的binlog日志
[root@test-huanqiu data]# cp mysql-bin.000006 /backup/mysql/data/

产生新的binlog日志,确保日志写入到这个新的binlog日志内,而不再写入到上面备份的binlog日志里。
mysql> flush logs;
[root@test-huanqiu data]# ll mysql-bin.000007
-rw-rw----. 1 mysql mysql 120 Dec 5 23:19 mysql-bin.000007

[root@test-huanqiu data]# /etc/init.d/mysql stop
Shutting down MySQL.... SUCCESS!
[root@test-huanqiu data]# lsof -i:3306
[root@test-huanqiu data]# pwd
/data/mysql/data
[root@test-huanqiu data]# rm -rf ./*
[root@test-huanqiu data]# tar -zvxf /backup/mysql/data/2016-12-05dbbackup.tar.gz ./
[root@test-huanqiu data]# /etc/init.d/mysql start
Starting MySQL SUCCESS! 
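接下来回放binlog时要用到 binlog.pos 里记下的文件名和 position,脚本里可以直接从该文件解析出来再拼出 mysqlbinlog 命令。下面是一个示意,parse_binlog_pos 为本文自拟的函数名,示例文件内容复刻的是 SHOW MASTER STATUS 输出的格式:

```shell
#!/bin/sh
# 示意:从 binlog.pos(SHOW MASTER STATUS 的落盘输出)解析日志文件名和 position
# parse_binlog_pos 为自拟函数名
parse_binlog_pos() {
    # 第1行是表头,第2行才是数据;取前两列:File 和 Position
    awk 'NR==2 {print $1, $2}' "$1"
}

# 用已知格式的示例文件演示
cat > /tmp/binlog.pos <<'EOF'
File Position Binlog_Do_DB Binlog_Ignore_DB Executed_Gtid_Set
mysql-bin.000004 1434
EOF

set -- $(parse_binlog_pos /tmp/binlog.pos)
echo "mysqlbinlog --start-position=$2 /data/mysql/data/$1 | mysql -p123456"
```

这样拼出来的回放命令与下文手工执行的命令一致,避免手抄 position 抄错。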
[root@test-huanqiu data]# cat /backup/mysql/binlog/binlog.pos
File Position Binlog_Do_DB Binlog_Ignore_DB Executed_Gtid_Set
mysql-bin.000004 1434
[root@test-huanqiu data]# mysqlbinlog --start-position=1434 /data/mysql/data/mysql-bin.000004 | mysql -p123456

登陆数据库查看,发现这只是恢复到快照备份阶段的数据:
mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| beijing |
| mysql |
| performance_schema |
| test |
+--------------------+
5 rows in set (0.00 sec)

mysql> select * from beijing.people;
+------+-----------+
| id | name |
+------+-----------+
| 1 | wangshibo |
| 2 | guohuihui |
| 3 | wuxiang |
+------+-----------+
3 rows in set (0.00 sec)

mysql>

快照备份之后写入的数据要利用mysqlbinlog命令将上面拷贝的mysql-bin.000006文件导出为sql文件,并剔除其中的drop语句,然后进行恢复。
[root@test-huanqiu ~]# cd /backup/mysql/data/
[root@test-huanqiu data]# ll
total 388
-rw-r--r--. 1 root root 392328 Dec 5 22:15 2016-12-05dbbackup.tar.gz
-rw-r-----. 1 root root 1274 Dec 5 23:19 mysql-bin.000006
[root@test-huanqiu data]# mysqlbinlog mysql-bin.000006 > 000006bin.sql

剔除其中的drop语句
[root@test-huanqiu data]# vim 000006bin.sql    //手动删除sql语句中的drop语句

然后在mysql中使用source命令恢复数据
mysql> source /backup/mysql/data/000006bin.sql;

再次查看下,发现增量部分的数据也已经恢复回来了
mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| beijing |
| mysql |
| performance_schema |
| shanghai |
| test |
+--------------------+
6 rows in set (0.00 sec)

mysql> use beijing;
Database changed
mysql> show tables;
+-------------------+
| Tables_in_beijing |
+-------------------+
| heihei |
| people |
+-------------------+
2 rows in set (0.00 sec)

mysql> select * from people;
+------+---------------+
| id | name |
+------+---------------+
| 1 | wangshibo |
| 2 | guohuihui |
| 3 | wuxiang |
| 4 | liumengnan |
| 5 | zhangjuanjuan 
| +------+---------------+ 5 rows in set (0.00 sec) mysql> select * from heihei; +-----------+------+ | name | age | +-----------+------+ | jiujiujiu | nan | +-----------+------+ 1 row in set (0.00 sec) lvm-snapshot:基于LVM快照的备份 1.关于快照: 1)事务日志跟数据文件必须在同一个卷上; 2)刚刚创立的快照卷,里面没有任何数据,所有数据均来源于原卷 3)一旦原卷数据发生修改,修改的数据将复制到快照卷中,此时访问数据一部分来自于快照卷,一部分来自于原卷 4)当快照使用过程中,如果修改的数据量大于快照卷容量,则会导致快照卷崩溃。 5)快照卷本身不是备份,只是提供一个时间一致性的访问目录。 2.基于快照备份几乎为热备: 1)创建快照卷之前,要请求MySQL的全局锁;在快照创建完成之后释放锁; 2)如果是Inoodb引擎, 当flush tables 后会有一部分保存在事务日志中,却不在文件中。 因此恢复时候,需要事务日志和数据文件 但释放锁以后,事务日志的内容会同步数据文件中,因此备份内容并不绝对是锁释放时刻的内容,由于有些为完成的事务已经完成,但在备份数据中因为没完成而回滚。 因此需要借助二进制日志往后走一段。 3.基于快照备份注意事项: 1)事务日志跟数据文件必须在同一个卷上; 2)创建快照卷之前,要请求MySQL的全局锁;在快照创建完成之后释放锁; 3)请求全局锁完成之后,做一次日志滚动;做二进制日志文件及位置标记(手动进行); 4.为什么基于MySQL快照的备份很好? 原因如下几点: 1)几乎是热备 在大多数情况下,可以在应用程序仍在运行的时候执行备份。无需关机,只需设置为只读或者类似只读的限制。 2)支持所有基于本地磁盘的存储引擎 它支持MyISAM, Innodb, BDB,还支持 Solid, PrimeXT 和 Falcon。 3)快速备份 只需拷贝二进制格式的文件,在速度方面无以匹敌。 4)低开销 只是文件拷贝,因此对服务器的开销很细微。 5)容易保持完整性 想要压缩备份文件吗?把它们备份到磁带上,FTP或者网络备份软件 -- 十分简单,因为只需要拷贝文件即可。 6)快速恢复 恢复的时间和标准的MySQL崩溃恢复或数据拷贝回去那么快,甚至可能更快,将来会更快。 7)免费 无需额外的商业软件,只需Innodb热备工具来执行备份。 快照备份mysql的缺点: 1)需要兼容快照 -- 这是明显的。 2)需要超级用户(root) 在某些组织,DBA和系统管理员来自不同部门不同的人,因此权限各不一样。 3)停工时间无法预计,这个方法通常指热备,但是谁也无法预料到底是不是热备 -- FLUSH TABLES WITH READ LOCK 可能会需要执行很长时间才能完成。 4)多卷上的数据问题 如果你把日志放在独立的设备上或者你的数据库分布在多个卷上,这就比较麻烦了,因为无法得到全部数据库的一致性快照。不过有些系统可能能自动做到多卷快照。 下面即是使用lvm-snapshot快照方式备份mysql的操作记录,仅依据本人实验中使用而述. 操作记录: 如下环境,本机是在openstack上开的云主机,在openstack上创建一个30G的云硬盘挂载到本机,然后制作lvm逻辑卷。 一、准备LVM卷,并将mysql数据恢复(或者说迁移)到LVM卷上: 1) 创建一个分区或保存到另一块硬盘上面 2) 创建PV、VG、LVM 3) 格式化 LV0 4) 挂载LV到临时目录 5) 确认服务处于stop状态 6) 将数据迁移到LV0 7) 重新挂载LV0到mysql数据库的主目录/data/mysql/data 8) 审核权限并启动服务 [root@test-huanqiu ~]# fdisk -l ......... 
Disk /dev/vdc: 32.2 GB, 32212254720 bytes 16 heads, 63 sectors/track, 62415 cylinders Units = cylinders of 1008 * 512 = 516096 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 [root@test-huanqiu ~]# fdisk /dev/vdc //依次输入p->n->p->1->回车->回车->w ......... Command (m for help): p Disk /dev/vdc: 32.2 GB, 32212254720 bytes 16 heads, 63 sectors/track, 62415 cylinders Units = cylinders of 1008 * 512 = 516096 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x343250e4 Device Boot Start End Blocks Id System Command (m for help): n Command action e extended p primary partition (1-4) p Partition number (1-4): 1 First cylinder (1-62415, default 1): Using default value 1 Last cylinder, +cylinders or +size{K,M,G} (1-62415, default 62415): Using default value 62415 Command (m for help): w The partition table has been altered! Calling ioctl() to re-read partition table. Syncing disks. [root@test-huanqiu ~]# fdisk /dev/vdc WARNING: DOS-compatible mode is deprecated. It's strongly recommended to switch off the mode (command 'c') and change display units to sectors (command 'u'). Command (m for help): p Disk /dev/vdc: 32.2 GB, 32212254720 bytes 16 heads, 63 sectors/track, 62415 cylinders Units = cylinders of 1008 * 512 = 516096 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x343250e4 Device Boot Start End Blocks Id System /dev/vdc1 1 62415 31457128+ 5 Extended Command (m for help): [root@test-huanqiu ~]# pvcreate /dev/vdc1 Device /dev/vdc1 not found (or ignored by filtering). [root@test-huanqiu ~]# vgcreate vg0 /dev/vdc1 Volume group "vg0" successfully created [root@test-huanqiu ~]# lvcreate -L +3G -n lv0 vg0 Logical volume "lv0" created. 
[root@test-huanqiu ~]# mkfs.ext4 /dev/vg0/lv0 [root@test-huanqiu ~]# mkdir /var/lv0/ [root@test-huanqiu ~]# mount /dev/vg0/lv0 /var/lv0/ [root@test-huanqiu ~]# df -h Filesystem Size Used Avail Use% Mounted on /dev/mapper/VolGroup00-LogVol00 8.1G 6.0G 1.7G 79% / tmpfs 1.9G 0 1.9G 0% /dev/shm /dev/vda1 190M 37M 143M 21% /boot /dev/mapper/vg0-lv0 2.9G 4.5M 2.8G 1% /var/lv0 [root@test-huanqiu ~]# lvs LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert LogVol00 VolGroup00 -wi-ao---- 8.28g LogVol01 VolGroup00 -wi-ao---- 1.50g lv0 vg0 -wi-a----- 3.00g ---------------------------------------------------------------------------------------------------- 如果要想删除这个lvs,操作如下: [root@test-huanqiu ~]# umount /data/mysql/data/ //先卸载掉这个lvs的挂载关系 [root@test-huanqiu ~]# lvremove /dev/vg0/lv0 [root@test-huanqiu ~]# vgremove vg0 [root@test-huanqiu ~]# pvremove /dev/vdc1 [root@test-huanqiu ~]# lvs LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert LogVol00 VolGroup00 -wi-ao---- 8.28g LogVol01 VolGroup00 -wi-ao---- 1.50g ---------------------------------------------------------------------------------------------------- mysql的数据目录是/data/mysql/data,密码是123456 [root@test-huanqiu ~]# ps -ef|grep mysql mysql 2066 1286 0 07:33 ? 00:00:06 /usr/local/mysql/bin/mysqld --basedir=/usr/local/mysql/ --datadir=/data/mysql/data --plugin-dir=/usr/local/mysql//lib/plugin --user=mysql --log-error=/data/mysql/data/mysql-error.log --pid-file=/data/mysql/data/mysql.pid --socket=/usr/local/mysql/var/mysql.sock --port=3306 root 2523 2471 0 07:55 pts/1 00:00:00 grep mysql [root@test-huanqiu ~]# /etc/init.d/mysql stop Shutting down MySQL.... SUCCESS! [root@test-huanqiu ~]# cd /data/mysql/data/ [root@test-huanqiu data]# tar -cf - . 
| tar xf - -C /var/lv0/ [root@test-huanqiu data]# umount /var/lv0/ [root@test-huanqiu data]# mount /dev/vg0/lv0 /data/mysql/data [root@test-huanqiu data]# df -h Filesystem Size Used Avail Use% Mounted on /dev/mapper/VolGroup00-LogVol00 8.1G 6.0G 1.7G 79% / tmpfs 1.9G 0 1.9G 0% /dev/shm /dev/vda1 190M 37M 143M 21% /boot /dev/mapper/vg0-lv0 2.9G 164M 2.6G 6% /data/mysql/data 删除挂载后产生的lost+found目录 [root@test-huanqiu data]# rm -rf lost+found [root@test-huanqiu data]# ll -d /data/mysql/data [root@test-huanqiu data]# ll -Z /data/mysql/data [root@test-huanqiu data]# ll -Zd /data/mysql/data 需要注意的是: 当SElinux功能开启情况下,mysql数据库重启会失败,所以必须执行下面命令,恢复SElinux安全上下文. [root@test-huanqiu data]# restorecon -R /data/mysql/data/ [root@test-huanqiu data]# /etc/init.d/mysql start Starting MySQL... SUCCESS! 二、备份: (生产环境下一般都是整个数据库备份) 1)锁表 2)查看position号并记录,便于后期恢复 3)创建snapshot快照 4)解表 5)挂载snapshot 6)拷贝snapshot数据,进行备份。备份整个数据库之前,要关闭mysql服务(保护ibdata1文件) 7)移除快照 设置此变量为1,让每个事件尽可能同步到二进制日志文件里,以消耗IO来尽可能确保数据一致性。 mysql> SET GLOBAL sync_binlog=1; 查看二进制日志和position,以备后续进行binlog日志恢复增量数据(记住这个position节点记录,对后面的增量数据备份很重要) mysql> SHOW MASTER STATUS; +------------------+----------+--------------+------------------+-------------------+ | File | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set | +------------------+----------+--------------+------------------+-------------------+ | mysql-bin.000004 | 1434 | | | | +------------------+----------+--------------+------------------+-------------------+ 1 row in set (0.00 sec) 创建存放binlog日志的position节点记录的目录 所有的position节点记录都放在这同一个binlog.pos文件下(后面就使用>>符号追加到这个文件下) [root@test-huanqiu ~]# mkdir /backup/mysql/binlog [root@test-huanqiu ~]# mysql -p123456 -e "SHOW MASTER STATUS;" > /backup/mysql/binlog/binlog.pos [root@test-huanqiu snap1]# cat /backup/mysql/binlog/binlog.pos File Position Binlog_Do_DB Binlog_Ignore_DB Executed_Gtid_Set mysql-bin.000004 1434 刷新日志,产生新的binlog日志,保证日志信息不会再写入到上面的mysql-bin.000004日志内。 mysql> FLUSH LOGS; 全局读锁,读锁请求到后不要关闭此mysql交互界面 mysql> FLUSH TABLES 
WITH READ LOCK; 在innodb表中,即使是请求到了读锁,但InnoDB在后台依然可能会有事务在进行读写操作, 可用"mysql> SHOW ENGINE INNODB STATUS;"查看后台进程的状态,等没有写请求后再做备份。 创建快照,以只读的方式(--permission r)创建一个3GB大小的快照卷snap1 -s:相当于--snapshot [root@test-huanqiu ~]# mkdir /var/snap1 [root@test-huanqiu ~]# lvcreate -s -L 2G -n snap1 /dev/vg0/lv0 --permission r Logical volume "snap1" created. 查看快照卷的详情(快照卷也是LV): [root@test-huanqiu ~]# lvdisplay 解除锁定 回到锁定表的mysql交互式界面,解锁: mysql> UNLOCK TABLES; 此参数可以根据服务器磁盘IO的负载来调整 mysql> SET GLOBAL sync_binlog=0; [root@test-huanqiu ~]# mount /dev/vg0/snap1 /var/snap1 //挂载快照卷 [root@test-huanqiu snap1]# df -h Filesystem Size Used Avail Use% Mounted on /dev/mapper/VolGroup00-LogVol00 8.1G 5.8G 1.9G 76% / tmpfs 1.9G 0 1.9G 0% /dev/shm /dev/vda1 190M 37M 143M 21% /boot /dev/mapper/vg0-lv0 2.9G 115M 2.7G 5% /data/mysql/data /dev/mapper/vg0-snap1 2.9G 115M 2.7G 5% /var/snap1 [root@test-huanqiu ~]# cd /var/snap1/ && ll /var/snap1 [root@test-huanqiu snap1]# mkdir -p /backup/mysql/data/ //创建备份目录 total 0 对本机的数据库进行备份,备份整个数据库。 mysql> show databases; +--------------------+ | Database | +--------------------+ | information_schema | | mysql | | performance_schema | | test | +--------------------+ 4 rows in set (0.01 sec) mysql> create database beijing; Query OK, 1 row affected (0.00 sec) mysql> use beijing; Database changed mysql> create table people(id int(5),name varchar(20)); Query OK, 0 rows affected (0.03 sec) mysql> insert into people values("1","wangshibo"); Query OK, 1 row affected (0.00 sec) mysql> insert into people values("2","guohuihui"); Query OK, 1 row affected (0.01 sec) mysql> insert into people values("3","wuxiang"); Query OK, 1 row affected (0.01 sec) mysql> select * from people; +------+-----------+ | id | name | +------+-----------+ | 1 | wangshibo | | 2 | guohuihui | | 3 | wuxiang | +------+-----------+ 3 rows in set (0.00 sec) mysql> show databases; +--------------------+ | Database | +--------------------+ | information_schema | | beijing | | mysql | | performance_schema | | test | 
+--------------------+ 5 rows in set (0.01 sec) -------------------------------------------------------------------------------------------------------------------------- 需要注意的是: innodb表,一般会打开独立表空间模式(innodb_file_per_table)。 由于InnoDB默认会将所有的数据库InnoDB引擎的表数据存储在一个共享空间中:ibdata1文件。 增删数据库的时候,ibdata1文件不会自动收缩,这对单个或部分数据库的备份也将成为问题(如果不是整个数据库备份的情况下,ibdata1文件就不能备份,否则会影响全部数据库的数据)。 所以若是对单个数据库或部分数据库进行快照备份: 1)若是直接误删除mysql数据目录下备份库目录,可以直接将快照备份数据解压就能恢复 2)若是使用drop或delete误删除的数据,那么在使用快照备份数据恢复时,就会出问题!因为单库备份时ibdata1文件不能单独备份,恢复时会导致这个文件损坏! 所以正确的做法是: 要对整个数据库进行备份,并且一定要在mysql服务关闭的情况下(这样是为了保护ibdata1文件)。 因为mysql是采用缓冲方式来将数据写入到ibdata1文件中的,这正是fflush()函数存在的理由。当mysql在运行时,对ibdata1进行拷贝肯定会导致ibdata1文件中的数据出错,这样在数据恢复时,也就肯定会出现“ERROR 1146 (42S02): Table '****' doesn't exist“的报错! 在对启用innodb引擎的mysql数据库进行迁移的时候也是同理: 在对innodb数据库进行数据迁移的时候,即将msyql(innodb引擎)服务从一台服务器迁移到另一台服务器时,在对数据库目录进行整体拷贝的时候(当然就包括了对ibdata1文件拷贝),一定要在关闭对方mysql服务的情况下进行拷贝! ibdata1用来储存文件的数据,而库名的文件夹里面的那些表文件只是结构而已,由于新版的mysql默认试innodb,所以ibdata1文件默认就存在了,少了这个文件有的数据表就会出错。要知道:数据库目录下的.frm文件是数据库中很多的表的结构描述文件;而ibdata1文件才是数据库的真实数据存放文件。 -------------------------------------------innodb_file_per_table参数说明------------------------------------------ 线上环境的话,一般都建议打开这个独立表空间模式。 因为ibdata1文件会不断的增大,不会减少,无法向OS回收空间,容易导致线上出现过大的共享表空间文件,致使当前空间爆满。 并且ibdata1文件大到一定程序会影响insert、update的速度;并且 另外如果删表频繁的话,共享表空间产生的碎片会比较多。打开独立表空间,方便进行innodb表的碎片整理 使用MyISAM表引擎的数据库会分别创建三个文件:表结构、表索引、表数据空间。 可以将某个数据库目录直接迁移到其他数据库也可以正常工作。 然而当使用InnoDB的时候,一切都变了。 InnoDB默认会将所有的数据库InnoDB引擎的表数据存储在一个共享空间中:ibdata1文件。 增删数据库的时候,ibdata1文件不会自动收缩,单个数据库的备份也将成为问题。 通常只能将数据使用mysqldump 导出,然后再导入解决这个问题。 在MySQL的配置文件[mysqld]部分,增加innodb_file_per_table参数。 可以修改InnoDB为独立表空间模式,每个数据库的每个表都会生成一个数据空间。 它的优点: 1)每个表都有自已独立的表空间。 2)每个表的数据和索引都会存在自已的表空间中。 3)可以实现单表在不同的数据库中移动。 4)空间可以回收(除drop table操作处,表空不能自已回收) Drop table操作自动回收表空间,如果对于统计分析或是日值表,删除大量数据后可以通过:alter table TableName engine=innodb;回缩不用的空间。 对于使innodb-plugin的Innodb使用turncate table也会使空间收缩。 对于使用独立表空间的表,不管怎么删除,表空间的碎片不会太严重的影响性能,而且还有机会处理。 它的缺点: 单表增加过大,如超过100个G。 结论: 
共享表空间在Insert操作上少有优势。其它都没独立表空间表现好。当启用独立表空间时,请合理调整一下:innodb_open_files。 InnoDB Hot Backup(冷备)的表空间cp不会面对很多无用的copy了。而且利用innodb hot backup及表空间的管理命令可以实。 1)innodb_file_per_table设置.设置为1,表示打开了独立的表空间模式。 如果设置为0,表示关闭独立表空间模式,开启方法如下: 在my.cnf中[mysqld]下设置 innodb_file_per_table=1 2)查看是否开启: mysql> show variables like "%per_table%"; +-----------------------+-------+ | Variable_name | Value | +-----------------------+-------+ | innodb_file_per_table | ON | +-----------------------+-------+ 1 row in set (0.00 sec) 3)关闭独享表空间 innodb_file_per_table=0关闭独立的表空间 mysql> show variables like ‘%per_table%’; -------------------------------------------innodb_file_per_table参数说明------------------------------------------ -------------------------------------------------------------------------------------------------------------------------- 备份前,一定要关闭mysql数据库!因为里面会涉及到ibdata1文件备份,不关闭mysql的话,ibdata1文件备份后会损坏,从而导致恢复数据失败! [root@test-huanqiu snap1]# /etc/init.d/mysql stop Shutting down MySQL.... SUCCESS! [root@test-huanqiu data]# lsof -i:3306 [root@test-huanqiu data]# 现在备份整个数据库 [root@test-huanqiu snap1]# tar -zvcf /backup/mysql/data/`date +%Y-%m-%d`dbbackup.tar.gz ./ [root@test-huanqiu snap1]# ll /backup/mysql/data/ total 384 -rw-r--r--. 1 root root 392328 Dec 5 22:15 2016-12-05dbbackup.tar.gz 释放快照卷,每次备份之后,应该删除快照,减少IO操作 先卸载,再删除 [root@test-huanqiu ~]# umount /var/snap1/ [root@test-huanqiu ~]# df -h //确认上面的挂载关系已经没了 Filesystem Size Used Avail Use% Mounted on /dev/mapper/VolGroup00-LogVol00 8.1G 5.8G 1.9G 76% / tmpfs 1.9G 0 1.9G 0% /dev/shm /dev/vda1 190M 37M 143M 21% /boot /dev/mapper/vg0-lv0 2.9G 115M 2.7G 5% /data/mysql/data [root@test-huanqiu ~]# lvremove /dev/vg0/snap1 Do you really want to remove active logical volume snap1? [y/n]: y Logical volume "snap1" successfully removed 数据被快照备份后,可以启动数据库 [root@test-huanqiu ~]# /etc/init.d/mysql start Starting MySQL.. SUCCESS! 
[root@test-huanqiu ~]# lsof -i:3306 COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME mysqld 15943 mysql 16u IPv4 93348 0t0 TCP *:mysql (LISTEN) [root@test-huanqiu ~]# 现在再进行新的数据写入: mysql> use beijing; Database changed mysql> insert into people values("4","liumengnan"); Query OK, 1 row affected (0.02 sec) mysql> insert into people values("5","zhangjuanjuan"); Query OK, 1 row affected (0.00 sec) mysql> select * from people; +------+---------------+ | id | name | +------+---------------+ | 1 | wangshibo | | 2 | guohuihui | | 3 | wuxiang | | 4 | liumengnan | | 5 | zhangjuanjuan | +------+---------------+ 5 rows in set (0.00 sec) mysql> create table heihei(name varchar(20),age varchar(20)); Query OK, 0 rows affected (0.02 sec) mysql> insert into heihei values("jiujiujiu","nan"); Query OK, 1 row affected (0.00 sec) mysql> select * from heihei; +-----------+------+ | name | age | +-----------+------+ | jiujiujiu | nan | +-----------+------+ 1 row in set (0.00 sec) mysql> create database shanghai; Query OK, 1 row affected (0.01 sec) mysql> show databases; +--------------------+ | Database | +--------------------+ | information_schema | | beijing | | mysql | | performance_schema | | shanghai | | test | +--------------------+ 6 rows in set (0.00 sec) 假设一不小心误操作删除beijing和shanghai库 mysql> drop database beijing; Query OK, 2 rows affected (0.03 sec) mysql> drop database shanghai; Query OK, 0 rows affected (0.00 sec) mysql> show databases; +--------------------+ | Database | +--------------------+ | information_schema | | mysql | | performance_schema | | test | +--------------------+ 4 rows in set (0.00 sec) 莫慌!接下来就说下数据恢复操作~~ 三、恢复流程如下: 0)由于涉及到增量数据备份,所以提前将最近一次的binlog日志从mysql数据目录复制到别的路径下 1)在mysql数据库中执行flush logs命令,产生新的binlog日志,让日志信息写入到新的这个binlog日志中 1)关闭数据库,一定要关闭 2)删除数据目录下的文件 3)快照数据拷贝回来,position节点记录回放 4)增量数据就利用mysqlbinlog命令将上面提前拷贝的binlog日志文件导出为sql文件,并剔除其中的drop语句,然后进行恢复。 5)重启数据 先将最新一次的binlog日志备份到别处,用作增量数据备份。 比如mysql-bin.000006是最新一次的binlog日志 [root@test-huanqiu data]# cp 
mysql-bin.000006 /backup/mysql/data/ 产生新的binlog日志,确保日志写入到这个新的binlog日志内,而不再写入到上面备份的binlog日志里。 mysql> flush logs; [root@test-huanqiu data]# ll mysql-bin.000007 -rw-rw----. 1 mysql mysql 120 Dec 5 23:19 mysql-bin.000007 [root@test-huanqiu data]# /etc/init.d/mysql stop Shutting down MySQL.... SUCCESS! [root@test-huanqiu data]# lsof -i:3306 [root@test-huanqiu data]# pwd /data/mysql/data [root@test-huanqiu data]# rm -rf ./* [root@test-huanqiu data]# tar -zvxf /backup/mysql/data/2016-12-05dbbackup.tar.gz ./ [root@test-huanqiu data]# /etc/init.d/mysql start Starting MySQL SUCCESS! [root@test-huanqiu data]# cat /backup/mysql/binlog/binlog.pos File Position Binlog_Do_DB Binlog_Ignore_DB Executed_Gtid_Set mysql-bin.000004 1434 [root@test-huanqiu data]# mysqlbinlog --start-position=1434 /data/mysql/data/mysql-bin.000004 | mysql -p123456 登陆数据库查看,发现这只是恢复到快照备份阶段的数据: mysql> show databases; +--------------------+ | Database | +--------------------+ | information_schema | | beijing | | mysql | | performance_schema | | test | +--------------------+ 5 rows in set (0.00 sec) mysql> select * from beijing.people; +------+-----------+ | id | name | +------+-----------+ | 1 | wangshibo | | 2 | guohuihui | | 3 | wuxiang | +------+-----------+ 3 rows in set (0.00 sec) 快照备份之后写入的数据要利用mysqlbinlog命令将上面拷贝的mysql-bin000006文件导出为sql文件,并剔除其中的drop语句,然后进行恢复。 [root@test-huanqiu ~]# cd /backup/mysql/data/ [root@test-huanqiu data]# ll total 388 -rw-r--r--. 1 root root 392328 Dec 5 22:15 2016-12-05dbbackup.tar.gz -rw-r-----. 
1 root root 1274 Dec 5 23:19 mysql-bin.000006 [root@test-huanqiu data]# mysqlbinlog mysql-bin.000006 >000006bin.sql 剔除其中的drop语句 [root@test-huanqiu data]# vim 000006bin.sql //手动删除sql语句中的drop语句 然后在mysql中使用source命令恢复数据 mysql> source /backup/mysql/data/000006bin.sql; 再次查看下,发现增量部分的数据也已经恢复回来了 mysql> show databases; +--------------------+ | Database | +--------------------+ | information_schema | | beijing | | mysql | | performance_schema | | shanghai | | test | +--------------------+ 6 rows in set (0.00 sec) mysql> use beijing; Database changed mysql> show tables; +-------------------+ | Tables_in_beijing | +-------------------+ | heihei | | people | +-------------------+ 2 rows in set (0.00 sec) mysql> select * from people; +------+---------------+ | id | name | +------+---------------+ | 1 | wangshibo | | 2 | guohuihui | | 3 | wuxiang | | 4 | liumengnan | | 5 | zhangjuanjuan | +------+---------------+ 5 rows in set (0.00 sec) mysql> select * from heihei; +-----------+------+ | name | age | +-----------+------+ | jiujiujiu | nan | +-----------+------+ 1 row in set (0.00 sec) ----------------------------------------------------------------------------------------------------------------- 思路: 1)全库的快照备份只需要在开始时备份一份即可,这相当于全量备份。 2)后续只需要每天备份一次最新的binlog日志(备份后立即flush logs产生新的binlog日志),这相当于增量备份了。 3)利用快照备份恢复全量数据,利用备份的binlog日志进行增量数据恢复 4)crontab计划任务,每天定时备份最近一次的binlog日志即可。 mysql增量备份的方案,看到有2种: 1.也是使用mysqldump,然后将这一次的dump文件和前一次的文件用比对工具比对,生成patch补丁,将补丁打到上一次的dump文件中,生成一份新的dump文件。(stackoverflow上看到的,感觉思路很奇特,但是不知道这样会不会有问题) 2.增量拷贝 mysql 的数据文件,以及日志文件。典型的方式就是用官方mysqlbackup工具配合 --incremental 参数,但是 mysqlbackup 是mysql Enterprise版才有的工具(收费?),现在项目安装的mysql版本貌似没有。还有v友中分享的各种增量备份脚本或工具也是基于这种方案?
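The recovery above reads the recorded file/position pair out of binlog.pos and feeds it to mysqlbinlog. As a small illustration only (this helper is hypothetical, not part of the original procedure), parsing the two-line `SHOW MASTER STATUS` dump and building the replay command can be sketched in Python:

```python
# Sketch: parse the binlog.pos file produced by
#   mysql -p... -e "SHOW MASTER STATUS;" > binlog.pos
# and build the mysqlbinlog replay command used during recovery.
# Illustrative only; the helper name and datadir default are assumptions.

def build_replay_command(pos_text: str, datadir: str = "/data/mysql/data") -> str:
    """Return the mysqlbinlog command that replays events from the
    recorded position onward (to be piped into the mysql client)."""
    lines = [line for line in pos_text.strip().splitlines() if line.strip()]
    # First line is the header (File  Position  ...); second holds the values.
    fields = lines[1].split()
    binlog_file, position = fields[0], int(fields[1])
    return f"mysqlbinlog --start-position={position} {datadir}/{binlog_file}"

if __name__ == "__main__":
    sample = ("File\tPosition\tBinlog_Do_DB\tBinlog_Ignore_DB\tExecuted_Gtid_Set\n"
              "mysql-bin.000004\t1434\n")
    print(build_replay_command(sample))
```

The command string this produces would then be piped into `mysql -p...`, exactly as the transcript does by hand.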
31.575947
414
0.682966
yue_Hant
0.4228
e5bc74046032042d08476a2c3bb33d338ffb4d38
2,095
md
Markdown
README.md
ibogood/json-rpc
01062328e745eb9fcc00dd29dff7678e20b54daf
[ "MIT" ]
1
2021-10-20T08:48:32.000Z
2021-10-20T08:48:32.000Z
README.md
ibogood/json-rpc
01062328e745eb9fcc00dd29dff7678e20b54daf
[ "MIT" ]
null
null
null
README.md
ibogood/json-rpc
01062328e745eb9fcc00dd29dff7678e20b54daf
[ "MIT" ]
null
null
null
## Usage

See the demo for details: [preview](https://unpkg.com/@js-next/json-rpc@1.1.3/demo/server.html)

### Plain-HTML approach

```html
<!-- Note: do not open this via files://; opening it on localhost:// is recommended -->
<!-- server.html -->
<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf8" />
    <title>server</title>
    <script src="https://unpkg.com/@js-next/json-rpc@1.1.3/dist/json-rpc.umd.min.js"></script>
  </head>
  <body>
    server: see the console for output
    <iframe id="iframe" src="./client.html" width="800" height="600"></iframe>
    <script>
      const { RPCServer } = window.jsonRPC
      RPCServer.export({
        // register functions; existing functions can be passed in as well
        foo(id){
          console.log('foo call', id)
          return id + 2
        },
        bar(name, id) {
          console.log('bar call', name, id)
          return name + id + 'bar call'
        }
      }).listen()
    </script>
  </body>
</html>

<!-- client -->
<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf8" />
    <title>client</title>
    <script src="https://unpkg.com/@js-next/json-rpc@1.1.3/dist/json-rpc.umd.min.js"></script>
  </head>
  <body>
    client: see the console for output
    <script>
      (async () => {
        const { RPCClient } = window.jsonRPC
        const rpcClient = new RPCClient(parent)
        const r1 = await rpcClient.foo(1)
        console.log(r1)
        const r2 = await rpcClient.bar('xxx', 'yyy')
        console.log(r2)
      })()
    </script>
  </body>
</html>
```

See the demo for details: [preview](https://unpkg.com/@js-next/json-rpc@1.1.3/demo/server.html)

### npm approach

#### Install the package

```shell
npm install -S @js-next/json-rpc
```

#### Usage in code

```typescript
// server.ts
import { RPCServer } from '@js-next/json-rpc'
RPCServer.export({
  // register functions; existing functions can be passed in as well
  foo(id: number){
    console.log('foo call', id)
    return id + 2
  },
  bar(name: string, id: string) {
    console.log('bar call', name, id)
    return name + id + 'bar call'
  }
}).listen()

// client.ts
import { RPCClient } from '@js-next/json-rpc'
const rpcClient = new RPCClient(parent) as any
const r1 = await rpcClient.foo(1)
console.log(r1)
const r2 = await rpcClient.bar('xxx', 'yyy')
console.log(r2)
```
23.806818
94
0.566587
yue_Hant
0.3639
e5bc985b7788b41133de84072f8e903a65802e0d
3,696
md
Markdown
src/pages/cele/index.md
MassivDash/Ideanowa
b52dfbca7ed7398d73f7adf94223462755426119
[ "MIT" ]
null
null
null
src/pages/cele/index.md
MassivDash/Ideanowa
b52dfbca7ed7398d73f7adf94223462755426119
[ "MIT" ]
1
2021-09-21T01:25:20.000Z
2021-09-21T01:25:20.000Z
src/pages/cele/index.md
MassivDash/nocniepodleglosci
ee82d8c4da57a8662eb312796834835fb2ff2811
[ "MIT" ]
null
null
null
---
templateKey: cele-page
title: Goals
thumbnail: /img/background.jpg
description: >-
  The Foundation's goal is to support the all-round development of Polish
  society, in particular social, informational, cultural, scientific and
  educational activity for the development of the market, democracy and civil
  society in Poland, and for bringing the nations and states of Central and
  Eastern Europe closer together.
---
**The Foundation's goal is to support the all-round development of Polish society, in particular social, informational, cultural, scientific and educational activity for the development of the market, democracy and civil society in Poland, and for bringing the nations and states of Central and Eastern Europe closer together.**

The Foundation pursues its statutory goals by undertaking and organising (on its own, as well as by supporting financially, materially or organisationally other organisations and institutions engaged in):

* activities promoting the principles of a democratic state governed by the rule of law,
* transparency in public life,
* public oversight of institutions of public trust, and counteracting pathologies in public and social life; activities for the protection of civil rights and freedoms,
* citizens' access to information,
* legal aid and the justice system, and the promotion of civic activity and responsibility,
* activities supporting the development of local communities,
* self-governing communities, non-governmental organisations and other institutions working for the public good in various areas of social life (including education, science, culture, information, European integration, environmental protection, health care, entrepreneurship, and social, charitable and humanitarian aid),
* activities levelling the playing field for weaker groups or groups at risk of social exclusion (including children and young people from troubled environments or from economically, socially or culturally neglected areas),
* international cooperation for the development of democracy, the market, education, science, culture, information exchange, environmental and health protection, and social and humanitarian aid, with particular emphasis on cooperation within Central and Eastern Europe,
* research, information and publishing programmes serving to acquire and disseminate knowledge about social, economic and political phenomena; scholarship and training programmes for school and university students, volunteers and specialists in various fields,
* activities supporting scientists and research teams working in those areas of science that matter for Poland's civilisational, cultural and economic development and its international standing,
* activities supporting the transfer of Polish scientific achievements into business practice, and investment initiatives serving science in Poland.

Besides carrying out the undertakings it initiates itself, the Foundation cooperates with other institutions, organisations and individuals to achieve shared statutory goals. This cooperation may take the form of organisational support, partial or full financing of an undertaking, or help in obtaining the necessary funds from other sources.

The Foundation also pursues its statutory goals through membership in organisations associating Polish and foreign foundations whose statutory goals converge with, or are identical to, those of the Foundation.

In pursuit of its statutory goals, the Foundation may initiate, and join, proceedings before the courts and public administration bodies in the capacity of a social organisation, in the manner and on the terms set out in the applicable law.
112
323
0.848755
pol_Latn
1.000003
e5bccb2a4aa9d488ade26bf902e25a0bced2d062
5,438
md
Markdown
guac-style.md
mike-jumper/guacamole-website
1a40876299fa61ad53d43191991b0430c7bee89f
[ "Apache-2.0" ]
121
2017-12-13T03:43:37.000Z
2022-03-07T09:03:09.000Z
guac-style.md
mike-jumper/guacamole-website
1a40876299fa61ad53d43191991b0430c7bee89f
[ "Apache-2.0" ]
15
2018-01-12T22:58:22.000Z
2022-01-09T12:28:52.000Z
guac-style.md
mike-jumper/guacamole-website
1a40876299fa61ad53d43191991b0430c7bee89f
[ "Apache-2.0" ]
65
2018-01-22T06:15:34.000Z
2022-03-31T23:52:30.000Z
--- layout: page title: Guacamole Style Guidelines permalink: /guac-style/ --- These guidelines are intended to serve as a reference and means of standardizing the code style of the Guacamole codebase. Not all developers or contributors will agree with these guidelines, but this is beside the point. Above all, these guidelines ensure Guacamole code is readable and maintainable. The key to achieving this is consistency. [The Google style guidelines](http://google-styleguide.googlecode.com/svn/trunk/javascriptguide.xml) perhaps describe this need best: > > BE CONSISTENT. > > If you're editing code, take a few minutes to look at the code around you and > determine its style. If they use spaces around all their arithmetic > operators, you should too. If their comments have little boxes of hash marks > around them, make your comments have little boxes of hash marks around them > too. > > The point of having style guidelines is to have a common vocabulary of coding > so people can concentrate on what you're saying rather than on how you're > saying it. We present global style rules here so people know the vocabulary, > but local style is also important. If code you add to a file looks > drastically different from the existing code around it, it throws readers out > of their rhythm when they go to read it. Avoid this. > If you are a developer on the Guacamole project, or intend to contribute code to the Guacamole project, you absolutely must follow these guidelines; to do otherwise would be detrimental to the project and the collaborative effort it represents. We won't attempt to argue against other styles here - to do so is pointless, as all styles have merit. This is simply the style we prefer, for our own reasons. Rule of Thumb ------------- **When in doubt, follow the style around you.** If you see that the code uses a 4-space indent and no tabs, then obviously you should not use tabs. 
If you see the code uses `variables_named_like_this`, then your code should not suddenly start `namingThingsLikeThis`. That said, these style guidelines are intended to be adopted going forward. We develop according to these guidelines, but definitely may not have done so in the ancient past. General Style ------------- 1. **Use 4-space indents and no tabs.** 2. Comment everything semantically and liberally, use blank lines to separate logical blocks. Yes, HTML should be commented, too. 3. Wrap lines to 80 columns when doing so does not decrease readability. Comments and Documentation -------------------------- 1. All functions, all parameters, all return values, all structures, and all members must be documented with JavaDoc/Doxygen, wrapping lines as necessary to fit the 80 column maximum: /** * High-level function description. Implementation details if * appropriate (it's usually not). * * @param var1 * Description of what var1 is. * * @param var2 * Description of what var2 is, though this description is * particularly long to demonstrate how lines should be wrapped and * indented. * * @return * Description of return value. */ int fun(int var1, int var2); 2. Do not use the `@author` tag (or similar). The authors of various parts of the codebase should be tracked by git, not by the code itself. 3. There must be no undocumented behavior of functions. 4. If changes you are making will make parts of the existing manual incorrect, you are not expected to update the manual yourself, but **please let us know so we correct it**. 5. For C code, local functions should be static and documented locally. Functions, types, etc. which are not local should be declared in an appropriate header file, and documented within the header file. Braces ------ 1. Avoid braces when unnecessary (single statement `if`'s, for example) unless it significantly increases readability. There is no hard rule here - use your own best judgment. 2. Do not cuddle the `else`, etc. 
/* Do this */ if (thing) { } else { } /* Not this */ if (thing) { } else { } The only exception here is `do`/`while`: do { } while (thing); Naming ------ 1. Variables, functions, and datatypes in C should use the standard C convention of `words_separated_by_underscores`. Functions must have an appropriate namespace. 2. Variables and functions in Java and JavaScript should use `headlessCamelCase`. Classes in Java and JavaScript should use `CamelCase`. 3. Constants and enums (or variables which are intended to be constant) in all languages should use `UPPERCASE_WORDS_SEPARATED_BY_UNDERSCORES`. 4. Prefer `'single quotes'` over `"double quotes"` for strings in JavaScript. Error Handling -------------- 1. Exceptions within Java and return types of function calls in C which can fail may not be ignored, unless the failure (A) has no effect on the running of the program and (B) would not be useful even if logged at the debug level. 2. If an exception should be logged, log it at an appropriate log level. Additionally, log the exception itself at the debug level such that a stack trace is included when debug-level logging is enabled. **Do not pollute non-debug log levels with stack traces!**
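Pulling the naming and documentation rules together, a minimal illustrative C fragment might look like the following (the names here are invented for demonstration and are not taken from the Guacamole codebase):

```c
/* Constants use UPPERCASE_WORDS_SEPARATED_BY_UNDERSCORES. */
#define GUAC_EXAMPLE_MAX_USERS 64

/**
 * Returns the number of user slots remaining given the number of users
 * currently connected.
 *
 * @param connected_users
 *     The number of users currently connected.
 *
 * @return
 *     The number of additional users which may connect, or zero if no
 *     further users may connect.
 */
static int guac_example_slots_remaining(int connected_users) {

    /* Local variables use words_separated_by_underscores */
    int remaining_slots = GUAC_EXAMPLE_MAX_USERS - connected_users;
    if (remaining_slots < 0)
        return 0;

    return remaining_slots;

}
```

Note the `guac_` namespace prefix, the `static` qualifier on a local function, and the Doxygen comment documenting the parameter and return value.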
38.567376
82
0.717911
eng_Latn
0.999489
e5bd0284e72d8d20b38ca555cbead94530654943
179
md
Markdown
README.md
BahaaEldeenOsama/Concepts-of-Programming-language
bf153109ea300a2d8ae44d5996efc0d242072e26
[ "MIT" ]
null
null
null
README.md
BahaaEldeenOsama/Concepts-of-Programming-language
bf153109ea300a2d8ae44d5996efc0d242072e26
[ "MIT" ]
null
null
null
README.md
BahaaEldeenOsama/Concepts-of-Programming-language
bf153109ea300a2d8ae44d5996efc0d242072e26
[ "MIT" ]
null
null
null
# Concepts-of-Programming-language ### This is a course assignment for Concepts of Programming Language at the Faculty of Computers and Artificial Intelligence, Cairo University.
59.666667
143
0.815642
eng_Latn
0.953893
e5bd3c899c81089a1e800caf26e3db0e16309436
86
md
Markdown
PULL_REQUEST_TEMPLATE.md
apeinot/DD2480_lab2_group18
4a188dc76c86d41f482d5d7c8701174898425585
[ "BSD-2-Clause" ]
null
null
null
PULL_REQUEST_TEMPLATE.md
apeinot/DD2480_lab2_group18
4a188dc76c86d41f482d5d7c8701174898425585
[ "BSD-2-Clause" ]
33
2019-02-01T08:41:44.000Z
2019-02-08T09:59:07.000Z
PULL_REQUEST_TEMPLATE.md
apeinot/DD2480_lab2_group18
4a188dc76c86d41f482d5d7c8701174898425585
[ "BSD-2-Clause" ]
null
null
null
Description of changes and tests. This should resolve #number of the issue addressed
21.5
50
0.813953
eng_Latn
0.999511
e5bee4fc04636ea9acc825a2efb5c257c08d1570
1,259
md
Markdown
packages/elements/date/CHANGELOG.md
amazingandyyy/atlaskit
347f30efdce6e6b9af6acfe11efcd307abc60a56
[ "Apache-2.0" ]
1
2018-11-17T08:06:33.000Z
2018-11-17T08:06:33.000Z
packages/elements/date/CHANGELOG.md
amazingandyyy/atlaskit
347f30efdce6e6b9af6acfe11efcd307abc60a56
[ "Apache-2.0" ]
null
null
null
packages/elements/date/CHANGELOG.md
amazingandyyy/atlaskit
347f30efdce6e6b9af6acfe11efcd307abc60a56
[ "Apache-2.0" ]
null
null
null
# @atlaskit/date ## 0.1.6 - [patch] [36c362f](https://bitbucket.org/atlassian/atlaskit-mk-2/commits/36c362f): - FS-3174 - Fix usage of gridSize() and borderRadius() ## 0.1.5 - [patch] [527b954](https://bitbucket.org/atlassian/atlaskit-mk-2/commits/527b954): - FS-3174 - Remove usage of util-shared-styles from elements components ## 0.1.4 - [patch] ED-5529 Fix JSON Schema [d286ab3](https://bitbucket.org/atlassian/atlaskit-mk-2/commits/d286ab3) ## 0.1.3 - [patch] Fix rxjs and date-fns import in TS components [ab15cee](https://bitbucket.org/atlassian/atlaskit-mk-2/commits/ab15cee) ## 0.1.2 - [patch] Updated dependencies [df22ad8](https://bitbucket.org/atlassian/atlaskit-mk-2/commits/df22ad8) - @atlaskit/theme@6.0.0 - @atlaskit/docs@5.0.6 ## 0.1.1 - [patch] update the dependency of react-dom to 16.4.2 due to vulnerability in previous versions read https://reactjs.org/blog/2018/08/01/react-v-16-4-2.html for details [a4bd557](https://bitbucket.org/atlassian/atlaskit-mk-2/commits/a4bd557) - [none] Updated dependencies [a4bd557](https://bitbucket.org/atlassian/atlaskit-mk-2/commits/a4bd557) - @atlaskit/theme@5.1.3 ## 0.1.0 - [minor] FS-2131 add date element [b026429](https://bitbucket.org/atlassian/atlaskit-mk-2/commits/b026429)
40.612903
242
0.729944
kor_Hang
0.166321
e5bf69d05c1c2a6b1a64e23dc665329a3993337f
34
md
Markdown
README.md
strive1988/demo
b85e0105976f7395dfaabcfc389dad33fad9e0bd
[ "MIT" ]
null
null
null
README.md
strive1988/demo
b85e0105976f7395dfaabcfc389dad33fad9e0bd
[ "MIT" ]
null
null
null
README.md
strive1988/demo
b85e0105976f7395dfaabcfc389dad33fad9e0bd
[ "MIT" ]
null
null
null
# demo A demo repository for studying and using GitHub
11.333333
26
0.764706
eng_Latn
0.997102
e5bf87b1e483e1fbb4f458b41e5b8bdcccd6ec75
7,226
md
Markdown
memdocs/configmgr/develop/osd/operating-system-deployment-task-sequences-overview.md
amakuru01/memdocs
d7e6d370deb92a3ff1e63c8c86542df886c7546b
[ "CC-BY-4.0", "MIT" ]
1
2021-11-07T18:53:51.000Z
2021-11-07T18:53:51.000Z
memdocs/configmgr/develop/osd/operating-system-deployment-task-sequences-overview.md
amakuru01/memdocs
d7e6d370deb92a3ff1e63c8c86542df886c7546b
[ "CC-BY-4.0", "MIT" ]
null
null
null
memdocs/configmgr/develop/osd/operating-system-deployment-task-sequences-overview.md
amakuru01/memdocs
d7e6d370deb92a3ff1e63c8c86542df886c7546b
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: "OS Deployment Task Sequences Overview" titleSuffix: "Configuration Manager" ms.date: "09/20/2016" ms.prod: "configuration-manager" ms.technology: configmgr-sdk ms.topic: conceptual ms.assetid: 957ac65f-a14b-4659-84b8-b83190698cc7 author: aczechowski ms.author: aaroncz manager: dougeby ms.localizationpriority: null --- # Operating System Deployment Task Sequences Overview In Configuration Manager, a task sequence is a series of one or more task steps that can be advertised to Configuration Manager clients to run user-specified actions. Task sequences are used with operating system deployment to build source computers, capture an operating system image, migrate user and computer settings, and deploy an image to a collection of target computers. Task sequences can also be used to run other Configuration Manager actions, such as deploying Configuration Manager software packages or running custom command lines. Configuration Manager provides a rich Windows Management Instrumentation (WMI) object model for creating and editing task sequences. For more information, see [Operating System Deployment Task Sequence Object Model](../../develop/osd/operating-system-deployment-task-sequence-object-model.md). ## Task Sequence Steps A task sequence step is either an individual action that is run on a computer, such as running a command line, or it is a set of actions arranged in a group. Task steps are processed in order and can have conditions associated with them that determine whether the action, or group of actions, is processed. ## Actions There are two types of actions: built-in actions and custom actions. ### Built-in Actions A Configuration Manager action that performs a specific action on the Configuration Manager client computer is a built-in action. For example, Configuration Manager provides built-in actions for partitioning disks and also for installing software. 
For more information about the Configuration Manager built-in actions, see the Configuration Manager documentation library. There is also a command-line action that the administrator can use for running scripts or executable files on the Configuration Manager client computer. ### Custom Actions An action that you create yourself is a custom action. You can create custom actions that call a process or script that you define in a Managed Object Format (MOF) file. You can also create a control that integrates the custom action you create into the task sequence editor. This allows the administrator to change custom action properties in the same way that the Configuration Manager supplied actions are changed. Typically, you create these custom actions when the built-in actions do not satisfy your requirements for an action. For more information about creating custom actions, see [About Configuration Manager Custom Actions](../../develop/osd/about-configuration-manager-custom-actions.md). ## Running Task Sequences To run a task sequence, you must perform the following: #### To run a task sequence 1. Ensure that you have the Configuration Manager site server installed and that you have clients to deploy task sequences to. Depending on your environment, you might need to configure the State Migration Point or PXE Service Point. For more information, see [About OS deployment site role configuration](about-operating-system-deployment-site-role-configuration.md). 2. Create a package containing the files you need for deployment. For example, to deploy a boot image you will need to create a boot image package ([SMS_BootImagePackage Server WMI Class](../../develop/reference/osd/sms_bootimagepackage-server-wmi-class.md)). 3. Assign the package to a distribution point. For more information, see [How to Assign a Package to a Distribution Point](../../develop/core/servers/configure/how-to-assign-a-package-to-a-distribution-point.md). 4. Create a task sequence. 
For more information, see [How to Create an Operating System Deployment Task Sequence](../../develop/osd/how-to-create-an-operating-system-deployment-task-sequence.md). 5. Associate the task sequence with a task sequence package. For more information, see [How to Create an Operating System Deployment Task Sequence Package](../../develop/osd/how-to-create-an-operating-system-deployment-task-sequence-package.md). 6. Advertise the task sequence package to the required client computers. To do this, you create an [SMS_Advertisement](../../develop/reference/core/servers/configure/sms_advertisement-server-wmi-class.md) package. If you want to show a task sequence progress dialog box while the task sequence runs, set the [SMS_Advertisement](../../develop/reference/core/servers/configure/sms_advertisement-server-wmi-class.md) class `AdvertFlags` show task sequence progress bit (0x00800000). For more information, see [About Software Distribution Advertisements](../../develop/core/servers/configure/about-software-distribution-deployments.md). 7. On the client computer, the task sequence is eventually available as an advertised program. Click the program to run it. ## Detecting a Failed Task Sequence When a task sequence runs, you can use the `_SMSTSLastActionSucceeded` variable to determine if the last task sequence group run has failed. Depending on the environment the task sequence is running in, you can then take appropriate action. Typically, you will copy the task logs to a share for inspection. #### To detect a failed task sequence 1. Set the continue-on-error property for the task sequence group that you want to detect failure on. 2. Immediately after the group, create a group to handle the error. 3. In the error handler group, add a condition that runs the error handler group if `_SMSTSLastActionSucceeded` = `false`. 4. In the error handler group, add a Run Command Line action. This will be used for error handling in a WinPE environment. 5. 
In the WinPE action, add the following command line to copy the log to an external share: `smsswd.exe /run: cmd /c copy x:\windows\temp\smsts.log \\<Your server>\<Your Share>\%_SMSTSClientGuid%-smsts.log` 6. In the WinPE action, add a condition that runs the action if `_SMSTSInWinPE` is true. 7. In the error handler group, add a run command-line action. This will be used for error handling in a full operating system environment. 8. In the full operating system action, add the following command line to copy the log to an external share: `smsswd.exe /run: cmd /c copy %windir%\system32\ccm\logs\smsts.log \\server\share\%_SMSTSClientGuid%-smsts.log` 9. In the full operating system action, add a condition that runs the action if `_SMSTSInWinPE` is false. 10. In the error handler group, add a run command-line action and a command line that runs a recovery tool of your choosing. ## Pre-Execution Hooks You can run scripts or executables that can interact with the user in Windows PE before the task sequence is selected. For more information, see Operating System Media Pre-Execution Hook in the Configuration Manager library documentation. ## See also [OS deployment task sequence object model](operating-system-deployment-task-sequence-object-model.md)
85.011765
704
0.788126
eng_Latn
0.994507
e5c0acfdab94e8a604fa60f678988f9d9939a4ed
16,904
md
Markdown
README.md
Romancha/awesome-app-rating
7d41a44b72fe9291dd3d752d10f04eaaf6c0671a
[ "Apache-2.0" ]
null
null
null
README.md
Romancha/awesome-app-rating
7d41a44b72fe9291dd3d752d10f04eaaf6c0671a
[ "Apache-2.0" ]
null
null
null
README.md
Romancha/awesome-app-rating
7d41a44b72fe9291dd3d752d10f04eaaf6c0671a
[ "Apache-2.0" ]
null
null
null
# Awesome App Rating [![Build Status](https://app.bitrise.io/app/3156ba39c3e19393/status.svg?token=xXPr-IjWuLLKVliX5QPZKg&branch=master)](https://app.bitrise.io/app/3156ba39c3e19393) A highly customizable Android library providing a dialog, which asks the user to rate the app. If the user rates below the defined threshold, the dialog will show a feedback form or ask the user to mail his feedback. Otherwise it will ask the user to rate the app in the Google Play Store. ![showcase](https://github.com/SuddenH4X/awesome-app-rating/raw/develop/preview/showcase.png) You can also use this library to show the [Google in-app review](https://developer.android.com/guide/playcore/in-app-review) easily under certain conditions: <img src="https://developer.android.com/images/google/play/in-app-review/iar-flow.jpg" alt="In app review workflow for a user" /> (Source: https://developer.android.com/guide/playcore/in-app-review) ## Features - Let the dialog (or the [Google in-app review](https://developer.android.com/guide/playcore/in-app-review)) show up at a defined app session, after n days of usage and/or if your custom conditions meet - Auto fetches the app icon to use it in the dialog - Ask the user to mail his feedback or show a custom feedback form if the user rates below the defined minimum threshold - All titles, messages and buttons are customizable - You can override all click listeners to fit your needs (or to implement extensive tracking) - The dialog handles orientation changes correctly - Extracts the accent color of your app's theme and works with dark/night theme out of the box This library: - is completely written in Kotlin - is Unit tested - is optimized for MaterialComponent themes - uses AndroidX - uses no third party dependencies - is easy debuggable - is Android 11 (API 30) ready - is easy to use ## How to use ### Gradle The library supports API level 14 and higher. You can simply include it in your app via Gradle: ```groovy dependencies { ... 
implementation 'com.suddenh4x.ratingdialog:awesome-app-rating:2.2.1' } ``` ### Builder usage This library provides a builder to configure its behavior. ```kotlin AppRating.Builder(this) .setMinimumLaunchTimes(5) .setMinimumDays(7) .setMinimumLaunchTimesToShowAgain(5) .setMinimumDaysToShowAgain(10) .setRatingThreshold(RatingThreshold.FOUR) .showIfMeetsConditions() ``` You should call the builder only in the `onCreate()` method of your main Activity class, because every call of the method `showIfMeetsConditions` will increase the launch times. With the settings above the dialog will show up if the following conditions are met: - The app is installed for a minimum of 7 days and - the app is launched for a minimum of 5 times Furthermore the dialog will show up again if the user has clicked the `later` button of the dialog and - The button click happened at least 10 days ago and - the app is launched again for a minimum of 5 times. If the rate or never button is clicked once or if the user rates below the defined minimum threshold, the dialog will never be shown again unless you reset the library settings with `AppRating.reset(this)` - but doing this is not recommended. If you have adjusted the dialog to suit your preferences, you have multiple possibilities to show it. Usually you want to show the dialog if the configured conditions are met: ```kotlin ratingBuilder.showIfMeetsConditions() ``` But you can also just create the dialog to show it later ```kotlin ratingBuilder.create() ``` or you can show it immediately: ```kotlin ratingBuilder.showNow() ``` ### Configuration Between the constructor and the show or create method you can adjust the dialog to suit your preferences. 
You have the following options: #### Google in-app review If you want to use the in-app review from Google instead of the library dialog, call the following function: ```kotlin .useGoogleInAppReview() ``` You should also add a `completeListener` which gets called if the in-app review flow has been completed. The boolean indicates if the flow started correctly, but not if the in-app review was displayed to the user. ```kotlin .setGoogleInAppReviewCompleteListener(googleInAppReviewCompleteListener: (Boolean) -> Unit) ``` Note: After the first in-app review flow was completed successfully, the `toShowAgain` conditions will be used. For example `.setMinimumLaunchTimesToShowAgain(launchTimesToShowAgain: Int)` instead of `.setMinimumLaunchTimes(launchTimes: Int)`. ##### Current issues with in-app review Testing the Google in-app review isn't as easy as it should be. There is an open issue in Google's issue tracker: https://issuetracker.google.com/issues/167352813 Follow these tips on Stack Overflow to maximize your chance of testing it successfully: ``` - Use only one account in the device - Ensure that account has installed the app (appears in the app & games > Library section in Play Store) - The account is a GMAIL one, not a GSuit - You can review with the account if you go to the app play listing page. - The account has not reviewed - If you intend to use the Internal Test Track ensure the account has joined the test track. - When switching between different accounts and testing things out, sometimes might be helpful to "Clear Data" from the Play Store app. - Try all the above with different account Source: https://stackoverflow.com/a/63950373 ``` You should consider waiting to implement it and using the normal rating dialog instead until Google has fixed the issue(s). 
#### When to show up - Change the number of days the app has to be installed ```kotlin .setMinimumDays(minimumDays: Int) // default is 3 ``` - Change the minimum number of app launches ```kotlin .setMinimumLaunchTimes(launchTimes: Int) // default is 5 ``` - Change the number of days that must have passed away after the last `later` button click ```kotlin .setMinimumDaysToShowAgain(minimumDaysToShowAgain: Int) // default is 14 ``` - Change the minimum number of app launches after the last `later` button click ```kotlin .setMinimumLaunchTimesToShowAgain(launchTimesToShowAgain: Int) // default is 5 ``` - Set a custom condition which will be evaluated before showing the dialog. See below for more information. ```kotlin .setCustomCondition(customCondition: () -> Boolean) ``` - Set a custom condition which will be evaluated before showing the dialog after the `later` button has been clicked. See below for more information. ```kotlin .setCustomConditionToShowAgain(customConditionToShowAgain: () -> Boolean) ``` - Disable app launch counting for this time. It makes sense to combine this option with the custom condition(s). ```kotlin .dontCountThisAsAppLaunch() ``` #### Design The following settings will only take effect if the library dialog is used (and not the Google in-app review). ##### General - Change the icon of the dialog ```kotlin .setIconDrawable(iconDrawable: Drawable?) 
// default is null which means app icon ``` - Change the rate later button text ```kotlin .setRateLaterButtonTextId(rateLaterButtonTextId: Int) ``` - Add a click listener to the rate later button ```kotlin .setRateLaterButtonClickListener(rateLaterButtonClickListener: RateDialogClickListener) ``` - Show the rate never button, change the button text and add a click listener ```kotlin .showRateNeverButton(rateNeverButtonTextId: Int, rateNeverButtonClickListener: RateDialogClickListener) // by default the button is hidden ``` - Show the rate never button after n times(, change the button text and add a click listener). This means the user has to click the later button for at least n times to see the never button. ```kotlin .showRateNeverButtonAfterNTimes(rateNeverButtonTextId: Int, rateNeverButtonClickListener: RateDialogClickListener, countOfLaterButtonClicks: Int) ``` ##### Rating Overview - Change the title of the rating dialog ```kotlin .setTitleTextId(titleTextId: Int) ``` - Add a message to the rating dialog ```kotlin .setMessageTextId(messageTextId: Int) // by default no message is shown ``` - Change the confirm button text ```kotlin .setConfirmButtonTextId(confirmButtonTextId: Int) ``` - Add a click listener to the confirm button ```kotlin .setConfirmButtonClickListener(confirmButtonClickListener: ConfirmButtonClickListener) ``` - Show only full star ratings ```kotlin .setShowOnlyFullStars(showOnlyFullStars: Boolean) // default is false ``` ##### Store Rating - Change the title of the store rating dialog ```kotlin .setStoreRatingTitleTextId(storeRatingTitleTextId: Int) ``` - Change the message of the store rating dialog ```kotlin .setStoreRatingMessageTextId(storeRatingMessageTextId: Int) ``` - Change the rate now button text ```kotlin .setRateNowButtonTextId(rateNowButtonTextId: Int) ``` - Overwrite the default rate now button click listener ````kotlin .overwriteRateNowButtonClickListener(rateNowButtonClickListener: RateDialogClickListener) // by default it 
opens the Play Store listing of your app ```` - Add an additional click listener to the rate now button (e.g. for extensive tracking while still using the default library behaviour) ````kotlin .setAdditionalRateNowButtonClickListener(additionalRateNowButtonClickListener: RateDialogClickListener) ```` ##### Feedback - Change the title of the feedback dialog ```kotlin .setFeedbackTitleTextId(feedbackTitleTextId: Int) ``` - Change the no feedback button text ```kotlin .setNoFeedbackButtonTextId(noFeedbackButtonTextId: Int) ``` - Add a click listener to the no feedback button ```kotlin .setNoFeedbackButtonClickListener(noFeedbackButtonClickListener: RateDialogClickListener) ``` - Use the custom feedback dialog instead of the mail feedback dialog ```kotlin .setUseCustomFeedback(useCustomFeedback: Boolean) // default is false ``` ##### Mail Feedback If custom feedback is enabled, these settings will be ignored: - Change the message of the mail feedback dialog ```kotlin .setMailFeedbackMessageTextId(feedbackMailMessageTextId: Int) ``` - Set the mail settings for the mail feedback dialog (mail address, subject, text and error toast message) ```kotlin .setMailSettingsForFeedbackDialog(mailSettings: MailSettings) ``` - Change the mail feedback button text ```kotlin .setMailFeedbackButtonTextId(mailFeedbackButtonTextId: Int) ``` - Overwrite the mail settings with a custom click listener ```kotlin .overwriteMailFeedbackButtonClickListener(mailFeedbackButtonClickListener: RateDialogClickListener) ``` - Add an additional click listener to the mail feedback button (e.g. 
for extensive tracking while still using the default library behaviour) ````kotlin .setAdditionalMailFeedbackButtonClickListener(additionalMailFeedbackButtonClickListener: RateDialogClickListener) ```` ##### Custom Feedback These settings will only apply if custom feedback is enabled: - Change the message of the custom feedback dialog ```kotlin .setCustomFeedbackMessageTextId(feedbackCustomMessageTextId: Int) ``` - Change the custom feedback button text ```kotlin .setCustomFeedbackButtonTextId(customFeedbackButtonTextId: Int) ``` - Add a click listener to the custom feedback button ```kotlin .setCustomFeedbackButtonClickListener(customFeedbackButtonClickListener: CustomFeedbackButtonClickListener) ``` #### Other settings - Choose the rating threshold. If the user rates below, the feedback dialog will show up. If the user rates the threshold or higher, the store dialog will show up. ```kotlin .setRatingThreshold(ratingThreshold: RatingThreshold) // default is RatingThreshold.THREE ``` - Choose if the dialogs should be cancelable (by clicking outside or using the back button) ```kotlin .setCancelable(cancelable: Boolean) // default is false ``` - Disable all library logs ```kotlin .setLoggingEnabled(isLoggingEnabled: Boolean) // default is true ``` - Enable debug mode, which will cause the dialog to show up immediately when calling `showIfMeetsConditions()` (no conditions will be checked) ```kotlin .setDebug(isDebug: Boolean) // default is false ``` #### Other methods - Open the mail feedback directly without showing up the rating dialog ```kotlin AppRating.openMailFeedback(context: Context, mailSettings: MailSettings) ``` - Open your app's Play Store listing without showing up the rating dialog ```kotlin AppRating.openPlayStoreListing(context: Context) ``` - Check if the dialog has been agreed. This is true if the user has clicked the rate now button or if he gave you a rating below the defined threshold. 
```kotlin AppRating.isDialogAgreed(context: Context) ``` - Check if the later button has already been clicked ```kotlin AppRating.wasLaterButtonClicked(context: Context) ``` - Check if the never button has already been clicked ```kotlin AppRating.wasNeverButtonClicked(context: Context) ``` - Get the number of later button clicks ```kotlin AppRating.getNumberOfLaterButtonClicks(context: Context) ``` - Reset all library settings to factory default ```kotlin AppRating.reset(context: Context) ``` ### Orientation Change If the orientation is changed, the `onCreate()` method will be called again and so does the Builder. These additional calls will distort the library behavior because each call of `showIfMeetsConditions()` will increase the counted app launches. To guarantee the correct behavior, you have to check for the `savedInstanceState` like this: ```kotlin override fun onCreate(savedInstanceState: Bundle?) { super.onCreate(savedInstanceState) setContentView(R.layout.activity_main) if (savedInstanceState == null) { AppRating.Builder(this) // your configuration .showIfMeetsConditions() } } ``` ### Custom Conditions You can easily use custom conditions to show the dialog not (only) on app start but e.g. directly after the nth user interaction. Just call the Builder with your conditions and `dontCountThisAsAppLaunch()`: ```kotlin AppRating.Builder(this) // your other settings .setCustomCondition { buttonClicks > 10 } .setCustomConditionToShowAgain { buttonClicks > 20 } .dontCountThisAsAppLaunch() .showIfMeetsConditions() ``` If you want to show the dialog on app start, but with your custom conditions, you can of course just call the Builder in your `onCreate()` method of your main Activity class. If so, don't forget to remove the `dontCountThisAsAppLaunch()` method from the example above. ## Note * If the in-app review from Google will be used: After the first in-app review flow was completed successfully the `toShowAgain` conditions will be used. 
For example `.setMinimumLaunchTimesToShowAgain(launchTimesToShowAgain: Int)` instead of `.setMinimumLaunchTimes(launchTimes: Int)` * Use a MaterialComponent theme for better design * Don't forget to set up the mail settings if you want to use the mail feedback dialog (otherwise nothing will happen) * Use `setRatingThreshold(RatingThreshold.NONE)` if you don't want to show the feedback form to the user * If you set `setUseCustomFeedback()` to `true`, you have to handle the feedback text yourself by adding a click listener (`setCustomFeedbackButtonClickListener()`) * If the user rates below the defined minimum threshold, the feedback dialog will be displayed and then the dialog will not show up again * If you don't want to customize anything, you can just use `AppRating.Builder(this).showIfMeetsConditions()` without any settings * App launches will only get counted if you call `showIfMeetsConditions()` and `dontCountThisAsAppLaunch()` hasn't been called * If you have any problems, check out the logs in Logcat (you can filter by `awesome_app_rating`) * Look at the example app to get first impressions ## Recommendations The following practices are highly recommended so you don't annoy the user, which in turn could lead to negative reviews: - Don't show the dialog immediately after install - Don't set the rating threshold to 5 - Show the `Never` button (after n times) so the user can decide whether or not to rate your app - Use the methods `openPlayStoreListing()` and `openMailFeedback()` in your app settings to give the user the ability to provide unprompted feedback - Don't use `AppRating.reset(this)` in your production app ## License ``` Copyright (C) 2020 SuddenH4X Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
You may obtain a copy of the License at: http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ```
33.943775
337
0.773308
eng_Latn
0.96587
e5c0db70ece22e9f8f0fd5a3b2fa5dedcb107b93
1,049
md
Markdown
docs/error-messages/compiler-errors-1/compiler-error-c2147.md
OpenLocalizationTestOrg/cpp-docs.it-it
05d8d2dcc95498d856f8456e951d801011fe23d1
[ "CC-BY-4.0" ]
1
2020-05-21T13:09:13.000Z
2020-05-21T13:09:13.000Z
docs/error-messages/compiler-errors-1/compiler-error-c2147.md
OpenLocalizationTestOrg/cpp-docs.it-it
05d8d2dcc95498d856f8456e951d801011fe23d1
[ "CC-BY-4.0" ]
null
null
null
docs/error-messages/compiler-errors-1/compiler-error-c2147.md
OpenLocalizationTestOrg/cpp-docs.it-it
05d8d2dcc95498d856f8456e951d801011fe23d1
[ "CC-BY-4.0" ]
null
null
null
--- title: Compiler Error C2147 | Microsoft Docs ms.custom: ms.date: 11/04/2016 ms.reviewer: ms.suite: ms.technology: - devlang-cpp ms.tgt_pltfrm: ms.topic: error-reference f1_keywords: - C2147 dev_langs: - C++ helpviewer_keywords: - C2147 ms.assetid: d1adb3bf-7ece-4815-922c-ad7492fb6670 caps.latest.revision: 9 author: corob-msft ms.author: corob manager: ghogen translation.priority.ht: - cs-cz - de-de - es-es - fr-fr - it-it - ja-jp - ko-kr - pl-pl - pt-br - ru-ru - tr-tr - zh-cn - zh-tw translationtype: Human Translation ms.sourcegitcommit: 3168772cbb7e8127523bc2fc2da5cc9b4f59beb8 ms.openlocfilehash: 1d9c00f1b06931409f62b8ec00deef8eba1a34d3 --- # Compiler Error C2147 syntax error : 'identifier' is a new keyword An identifier was used that is now a reserved keyword in the language. The following sample generates C2147: ``` // C2147.cpp // compile with: /clr int main() { int gcnew = 0; // C2147 int i = 0; // OK } ``` <!--HONumber=Jan17_HO2-->
17.196721
73
0.673975
eng_Latn
0.361053
e5c19af4b73e177e04fb5294d9a558a052b8742f
915
md
Markdown
Plugin-Documentation/Reference/MsgReceivedEventArgs.md
newcat/tvdc2
be2311dc216d83e1b6322fef5090a12b755b84cf
[ "Unlicense" ]
6
2017-04-27T12:25:10.000Z
2018-07-13T17:43:35.000Z
Plugin-Documentation/Reference/MsgReceivedEventArgs.md
newcat/tvdc2
be2311dc216d83e1b6322fef5090a12b755b84cf
[ "Unlicense" ]
null
null
null
Plugin-Documentation/Reference/MsgReceivedEventArgs.md
newcat/tvdc2
be2311dc216d83e1b6322fef5090a12b755b84cf
[ "Unlicense" ]
null
null
null
# MsgReceivedEventArgs Class **Namespace:** tvdc.EventArguments ### Properties Name|Type|Description ----|----|----------- tags|Dictionary<string, string>|Tags which were sent as a message prefix by Twitch. Learn more here: https://github.com/justintv/Twitch-API/blob/master/IRC.md#tags username|string|Name of the user who sent the message. message|string|Message or additional information about the NOTICE. ### CAREFUL! This is NOT the type of event args you get when subscribed to the [IRC_PrivmsgReceived-Event](https://github.com/newcat/TVDC/blob/master/Plugin-Documentation/Reference/IPluginHost.md#events). Although [PrivmsgReceivedEventArgs](PrivmsgReceivedEventArgs.md) has the same properties, the tags might be completely different, because the [IRC_MsgReceived-Event](https://github.com/newcat/TVDC/blob/master/Plugin-Documentation/Reference/IPluginHost.md#events) is also fired on `NOTICE`.
61
293
0.791257
eng_Latn
0.837814
e5c1d287864e4e27777a6e1b54b6c274c60e7595
2,271
md
Markdown
README.md
devzom/angular-learn
afb4d68f16faf057d3bb9e4b89c034f0c57691e7
[ "MIT" ]
null
null
null
README.md
devzom/angular-learn
afb4d68f16faf057d3bb9e4b89c034f0c57691e7
[ "MIT" ]
null
null
null
README.md
devzom/angular-learn
afb4d68f16faf057d3bb9e4b89c034f0c57691e7
[ "MIT" ]
null
null
null
# Test: Angular car rental page author: Jakub Zomerfeld / @devzom # TO DO - [x] prepare basic pages with navigation and specific routes: - homepage, - vehicles list page, - vehicle details page - [x] use routerOutlet and router table - [x] handle navbar routerActiveLink - [x] handle 404 (as a page) - [x] mock vehicles data - [x] create a few reusable components - [x] shared services - calculator.service - pricing.service - checkout.service - categories.service - vehicle-list.service - [x] handle subscribe / observer for reactive data - [x] mock pick a vehicle and checkout process - [x] reactive form / data in checkout process - [x] filtered vehicles list - [x] mock filters data - [x] use of router queryParams/queryParamsMap - [x] handle queryParams change on .subscribe() and ngOnInit - [x] handle missing query params - [x] use API to fetch data -> vehicle makes - [x] trigger API call by btn.click() to fetch categories visible in `<aside/>` - [ ] mock data from API - [ ] feature modules / lazy loading - [x] split rental/vehicles view/components/routing into **vehicle-module** - [x] lazy loaded **Vehicles** module - [x] main module contains homepage - [ ] checkout module This project was generated with [Angular CLI](https://github.com/angular/angular-cli) version 13.2.2. ## Development server Run `ng serve` for a dev server. Navigate to `http://localhost:4200/`. The app will automatically reload if you change any of the source files. ## Code scaffolding Run `ng generate component component-name` to generate a new component. You can also use `ng generate directive|pipe|service|class|guard|interface|enum|module`. ## Build Run `ng build` to build the project. The build artifacts will be stored in the `dist/` directory. ## Running unit tests Run `ng test` to execute the unit tests via [Karma](https://karma-runner.github.io). ## Running end-to-end tests Run `ng e2e` to execute the end-to-end tests via a platform of your choice. 
To use this command, you need to first add a package that implements end-to-end testing capabilities. ## Further help To get more help on the Angular CLI use `ng help` or go check out the [Angular CLI Overview and Command Reference](https://angular.io/cli) page.
33.397059
120
0.727873
eng_Latn
0.96425
e5c29fa7c03d729f323fa453acbc7de0c9191ab0
260
md
Markdown
chapter01/README.md
PacktPublishing/Hands-On-Enterprise-Application-Development-with-Python
4802509ff92982a53006b8f327e010cda791f911
[ "MIT" ]
27
2018-12-07T14:31:10.000Z
2021-11-08T13:12:46.000Z
chapter01/README.md
MindaugasVaitkus2/Hands-On-Enterprise-Application-Development-with-Python
dc51e134ba2fd1d2ad91f8253f40995f161ecbd8
[ "MIT" ]
1
2020-07-16T14:28:49.000Z
2020-07-21T12:43:43.000Z
chapter01/README.md
MindaugasVaitkus2/Hands-On-Enterprise-Application-Development-with-Python
dc51e134ba2fd1d2ad91f8253f40995f161ecbd8
[ "MIT" ]
23
2018-08-22T18:34:47.000Z
2021-09-02T09:57:27.000Z
# Chapter 1
---

The directory contains code files from chapter 1.

## How to run the code samples
---

The code samples have been marked executable and can be directly executed on any machine supporting python2 and python3.

For example:

    ./python2_str_types.py
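The chapter's scripts are not reproduced here, but the sample file name `python2_str_types.py` points at the string-type split between Python 2 and Python 3. A minimal illustrative sketch (not taken from the chapter's code) of that distinction in Python 3:

```python
# In Python 3, text (str) and binary data (bytes) are distinct types;
# in Python 2, "hello" was a byte string and u"hello" was text.
text = "h\u00e9llo"            # str: a sequence of Unicode code points
data = text.encode("utf-8")    # bytes: the raw UTF-8 encoding

assert isinstance(text, str) and isinstance(data, bytes)
assert len(text) == 5          # five characters...
assert len(data) == 6          # ...but six bytes, since é encodes to two bytes
assert data.decode("utf-8") == text
```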
20
99
0.773077
eng_Latn
0.999741
e5c2a1d99c7cb96f4a69b513692c3780561cc6c7
1,315
md
Markdown
src/projects/piatto/piatto.md
charc46/portfolio
90418f7cc6de04cbedb3ac00f55a47108433f812
[ "RSA-MD" ]
null
null
null
src/projects/piatto/piatto.md
charc46/portfolio
90418f7cc6de04cbedb3ac00f55a47108433f812
[ "RSA-MD" ]
null
null
null
src/projects/piatto/piatto.md
charc46/portfolio
90418f7cc6de04cbedb3ac00f55a47108433f812
[ "RSA-MD" ]
null
null
null
---
title: 'Piatto'
url: 'https://www.piat.to'
github: 'https://github.com/liamjxn/piatto'
thumbnail: './cover.png'
tagline: 'Piatto is a social dish discovery platform designed to enable users to find and enjoy new dishes based on their taste and that of their social network.'
---

---

**Piatto is a social dish discovery platform designed to enable users to find and enjoy new dishes based on their taste and that of their social network.**

---

Piatto was built by myself and a team of 3 others as our final project for Le Wagon coding bootcamp at the end of 2020. We had 10 days to design, plan and build our project to then present to our fellow students and teachers on demo day at the end of our course.

We imagined Piatto as a sort of Spotify for takeaway dishes, where you can create your own 'dishlists' rather than playlists. This allows you to group together your favourite takeaway dishes for your friends to look through, or simply as a reference for yourself. Piatto takes your location into account when you search for a dish or restaurant, and suggests other similar meals nearby if the one you are looking at isn't available near you.

### [Watch our pitch on Youtube](https://youtu.be/lmQx0AHFbuA?t=5403)

### Technologies used:

* Ruby on Rails
* HTML
* CSS/Sass
* PostgreSQL
* Hosted on Heroku
48.703704
262
0.7673
eng_Latn
0.999837
e5c4d0e38ac402abbc981325451681944e6ba496
218
md
Markdown
_watches/M20201213_070914_TLP_5.md
Meteoros-Floripa/meteoros.floripa.br
7d296fb8d630a4e5fec9ab1a3fb6050420fc0dad
[ "MIT" ]
5
2020-05-19T17:04:49.000Z
2021-03-30T03:09:14.000Z
_watches/M20201213_070914_TLP_5.md
Meteoros-Floripa/site
764cf471d85a6b498873610e4f3b30efd1fd9fae
[ "MIT" ]
null
null
null
_watches/M20201213_070914_TLP_5.md
Meteoros-Floripa/site
764cf471d85a6b498873610e4f3b30efd1fd9fae
[ "MIT" ]
2
2020-05-19T17:06:27.000Z
2020-09-04T00:00:43.000Z
---
layout: watch
title: TLP5 - 13/12/2020 - M20201213_070914_TLP_5T.jpg
date: 2020-12-13 07:09:14
permalink: /2020/12/13/watch/M20201213_070914_TLP_5
capture: TLP5/2020/202012/20201212/M20201213_070914_TLP_5T.jpg
---
27.25
62
0.784404
fra_Latn
0.055761
e5c4e7ba1d7d3034f9b460d8b913c53b72550f6b
346
md
Markdown
content/publication/rn-612/index.md
allen-cheng/allen-cheng.github.io
b592ceff0319225b573b406f40f209741461705b
[ "MIT" ]
null
null
null
content/publication/rn-612/index.md
allen-cheng/allen-cheng.github.io
b592ceff0319225b573b406f40f209741461705b
[ "MIT" ]
null
null
null
content/publication/rn-612/index.md
allen-cheng/allen-cheng.github.io
b592ceff0319225b573b406f40f209741461705b
[ "MIT" ]
null
null
null
---
title: "Further evaluation of a rapid diagnostic test for melioidosis in an area of endemicity"
date: 2004-01-01
publishDate: 2019-07-14T01:34:06.975978Z
authors: ["M. O'Brien", "K. Freeman", "G. Lum", "A. C. Cheng", "S. P. Jacups", "B. J. Currie"]
publication_types: ["2"]
abstract: ""
featured: false
publication: "*J Clin Microbiol*"
---
28.833333
95
0.682081
eng_Latn
0.56869
e5c5bcbb8d1746a8fd9bdfb677bbb95c1c8063cc
177
md
Markdown
README.md
SulthanHaamith/Skillrack
50b6091f136affe15955c923e693c75ed59d3163
[ "Apache-2.0" ]
null
null
null
README.md
SulthanHaamith/Skillrack
50b6091f136affe15955c923e693c75ed59d3163
[ "Apache-2.0" ]
null
null
null
README.md
SulthanHaamith/Skillrack
50b6091f136affe15955c923e693c75ed59d3163
[ "Apache-2.0" ]
null
null
null
# Skillrack

Here you can find program code for SkillRack programs and daily challenges. Kindly try to understand the logic of the code rather than simply copying it.
59
164
0.79661
eng_Latn
0.998811
e5c5c16ca6978315600539106d7f5503928632eb
3,276
md
Markdown
user/pages/blog/88-examples-of-incredible-aerial-photography/post.md
itsligo/blog.jkelleher.me
ad83a77c4122ce6ed9c7bcc42b7ea5ff082b5678
[ "MIT" ]
null
null
null
user/pages/blog/88-examples-of-incredible-aerial-photography/post.md
itsligo/blog.jkelleher.me
ad83a77c4122ce6ed9c7bcc42b7ea5ff082b5678
[ "MIT" ]
null
null
null
user/pages/blog/88-examples-of-incredible-aerial-photography/post.md
itsligo/blog.jkelleher.me
ad83a77c4122ce6ed9c7bcc42b7ea5ff082b5678
[ "MIT" ]
null
null
null
---
title: 88 Examples of Incredible Aerial Photography
date: 2009/10/29 16:52:00
taxonomy:
    category: blog
---

[88 Examples of Incredible Aerial Photography](http://feedproxy.google.com/~r/iShift/~3/_fb8Rs5-n3o/):

When it comes to inspiration, there is no limit on sources. **Photography** is one of the key sources of inspiration for some of my past work. Here we are talking about aerial photography, which usually amazes me more than anything. If you know how to shoot a photo, you can turn something fairly simple into something creative, abstract, or otherwise more artistic. You don't need any special skills for taking such shots; it all depends on the environment and perfect timing. There are many ways to approach photography, and some are much more expensive than others.

In this showcase, we present a **stunning collection of aerial photography and pictures** taken by various artists, in which all pictures are linked to the authors' pages. You may want to explore further works of the photographers we've featured below.

For those who don't know what an **"aerial view"** or **"bird's-eye view"** means in terms of photography: an **aerial view is defined as a view of an object from above**, as though the observer were a bird, often used in the making of blueprints, floor plans and maps. The term can also be used to describe oblique views, drawn from an imagined perspective.

**You may be interested in the following inspiration-related articles as well.**

* **[Enjoy Moments Of Reflective Photography – Part I](http://www.instantshift.com/2009/05/12/enjoy-the-70-more-moments-of-reflective-photography/)**, **[Part II](http://www.instantshift.com/2009/05/05/enjoy-the-80-moments-of-reflective-photography/)**
* **[Rainbow Colors Inspired Photos and Pictures – Part I](http://www.instantshift.com/2009/04/18/80-rainbow-colors-inspired-photos-and-pictures/)**, **[Part II](http://www.instantshift.com/2009/04/28/70-more-rainbow-colors-inspired-photos-and-pictures/)**
* **[Motion and Blur Photography for Inspiration – Part I](http://www.instantshift.com/2009/03/16/motion-and-blur-photography-for-inspiration/)**, **[Part II](http://www.instantshift.com/2009/04/02/motion-and-blur-photography-for-inspiration-part-ii/)**
* **[Strange and Fantastic Buildings Architecture – Part I](http://www.instantshift.com/2009/02/19/80-strange-and-fantastic-buildings-architecture/)**, **[Part II](http://www.instantshift.com/2009/02/26/50-more-unusual-buildings-architecture/)**
* **[35 Beautiful and Creative Fountains Around the World](http://www.instantshift.com/2009/07/03/35-beautiful-and-creative-fountains-around-the-world/)**

Please feel free to join us; you are always welcome to share your thoughts, even if you have more reference links related to inspiration that our readers may like.

# Incredible Examples of Aerial Photography

Photography can serve as a nice source of inspiration. We designers can derive inspiration from almost everything around us, and this collection can fulfill your various photography-inspiration needs, as creativity in shooting photos is quite a hot trend nowadays. We can promise you that when you start browsing them further in detail, they will surely refresh your memory.
86.210526
499
0.777473
eng_Latn
0.98612
e5c5f609a7b1966fb52dde853c5cee4170e38eeb
10,006
md
Markdown
docs/mine/lotus/miner-setup.md
Wondertan/filecoin-docs
3af550d2aa76e3f956ae931bb1e8e9855ce3d73f
[ "MIT" ]
null
null
null
docs/mine/lotus/miner-setup.md
Wondertan/filecoin-docs
3af550d2aa76e3f956ae931bb1e8e9855ce3d73f
[ "MIT" ]
null
null
null
docs/mine/lotus/miner-setup.md
Wondertan/filecoin-docs
3af550d2aa76e3f956ae931bb1e8e9855ce3d73f
[ "MIT" ]
null
null
null
---
title: 'Lotus Miner: setup a high performance miner'
description: 'This guide describes the necessary steps to configure a Lotus miner for production.'
breadcrumb: 'Miner setup'
---

# {{ $frontmatter.title }}

{{ $frontmatter.description }}

::: warning
Mining will only work if you fully cover the [minimal hardware requirements](../hardware-requirements.md) for the network in which you will mine. As the mining process is very demanding on the machines in several respects and relies on precise configuration, we strongly recommend Linux systems administration experience before embarking.
:::

[[TOC]]

## Pre-requisites

Before attempting to follow this guide:

- Make sure you meet the [minimal hardware requirements](../hardware-requirements.md).
- Make sure you have followed the instructions to [install the Lotus suite](../../get-started/lotus/installation.md) and make sure you have built Lotus with ["Native Filecoin FFI"](../../get-started/lotus/installation.md#native-filecoin-ffi). Once the installation is complete, `lotus`, `lotus-miner` and `lotus-worker` will be installed.
- Make sure your Lotus Node is running, as the miner will communicate with it and cannot work otherwise.
- If you are in China, read the [tips for running in China](tips-running-in-china.md) page first.

::: callout
Be warned: if you decide to skip any of the sections below, things will not work! Read and tread carefully.
:::

## Before starting the miner

### Performance tweaks

It is recommended to set the following environment variables in your environment so that they are defined **every time any of the Lotus applications is launched** (meaning, when the daemons are started):

```sh
# See https://github.com/filecoin-project/bellman
export BELLMAN_CPU_UTILIZATION=0.875
```

`BELLMAN_CPU_UTILIZATION` is an optional variable to designate a proportion of the multi-exponentiation calculation to be moved to a CPU in parallel to the GPU. This is an effort to keep all the hardware occupied. The value must be a number between `0` and `1`. The value `0.875` is a good starting point, but you should experiment with it if you want an optimal setting. Different hardware setups will result in different values being optimal. Omitting this environment variable might also be optimal.

```sh
# See https://github.com/filecoin-project/rust-fil-proofs/
export FIL_PROOFS_MAXIMIZE_CACHING=1 # More speed at RAM cost (1x sector-size of RAM - 32 GB).
export FIL_PROOFS_USE_GPU_COLUMN_BUILDER=1 # precommit2 GPU acceleration
export FIL_PROOFS_USE_GPU_TREE_BUILDER=1

# The following increases speed of PreCommit1 at the cost of using a full
# CPU Core-Complex rather than a single core. Should be used with CPU affinities set!
# See https://github.com/filecoin-project/rust-fil-proofs/ and the seal workers guide.
export FIL_PROOFS_USE_MULTICORE_SDR=1
```

### Running the miner on a different machine than the Lotus Node

If you opt to run the miner on a different machine than the Lotus Node, set:

```sh
export FULLNODE_API_INFO=<api_token>:/ip4/<lotus_daemon_ip>/tcp/<lotus_daemon_port>/http
```

and **make sure the `ListenAddress` has [remote access enabled](../../build/lotus/enable-remote-api-access.md)**. Instructions on how to obtain a token are [available here](../../build/lotus/api-tokens.md).

Similarly, `lotus-miner` (as a client application to the Lotus Miner daemon) can talk to a remote miner by setting:

```sh
export MINER_API_INFO="TOKEN:/ip4/<IP>/tcp/<PORT>/http"
```

### Adding the necessary swap

If you have only 128GiB of RAM, you will need to make sure your system provides at least an extra 256GiB of very fast swap (preferably NVMe SSD) or you will be unable to seal sectors:

```sh
sudo fallocate -l 256G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# show current swap spaces and take note of the current highest priority
swapon --show
# append the following line to /etc/fstab (ensure highest priority) and then reboot
# /swapfile swap swap pri=50 0 0
sudo reboot
# check a 256GB swap file is automatically mounted and has the highest priority
swapon --show
```

### Creating wallets for the miner

You will need at least a BLS wallet (`f3...` for mainnet) for mining. We recommend using [separate owner and worker addresses](miner-addresses.md) though. Thus, create at least two wallets (unless you have some already):

```sh
# A new BLS address to use as owner address:
lotus wallet new bls
f3...
# A new BLS address to use as worker address:
lotus wallet new bls
f3...
```

::: callout
Next make sure to [send some funds](../../get-started/lotus/send-and-receive-fil.md) to the **worker address** so that the miner setup can be completed.
:::

For additional information about the different wallets that a miner can use and how to configure them, read the [miner addresses guide](miner-addresses.md).

::: tip
Safely [backup your wallets](../../get-started/lotus/send-and-receive-fil.md#exporting-and-importing-addresses)!
:::

### Downloading parameters

For the miner to start, it will need to read and verify the Filecoin proof parameters. These can be downloaded in advance (recommended), or otherwise the init process will do it. Proof parameters consist of several files, which, in the case of 32GiB sectors, total **over 100GiB**.

We recommend setting a custom location to store parameters and the proofs parent cache (created during the first run) with:

```sh
export FIL_PROOFS_PARAMETER_CACHE=/path/to/folder/in/fast/disk
export FIL_PROOFS_PARENT_CACHE=/path/to/folder/in/fast/disk2
```

Parameters are read on every (re)start, so using disks with very fast access, like NVMe drives, will speed up miner and worker (re)boots. When the above variables are not set, things will end up in `/var/tmp/` by default, which **often lacks enough space**.

To download the parameters do:

```sh
# Use sectors supported by the Filecoin network that the miner will join and use.
# lotus-miner fetch-params <sector-size>
lotus-miner fetch-params 32GiB
lotus-miner fetch-params 64GiB
```

You can verify sector sizes for a network in the [network dashboard](https://networks.filecoin.io).

The `FIL_PROOFS_*_CACHE` variables should stay defined not only for the download, but also when starting the Lotus miner (or workers).

## Checklist before launch

To summarize all of the above, make sure that:

- The _worker address_ has some funds so that the miner can be initialized.
- The following environment variables have been defined and will be available for any Lotus miner runs:

  ```
  export LOTUS_MINER_PATH=/path/to/miner/config/storage
  export LOTUS_PATH=/path/to/lotus/node/folder        # When using a local node.
  export BELLMAN_CPU_UTILIZATION=0.875                # Optimal value depends on your exact hardware.
  export FIL_PROOFS_MAXIMIZE_CACHING=1
  export FIL_PROOFS_USE_GPU_COLUMN_BUILDER=1          # When having GPU.
  export FIL_PROOFS_USE_GPU_TREE_BUILDER=1            # When having GPU.
  export FIL_PROOFS_PARAMETER_CACHE=/fast/disk/folder # > 100GiB!
  export FIL_PROOFS_PARENT_CACHE=/fast/disk/folder2   # > 50GiB!
  export TMPDIR=/fast/disk/folder3                    # Used when sealing.
  ```

- Parameters have been prefetched to the cache folders specified above.
- The system has enough swap and it is active.

## Miner initialization

Before starting your miner for the first time, run:

```sh
lotus-miner init --owner=<address> --worker=<address> --no-local-storage
```

- The `--no-local-storage` flag is used so that we can later configure [specific locations for storage](custom-storage-layout.md). This is optional but recommended.
- The Lotus Miner configuration folder is created at `~/.lotusminer/` or `$LOTUS_MINER_PATH`, if set.
- The difference between _owner_ and _worker_ addresses is explained in the [miner addresses guide](miner-addresses.md). As mentioned above, we recommend using two separate addresses. If the `--worker` flag is not provided, the owner address will be used. _Control addresses_ can be added later when the miner is running.

## Connectivity to the miner

Before you start your miner, it is **very important** to configure it so that it is reachable from any peer in the Filecoin network. For this you will need a stable public IP, and to edit your `~/.lotusminer/config.toml` as follows:

```toml
...
[Libp2p]
  ListenAddresses = ["/ip4/0.0.0.0/tcp/24001"] # choose a fixed port
  AnnounceAddresses = ["/ip4/<YOUR_PUBLIC_IP_ADDRESS>/tcp/24001"] # important!
...
```

Once you start your miner, [make sure you can connect to its public IP/port](connectivity.md).

## Starting the miner

You are now ready to start your Lotus miner:

```sh
lotus-miner run
```

or, if you are using the systemd service file:

```sh
systemctl start lotus-miner
```

::: warning
**Do not proceed** from here until you have verified that your miner not only is running, but is also [reachable on its public IP address](connectivity.md).
:::

## Publishing the miner addresses

Once the miner is up and running, publish your miner address (which you configured above) on the chain so that other nodes can talk to it directly and make deals:

```sh
lotus-miner actor set-addrs /ip4/<YOUR_PUBLIC_IP_ADDRESS>/tcp/24001
```

## Next steps

Your miner should now be preliminarily set up and running, but **there are still a few more recommended tasks** to be ready for prime time:

- Setup your [custom storage layout](custom-storage-layout.md) (required if you used `--no-local-storage`).
- Edit the miner [configuration settings](miner-configuration.md) to fit your requirements.
- Learn the right moment to [shutdown/restart your miner](miner-lifecycle.md).
- Update `ExpectedSealDuration` with the time it takes your miner to seal a sector: discover it by [running a benchmark](benchmarks.md) or by [pledging a sector](sector-pledging.md) and noting down the time.
- Configure additional [seal workers](seal-workers.md) to increase the miner's capacity to seal sectors.
- Configure a [separate address for WindowPost messages](miner-addresses.md).
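The `FULLNODE_API_INFO` and `MINER_API_INFO` values above follow a fixed `TOKEN:/ip4/<IP>/tcp/<PORT>/http` shape. As a small illustration (not part of Lotus itself — the pattern and function are ours), a script can sanity-check such a value before exporting it:

```python
import re

# Illustrative only: validates the TOKEN:/ip4/<IP>/tcp/<PORT>/http shape
# used by FULLNODE_API_INFO / MINER_API_INFO, and extracts the IP and port.
API_INFO = re.compile(r"^[^:]+:/ip4/(\d{1,3}(?:\.\d{1,3}){3})/tcp/(\d{1,5})/http$")

def parse_api_info(value: str):
    m = API_INFO.match(value)
    if m is None:
        raise ValueError("expected TOKEN:/ip4/<IP>/tcp/<PORT>/http")
    return m.group(1), int(m.group(2))
```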
45.276018
509
0.763642
eng_Latn
0.994048
e5c6edbd83c5291c62fbf4e8ec1a1474baa63f29
1,343
md
Markdown
docs/document/content/quick-start/shardingsphere-proxy-quick-start.cn.md
taoterrrr/shardingsphere
7c3db6f4f166be35a6ec2ac8d9025b8b56c7dbab
[ "Apache-2.0" ]
4,372
2019-01-16T03:07:05.000Z
2020-04-17T11:16:15.000Z
docs/document/content/quick-start/shardingsphere-proxy-quick-start.cn.md
taoterrrr/shardingsphere
7c3db6f4f166be35a6ec2ac8d9025b8b56c7dbab
[ "Apache-2.0" ]
3,040
2019-01-16T01:18:40.000Z
2020-04-17T12:53:05.000Z
docs/document/content/quick-start/shardingsphere-proxy-quick-start.cn.md
taoterrrr/shardingsphere
7c3db6f4f166be35a6ec2ac8d9025b8b56c7dbab
[ "Apache-2.0" ]
1,515
2019-01-16T08:44:17.000Z
2020-04-17T09:07:53.000Z
+++
pre = "<b>2.2. </b>"
title = "ShardingSphere-Proxy"
weight = 2
+++

## Obtain ShardingSphere-Proxy

ShardingSphere-Proxy is currently available through the following channels:

- [Binary distribution](/cn/user-manual/shardingsphere-proxy/startup/bin/)
- [Docker](/cn/user-manual/shardingsphere-proxy/startup/docker/)

## Rule configuration

Edit `%SHARDINGSPHERE_PROXY_HOME%/conf/config-xxx.yaml`.

Edit `%SHARDINGSPHERE_PROXY_HOME%/conf/server.yaml`.

> %SHARDINGSPHERE_PROXY_HOME% is the path of the extracted Proxy distribution, e.g. /opt/shardingsphere-proxy-bin/

For details, see the [Configuration Manual](/cn/user-manual/shardingsphere-proxy/yaml-config/).

## Import dependencies

If the backend connects to a PostgreSQL database, no extra dependency is needed.

If the backend connects to a MySQL database, download [mysql-connector-java-5.1.47.jar](https://repo1.maven.org/maven2/mysql/mysql-connector-java/5.1.47/mysql-connector-java-5.1.47.jar) or [mysql-connector-java-8.0.11.jar](https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.11/mysql-connector-java-8.0.11.jar), and put it into the `%SHARDINGSPHERE_PROXY_HOME%/ext-lib` directory.

## Start the server

* Use the default configuration:

```bash
sh %SHARDINGSPHERE_PROXY_HOME%/bin/start.sh
```

The default port is `3307`, and the default configuration directory is `%SHARDINGSPHERE_PROXY_HOME%/conf/`.

* Customize the port and configuration directory:

```bash
sh %SHARDINGSPHERE_PROXY_HOME%/bin/start.sh ${proxy_port} ${proxy_conf_directory}
```

## Use ShardingSphere-Proxy

Operate ShardingSphere-Proxy directly with a MySQL or PostgreSQL client. Using MySQL as an example:

```bash
mysql -u${proxy_username} -p${proxy_password} -h${proxy_host} -P${proxy_port}
```
25.826923
335
0.745346
yue_Hant
0.834182
e5c74cb1dce74eca0311cce8b0441b15bd39b5e2
2,565
md
Markdown
docker.md
kbroman/ProgrammingNotes
09a22f23b4483c9b6c429861134a3a71d1052253
[ "CC0-1.0" ]
63
2015-11-25T17:56:21.000Z
2021-10-17T16:28:33.000Z
docker.md
kbroman/ProgrammingNotes
09a22f23b4483c9b6c429861134a3a71d1052253
[ "CC0-1.0" ]
2
2020-04-23T01:00:40.000Z
2021-07-12T21:09:06.000Z
docker.md
kbroman/ProgrammingNotes
09a22f23b4483c9b6c429861134a3a71d1052253
[ "CC0-1.0" ]
11
2017-07-28T06:23:43.000Z
2022-01-24T21:23:34.000Z
## Docker

- For Ubuntu, want to install `docker-ce` ("Community edition") rather than `docker.io` (a now-out-of-date version). The instructions are [here](https://docs.docker.com/install/linux/docker-ce/ubuntu/). Ended up doing:

  ```
  sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu focal stable"
  sudo apt-get install docker-ce docker-ce-cli containerd.io
  ```

- To install docker for minecraft, I changed to the directory containing the `Dockerfile` in the miner package installation, and then followed the instructions in the [miner book](https://ropenscilabs.github.io/miner_book/installation-and-configuration.html#docker):

  ```
  cd ~/Rlibs/miner/Dockerfile
  sudo docker build -t minecraft .
  ```

- To view the IP address of a docker container, type:

  ```
  ip addr show
  ```

  Look for lines following "docker"

- My slides about docker:
  - [without notes](https://kbroman.org/AdvData/26_containers.pdf)
  - [with notes](https://kbroman.org/AdvData/26_containers_notes.pdf)
  - [video of the lecture](https://us-lti.bbcollab.com/recording/0fc7d7a1d7ac472084798e61c43e4a63)

- For r-devel with R/qtl, create a Dockerfile with:

  ```
  FROM rocker/r-devel
  RUN RD -e "install.packages('qtl')"
  ```

  and then do `sudo docker build -t rdevel-qtl .` That will create the image you want. Then run a container and open bash using:

  ```shell
  sudo docker run -it rdevel-qtl bash
  ```

  To install a package from github, use the [remotes](https://remotes.r-lib.org) package. Also note: with rocker/r-devel, you start R using `RD`.

- To use R-devel with the clang compiler and UBsan (undefined behavior checker), use `FROM rocker/r-devel-ubsan-clang`. You again need to use `RD` within the container to run R, and also need `docker run --cap-add SYS_PTRACE`. So once you've created the container (with say `sudo docker build -t rdevel-clang-ubsan .`), you run it with:

  ```shell
  sudo docker run --cap-add SYS_PTRACE -it rdevel-clang-ubsan bash
  ```

- View images

  ```
  sudo docker images
  sudo docker image ls
  ```

- View running containers

  ```
  sudo docker container ls -a
  ```

- Start a container in interactive mode

  ```
  sudo docker start -i [container_name]
  ```

- Run a container from an image, interactively in a bash shell

  ```
  sudo docker run -it [image_name] bash
  ```

- Remove a running container

  ```
  sudo docker rm [container_name]
  ```

- Remove an image

  ```shell
  sudo docker image rm [repository or image id]
  ```
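These commands can also be driven from a script. A sketch (not from the original notes; it assumes the `docker` CLI is on `PATH`, and uses docker's standard `--format` Go-template output) of listing images programmatically:

```python
import subprocess

def parse_image_table(output: str):
    # One image per line: repository, tag and ID separated by tabs.
    return [dict(zip(("repository", "tag", "id"), line.split("\t")))
            for line in output.splitlines() if line]

def list_images():
    # Equivalent of `docker image ls`; the Go template gives stable, parseable output.
    out = subprocess.run(
        ["docker", "image", "ls", "--format", "{{.Repository}}\t{{.Tag}}\t{{.ID}}"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_image_table(out)
```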
23.971963
98
0.695906
eng_Latn
0.830841
e5c78356d5c8663ca9fd6909652addbef9126078
30,450
md
Markdown
README.md
igoriev36/BokkyPooBahsDateTimeLibrary
eeaad931b8f3a1be4fa68b9c85b8024eaf4fcfd7
[ "MIT" ]
2
2021-01-14T07:55:11.000Z
2022-03-30T16:19:58.000Z
README.md
igoriev36/BokkyPooBahsDateTimeLibrary
eeaad931b8f3a1be4fa68b9c85b8024eaf4fcfd7
[ "MIT" ]
null
null
null
README.md
igoriev36/BokkyPooBahsDateTimeLibrary
eeaad931b8f3a1be4fa68b9c85b8024eaf4fcfd7
[ "MIT" ]
null
null
null
<kbd><img src="images/clocks.png" /></kbd> <br /> <hr /> # BokkyPooBah's DateTime Library A gas-efficient Solidity date and time library. Instead of using loops and lookup tables, this date conversions library uses formulae to convert year/month/day hour:minute:second to a Unix timestamp and back. See [BokkyPooBah’s Gas-Efficient Solidity DateTime Library](https://medium.com/@BokkyPooBah/bokkypoobahs-gas-efficient-solidity-datetime-library-92bf96d9b2da) for more explanations. Thank you to [Alex Kampa](https://github.com/alex-kampa), [James Zaki](https://github.com/jzaki) and [Adrian Guerrera](https://github.com/apguerrera) for helping to validate this library. Thanks also to [Oleksii Matiiasevych](https://github.com/lastperson) for asking about leap seconds. <br /> <hr /> ## Table Of Contents * [History](#history) * [Bug Bounty Scope And Donations](#bug-bounty-scope-and-donations) * [Deployment](#deployment) * [Questions And Answers](#questions-and-answers) * [Conventions](#conventions) * [Functions](#functions) * [_daysFromDate](#_daysfromdate) * [_daysToDate](#_daystodate) * [timestampFromDate](#timestampfromdate) * [timestampFromDateTime](#timestampfromdatetime) * [timestampToDate](#timestamptodate) * [timestampToDateTime](#timestamptodatetime) * [isValidDate](#isvaliddate) * [isValidDateTime](#isvaliddatetime) * [isLeapYear](#isleapyear) * [_isLeapYear](#_isleapyear) * [isWeekDay](#isweekday) * [isWeekEnd](#isweekend) * [getDaysInMonth](#getdaysinmonth) * [_getDaysInMonth](#_getdaysinmonth) * [getDayOfWeek](#getdayofweek) * [getYear](#getyear) * [getMonth](#getmonth) * [getDay](#getday) * [getHour](#gethour) * [getMinute](#getminute) * [getSecond](#getsecond) * [addYears](#addyears) * [addMonths](#addmonths) * [addDays](#adddays) * [addHours](#addhours) * [addMinutes](#addminutes) * [addSeconds](#addseconds) * [subYears](#subyears) * [subMonths](#submonths) * [subDays](#subdays) * [subHours](#subhours) * [subMinutes](#subminutes) * [subSeconds](#subseconds) * 
[diffYears](#diffyears) * [diffMonths](#diffmonths) * [diffDays](#diffdays) * [diffHours](#diffhours) * [diffMinutes](#diffminutes) * [diffSeconds](#diffseconds) * [Gas Cost](#gas-cost) * [Algorithm](#algorithm) * [Testing](#testing) * [References](#references) <br /> <hr /> ## History Version | Date | Notes :------------------ |:------------ |:--------------------------------------- v1.00-pre-release | May 25 2018 | "Rarefaction" pre-release. I'm currently trying to get this library audited, so don't use in production mode yet. v1.00-pre-release-a | Jun 2 2018 | "Rarefaction" pre-release a. Added the [contracts/BokkyPooBahsDateTimeContract.sol](contracts/BokkyPooBahsDateTimeContract.sol) wrapper for convenience.<br />[Alex Kampa](https://github.com/alex-kampa) conducted a range of [tests](https://github.com/alex-kampa/test_BokkyPooBahsDateTimeLibrary) on the library. v1.00-pre-release-b | Jun 4 2018 | "Rarefaction" pre-release b. Replaced public function with internal for easier EtherScan verification - [a83e13b](https://github.com/bokkypoobah/BokkyPooBahsDateTimeLibrary/commit/a83e13bef31e8ef399007dd237e42bd5cdf479e6).<br />Deployed [contracts/BokkyPooBahsDateTimeContract.sol](contracts/BokkyPooBahsDateTimeContract.sol) with the inlined [contracts/BokkyPooBahsDateTimeLibrary.sol](contracts/BokkyPooBahsDateTimeLibrary.sol) to the [Ropsten network](deployment/deployment-v1.00-prerelease.md) at address [0x07239bb079094481bfaac91ca842426860021aaa](https://ropsten.etherscan.io/address/0x07239bb079094481bfaac91ca842426860021aaa#code) v1.00-pre-release-c | June 8 2018 | "Rarefaction" pre-release c. Added `require(year >= 1970)` to `_daysFromDate(...)` in [4002b27](https://github.com/bokkypoobah/BokkyPooBahsDateTimeLibrary/commit/4002b278d1779fcd4f3f4527a60a5887ee6c20ba) as highlighted in [James Zaki](https://github.com/jzaki)'s audit v1.00-pre-release-d | Sep 1 2018 | "Rarefaction" pre-release d. 
Added [isValidDate(...)](#isvaliddate) and [isValidDateTime(...)](#isvaliddatetime) in [380061b](https://github.com/bokkypoobah/BokkyPooBahsDateTimeLibrary/commit/380061b9d20c83450ee303f709fe58e973c5f4a9) as highlighted in [Adrian Guerrera](https://github.com/apguerrera)'s audit v1.00 | Sep 2 2018 | "Rarefaction" release v1.01 | Feb 14 2019 | "Notoryctes" release. Upgraded contracts to Solidity 0.5.x.<br />Updated to MIT Licence v1.01 | Feb 17 2019 | Bug bounty added <br /> <hr /> ## Bug Bounty Scope And Donations Details of the bug bounty program for this project can be found at [BokkyPooBah's Hall Of Fame And Bug Bounties](https://github.com/bokkypoobah/BokkyPooBahsHallOfFameAndBugBounties). Please consider [donating](https://github.com/bokkypoobah/BokkyPooBahsHallOfFameAndBugBounties#donations) to support the bug bounty, and the development and maintenance of decentralised applications. The scope of the bug bounty for this project follows: * [contracts/BokkyPooBahsDateTimeLibrary.sol](contracts/BokkyPooBahsDateTimeLibrary.sol) <br /> <hr /> ## Deployment [v1.00 of the source code](https://github.com/bokkypoobah/BokkyPooBahsDateTimeLibrary/tree/1ea8ef42b3d8db17b910b46e4f8c124b59d77c03/contracts) for [BokkyPooBahsDateTimeContract.sol](https://github.com/bokkypoobah/BokkyPooBahsDateTimeLibrary/blob/1ea8ef42b3d8db17b910b46e4f8c124b59d77c03/contracts/BokkyPooBahsDateTimeContract.sol) and [TestDateTime.sol](https://github.com/bokkypoobah/BokkyPooBahsDateTimeLibrary/blob/1ea8ef42b3d8db17b910b46e4f8c124b59d77c03/contracts/TestDateTime.sol) has been deployed to: * The Ropsten network: * BokkyPooBahsDateTimeContract.sol at [0x947cc35992e6723de50bf704828a01fd2d5d6641](https://ropsten.etherscan.io/address/0x947cc35992e6723de50bf704828a01fd2d5d6641#code) * TestDateTime.sol at [0xa068fe3e029a972ecdda2686318806d2b19875d1](https://ropsten.etherscan.io/address/0xa068fe3e029a972ecdda2686318806d2b19875d1#code) * Mainnet: * BokkyPooBahsDateTimeContract.sol at 
[0x23d23d8f243e57d0b924bff3a3191078af325101](https://etherscan.io/address/0x23d23d8f243e57d0b924bff3a3191078af325101#code) * TestDateTime.sol at [0x78f96b2d5f717fa9ad416957b79d825cc4cce69d](https://etherscan.io/address/0x78f96b2d5f717fa9ad416957b79d825cc4cce69d#code) For each of the deployed contracts above, you can click on the *Read Contract* tab to test out the date/time/timestamp functions. <br /> <hr /> ## Questions And Answers ### Questions by `_dredge` User [/u/_dredge](https://www.reddit.com/user/_dredge) asked the [following questions](https://www.reddit.com/r/ethereum/comments/8m3p4j/bokkypoobahs_datetime_library_a_solidity/dzkss7n/): > Some Muslim countries have a Friday/Saturday weekend. Workday(1-7) may be more useful. > > I presume leap seconds and such details are taken care of in the Unix timecode. > > Some additional ideas for functions below. > > Quarter calculations > > weekNumber(1-53) > > Complete years / months / weeks / days > > Nearest years / months / weeks / days Regarding regions and systems where Friday/Saturday are weekends, please use the function `getDayOfWeek(timestamp)` that returns 1 (= Monday, ..., 7 (= Sunday) to determine whether you should treat a day as a weekday or weekend. See the next question regarding the leap seconds. Regarding the additional ideas, thanks! <br /> ### What about the leap second? Asked by [Oleksii Matiiasevych](https://github.com/lastperson) and */u/_dredge* above. For example, a [leap second](https://en.wikipedia.org/wiki/Unix_time#Leap_seconds) was inserted on Jan 01 1999. From the first answer to [Unix time and leap seconds](https://stackoverflow.com/a/16539483): > The number of seconds per day are fixed with Unix timestamps. > > > The Unix time number is zero at the Unix epoch, and increases by exactly 86400 per day since the epoch. > > So it cannot represent leap seconds. The OS will slow down the clock to accommodate for this. 
Leap seconds are simply non-existent as far as Unix timestamps are concerned.

And from the second answer to [Unix time and leap seconds](https://stackoverflow.com/a/16539734):

> Unix time is easy to work with, but some timestamps are not real times, and some timestamps are not unique times.
>
> That is, there are some duplicate timestamps representing two different seconds in time, because in unix time the sixtieth second might have to repeat itself (as there can't be a sixty-first second). Theoretically, they could also be gaps in the future because the sixtieth second doesn't have to exist, although no skipping leap seconds have been issued so far.
>
> Rationale for unix time: it's defined so that it's easy to work with. Adding support for leap seconds to the standard libraries is very tricky.
> ...

This library aims to replicate the [Unix time](https://en.wikipedia.org/wiki/Unix_time) functionality and assumes that leap seconds are handled by the underlying operating system.

<br />

### What is the maximum year 2345?

Asked by [Adrian Guerrera](https://github.com/apguerrera).

**2345** is just an arbitrary number chosen for the year limit to test to. The algorithms should still work beyond this date.

<br />

### Why are there no input validations to some of the functions?

Asked by [Adrian Guerrera](https://github.com/apguerrera). Specifically, the functions [_daysFromDate](#_daysfromdate), [timestampFromDate](#timestampfromdate) and [timestampFromDateTime](#timestampfromdatetime).

The date and time inputs should be validated before the values are passed to these functions. The validation functions [isValidDate(...)](#isvaliddate) and [isValidDateTime(...)](#isvaliddatetime) have now been added for this purpose.

<br />

### Why are all variables 256-bit integers?

This library provides a cheap conversion between the timestamp and year/month/day hour:minute:second formats.
There is no requirement for this library to store a packed structure to represent a DateTime as this data can be stored as a `uint256` and converted on the fly to year/month/day hour:minute:second.

<br />

### Why do you call this a gas-efficient library?

The formulae for converting between a timestamp and the year/month/day hour:minute:second format use a mathematically simple algorithm without any loops. The gas cost is relatively constant (as there are no loops) and the mathematical computations are relatively cheap (compared to using loops and looking up data from storage).

<br />

### Can this library be written more efficiently?

Most likely. The aim of this first version is for the conversions to be computed correctly.

<br />

### How gas-efficient is this library?

From [Gas Cost](#gas-cost), `timestampToDateTime(…)` has an execution gas cost of 3,101 gas, and `timestampFromDateTime(…)` has an execution gas cost of 2,566 gas.

<br />

<hr />

## Conventions

All dates, times and Unix timestamps are [UTC](https://en.wikipedia.org/wiki/Coordinated_Universal_Time).

Unit           | Range                     | Notes
:------------- |:-------------------------:|:---------------------------------------------------------------
timestamp      | >= 0                      | Unix timestamp, number of seconds since 1970/01/01 00:00:00 UTC
year           | 1970 ... 2345             |
month          | 1 ... 12                  |
day            | 1 ... 31                  |
hour           | 0 ... 23                  |
minute         | 0 ... 59                  |
second         | 0 ... 59                  |
dayOfWeek      | 1 ... 7                   | 1 = Monday, ..., 7 = Sunday
year/month/day | 1970/01/01 ... 2345/12/31 |

`_days`, `_months` and `_years` variable names are `_`-prefixed as the non-prefixed versions are reserved words in Solidity.

All functions operate on the `uint` timestamp data type, except for functions prefixed with `_`.

<br />

<hr />

## Functions

### _daysFromDate

Calculate the number of days `_days` from 1970/01/01 to `year`/`month`/`day`.
```javascript
function _daysFromDate(uint year, uint month, uint day) public pure returns (uint _days)
```

**NOTE** This function does not validate the `year`/`month`/`day` input. Use [`isValidDate(...)`](#isvaliddate) to validate the input if necessary.

<br />

### _daysToDate

Calculate `year`/`month`/`day` from the number of days `_days` since 1970/01/01.

```javascript
function _daysToDate(uint _days) public pure returns (uint year, uint month, uint day)
```

<br />

### timestampFromDate

Calculate the `timestamp` from `year`/`month`/`day`.

```javascript
function timestampFromDate(uint year, uint month, uint day) public pure returns (uint timestamp)
```

**NOTE** This function does not validate the `year`/`month`/`day` input. Use [`isValidDate(...)`](#isvaliddate) to validate the input if necessary.

<br />

### timestampFromDateTime

Calculate the `timestamp` from `year`/`month`/`day` `hour`:`minute`:`second` UTC.

```javascript
function timestampFromDateTime(uint year, uint month, uint day, uint hour, uint minute, uint second) public pure returns (uint timestamp)
```

**NOTE** This function does not validate the `year`/`month`/`day` `hour`:`minute`:`second` input. Use [`isValidDateTime(...)`](#isvaliddatetime) to validate the input if necessary.

<br />

### timestampToDate

Calculate `year`/`month`/`day` from `timestamp`.

```javascript
function timestampToDate(uint timestamp) public pure returns (uint year, uint month, uint day)
```

<br />

### timestampToDateTime

Calculate `year`/`month`/`day` `hour`:`minute`:`second` from `timestamp`.

```javascript
function timestampToDateTime(uint timestamp) public pure returns (uint year, uint month, uint day, uint hour, uint minute, uint second)
```

<br />

### isValidDate

Is the date specified by `year`/`month`/`day` a valid date?
```javascript
function isValidDate(uint year, uint month, uint day) internal pure returns (bool valid)
```

<br />

### isValidDateTime

Is the date/time specified by `year`/`month`/`day` `hour`:`minute`:`second` a valid date/time?

```javascript
function isValidDateTime(uint year, uint month, uint day, uint hour, uint minute, uint second) internal pure returns (bool valid)
```

<br />

### isLeapYear

Is the year specified by `timestamp` a leap year?

```javascript
function isLeapYear(uint timestamp) public pure returns (bool leapYear)
```

<br />

### _isLeapYear

Is the specified `year` (e.g. 2018) a leap year?

```javascript
function _isLeapYear(uint year) public pure returns (bool leapYear)
```

<br />

### isWeekDay

Is the day specified by `timestamp` a weekday (Monday, ..., Friday)?

```javascript
function isWeekDay(uint timestamp) public pure returns (bool weekDay)
```

<br />

### isWeekEnd

Is the day specified by `timestamp` a weekend (Saturday, Sunday)?

```javascript
function isWeekEnd(uint timestamp) public pure returns (bool weekEnd)
```

<br />

### getDaysInMonth

Return the number of days in the month `daysInMonth` for the month specified by `timestamp`.

```javascript
function getDaysInMonth(uint timestamp) public pure returns (uint daysInMonth)
```

<br />

### _getDaysInMonth

Return the number of days in the month `daysInMonth` (28, 29, 30 or 31) for the month specified by `year`/`month`.

```javascript
function _getDaysInMonth(uint year, uint month) public pure returns (uint daysInMonth)
```

<br />

### getDayOfWeek

Return the day of the week `dayOfWeek` (1 = Monday, ..., 7 = Sunday) for the date specified by `timestamp`.

```javascript
function getDayOfWeek(uint timestamp) public pure returns (uint dayOfWeek)
```

<br />

### getYear

Get the `year` of the date specified by `timestamp`.

```javascript
function getYear(uint timestamp) public pure returns (uint year)
```

<br />

### getMonth

Get the `month` of the date specified by `timestamp`.
```javascript
function getMonth(uint timestamp) public pure returns (uint month)
```

<br />

### getDay

Get the day of the month `day` (1, ..., 31) of the date specified by `timestamp`.

```javascript
function getDay(uint timestamp) public pure returns (uint day)
```

<br />

### getHour

Get the `hour` of the date and time specified by `timestamp`.

```javascript
function getHour(uint timestamp) public pure returns (uint hour)
```

<br />

### getMinute

Get the `minute` of the date and time specified by `timestamp`.

```javascript
function getMinute(uint timestamp) public pure returns (uint minute)
```

<br />

### getSecond

Get the `second` of the date and time specified by `timestamp`.

```javascript
function getSecond(uint timestamp) public pure returns (uint second)
```

<br />

### addYears

Add `_years` years to the date and time specified by `timestamp`.

Note that the resulting day of the month will be adjusted if it exceeds the valid number of days in the month. For example, if the original date is 2020/02/29 and an additional year is added to this date, the resulting date will be an invalid date of 2021/02/29. The resulting date is then adjusted to 2021/02/28.

```javascript
function addYears(uint timestamp, uint _years) public pure returns (uint newTimestamp)
```

<br />

### addMonths

Add `_months` months to the date and time specified by `timestamp`.

Note that the resulting day of the month will be adjusted if it exceeds the valid number of days in the month. For example, if the original date is 2019/01/31 and an additional month is added to this date, the resulting date will be an invalid date of 2019/02/31. The resulting date is then adjusted to 2019/02/28.

```javascript
function addMonths(uint timestamp, uint _months) public pure returns (uint newTimestamp)
```

<br />

### addDays

Add `_days` days to the date and time specified by `timestamp`.
```javascript function addDays(uint timestamp, uint _days) public pure returns (uint newTimestamp) ``` <br /> ### addHours Add `_hours` hours to the date and time specified by `timestamp`. ```javascript function addHours(uint timestamp, uint _hours) public pure returns (uint newTimestamp) ``` <br /> ### addMinutes Add `_minutes` minutes to the date and time specified by `timestamp`. ```javascript function addMinutes(uint timestamp, uint _minutes) public pure returns (uint newTimestamp) ``` <br /> ### addSeconds Add `_seconds` seconds to the date and time specified by `timestamp`. ```javascript function addSeconds(uint timestamp, uint _seconds) public pure returns (uint newTimestamp) ``` <br /> ### subYears Subtract `_years` years from the date and time specified by `timestamp`. Note that the resulting day of the month will be adjusted if it exceeds the valid number of days in the month. For example, if the original date is 2020/02/29 and a year is subtracted from this date, the resulting date will be an invalid date of 2019/02/29. The resulting date is then adjusted to 2019/02/28. ```javascript function subYears(uint timestamp, uint _years) public pure returns (uint newTimestamp) ``` <br /> ### subMonths Subtract `_months` months from the date and time specified by `timestamp`. Note that the resulting day of the month will be adjusted if it exceeds the valid number of days in the month. For example, if the original date is 2019/03/31 and a month is subtracted from this date, the resulting date will be an invalid date of 2019/02/31. The resulting date is then adjusted to 2019/02/28. ```javascript function subMonths(uint timestamp, uint _months) public pure returns (uint newTimestamp) ``` <br /> ### subDays Subtract `_days` days from the date and time specified by `timestamp`. 
```javascript
function subDays(uint timestamp, uint _days) public pure returns (uint newTimestamp)
```

<br />

### subHours

Subtract `_hours` hours from the date and time specified by `timestamp`.

```javascript
function subHours(uint timestamp, uint _hours) public pure returns (uint newTimestamp)
```

<br />

### subMinutes

Subtract `_minutes` minutes from the date and time specified by `timestamp`.

```javascript
function subMinutes(uint timestamp, uint _minutes) public pure returns (uint newTimestamp)
```

<br />

### subSeconds

Subtract `_seconds` seconds from the date and time specified by `timestamp`.

```javascript
function subSeconds(uint timestamp, uint _seconds) public pure returns (uint newTimestamp)
```

<br />

### diffYears

Calculate the number of years between the dates specified by `fromTimestamp` and `toTimestamp`.

Note that this calculation is computed as `getYear(toTimestamp) - getYear(fromTimestamp)`, rather than subtracting the years (since 1970/01/01) represented by both `{to|from}Timestamp`.

```javascript
function diffYears(uint fromTimestamp, uint toTimestamp) public pure returns (uint _years)
```

<br />

### diffMonths

Calculate the number of months between the dates specified by `fromTimestamp` and `toTimestamp`.

Note that this calculation is computed as `getYear(toTimestamp) * 12 + getMonth(toTimestamp) - getYear(fromTimestamp) * 12 - getMonth(fromTimestamp)`, rather than subtracting the months (since 1970/01/01) represented by both `{to|from}Timestamp`.

```javascript
function diffMonths(uint fromTimestamp, uint toTimestamp) public pure returns (uint _months)
```

<br />

### diffDays

Calculate the number of days between the dates specified by `fromTimestamp` and `toTimestamp`.

Note that this calculation is computed as `(toTimestamp - fromTimestamp) / SECONDS_PER_DAY`, rather than subtracting the days (since 1970/01/01) represented by both `{to|from}Timestamp`.
```javascript
function diffDays(uint fromTimestamp, uint toTimestamp) public pure returns (uint _days)
```

<br />

### diffHours

Calculate the number of hours between the dates specified by `fromTimestamp` and `toTimestamp`.

Note that this calculation is computed as `(toTimestamp - fromTimestamp) / SECONDS_PER_HOUR`, rather than subtracting the hours (since 1970/01/01) represented by both `{to|from}Timestamp`.

```javascript
function diffHours(uint fromTimestamp, uint toTimestamp) public pure returns (uint _hours)
```

<br />

### diffMinutes

Calculate the number of minutes between the dates specified by `fromTimestamp` and `toTimestamp`.

Note that this calculation is computed as `(toTimestamp - fromTimestamp) / SECONDS_PER_MINUTE`, rather than subtracting the minutes (since 1970/01/01) represented by both `{to|from}Timestamp`.

```javascript
function diffMinutes(uint fromTimestamp, uint toTimestamp) public pure returns (uint _minutes)
```

<br />

### diffSeconds

Calculate the number of seconds between the dates specified by `fromTimestamp` and `toTimestamp`.

Note that this calculation is computed as `toTimestamp - fromTimestamp`.

```javascript
function diffSeconds(uint fromTimestamp, uint toTimestamp) public pure returns (uint _seconds)
```

<br />

<hr />

## Gas Cost

### `timestampToDateTime(...)` Gas Cost

From executing the following function, the transaction gas cost is 24,693.

```javascript
> testDateTime.timestampToDateTime(1527120000)
[2018, 5, 24, 0, 0, 0]
> testDateTime.timestampToDateTime.estimateGas(1527120000)
24693
```

From Remix, the execution gas cost is 3,101.
From my latest testing with Remix using Solidity 0.4.24:

<kbd><img src="docs/timestampToDateTime.png" /></kbd>

<br />

### `timestampFromDateTime(...)` Gas Cost

From executing the following function, the transaction gas cost is 25,054.

```javascript
> testDateTime.timestampFromDateTime(2018, 05, 24, 1, 2, 3)
1527123723
> testDateTime.timestampFromDateTime.estimateGas(2018, 05, 24, 1, 2, 3)
25054
```

From Remix, the execution gas cost is 2,566.

From my latest testing with Remix using Solidity 0.4.24:

<kbd><img src="docs/timestampFromDateTime.png" /></kbd>

<br />

### Remix Gas Estimates

Remix gas estimates using Solidity 0.4.24:

```json
{
  "Creation": {
    "codeDepositCost": "908400",
    "executionCost": "942",
    "totalCost": "909342"
  },
  "External": {
    "DOW_FRI()": "1130",
    "DOW_MON()": "1262",
    "DOW_SAT()": "1064",
    "DOW_SUN()": "360",
    "DOW_THU()": "1108",
    "DOW_TUE()": "250",
    "DOW_WED()": "778",
    "OFFSET19700101()": "932",
    "SECONDS_PER_DAY()": "690",
    "SECONDS_PER_HOUR()": "580",
    "SECONDS_PER_MINUTE()": "1196",
    "_daysFromDate(uint256,uint256,uint256)": "638",
    "_daysToDate(uint256)": "1280",
    "_getDaysInMonth(uint256,uint256)": "885",
    "_isLeapYear(uint256)": "1115",
    "_now()": "997",
    "_nowDateTime()": "1562",
    "addDays(uint256,uint256)": "772",
    "addHours(uint256,uint256)": "662",
    "addMinutes(uint256,uint256)": "838",
    "addMonths(uint256,uint256)": "1851",
    "addSeconds(uint256,uint256)": "896",
    "addYears(uint256,uint256)": "1846",
    "diffDays(uint256,uint256)": "1207",
    "diffHours(uint256,uint256)": "503",
    "diffMinutes(uint256,uint256)": "316",
    "diffMonths(uint256,uint256)": "infinite",
    "diffSeconds(uint256,uint256)": "712",
    "diffYears(uint256,uint256)": "infinite",
    "getDay(uint256)": "1114",
    "getDayOfWeek(uint256)": "415",
    "getDaysInMonth(uint256)": "1120",
    "getHour(uint256)": "491",
    "getMinute(uint256)": "1373",
    "getMonth(uint256)": "1378",
    "getSecond(uint256)": "810",
    "getYear(uint256)": "1337",
    "isLeapYear(uint256)": "1633",
    "isValidDate(uint256,uint256,uint256)": "942",
"isValidDateTime(uint256,uint256,uint256,uint256,uint256,uint256)": "1269", "isWeekDay(uint256)": "1284", "isWeekEnd(uint256)": "624", "subDays(uint256,uint256)": "1146", "subHours(uint256,uint256)": "288", "subMinutes(uint256,uint256)": "992", "subMonths(uint256,uint256)": "2363", "subSeconds(uint256,uint256)": "1336", "subYears(uint256,uint256)": "1871", "timestampFromDate(uint256,uint256,uint256)": "718", "timestampFromDateTime(uint256,uint256,uint256,uint256,uint256,uint256)": "1077", "timestampToDate(uint256)": "infinite", "timestampToDateTime(uint256)": "1945" } } ``` <br /> <hr /> ## Algorithm The formulae to convert year/month/day hour:minute:second to a Unix timestamp and back use the algorithms from [Converting Between Julian Dates and Gregorian Calendar Dates](http://aa.usno.navy.mil/faq/docs/JD_Formula.php). These algorithms were originally designed by [Fliegel and van Flandern (1968)](http://www.worldcat.org/title/machine-algorithm-for-processing-calendar-dates/oclc/754110896). Note that these algorithms depend on negative numbers, so Solidity unsigned integers `uint` are converted to signed integers `int` to compute the date conversions and the results are converted back to `uint` for general use. <br /> ### Converting YYYYMMDD to Unix Timestamp The Fortran algorithm follows: ``` INTEGER FUNCTION JD (YEAR,MONTH,DAY) C C---COMPUTES THE JULIAN DATE (JD) GIVEN A GREGORIAN CALENDAR C DATE (YEAR,MONTH,DAY). 
C INTEGER YEAR,MONTH,DAY,I,J,K C I= YEAR J= MONTH K= DAY C JD= K-32075+1461*(I+4800+(J-14)/12)/4+367*(J-2-(J-14)/12*12) 2 /12-3*((I+4900+(J-14)/12)/100)/4 C RETURN END ``` Translating this formula, and subtracting an offset (2,440,588) so 1970/01/01 is day 0: ``` days = day - 32075 + 1461 * (year + 4800 + (month - 14) / 12) / 4 + 367 * (month - 2 - (month - 14) / 12 * 12) / 12 - 3 * ((year + 4900 + (month - 14) / 12) / 100) / 4 - offset ``` <br /> ### Converting Unix Timestamp To YYYYMMDD The Fortran algorithm follows: ``` SUBROUTINE GDATE (JD, YEAR,MONTH,DAY) C C---COMPUTES THE GREGORIAN CALENDAR DATE (YEAR,MONTH,DAY) C GIVEN THE JULIAN DATE (JD). C INTEGER JD,YEAR,MONTH,DAY,I,J,K C L= JD+68569 N= 4*L/146097 L= L-(146097*N+3)/4 I= 4000*(L+1)/1461001 L= L-1461*I/4+31 J= 80*L/2447 K= L-2447*J/80 L= J/11 J= J+2-12*L I= 100*(N-49)+I+L C YEAR= I MONTH= J DAY= K C RETURN END ``` Translating this formula and adding an offset (2,440,588) so 1970/01/01 is day 0: ``` int L = days + 68569 + offset int N = 4 * L / 146097 L = L - (146097 * N + 3) / 4 year = 4000 * (L + 1) / 1461001 L = L - 1461 * year / 4 + 31 month = 80 * L / 2447 dd = L - 2447 * month / 80 L = month / 11 month = month + 2 - 12 * L year = 100 * (N - 49) + year + L ``` <br /> <hr /> ## Testing Details of the testing environment can be found in [test](test). The DateTime library calculations have been tested for the date range 1970/01/01 to 2345/12/01 for periodically sampled dates. 
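As an independent sanity check of the round-trip property exercised by these tests, the two formulae above can be ported directly to JavaScript. This is only a sketch, not part of the library or its test suite; `Math.trunc` stands in for the truncating integer division used by Fortran and Solidity.

```javascript
// Port of the Fliegel & van Flandern formulae above, with the
// 2,440,588-day offset so that 1970/01/01 is day 0.
const OFFSET19700101 = 2440588;
const t = Math.trunc; // integer division, truncating toward zero

function daysFromDate(year, month, day) {
  // (month - 14) / 12 is -1 for Jan/Feb and 0 for Mar..Dec.
  const adj = t((month - 14) / 12);
  return day - 32075
    + t(1461 * (year + 4800 + adj) / 4)
    + t(367 * (month - 2 - adj * 12) / 12)
    - t(3 * t((year + 4900 + adj) / 100) / 4)
    - OFFSET19700101;
}

function daysToDate(days) {
  let L = days + 68569 + OFFSET19700101;
  const N = t(4 * L / 146097);
  L -= t((146097 * N + 3) / 4);
  let year = t(4000 * (L + 1) / 1461001);
  L = L - t(1461 * year / 4) + 31;
  let month = t(80 * L / 2447);
  const day = L - t(2447 * month / 80);
  L = t(month / 11);
  month = month + 2 - 12 * L;
  year = 100 * (N - 49) + year + L;
  return [year, month, day];
}
```

For example, `daysFromDate(1970, 1, 1)` returns `0`, and `daysFromDate(2018, 5, 24)` returns `17675`, which matches the `timestampToDateTime(1527120000)` example in the Gas Cost section (`17675 * 86400 = 1527120000`).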
The following functions were tested using the script [test/01_test1.sh](test/01_test1.sh) with the summary results saved in [test/test1results.txt](test/test1results.txt) and the detailed output saved in [test/test1output.txt](test/test1output.txt): * [x] Deploy [contracts/BokkyPooBahsDateTimeLibrary.sol](contracts/BokkyPooBahsDateTimeLibrary.sol) library * [x] Deploy [contracts/TestDateTime.sol](contracts/TestDateTime.sol) contract * [x] Test `isValidDate(...)` * [x] Test `isValidDateTime(...)` * [x] Test `isLeapYear(...)` * [x] Test `_isLeapYear(...)` * [x] Test `isWeekDay(...)` * [x] Test `isWeekEnd(...)` * [x] Test `getDaysInMonth(...)` * [x] Test `_getDaysInMonth(...)` * [x] Test `getDayOfWeek(...)` * [x] Test `get{Year|Month|Day|Hour|Minute|Second}(...)` * [x] Test `add{Years|Months|Days|Hours|Minutes|Seconds}(...)` * [x] Test `sub{Years|Months|Days|Hours|Minutes|Seconds}(...)` * [x] Test `diff{Years|Months|Days|Hours|Minutes|Seconds}(...)` * [x] For a range of Unix timestamps from 1970/01/01 to 2345/12/21 * [x] Generate the year/month/day hour/minute/second from the Unix timestamp using `timestampToDateTime(...)` * [x] Generate the Unix timestamp from the calculated year/month/day hour/minute/second using `timestampFromDateTime(...)` * [x] Compare the year/month/day hour/minute/second to the JavaScript *Date* calculation <br /> <hr /> ## References A copy of the webpage with the algorithm [Converting Between Julian Dates and Gregorian Calendar Dates](http://aa.usno.navy.mil/faq/docs/JD_Formula.php) has been saved to [docs/ConvertingBetweenJulianDatesAndGregorianCalendarDates.pdf](docs/ConvertingBetweenJulianDatesAndGregorianCalendarDates.pdf) as some people have had difficulty accessing this page. <br /> <br /> Enjoy! (c) BokkyPooBah / Bok Consulting Pty Ltd - Feb 17 2019. The MIT Licence.
33.242358
677
0.714647
eng_Latn
0.799403
e5c7fabfe9b724c8ef06a0af5656790af7a85b06
254
md
Markdown
src/components/compDefinition/compDefinition.md
t3kt/raytk
e0e2b3643b2f536d597c5db64f02d17f7e8f23ac
[ "CC-BY-4.0" ]
108
2020-11-23T01:22:37.000Z
2022-03-29T09:27:32.000Z
src/components/compDefinition/compDefinition.md
t3kt/raytk
e0e2b3643b2f536d597c5db64f02d17f7e8f23ac
[ "CC-BY-4.0" ]
794
2020-11-21T22:27:37.000Z
2022-03-24T06:41:19.000Z
src/components/compDefinition/compDefinition.md
t3kt/raytk
e0e2b3643b2f536d597c5db64f02d17f7e8f23ac
[ "CC-BY-4.0" ]
3
2021-06-19T00:57:54.000Z
2021-11-01T11:55:07.000Z
# compDefinition

Generates the Definition for an RComp (a component treated similarly to a ROP). This is equivalent to `opDefinition` for ROPs. It defines the properties and metadata of the component, which are used by toolkit infrastructure and tools.
42.333333
108
0.807087
eng_Latn
0.999722
e5c81eab63974f1ad1cbec041463f3f90e2ccf20
330
md
Markdown
README.md
J0F3/Todo-List-App
8269bf6302161e0b557caa4eed77a4b7e2d8914f
[ "MIT" ]
null
null
null
README.md
J0F3/Todo-List-App
8269bf6302161e0b557caa4eed77a4b7e2d8914f
[ "MIT" ]
null
null
null
README.md
J0F3/Todo-List-App
8269bf6302161e0b557caa4eed77a4b7e2d8914f
[ "MIT" ]
null
null
null
# Simple ToDo List App

This is a sample .NET Core web application using a SQL Database to manage a simple to-do list.

See also: [Build an ASP.NET Core and SQL Database app in Azure App Service](https://docs.microsoft.com/en-us/azure/app-service/app-service-web-tutorial-dotnetcore-sqldb).

## License

See [LICENSE](LICENSE.md).
33
170
0.757576
kor_Hang
0.433773
e5c83932d89aade32d37830e46e22fc0e4018f1d
219
md
Markdown
README.md
thadeutriani/HTML-CSS
a79a32827f2dd991dde0066e6b1efd56fe7796ae
[ "MIT" ]
1
2021-11-12T18:28:44.000Z
2021-11-12T18:28:44.000Z
README.md
thadeutriani/HTML-CSS
a79a32827f2dd991dde0066e6b1efd56fe7796ae
[ "MIT" ]
null
null
null
README.md
thadeutriani/HTML-CSS
a79a32827f2dd991dde0066e6b1efd56fe7796ae
[ "MIT" ]
null
null
null
# HTML-CSS

HTML 5 and CSS 3 course.

I am learning to build websites and will now manage my repositories.

<a href="https://thadeutriani.github.io/HTML-CSS/Exerc%C3%ADcios/Ex001/index.html">Run exercise 001</a>
36.5
112
0.767123
por_Latn
0.90674
e5c86c94c67b308de834d98ab08e3253209c493a
26
md
Markdown
README.md
katai5plate/rpg2k-scripting-assistor
d31c784c6902db8c1ded553452f0f582492de0c2
[ "MIT" ]
null
null
null
README.md
katai5plate/rpg2k-scripting-assistor
d31c784c6902db8c1ded553452f0f582492de0c2
[ "MIT" ]
null
null
null
README.md
katai5plate/rpg2k-scripting-assistor
d31c784c6902db8c1ded553452f0f582492de0c2
[ "MIT" ]
null
null
null
# rpg2k-scripting-assistor
26
26
0.846154
nld_Latn
0.226043
e5ca96b551f468db52c1285c0e4924fb133cd607
630
md
Markdown
src/runtime/README.md
war408705279/vulcan
57c88c908b72ce9eb19acd00199da84e2e1f7553
[ "MIT" ]
5
2020-08-23T20:08:10.000Z
2021-05-02T23:43:57.000Z
src/runtime/README.md
war408705279/vulcan
57c88c908b72ce9eb19acd00199da84e2e1f7553
[ "MIT" ]
1
2021-04-01T06:03:27.000Z
2021-05-05T14:00:59.000Z
src/runtime/README.md
war408705279/vulcan
57c88c908b72ce9eb19acd00199da84e2e1f7553
[ "MIT" ]
null
null
null
# runtime

Some components are built as thin wrappers around the components provided by WeChat Mini Program / Alipay Mini Program / etc.

Using the capabilities provided by `remax`, different file-name suffixes can be used to distinguish the components exposed for each platform at compile or run time (based on the platform variable following the `dev || run` command). [Reference](https://remaxjs.org/guide/one#%E4%BD%BF%E7%94%A8%E6%96%87%E4%BB%B6%E5%90%8D%E5%90%8E%E7%BC%80%E5%8C%BA%E5%88%86%E4%B8%8D%E5%90%8C%E5%B9%B3%E5%8F%B0%E4%BB%A3%E7%A0%81)

### How to expose a unified component

Taking the `checkbox` component as an example, the directory structure should be:

```shell
./src/runtime
├── checkbox            # base component
├   ├── index.tsx       # WeChat Mini Program checkbox
├   ├── index.ali.tsx   # Alipay Mini Program checkbox
├   └── ...             # checkbox for other platforms
└── index.ts            # exposes the unified checkbox component
```

**Note: by default `index.tsx` corresponds to the WeChat Mini Program component, and the Alipay Mini Program version is named using the `index.ali.tsx` convention.**
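The suffix-based selection described above can be illustrated with a small sketch. This is not remax's actual resolver (the function name and logic here are invented for illustration); it only shows the idea that a platform-suffixed file wins over the default one:

```javascript
// Illustrative only: pick a component file per platform,
// preferring `index.<platform>.tsx` over the default `index.tsx`.
function resolveComponentFile(files, platform) {
  const platformFile = `index.${platform}.tsx`;
  return files.includes(platformFile) ? platformFile : 'index.tsx';
}

// resolveComponentFile(['index.tsx', 'index.ali.tsx'], 'ali')    -> 'index.ali.tsx'
// resolveComponentFile(['index.tsx', 'index.ali.tsx'], 'wechat') -> 'index.tsx'
```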
30
248
0.657143
yue_Hant
0.376183
e5cb03f4acf70d439b054bdac7cb0b08c12a4efb
7,701
md
Markdown
docs/api_coverage.md
posixpascal/puppeteer-ruby
4618d5a01f04a363eacf446d3c863f6bd159b034
[ "Apache-2.0" ]
null
null
null
docs/api_coverage.md
posixpascal/puppeteer-ruby
4618d5a01f04a363eacf446d3c863f6bd159b034
[ "Apache-2.0" ]
null
null
null
docs/api_coverage.md
posixpascal/puppeteer-ruby
4618d5a01f04a363eacf446d3c863f6bd159b034
[ "Apache-2.0" ]
null
null
null
# API coverages - Puppeteer version: v13.0.1 - puppeteer-ruby version: 0.40.1 ## Puppeteer * ~~clearCustomQueryHandlers~~ * connect * ~~createBrowserFetcher~~ * ~~customQueryHandlerNames~~ * defaultArgs => `#default_args` * devices * ~~errors~~ * executablePath => `#executable_path` * launch * networkConditions => `#network_conditions` * product * ~~registerCustomQueryHandler~~ * ~~unregisterCustomQueryHandler~~ ## ~~BrowserFetcher~~ * ~~canDownload~~ * ~~download~~ * ~~host~~ * ~~localRevisions~~ * ~~platform~~ * ~~product~~ * ~~remove~~ * ~~revisionInfo~~ ## Browser * browserContexts => `#browser_contexts` * close * createIncognitoBrowserContext => `#create_incognito_browser_context` * defaultBrowserContext => `#default_browser_context` * disconnect * isConnected => `#connected?` * newPage => `#new_page` * pages * process * target * targets * userAgent => `#user_agent` * version * waitForTarget => `#wait_for_target` * wsEndpoint => `#ws_endpoint` ## BrowserContext * browser * clearPermissionOverrides => `#clear_permission_overrides` * close * isIncognito => `#incognito?` * newPage => `#new_page` * overridePermissions => `#override_permissions` * pages * targets * waitForTarget => `#wait_for_target` ## Page * $ => `#query_selector` * $$ => `#query_selector_all` * $$eval => `#eval_on_selector_all` * $eval => `#eval_on_selector` * $x => `#Sx` * accessibility * addScriptTag => `#add_script_tag` * addStyleTag => `#add_style_tag` * authenticate * bringToFront => `#bring_to_front` * browser * browserContext => `#browser_context` * click * close * content * cookies * coverage * createPDFStream => `#create_pdf_stream` * deleteCookie => `#delete_cookie` * emulate * emulateCPUThrottling => `#emulate_cpu_throttling` * emulateIdleState => `#emulate_idle_state` * emulateMediaFeatures => `#emulate_media_features` * emulateMediaType => `#emulate_media_type` * emulateNetworkConditions => `#emulate_network_conditions` * emulateTimezone => `#emulate_timezone` * 
emulateVisionDeficiency => `#emulate_vision_deficiency` * evaluate * evaluateHandle => `#evaluate_handle` * evaluateOnNewDocument => `#evaluate_on_new_document` * exposeFunction => `#expose_function` * focus * frames * goBack => `#go_back` * goForward => `#go_forward` * goto * hover * isClosed => `#closed?` * isDragInterceptionEnabled => `#drag_interception_enabled?` * isJavaScriptEnabled => `#javascript_enabled?` * keyboard * mainFrame => `#main_frame` * metrics * mouse * pdf * queryObjects => `#query_objects` * reload * screenshot * select * setBypassCSP => `#bypass_csp=` * setCacheEnabled => `#cache_enabled=` * setContent => `#content=` * setCookie => `#set_cookie` * setDefaultNavigationTimeout => `#default_navigation_timeout=` * setDefaultTimeout => `#default_timeout=` * ~~setDragInterception~~ * setExtraHTTPHeaders => `#extra_http_headers=` * setGeolocation => `#geolocation=` * setJavaScriptEnabled => `#javascript_enabled=` * setOfflineMode => `#offline_mode=` * setRequestInterception => `#request_interception=` * setUserAgent => `#user_agent=` * setViewport => `#viewport=` * tap * target * title * ~~touchscreen~~ * tracing * type => `#type_text` * url * viewport * ~~waitFor~~ * waitForFileChooser => `#wait_for_file_chooser` * waitForFrame => `#wait_for_frame` * waitForFunction => `#wait_for_function` * waitForNavigation => `#wait_for_navigation` * ~~waitForNetworkIdle~~ * waitForRequest => `#wait_for_request` * waitForResponse => `#wait_for_response` * waitForSelector => `#wait_for_selector` * waitForTimeout => `#wait_for_timeout` * waitForXPath => `#wait_for_xpath` * workers ## ~~WebWorker~~ * ~~evaluate~~ * ~~evaluateHandle~~ * ~~executionContext~~ * ~~url~~ ## ~~Accessibility~~ * ~~snapshot~~ ## Keyboard * down * press * sendCharacter => `#send_character` * type => `#type_text` * up ## Mouse * click * down * drag * dragAndDrop => `#drag_and_drop` * dragEnter => `#drag_enter` * dragOver => `#drag_over` * drop * move * up * wheel ## ~~Touchscreen~~ * ~~tap~~ 
## Tracing * start * stop ## FileChooser * accept * cancel * isMultiple => `#multiple?` ## Dialog * accept * defaultValue => `#default_value` * dismiss * message * type ## ConsoleMessage * args * location * ~~stackTrace~~ * text * ~~type~~ ## Frame * $ => `#query_selector` * $$ => `#query_selector_all` * $$eval => `#eval_on_selector_all` * $eval => `#eval_on_selector` * $x => `#Sx` * addScriptTag => `#add_script_tag` * addStyleTag => `#add_style_tag` * childFrames => `#child_frames` * click * content * evaluate * evaluateHandle => `#evaluate_handle` * executionContext => `#execution_context` * focus * goto * hover * isDetached => `#detached?` * isOOPFrame => `#oop_frame?` * name * parentFrame => `#parent_frame` * select * setContent => `#set_content` * tap * title * type => `#type_text` * url * ~~waitFor~~ * waitForFunction => `#wait_for_function` * waitForNavigation => `#wait_for_navigation` * waitForSelector => `#wait_for_selector` * waitForTimeout => `#wait_for_timeout` * waitForXPath => `#wait_for_xpath` ## ExecutionContext * evaluate * evaluateHandle => `#evaluate_handle` * frame * ~~queryObjects~~ ## JSHandle * asElement => `#as_element` * dispose * evaluate * evaluateHandle => `#evaluate_handle` * executionContext => `#execution_context` * getProperties => `#properties` * getProperty => `#[]` * jsonValue => `#json_value` ## ElementHandle * $ => `#query_selector` * $$ => `#query_selector_all` * $$eval => `#eval_on_selector_all` * $eval => `#eval_on_selector` * $x => `#Sx` * asElement => `#as_element` * boundingBox => `#bounding_box` * boxModel => `#box_model` * click * clickablePoint => `#clickable_point` * contentFrame => `#content_frame` * dispose * drag * dragAndDrop => `#drag_and_drop` * dragEnter => `#drag_enter` * dragOver => `#drag_over` * drop * evaluate * evaluateHandle => `#evaluate_handle` * executionContext => `#execution_context` * focus * getProperties => `#properties` * getProperty => `#[]` * hover * isIntersectingViewport => 
`#intersecting_viewport?` * jsonValue => `#json_value` * press * screenshot * select * tap * ~~toString~~ * type => `#type_text` * uploadFile => `#upload_file` * waitForSelector => `#wait_for_selector` ## HTTPRequest * abort * abortErrorReason => `#abort_error_reason` * continue * continueRequestOverrides => `#continue_request_overrides` * enqueueInterceptAction => `#enqueue_intercept_action` * failure * finalizeInterceptions => `#finalize_interceptions` * frame * headers * initiator * ~~interceptResolutionState~~ * ~~isInterceptResolutionHandled~~ * isNavigationRequest => `#navigation_request?` * method * postData => `#post_data` * redirectChain => `#redirect_chain` * resourceType => `#resource_type` * respond * response * responseForRequest => `#response_for_request` * url ## HTTPResponse * buffer * frame * ~~fromCache~~ * ~~fromServiceWorker~~ * headers * json * ~~ok~~ * remoteAddress => `#remote_address` * request * securityDetails => `#security_details` * status * statusText => `#status_text` * text * url ## ~~SecurityDetails~~ * ~~issuer~~ * ~~protocol~~ * ~~subjectAlternativeNames~~ * ~~subjectName~~ * ~~validFrom~~ * ~~validTo~~ ## Target * browser * browserContext => `#browser_context` * createCDPSession => `#create_cdp_session` * opener * page * type * url * ~~worker~~ ## CDPSession * connection * detach * id * send ## Coverage * startCSSCoverage => `#start_css_coverage` * startJSCoverage => `#start_js_coverage` * stopCSSCoverage => `#stop_css_coverage` * stopJSCoverage => `#stop_js_coverage` ## TimeoutError ## ~~EventEmitter~~ * ~~addListener~~ * ~~emit~~ * ~~listenerCount~~ * ~~off~~ * ~~on~~ * ~~once~~ * ~~removeAllListeners~~ * ~~removeListener~~
19.796915
70
0.694585
yue_Hant
0.294096
e5cb5c1a87e73e9f0928a4741b03ff7a56958668
2,098
md
Markdown
docs/extensibility/keybindings-element.md
hahaysh/visualstudio-docs.ko-kr
38fd1d7bd27067ebbb756f79879e7d9012e19ba2
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/extensibility/keybindings-element.md
hahaysh/visualstudio-docs.ko-kr
38fd1d7bd27067ebbb756f79879e7d9012e19ba2
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/extensibility/keybindings-element.md
hahaysh/visualstudio-docs.ko-kr
38fd1d7bd27067ebbb756f79879e7d9012e19ba2
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: KeyBindings 요소 | Microsoft Docs ms.custom: '' ms.date: 11/04/2016 ms.technology: - vs-ide-sdk ms.topic: conceptual f1_keywords: - KeyBindings helpviewer_keywords: - VSCT XML schema elements, KeyBindings - KeyBindings element (VSCT XML schema) ms.assetid: 26a15d5c-ddea-4977-af7f-d795ff09c7ad author: gregvanl ms.author: gregvanl manager: douge ms.workload: - vssdk ms.openlocfilehash: 91a5fd99216e712e567d4543f3f29dc2b6b21aa1 ms.sourcegitcommit: 6a9d5bd75e50947659fd6c837111a6a547884e2a ms.translationtype: MT ms.contentlocale: ko-KR ms.lasthandoff: 04/16/2018 --- # <a name="keybindings-element"></a>KeyBindings 요소 키 바인딩 요소와 다른 KeyBindings 그룹화 키 바인딩 요소를 그룹화합니다. ## <a name="syntax"></a>구문 ``` <KeyBindings> <KeyBinding>... </KeyBinding> <KeyBinding>... </KeyBinding> </KeyBindings> ``` ## <a name="attributes-and-elements"></a>특성 및 요소 다음 섹션에서는 특성, 자식 요소 및 부모 요소에 대해 설명합니다. ### <a name="attributes"></a>특성 |특성|설명| |---------------|-----------------| |조건|선택 사항입니다. 참조 [조건부 특성](../extensibility/vsct-xml-schema-conditional-attributes.md)합니다.| ### <a name="child-elements"></a>자식 요소 |요소|설명| |-------------|-----------------| |[KeyBinding 요소](../extensibility/keybinding-element.md)|명령에 대 한 바로 가기 키를 지정합니다.| |[KeyBindings](../extensibility/keybindings-element.md)|그룹 키 바인딩 요소와 다른 KeyBindings 그룹화 합니다.| ### <a name="parent-elements"></a>부모 요소 |요소|설명| |-------------|-----------------| |[CommandTable 요소](../extensibility/commandtable-element.md)|명령을 나타내는 모든 요소를 정의 합니다.| ## <a name="example"></a>예제 ``` <KeyBindings> <KeyBinding guid="guidWidgetPackage" id="cmdidUpdateWidget" editor="guidWidgetEditor" key1="VK_F5"/> <KeyBinding guid="guidWidgetPackage" id="cmdidRunWidget" editor="guidWidgetEditor" key1="VK_F5" mod1="Control"/> </KeyBindings> ``` ## <a name="see-also"></a>참고 항목 [KeyBinding 요소](../extensibility/keybinding-element.md) [Visual Studio 명령 테이블(.Vsct) 파일](../extensibility/internals/visual-studio-command-table-dot-vsct-files.md)
29.138889
107
0.65062
kor_Hang
0.659909
e5cd6280bee3d3b2d08a3581aa1496211ea18d3d
2,471
markdown
Markdown
_posts/2017-10-12-rails_portfolio_project_-_freelancelot.markdown
SarahCyrDesign/SarahCyrDesign.github.io
5e5a4076e4ba88a73badcf3046c6cdb2989fdf52
[ "MIT" ]
null
null
null
_posts/2017-10-12-rails_portfolio_project_-_freelancelot.markdown
SarahCyrDesign/SarahCyrDesign.github.io
5e5a4076e4ba88a73badcf3046c6cdb2989fdf52
[ "MIT" ]
null
null
null
_posts/2017-10-12-rails_portfolio_project_-_freelancelot.markdown
SarahCyrDesign/SarahCyrDesign.github.io
5e5a4076e4ba88a73badcf3046c6cdb2989fdf52
[ "MIT" ]
null
null
null
--- layout: post title: "Rails Portfolio Project - Freelancelot" date: 2017-10-12 11:57:49 -0400 permalink: rails_portfolio_project_-_freelancelot --- When approaching this project differently from the Sinatra app, I wanted to create something that was relevant to my background in graphic design and my life in general as a freelancer. I asked myself "Would I use this? Is this helpful to other users?" At this point in the project I can finally answer these questions with a resounding "Yes!" As with the Sinatra project, this process has been a fantastic learning experience. It also brought out things about myself that I needed to improve upon, which meant going back to Ruby basics and brushing up on my foundations as the principles continue to be applied here. An ongoing problem I seem to have is object scope. I tend to lose track of where class variables end up throughout the models, controllers, and views. By going through this process, I am confident now that I can learn to hone this in as time goes on. So what kind of features can you expect from Freelancelot? ![](https://img.memecdn.com/hey-freelance-artists_o_825620.gif) * Choose from 2 Roles - Freelancer or Client * Freelancers can create projects with many attributes and features such as: 1. Title and Description 2. Client/Company Name 3. Ticket Number 4. Start Date and Deadline 5. Budget 6. Time log (hours spent on project can be updated) 7. Status of project ("Received", "In Progress" or "Completed") 8. Invoice sent to client? (Defaults to False) * Freelancers have their own dashboard, which keeps track of their progress and alerts them if deadlines of their projects are a week away or overdue. 
* Clients can view the status of their project from their freelancer * Clients can utilize the search feature to enter their unique ticket number to instantly view their specified project from their freelancer * Clients can view all projects by their category The gems utilized in this project include Devise (Aids in User Registrations and Sign-in), Omniauth (Users can log in from 3rd party sites such as Facebook in this case) and Pundit (An authorization gem allowing me to assign roles and easily delegate which type of users can use CRUD actions throughout the app). Keep a look out for new features and the Rails/JS front-end transformation by following the repo: https://github.com/SarahCyrDesign/freelancelot
60.268293
518
0.771752
eng_Latn
0.999344
e5ced490bbc922835481afacc78d6283368379a0
3,977
md
Markdown
sdk-api-src/content/xenroll/nf-xenroll-ienroll-get_deleterequestcert.md
amorilio/sdk-api
54ef418912715bd7df39c2561fbc3d1dcef37d7e
[ "CC-BY-4.0", "MIT" ]
null
null
null
sdk-api-src/content/xenroll/nf-xenroll-ienroll-get_deleterequestcert.md
amorilio/sdk-api
54ef418912715bd7df39c2561fbc3d1dcef37d7e
[ "CC-BY-4.0", "MIT" ]
null
null
null
sdk-api-src/content/xenroll/nf-xenroll-ienroll-get_deleterequestcert.md
amorilio/sdk-api
54ef418912715bd7df39c2561fbc3d1dcef37d7e
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- UID: NF:xenroll.IEnroll.get_DeleteRequestCert title: IEnroll::get_DeleteRequestCert (xenroll.h) description: The DeleteRequestCert property of IEnroll4 sets or retrieves a Boolean value that determines whether dummy certificates in the request store are deleted. helpviewer_keywords: ["DeleteRequestCert property [Security]","DeleteRequestCert property [Security]","IEnroll interface","IEnroll interface [Security]","DeleteRequestCert property","IEnroll.DeleteRequestCert","IEnroll.get_DeleteRequestCert","IEnroll::DeleteRequestCert","IEnroll::get_DeleteRequestCert","IEnroll::put_DeleteRequestCert","get_DeleteRequestCert","security.ienroll4_deleterequestcert","xenroll/IEnroll::DeleteRequestCert","xenroll/IEnroll::get_DeleteRequestCert","xenroll/IEnroll::put_DeleteRequestCert"] old-location: security\ienroll4_deleterequestcert.htm tech.root: security ms.assetid: 54b85347-cdc1-42e3-bc26-0b50bd58131a ms.date: 12/05/2018 ms.keywords: DeleteRequestCert property [Security], DeleteRequestCert property [Security],IEnroll interface, IEnroll interface [Security],DeleteRequestCert property, IEnroll.DeleteRequestCert, IEnroll.get_DeleteRequestCert, IEnroll::DeleteRequestCert, IEnroll::get_DeleteRequestCert, IEnroll::put_DeleteRequestCert, get_DeleteRequestCert, security.ienroll4_deleterequestcert, xenroll/IEnroll::DeleteRequestCert, xenroll/IEnroll::get_DeleteRequestCert, xenroll/IEnroll::put_DeleteRequestCert req.header: xenroll.h req.include-header: req.target-type: Windows req.target-min-winverclnt: Windows XP [desktop apps only] req.target-min-winversvr: Windows Server 2003 [desktop apps only] req.kmdf-ver: req.umdf-ver: req.ddi-compliance: req.unicode-ansi: req.idl: req.max-support: req.namespace: req.assembly: req.type-library: req.lib: Uuid.lib req.dll: Xenroll.dll req.irql: targetos: Windows req.typenames: req.redist: ms.custom: 19H1 f1_keywords: - IEnroll::get_DeleteRequestCert - xenroll/IEnroll::get_DeleteRequestCert dev_langs: - c++ topic_type: - APIRef - kbSyntax 
api_type: - COM api_location: - Xenroll.dll api_name: - IEnroll.DeleteRequestCert - IEnroll.get_DeleteRequestCert - IEnroll.put_DeleteRequestCert --- # IEnroll::get_DeleteRequestCert ## -description <p class="CCE_Message">[This property is no longer available for use as of Windows Server 2008 and Windows Vista.] The <b>DeleteRequestCert</b> property sets or retrieves a Boolean value that determines whether dummy certificates in the request store are deleted. Dummy certificates are created for the purpose of persisting the keys generated for the PKCS #10 request during the enrollment process. The store specified by the <a href="/windows/desktop/api/xenroll/nf-xenroll-ienroll-get_requeststorenamewstr">RequestStoreNameWStr</a> property is where the dummy certificate is created. The newly generated keys are added as properties to the dummy certificate to persist them until a <a href="/windows/desktop/SecGloss/c-gly">certification authority</a> processes the request and responds with a PKCS #7. On acceptance of the PKCS #7, the dummy certificate is removed and the keys are added as properties of the issued certificate returned by the certification authority. For debugging and testing, it is often desirable to not delete the dummy certificate. Setting <b>DeleteRequestCert</b> to false prevents its deletion. The default value for this property is true. This property was first defined in the <a href="/windows/desktop/api/xenroll/nn-xenroll-ienroll">IEnroll</a> interface. This property is read/write. ## -parameters ## -remarks The <b>DeleteRequestCert</b> property affects the behavior of the following methods: <ul> <li> <a href="/windows/desktop/api/xenroll/nf-xenroll-ienroll-acceptpkcs7blob">acceptPKCS7Blob</a> </li> <li> <a href="/windows/desktop/api/xenroll/nf-xenroll-ienroll-acceptfilepkcs7wstr">acceptFilePKCS7WStr</a> </li> </ul> ## -see-also <a href="/windows/desktop/api/xenroll/nn-xenroll-ienroll4">IEnroll</a>
47.915663
696
0.806135
eng_Latn
0.611957
e5cee5166fc6bdbce78f124c5c5ac66e67a0e92c
1,343
md
Markdown
_posts/2021-01-06-indroducing-the-sorry-cypress-helm-chart.md
tico24/tico24.github.io
5e38399f9d6235f376e6a07d6db934130f17959d
[ "MIT" ]
null
null
null
_posts/2021-01-06-indroducing-the-sorry-cypress-helm-chart.md
tico24/tico24.github.io
5e38399f9d6235f376e6a07d6db934130f17959d
[ "MIT" ]
null
null
null
_posts/2021-01-06-indroducing-the-sorry-cypress-helm-chart.md
tico24/tico24.github.io
5e38399f9d6235f376e6a07d6db934130f17959d
[ "MIT" ]
null
null
null
--- published: true --- Yes, it's another Sorry Cypress post, sorry! Ever since [I posted information on how to deploy Sorry Cypress to Kubernetes](https://crumbhole.com/playing-with-sorry-cypress-and-kubernetes/), there's been quite a lot of noise from people asking for a helm chart version. So I finally found some time to make one. # tl;dr The chart [can be found here](https://github.com/sorry-cypress/charts) ## Installing Install the chart using: ```bash $ helm repo add sorry-cypress https://sorry-cypress.github.io/charts $ helm install my-release sorry-cypress/sorry-cypress ``` ## Upgrading Upgrade the chart deployment using: ```bash $ helm upgrade my-release sorry-cypress/sorry-cypress ``` ## Uninstalling Uninstall the my-release deployment using: ```bash $ helm uninstall my-release ``` # More words By default, the chart deploys everything to Kubernetes (as you'd expect), but there's no persistence; this is so that you can just get up and running as quickly as possible. By default, the director uses the in-memory executionDriver and the dummy screenshotsDriver. However, you can choose to enable a persistent mongo database for execution, and you can use S3 for screenshots. Do have a play, and if you have any feedback, it's probably best to [open an issue](https://github.com/sorry-cypress/charts/issues).
31.97619
266
0.761727
eng_Latn
0.981177
e5cf76fb870f9195a5a347c1b5ec19d2adea9cb0
1,682
md
Markdown
index.md
katrinleinweber/DIBSI-instructor-training
1f74ac2bac7421f5ff2ef2c6555b8caae1328fb0
[ "CC-BY-4.0" ]
null
null
null
index.md
katrinleinweber/DIBSI-instructor-training
1f74ac2bac7421f5ff2ef2c6555b8caae1328fb0
[ "CC-BY-4.0" ]
null
null
null
index.md
katrinleinweber/DIBSI-instructor-training
1f74ac2bac7421f5ff2ef2c6555b8caae1328fb0
[ "CC-BY-4.0" ]
2
2017-06-07T18:52:43.000Z
2018-10-27T14:55:44.000Z
--- layout: lesson root: . --- Over the last hundred years, researchers have discovered an enormous amount about how people learn and how best to teach them. Unfortunately, much of that knowledge has not yet been translated into common classroom practice, while many myths about education have proven remarkably persistent. The class will be hands-on throughout: short lessons will alternate with individual and group practical exercises, including practice teaching sessions. Those who complete the course and [some short follow-up exercises online]({{ page.root }}/checkout/) will be certified to teach [Software Carpentry]({{ site.swc_site }}) and/or [Data Carpentry]({{ site.dc_site }}). *Please fill in [this form][application-form] if you wish to receive Carpentries certification and enter "DIBSI Instructor Training" as your Group name.* * All participants in this course are required to abide by our [code of conduct][conduct]. * There are no specific prerequisites for this training, but participants will benefit from having been through a Data Carpentry or Software Carpentry workshop so that they are familiar with our teaching techniques. * In particular, participants are *not* required to have any specific programming skills (though of course they should know enough about the subjects of one or more of our lessons to be able to teach them). **These materials are freely available under a [Creative Commons license][license].** [application-form]: https://amy.software-carpentry.org/workshops/request_training/ [conduct]: {{ page.root }}/conduct/ [license]: {{ page.root }}/license/ [issues]: {{ site.github.repository_url }}/issues
42.05
106
0.771106
eng_Latn
0.999051
e5cfae53394fecdeee1e44629e23e08c246c98f3
3,017
md
Markdown
src/ms/2019-03/13/06.md
PrJared/sabbath-school-lessons
94a27f5bcba987a11a698e5e0d4279b81a68bc9a
[ "MIT" ]
68
2016-10-30T23:17:56.000Z
2022-03-27T11:58:16.000Z
src/ms/2019-03/13/06.md
PrJared/sabbath-school-lessons
94a27f5bcba987a11a698e5e0d4279b81a68bc9a
[ "MIT" ]
367
2016-10-21T03:50:22.000Z
2022-03-28T23:35:25.000Z
src/ms/2019-03/13/06.md
PrJared/sabbath-school-lessons
94a27f5bcba987a11a698e5e0d4279b81a68bc9a
[ "MIT" ]
109
2016-08-02T14:32:13.000Z
2022-03-31T10:18:41.000Z
--- title: Sesama Menguatkan untuk Melakukan Pekerjaan Baik date: 26/09/2019 --- Walaupun dengan motivasi-motivasi dan niat–niat yang terbaik, dan yakin bahawa kita berada di pihak Tuhan dan kebaikan, bekerja untuk Tuhan itu kadangkala sukar dan mengecewakan.  Kedukaan dan kesakitan dunia kita adalah sesuatu yang nyata.  Inilah salah satu sebab kita memerlukan komuniti gereja.  Yesus menunjukkan contoh komuniti yang memberi sokongan dengan murid-muridNya.  Dia jarang mengirim umat-umat keluar bersendirian, dan walaupun begitu, mereka akan datang kembali untuk mengongsikan cerita-cerita mereka dan memperbaharui tenaga dan keberanian mereka. `Baca Ibrani 10:23-25.  Ibrani 10:25 adalah ayat yang paling dikenali, lalu apa yang dua ayat seterusnya tambahkan kepada pengertian kita tentang ayat yang terkenal ini?  Apakah beberapa cara untuk menguatkan satu sama lain “menuju kepada kasih dan perbuatan-perbuatan baik”?` Dalam apapun tugas, tujuan, atau projek, sekumpulan orang yang bekerja bersama-sama boleh mencapai lebih daripada semua orang yang bekerja secara individu. Ini mengingatkan kita sekali lagi kepada gambaran gereja itu sebagai tubuh Kristus (lihat Roma 12: 3-6), di mana semua kita masing-masing mempunyai peranan yang berbeza tetapi saling melengkapkan untuk dilakukan.  Apabila masing-masing kita melakukan apa yang kita boleh lakukan dengan sebaik-baiknya, tetapi lakukanlah itu dalam cara yang membolehkan pengaruh kita untuk sama-sama bekerja, kita boleh percaya dengan iman bahawa kehidupan dan pekerjaan kita akan membuat perbezaan untuk selamanya. Walaupun kesudahan-kesudahan itu penting dalam usaha melakukan apa yang benar—hasil-hasil akhirnya adalah tentang orang-orang dan kehidupan mereka—kadang-kadang kita harus mempercayai Tuhan untuk kesudahannya nanti.  
Pada masa-masa bekerja untuk meringankan kemiskinan, untuk melindungi orang yang lemah, untuk membebaskan orang yang tertindas, dan berbicara bagi orang yang tidak bersuara, kita akan melihat hanya sedikit kemajuan. Tetapi kita mempunyai harapan bahawa kita bekerja dengan tujuan yang jauh lebih besar dan tidak dapat dielakkan kerana: "Janganlah kita jemu-jemu berbuat baik, karena apabila sudah datang waktunya, kita akan menuai, jika kita tidak lemah.  Karena itu, selama masih ada kesempatan bagi kita, marilah kita berbuat baik kepada semua orang, tetapi terutama kepada kawan-kawan kita seiman" (Gal 6: 9, 10,lihat juga Ibr 13:16). Itulah sebabnya kita dipanggil untuk memberi galakan--secara literal, untuk memberi inspirasi dengan keberania---satu sama lain. Hidup dengan setia adalah membahagiakan namun sukar pada masa yang sama.  Tuhan yang adil dan Komuniti kita yang adil adalah penyokong kita yang terbesar dan tujuan untuk kita mengundang orang lain untuk turut serta. `Siapakah yang anda kenal atau ketahui yang bekerja dengan tetap untuk mengurangkan penderitaan orang lain?  Bagaimanakah anda boleh menggalakkan orang itu atau kumpulan untuk terus dalam perbuatan baik yang mereka sedang lakukan?`
188.5625
854
0.825986
zsm_Latn
0.912274
e5d1231fd6ce3e4022bf9e05afcca70d8fca5878
3,371
md
Markdown
docs/backend-api/how-to/get-tracker-list.md
clEclE36/navixy-api
d92331b32acd631ac8c87fd1de21d4caa53c9a5f
[ "Apache-2.0" ]
null
null
null
docs/backend-api/how-to/get-tracker-list.md
clEclE36/navixy-api
d92331b32acd631ac8c87fd1de21d4caa53c9a5f
[ "Apache-2.0" ]
null
null
null
docs/backend-api/how-to/get-tracker-list.md
clEclE36/navixy-api
d92331b32acd631ac8c87fd1de21d4caa53c9a5f
[ "Apache-2.0" ]
null
null
null
--- title: Get tracker list description: How to get tracker list and filter results. --- # How to get tracker list Now we [have a hash](./get-session-hash.md) — let's start with essential basics. Navixy has tracking device as a main unit, so most requests would require you to specify one or several tracker ids. You can receive a list of all trackers in user's account with [tracker/list](../resources/tracking/tracker/index.md#list) API request: === "cURL" ```shell curl -X POST '{{ extra.api_example_url }}/tracker/list' \ -H 'Content-Type: application/json' \ -d '{"hash": "a6aa75587e5c59c32d347da438505fc3"}' ``` === "HTTP GET" ``` {{ extra.api_example_url }}/tracker/list?hash=a6aa75587e5c59c32d347da438505fc3 ``` It will return to you ```json { "success": true, "list": [ { "id": 123456, "label": "tracker label", "clone": false, "group_id": 167, "avatar_file_name": "file name", "source": { "id": 234567, "device_id": 9999999988888, "model": "telfmb920", "blocked": false, "tariff_id": 345678, "status_listing_id": null, "creation_date": "2011-09-21", "tariff_end_date": "2016-03-24", "phone": "+71234567890" }, "tag_bindings": [{ "tag_id": 456789, "ordinal": 4 }] }] } ``` * `id` - int. Tracker id aka object_id. * `label` - string. Tracker label. * `clone` - boolean. True if this tracker is clone. * `group_id` - int. Tracker group id, 0 when no group. * `avatar_file_name` - string. Optional. Passed only if present. * `source` - object. * `id` - int. Source id. * `device_id` - string. Device id aka source_imei. * `model` - string. Tracker model name from "models" table. * `blocked` - boolean. True if tracker blocked due to tariff end. * `tariff_id` - int. An id of tracker tariff from "main_tariffs" table. * `status_listing_id` - int. An id of the status listing associated with this tracker, or null. * `creation_date` - date/time. Date when the tracker registered. * `tariff_end_date` - date/time. Date of next tariff prolongation, or null. * `phone` - string. 
Phone of the device. Can be null or empty if the device has no GSM module or uses a bundled SIM whose number is hidden from the user. * `tag_binding` - object. List of attached tags. Appears only for "tracker/list" call. * `tag_id` - int. An id of tag. Must be unique for a tracker. * `ordinal` - int. Number that can be used as ordinal or kind of tag. Must be unique for a tracker. Max value is 5. If an account has a large number of trackers, and you only need certain ones, you can add an optional filter parameter to the request that will only return matching records. This parameter has the following constraints: * labels array size: minimum 1, maximum 1024. * no null items. * no duplicate items. * item length: minimum 1, maximum 60. To get a list of trackers with labels matching the filter, use this API call: === "cURL" ```shell curl -X POST '{{ extra.api_example_url }}/tracker/list' \ -H 'Content-Type: application/json' \ -d '{"hash": "a6aa75587e5c59c32d347da438505fc3", "labels": ["aa", "b"]}' ```
35.484211
149
0.628894
eng_Latn
0.921185
e5d19de66601efbe6a8609887bcacd57a4d2222f
40
md
Markdown
packages/SHC-Requests.package/SHCUserRequest.class/README.md
hpi-swa-teaching/Smalltalkhub-Client
4ada88a36de3d48bbef2faa250db137dc06796f3
[ "MIT" ]
1
2016-01-22T07:14:48.000Z
2016-01-22T07:14:48.000Z
packages/SHC-Requests.package/SHCUserRequest.class/README.md
HPI-SWA-Teaching/Smalltalkhub-Client
4ada88a36de3d48bbef2faa250db137dc06796f3
[ "MIT" ]
null
null
null
packages/SHC-Requests.package/SHCUserRequest.class/README.md
HPI-SWA-Teaching/Smalltalkhub-Client
4ada88a36de3d48bbef2faa250db137dc06796f3
[ "MIT" ]
null
null
null
I query the details for a given username
40
40
0.825
eng_Latn
0.999869
e5d1b63e77d921ba75afe859f4c6271ebd3dd038
769
md
Markdown
README.md
jojo5716/CurrencyFormat
ee5c2e7819545713e23b4f2220248462cf50698b
[ "MIT" ]
5
2016-05-18T15:43:48.000Z
2018-09-20T07:39:18.000Z
README.md
jojo5716/CurrencyFormat
ee5c2e7819545713e23b4f2220248462cf50698b
[ "MIT" ]
null
null
null
README.md
jojo5716/CurrencyFormat
ee5c2e7819545713e23b4f2220248462cf50698b
[ "MIT" ]
null
null
null
GitHub Markup ============= Is a tiny JavaScript library, providing simple and advanced money and currency formatting 1. Each currency is formatted depending on the locale Installation ----------- ```html <script type="text/javascript" src="path/to/file/js/currency-format.js"></script> ``` Usage ----- ```javascript var price = currencies.formatMoney(); // € 0.00 var price = currencies.formatMoney(2000.65, 'EUR'); // € 2,000.65 var price = currencies.formatMoney(2000.65, 'EUR', 'es_ES'); // 2.000,65 € var price = currencies.formatMoney(2000.65, 'EUR', 'en_EN'); // 2.000,65 EUR var price = currencies.formatMoney(2000.65, 'EUR', 'es_ES', 3, ','); // 2,000,650 € var price = currencies.formatMoney(2000.65, 'EUR', 'es_ES', 3, ' ', '.'); // 2 000.650 € ```
23.30303
89
0.654096
eng_Latn
0.358632
e5d1bfe11dc473a0984ce6a236f484811d53e11c
95
md
Markdown
test/cases/attribution.md
rwallace-kabam/blocked-at
0d033ce5eb70fc964dffe367696e5b937d203fb5
[ "MIT" ]
266
2017-08-25T20:20:24.000Z
2022-03-24T10:16:40.000Z
test/cases/attribution.md
rwallace-kabam/blocked-at
0d033ce5eb70fc964dffe367696e5b937d203fb5
[ "MIT" ]
24
2017-08-25T12:11:21.000Z
2020-09-18T15:55:19.000Z
test/cases/attribution.md
rwallace-kabam/blocked-at
0d033ce5eb70fc964dffe367696e5b937d203fb5
[ "MIT" ]
22
2017-09-11T21:51:33.000Z
2021-09-07T09:03:29.000Z
Test cases based on test scripts from https://github.com/AndreasMadsen/dprof by Andreas Madsen
47.5
94
0.821053
eng_Latn
0.379793
e5d224c34c5b81550d1eba4124120e54e088c04b
1,645
md
Markdown
docs/cas-server-documentation/installation/Whitelist-Authentication.md
zhongqbin/cas
73e7dd28768970eb3188424f34abe76902884b45
[ "Apache-2.0" ]
null
null
null
docs/cas-server-documentation/installation/Whitelist-Authentication.md
zhongqbin/cas
73e7dd28768970eb3188424f34abe76902884b45
[ "Apache-2.0" ]
null
null
null
docs/cas-server-documentation/installation/Whitelist-Authentication.md
zhongqbin/cas
73e7dd28768970eb3188424f34abe76902884b45
[ "Apache-2.0" ]
null
null
null
--- layout: default title: CAS - Whitelist Authentication category: Authentication --- # Whitelist Authentication Whitelist authentication components fall into two categories: Those that accept a set of credentials stored directly in the configuration and those that accept a set of credentials from a file resource on the server. ## Configuration Support is enabled by including the following dependency in the WAR overlay: ```xml <dependency> <groupId>org.apereo.cas</groupId> <artifactId>cas-server-support-generic</artifactId> <version>${cas.version}</version> </dependency> ``` To see the relevant list of CAS properties, please [review this guide](../configuration/Configuration-Properties.html#file-whitelist-authentication). ## Example Password File ```bash scott::password bob::password2 ``` ## JSON File The password file may also be specified as a JSON resource instead which allows one to specify additional account details mostly useful for development and basic testing. The outline of the file may be defined as: ```json { "@class" : "java.util.LinkedHashMap", "casuser" : { "@class" : "org.apereo.cas.adaptors.generic.CasUserAccount", "password" : "Mellon", "attributes" : { "@class" : "java.util.LinkedHashMap", "firstName" : ["Apereo"], "lastName" : ["CAS"] }, "status" : "OK", "expirationDate" : "2050-01-01" } } ``` The accepted statuses are `OK`, `LOCKED`, `DISABLED`, `EXPIRED` and `MUST_CHANGE_PASSWORD`. To see the relevant list of CAS properties, please [review this guide](../configuration/Configuration-Properties.html#json-whitelist-authentication).
29.909091
241
0.728267
eng_Latn
0.94085
e5d2d026d6781f574b4cf6cd9f632dfb42981ffb
71
md
Markdown
IBM DB2 LUW Inventory Scripts and Artifacts/readme.md
sheffercool/DataMigrationTeam
04c894842455ea9b41c616556638ea6e8b0bb421
[ "MIT" ]
1
2021-02-10T10:08:16.000Z
2021-02-10T10:08:16.000Z
IBM DB2 LUW Inventory Scripts and Artifacts/readme.md
sheffercool/DataMigrationTeam
04c894842455ea9b41c616556638ea6e8b0bb421
[ "MIT" ]
null
null
null
IBM DB2 LUW Inventory Scripts and Artifacts/readme.md
sheffercool/DataMigrationTeam
04c894842455ea9b41c616556638ea6e8b0bb421
[ "MIT" ]
null
null
null
/***This Artifact belongs to the Data SQL Ninja Engineering Team***/
71
71
0.71831
eng_Latn
0.571855
e5d322e317b96311c64ea5de7169f2a9786e8f10
3,278
md
Markdown
minesweeper/README.md
runhui2010/LeetHub
2cc576529e5d4984b9d2ba6027039b9fa0d0c716
[ "MIT" ]
null
null
null
minesweeper/README.md
runhui2010/LeetHub
2cc576529e5d4984b9d2ba6027039b9fa0d0c716
[ "MIT" ]
null
null
null
minesweeper/README.md
runhui2010/LeetHub
2cc576529e5d4984b9d2ba6027039b9fa0d0c716
[ "MIT" ]
null
null
null
<h2>529. Minesweeper</h2><h3>Medium</h3><hr><div><p>Let's play the minesweeper game (<a href="https://en.wikipedia.org/wiki/Minesweeper_(video_game)" target="_blank">Wikipedia</a>, <a href="http://minesweeperonline.com" target="_blank">online game</a>)!</p> <p>You are given an <code>m x n</code> char matrix <code>board</code> representing the game board where:</p> <ul> <li><code>'M'</code> represents an unrevealed mine,</li> <li><code>'E'</code> represents an unrevealed empty square,</li> <li><code>'B'</code> represents a revealed blank square that has no adjacent mines (i.e., above, below, left, right, and all 4 diagonals),</li> <li>digit (<code>'1'</code> to <code>'8'</code>) represents how many mines are adjacent to this revealed square, and</li> <li><code>'X'</code> represents a revealed mine.</li> </ul> <p>You are also given an integer array <code>click</code> where <code>click = [click<sub>r</sub>, click<sub>c</sub>]</code> represents the next click position among all the unrevealed squares (<code>'M'</code> or <code>'E'</code>).</p> <p>Return <em>the board after revealing this position according to the following rules</em>:</p> <ol> <li>If a mine <code>'M'</code> is revealed, then the game is over. 
You should change it to <code>'X'</code>.</li> <li>If an empty square <code>'E'</code> with no adjacent mines is revealed, then change it to a revealed blank <code>'B'</code> and all of its adjacent unrevealed squares should be revealed recursively.</li> <li>If an empty square <code>'E'</code> with at least one adjacent mine is revealed, then change it to a digit (<code>'1'</code> to <code>'8'</code>) representing the number of adjacent mines.</li> <li>Return the board when no more squares will be revealed.</li> </ol> <p>&nbsp;</p> <p><strong>Example 1:</strong></p> <img src="https://assets.leetcode.com/uploads/2018/10/12/minesweeper_example_1.png" style="width: 500px; max-width: 400px; height: 269px;"> <pre><strong>Input:</strong> board = [["E","E","E","E","E"],["E","E","M","E","E"],["E","E","E","E","E"],["E","E","E","E","E"]], click = [3,0] <strong>Output:</strong> [["B","1","E","1","B"],["B","1","M","1","B"],["B","1","1","1","B"],["B","B","B","B","B"]] </pre> <p><strong>Example 2:</strong></p> <img src="https://assets.leetcode.com/uploads/2018/10/12/minesweeper_example_2.png" style="width: 500px; max-width: 400px; height: 275px;"> <pre><strong>Input:</strong> board = [["B","1","E","1","B"],["B","1","M","1","B"],["B","1","1","1","B"],["B","B","B","B","B"]], click = [1,2] <strong>Output:</strong> [["B","1","E","1","B"],["B","1","X","1","B"],["B","1","1","1","B"],["B","B","B","B","B"]] </pre> <p>&nbsp;</p> <p><strong>Constraints:</strong></p> <ul> <li><code>m == board.length</code></li> <li><code>n == board[i].length</code></li> <li><code>1 &lt;= m, n &lt;= 50</code></li> <li><code>board[i][j]</code> is either <code>'M'</code>, <code>'E'</code>, <code>'B'</code>, or a digit from <code>'1'</code> to <code>'8'</code>.</li> <li><code>click.length == 2</code></li> <li><code>0 &lt;= click<sub>r</sub> &lt; m</code></li> <li><code>0 &lt;= click<sub>c</sub> &lt; n</code></li> <li><code>board[click<sub>r</sub>][click<sub>c</sub>]</code> is either <code>'M'</code> or 
<code>'E'</code>.</li> </ul> </div>
65.56
257
0.620195
eng_Latn
0.715559
e5d339a0d37800fd24eb41747f4dd37f37f95f45
3,117
md
Markdown
_posts/2019-08-18-Download-nissan-power-window-wiring-diagram.md
Anja-Allende/Anja-Allende
4acf09e3f38033a4abc7f31f37c778359d8e1493
[ "MIT" ]
2
2019-02-28T03:47:33.000Z
2020-04-06T07:49:53.000Z
_posts/2019-08-18-Download-nissan-power-window-wiring-diagram.md
Anja-Allende/Anja-Allende
4acf09e3f38033a4abc7f31f37c778359d8e1493
[ "MIT" ]
null
null
null
_posts/2019-08-18-Download-nissan-power-window-wiring-diagram.md
Anja-Allende/Anja-Allende
4acf09e3f38033a4abc7f31f37c778359d8e1493
[ "MIT" ]
null
null
null
--- layout: post comments: true categories: Other --- ## Download Nissan power window wiring diagram book " Then I lay with her that night and there befell what befell between us till the morning, for. "Come on in. ii. 2020LeGuin20-20Tales20From20Earthsea. Then the king sought his son Belehwan, as it should be, "Shoot the bitch. They were so heavy that forty men were Nissan power window wiring diagram sighed. Admittedly, in small numbers and moved faster, needlingly, he could be mistaken for no "Well. This was passed in very open water, now more than one hill away, dear?" she asked. She was tired and stepped out of the bay, How to Have a Healthier Life through Autohypnosis, well. "We need all our wits about us. as magnificent as possible. " "Yes. There were many inquiries for gunpowder, there is no statute of limitations on murder. Please don't look at me like that. It's just not something I know how With a jolt, HAL CLEMENT An absurd thought; nevertheless, notwithstanding the frequent rain showers accompanied Junior suspected Magusson never had any client but himself. for he had memorized tens of thousands of facts about the worst natural place, now. "Worms," said the helmsman, Birch was sending a carter down to Kembermouth with six barrels of ten-year-old Fanian ordered by the wine merchant there. "When will we do it?" "You're a regular little detective. As the stream from the spout diminishes, though slightly pale as if he didn't get out in the sun much, head cocked either left or right, for sure, and Peter Lorre had been put in a blender and then poured into one suit. So I accompanied him to his house] and when I came up [into his sitting-chamber] he locked the door on me and went forth to fetch what we might eat and drink. CHAPTER I? Her bathing cap. Stand with your feet apart and put your gun down. 
" rough but unmistakable lineaments, should have been scorching tunnels of clear dry air through the cold fog, providing a correct answer in as little as twenty vessel. It wasn't much in the way of a home; they were crowded against each other on rough pads made of insulating material. When they nissan power window wiring diagram to the palace, eating and drinking. the breath nissan power window wiring diagram her lungs. I am. to the heart. They must be real. And if it did. Morred was the first man, and threatened to tear off Curtis finds the window latch and slides one pane aside, and he swung forcefully. " He also concluded arrangements to open an account for Gammoner in a Grand canvas flaps like the Reaper's robe. Nissan power window wiring diagram had seen slaves and their masters Old Yeller has not assumed a submissive posture, are distinguished from true icebergs not only for them. the squashed-shag carpet, but I heard. " bottom of the trailer. I'm not like Renee and you. Gabby doesn't need to know what type of experiments Curtis would be subjected More often than not, a widower, he watched Angel as she nissan power window wiring diagram the eyeless boy, number-one ceremonial uniforms will be Worn. " Quoth the prefect, maybe even hard enough to kill her.
346.333333
3,009
0.786975
eng_Latn
0.99985
e5d348a553d2188edb124d912c95d1fcdb95fa53
1,340
md
Markdown
theme/ailment.md
lsieun/EnglishDictionary
5ad881da2d06835d1150e7076955c82be83fa127
[ "MIT" ]
null
null
null
theme/ailment.md
lsieun/EnglishDictionary
5ad881da2d06835d1150e7076955c82be83fa127
[ "MIT" ]
null
null
null
theme/ailment.md
lsieun/EnglishDictionary
5ad881da2d06835d1150e7076955c82be83fa127
[ "MIT" ]
null
null
null
# ailment

*ailment* is used for minor illnesses, while *illness* is used for serious ones.

- allergic: 过敏的 having an allergy to sth
- allergy: 过敏反应 a medical condition that causes you to react badly or feel ill/sick when you eat or touch a particular substance
- ailment: 轻病;小恙 an illness that is not very serious
- cramp: 痛性痉挛;抽筋 a sudden pain that you get when the muscles in a particular part of your body contract, usually caused by cold or too much exercise
- sore: 痛处;伤处;疮 a painful, often red, place on your body where there is a wound or an infection
- stiff: 僵硬的;一动就疼的 when a person is stiff, their muscles hurt when they move them

## Epidemics (流行病)

- contagious: (疾病)接触传染的 a contagious disease spreads by people touching each other
- epidemic: 流行病 a large number of cases of a particular disease happening at the same time in a particular community
- flu: 流行性感冒;流感 an infectious disease like a very bad cold, that causes fever, pains and weakness

## Aches in specific body parts (具体身体部位的疼痛)

### head

- headache: 头痛 a continuous pain in the head

### nose

- sneeze: 打喷嚏 to have air come suddenly and noisily out through your nose and mouth in a way that you cannot control, for example because you have a cold

### tooth

- toothache: 牙痛 a pain in your teeth or in one tooth

### stomach

- stomachache: 胃痛,肚子痛 a pain in the abdominal region, caused by a minor condition such as indigestion or an infection
33.5
153
0.758209
eng_Latn
0.999921
e5d3b4d7371d869c5f4b70c5132627f2e9ff3089
8,353
md
Markdown
_posts/2012-04-08-book-safe.md
axlan/robopenguins-blog
0fca51ba2ebb00ef9c9c25959f69e461c1cb6e62
[ "MIT" ]
null
null
null
_posts/2012-04-08-book-safe.md
axlan/robopenguins-blog
0fca51ba2ebb00ef9c9c25959f69e461c1cb6e62
[ "MIT" ]
3
2020-08-16T23:33:26.000Z
2021-05-28T07:41:59.000Z
_posts/2012-04-08-book-safe.md
axlan/robopenguins-blog
0fca51ba2ebb00ef9c9c25959f69e461c1cb6e62
[ "MIT" ]
null
null
null
---
title: Book Safe
date: 2012-04-08T19:01:02+00:00
author: jon
layout: post
categories:
  - Hardware
  - Personal
image: 2012/04/P1140231-177x300.webp
---

A hollowed out book that works as a keyboard controlled locking safe.

<iframe width="524" height="295" src="https://www.youtube.com/embed/cUY-xZqdRK8" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>

## Introduction:

This project was used to introduce a friend of mine to microcontrollers and circuits. I wanted something that used a variety of microcontroller functionalities and used only parts that I had lying around. I ended up using a PS2 numpad as the key entry mechanism since it was what I happened to see in the thrift store, and it happened to come with the connector it mated to.

## Parts:

* Large unwanted book
* Key input (We used a PS2 numpad)
* Microcontroller of choice (We used an Atmega328p so we could reuse some Arduino libraries. You could use an entire Arduino as well)
* Piezo speaker
* Small servo motor
* Latching mechanism (we got the [Gatehouse Polished Brass Latch](http://www.lowes.com/pd_311957-1277-890269_0__?catalogId=10051) from Lowes)
* Power source (We used a 9V battery, clip, and an LM340 5V regulator with capacitors)
* 10k resistor
* 220uF capacitor
* Power transistor (IRF1405)

## Tools:

* Wood glue
* Coping saw + drill and/or exacto knife
* Soldering iron
* AVR programmer, or a microcontroller with a bootloader (like an Arduino)

## Circuit Design:

Here is the schematic of the circuit we ended up using:

[<img title="schematic" src="{{ site.image_host }}/2012/04/schematic.webp" alt="" width="788" height="563" />]({{ site.image_host }}/2012/04/schematic.png)

I left the decoupling capacitors off without any ill effect, and you could most likely get away without a pull-up resistor on the reset circuit. The 220uF capacitor was a must though.
Without it the servo would cause large dips in the voltage, causing the microcontroller to reset. Originally, I had the microcontroller drive the speaker through a resistor, but this turned out to be too quiet to hear through the book. To fix this I added a MOSFET to allow a larger current than the microcontroller could output directly.

## Software Design:

Writing the code turned out to be extremely easy. The Arduino IDE comes with libraries to handle producing tones and controlling servos. I was also able to find a library for the PS2 keyboard: [http://www.pjrc.com/teensy/td\_libs\_PS2Keyboard.html](http://www.pjrc.com/teensy/td_libs_PS2Keyboard.html). A couple of things to note using these libraries:

* The IRQpin specified for the keyboard corresponds to the external interrupt number of the pin you attach the keyboard's clock signal to. I originally specified the digital pin number, which was incorrect.
* I found that my servo would oscillate around its set point. To prevent this from happening I detached the servo in software when it wasn't in use.

Click here for the full commented code: [Arduino Code]({{ site.image_host }}/2012/04/booklock.zip)

Since I didn't want to actually waste an Arduino on this project, I ended up soldering the components directly to an Atmega328p. I had an AVR programmer so I could have just directly programmed the hex file produced by the Arduino IDE, but I decided to burn on a bootloader to make future updates easier. I followed the guide here <http://arduino.cc/en/Tutorial/ArduinoToBreadboard> using the minimal configuration. Unfortunately, the boards.txt file that site supplies seems to be for an older version of the software. Here is an updated version that I made: [boards.txt]({{ site.image_host }}/2012/04/boards.txt). To use it, add the contents of that file to the end of hardware/arduino/boards.txt in your Arduino IDE folder and restart the IDE.
Then select "ATmega328 on a breadboard (8 MHz internal clock)" as your board.

## Build:

Glue the first inch or so of the book's pages together. This will be the space that will house the electronics, so make it big enough for the motor and microcontroller. Clamps help a lot.

[<img class="size-medium wp-image-89 alignleft" title="Clamped Book" src="{{ site.image_host }}/2012/04/P1130796-288x300.webp" alt="" width="288" height="300" />]({{ site.image_host }}/2012/04/P1130796.jpg)[<img class="alignleft size-medium wp-image-90" title="P1130797" src="{{ site.image_host }}/2012/04/P1130797-300x189.webp" alt="" width="300" height="189" />]({{ site.image_host }}/2012/04/P1130797.jpg)

<br style="clear: both;" />

Once the glue is dry, cut out the inner section of the pages. We left about an inch border. We drilled into the pages so we could thread a coping saw. Cutting through the pages was slow going, and for the second cavity, we ended up using an exacto knife. Each cavity took several hours. A dremel or band saw might make this easier.

[<img class="size-medium wp-image-91 alignleft" title="Cutting" src="{{ site.image_host }}/2012/04/P1140158-264x300.webp" alt="" width="264" height="300" />]({{ site.image_host }}/2012/04/P1140158.jpg)[<img class="size-medium wp-image-92 alignleft" title="First Cavity" src="{{ site.image_host }}/2012/04/P1140162-300x252.webp" alt="" width="300" height="252" />]({{ site.image_host }}/2012/04/P1140162.jpg)

<br style="clear: both;" />

Repeat this process for the remaining bottom pages. This should leave you with the cover, the hollow top pages, the hollow bottom pages, and the back still able to move independently.

The next step we took was to get the electronics working outside of the safe. We initially used an Arduino.
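The full sketch lives in the zip linked above; stripped of the Arduino-specific parts, the heart of it is just a small state machine that buffers keypresses and compares them against a stored combination. Below is a hardware-free C++ sketch of that logic — the combination, buffer size, and function names here are made up for illustration, not taken from the actual booklock code:

```cpp
#include <cassert>
#include <cstring>

// Hypothetical illustration of the lock's entry-checking logic:
// digits are buffered as they arrive from the numpad, and the servo
// only moves once the buffer matches the stored combination.
const char kCombination[] = "4321";   // made-up code for this sketch
const int kCodeLen = sizeof(kCombination) - 1;

struct LockState {
    char buffer[8];        // keypress buffer
    int pos = 0;           // next free slot in the buffer
    bool unlocked = false; // current latch position
};

// Feed one keypress into the state machine; returns true only when
// the combination has just been matched (i.e., time to move the servo).
bool handleKey(LockState& s, char key) {
    if (key == '\n') {                 // Enter: check the buffer
        s.buffer[s.pos] = '\0';
        bool ok = (s.pos == kCodeLen) &&
                  std::strcmp(s.buffer, kCombination) == 0;
        s.pos = 0;                     // reset for the next attempt
        if (ok) s.unlocked = !s.unlocked;  // toggle the latch
        return ok;
    }
    if (s.pos < (int)sizeof(s.buffer) - 1) {
        s.buffer[s.pos++] = key;       // buffer the digit
    }
    return false;
}
```

On the real hardware, the branch that returns `true` is where the servo gets attached, swept to the open/closed angle, and then detached again — detaching is what stopped the oscillation around the set point mentioned earlier.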
[<img class="aligncenter size-medium wp-image-95" title="P1140170" src="{{ site.image_host }}/2012/04/P1140170-300x234.webp" alt="" width="300" height="234" />]({{ site.image_host }}/2012/04/P1140170.jpg)

Once we got the code working, we started on the electrical build. First we added the voltage regulator to the keyboard. We used an LM340 with the capacitors suggested in its datasheet's application notes. We were able to fit the circuit inside the keyboard and soldered it to a 5V and ground pad on the circuit board. We removed the numlock LED and fed the 9V battery connection in through the hole.

[<img class="alignleft size-medium wp-image-93" title="Keyboard Open" src="{{ site.image_host }}/2012/04/P1140165-291x300.webp" alt="" width="291" height="300" />]({{ site.image_host }}/2012/04/P1140165.jpg) [<img class="alignleft size-medium wp-image-94" title="Keyboard Closed" src="{{ site.image_host }}/2012/04/P1140166-300x255.webp" alt="" width="300" height="255" />]({{ site.image_host }}/2012/04/P1140166.jpg)

<br style="clear: both;" />

Next we attached the latch to the servo motor. This connection is probably the weakest point in the project, but we couldn't come up with a better idea than super glue.

<p style="text-align: center;">
  <a href="{{ site.image_host }}/2012/04/P1140173.jpg"><img class="size-medium wp-image-96 aligncenter" title="servo gluing" src="{{ site.image_host }}/2012/04/P1140173-282x300.webp" alt="" width="282" height="300" /></a>
</p>

After that, we soldered together the electronics. We did this "dead bug style," but this should probably be done more robustly with a prototyping board, or a PCB if you want to be super fancy.

[<img class="aligncenter size-medium wp-image-97" title="dead bug" src="{{ site.image_host }}/2012/04/P1140207-300x191.webp" alt="" width="300" height="191" />]({{ site.image_host }}/2012/04/P1140207.jpg)

Afterward we finished the project by gluing everything down and aligning the two halves of the latch.
Here is the finished product.

[<img class="alignleft size-medium wp-image-130" title="P1140231" src="{{ site.image_host }}/2012/04/P1140231-177x300.webp" alt="" width="177" height="300" />]({{ site.image_host }}/2012/04/P1140231.jpg)[<img class="alignleft size-medium wp-image-131" title="P1140232" src="{{ site.image_host }}/2012/04/P1140232-300x235.webp" alt="" width="300" height="235" />]({{ site.image_host }}/2012/04/P1140232.jpg)[<img class="alignleft size-medium wp-image-132" title="P1140233" src="{{ site.image_host }}/2012/04/P1140233-300x287.webp" alt="" width="300" height="287" />]({{ site.image_host }}/2012/04/P1140233.jpg)
97.127907
844
0.748833
eng_Latn
0.981028
e5d3e8c5196897cf92af4831c462ec67b4de78f5
17,596
md
Markdown
docs/deployment/deploying-com-components-with-clickonce.md
Birgos/visualstudio-docs.de-de
64595418a3cea245bd45cd3a39645f6e90cfacc9
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/deployment/deploying-com-components-with-clickonce.md
Birgos/visualstudio-docs.de-de
64595418a3cea245bd45cd3a39645f6e90cfacc9
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/deployment/deploying-com-components-with-clickonce.md
Birgos/visualstudio-docs.de-de
64595418a3cea245bd45cd3a39645f6e90cfacc9
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Deploying COM components with ClickOnce | Microsoft Docs
ms.date: 11/04/2016
ms.topic: conceptual
dev_langs:
- VB
- CSharp
- C++
helpviewer_keywords:
- registration-free COM deployment
- ClickOnce deployment, COM components
- COM components, deploying
- deploying applications [ClickOnce], COM components
- components, deploying
ms.assetid: 1a4c7f4c-7a41-45f2-9af4-8b1666469b89
author: mikejo5000
ms.author: mikejo
manager: jillfra
ms.workload:
- multiple
ms.openlocfilehash: d3be7039995c27990a1c5c55bf173ef06e5fee0c
ms.sourcegitcommit: 2193323efc608118e0ce6f6b2ff532f158245d56
ms.translationtype: MTE95
ms.contentlocale: de-DE
ms.lasthandoff: 01/25/2019
ms.locfileid: "54984703"
---
# <a name="deploy-com-components-with-clickonce"></a>Deploy COM components with ClickOnce
Deploying legacy COM components has traditionally been a difficult task. Components must be registered globally and can therefore cause unwanted side effects between overlapping applications. This is generally not a problem in .NET Framework applications, because components are either completely isolated to an application or are side-by-side compatible. Visual Studio enables you to deploy isolated COM components on Windows XP or later operating systems.

[!INCLUDE[ndptecclick](../deployment/includes/ndptecclick_md.md)] provides an easy and safe way to deploy your .NET applications. However, if your applications use legacy COM components, you must take additional steps to deploy them. This topic describes how to deploy isolated COM components and reference native components (for example, from Visual Basic 6.0 or Visual C++).
For more information about deploying isolated COM components, see [Simplify App Deployment with ClickOnce and Registration-Free COM](https://web.archive.org/web/20050326005413/msdn.microsoft.com/msdnmag/issues/05/04/RegFreeCOM/default.aspx).

## <a name="registration-free-com"></a>Registration-free COM
Registration-free COM is a technology for deploying and activating isolated COM components. It works by taking all of the component's type library and registration information, which would normally be installed in the system registry, and placing it in an XML file called a manifest, which is stored in the same folder as the application.

Isolating a COM component requires it to be registered on the developer's computer, but it does not have to be registered on the end user's computer.

To isolate a COM component, you must set the **Isolated** property of its reference to **True**. By default, this property is set to **False**, which indicates that it should be treated as a registered COM reference. If this property is **True**, a manifest is generated for the component at build time. The corresponding files are also copied into the application folder during installation.

When the manifest generator encounters an isolated COM reference, it enumerates all of the `CoClass` entries in the component's type library, matches each entry with its corresponding registry data, and generates manifest definitions for the COM classes in the type library.
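As a concrete illustration, a generated manifest for a component with a single CoClass might look roughly like the following — the assembly name, ProgID, version, and GUIDs here are invented placeholders, not output from a real build:

```xml
<?xml version="1.0" encoding="utf-8"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <!-- Identity of the isolated COM DLL (all values are placeholders) -->
  <assemblyIdentity name="MyComLib" version="1.0.0.0"
                    processorArchitecture="x86" type="win32" />
  <file name="MyComLib.dll">
    <!-- One comClass entry is emitted per CoClass in the type library -->
    <comClass clsid="{00000000-0000-0000-0000-000000000001}"
              threadingModel="Apartment"
              progid="MyComLib.Class1" />
    <typelib tlbid="{00000000-0000-0000-0000-000000000002}"
             version="1.0" helpdir="" />
  </file>
</assembly>
```

Each `comClass` element carries the registration data (CLSID, threading model, ProgID) that would otherwise live in the system registry.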
## <a name="deploy-registration-free-com-components-using-clickonce"></a>Deploy registration-free COM components using ClickOnce
[!INCLUDE[ndptecclick](../deployment/includes/ndptecclick_md.md)] deployment technology is well suited for deploying isolated COM components, because both [!INCLUDE[ndptecclick](../deployment/includes/ndptecclick_md.md)] and registration-free COM require a component to have a manifest in order to be deployed.

Normally, the author of the component should supply a manifest. If one is not supplied, however, Visual Studio can automatically generate a manifest for a COM component. Manifest generation is performed during the [!INCLUDE[ndptecclick](../deployment/includes/ndptecclick_md.md)] publishing process; for more information, see [Publishing ClickOnce applications](../deployment/publishing-clickonce-applications.md). This feature also enables you to take advantage of legacy components that were created in earlier development environments such as Visual Basic 6.0.

There are two ways of deploying COM components with [!INCLUDE[ndptecclick](../deployment/includes/ndptecclick_md.md)]:

- Use the bootstrapper to deploy the COM components; this works on all supported platforms.

- Use native component isolation deployment (also known as registration-free COM); this works only on a Windows XP or later operating system.
### <a name="example-of-isolating-and-deploying-a-simple-com-component"></a>Example of isolating and deploying a simple COM component
To demonstrate registration-free COM component deployment, this example creates a Windows-based application in Visual Basic that references an isolated native COM component created with Visual Basic 6.0, and deploys it using [!INCLUDE[ndptecclick](../deployment/includes/ndptecclick_md.md)].

First, you must create the native COM component:

##### <a name="to-create-a-native-com-component"></a>To create a native COM component

1. Using Visual Basic 6.0, on the **File** menu, click **New**, then click **Project**.

2. In the **New Project** dialog box, select the **Visual Basic** node and choose an **ActiveX DLL** project. In the **Name** box, type `VB6Hello`.

    > [!NOTE]
    > Only the ActiveX DLL and ActiveX Control project types are supported with registration-free COM. The ActiveX EXE and ActiveX Document project types are not supported.

3. In **Solution Explorer**, double-click **Class1.vb** to open the text editor.

4. In Class1.vb, add the following code after the generated code for the `New` method:

    ```vb
    Public Sub SayHello()
       MsgBox "Message from the VB6Hello COM component"
    End Sub
    ```

5. Build the component. On the **Build** menu, click **Build Solution**.

    > [!NOTE]
    > Registration-free COM supports only the DLL and COM control project types. You cannot use EXE files with registration-free COM.

Now you can create a Windows-based application and add a reference to the COM component.

##### <a name="to-create-a-windows-based-application-using-a-com-component"></a>To create a Windows-based application using a COM component

1.
Using Visual Basic, on the **File** menu, click **New**, then click **Project**.

2. In the **New Project** dialog box, select the **Visual Basic** node and choose **Windows Application**. In the **Name** box, type `RegFreeComDemo`.

3. In **Solution Explorer**, click the **Show All Files** button to display the project references.

4. Right-click the **References** node and choose **Add Reference** from the shortcut menu.

5. In the **Add Reference** dialog box, click the **Browse** tab, navigate to VB6Hello.dll, and select it. A **VB6Hello** reference appears in the reference list.

6. Point to the **Toolbox**, select a **Button** control, and drag it onto the **Form1** form.

7. In the **Properties** window, set the **Text** property to **Hello**.

8. Double-click the button to add event handler code, and add code in the code file so that the handler looks like the following:

    ```vb
    Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button1.Click
       Dim VbObj As New VB6Hello.Class1
       VbObj.SayHello()
    End Sub
    ```

9. Run the application. On the **Debug** menu, click **Start Debugging**.

Next, you must isolate the control. Each COM component that your application uses is represented as a COM reference in your project. These references appear under the **References** node in the **Solution Explorer** window. (Note that you can add references either directly by using the **Add Reference** command on the **Project** menu, or indirectly by dragging an ActiveX control onto a form.)
The following steps show how to isolate the COM component and publish the updated application with the isolated control:

##### <a name="to-isolate-a-com-component"></a>To isolate a COM component

1. In **Solution Explorer**, in the **References** node, select the **VB6Hello** reference.

2. In the **Properties** window, change the value of the **Isolated** property from **False** to **True**.

3. On the **Build** menu, click **Build Solution**.

Now, when you press F5, the application works as expected, but it is now running under registration-free COM. To prove it, unregister the VB6Hello.dll component and run RegFreeComDemo1.exe outside the Visual Studio IDE. This time, when the button is clicked, it still works. If you temporarily rename the application manifest, it fails again.

> [!NOTE]
> You can simulate the absence of a COM component by temporarily unregistering it. Open a command prompt, switch to your System folder by typing `cd /d %windir%\system32`, then unregister the component by typing `regsvr32 /u VB6Hello.dll`. You can register it again by typing `regsvr32 VB6Hello.dll`.

The final step is to publish the application with [!INCLUDE[ndptecclick](../deployment/includes/ndptecclick_md.md)]:

##### <a name="to-publish-an-application-update-with-an-isolated-com-component"></a>To publish an application update with an isolated COM component

1. On the **Build** menu, click **Publish RegFreeComDemo**.

    The Publish Wizard appears.

2. In the Publish Wizard, specify a location on your local computer's disk that you can access, and examine the published files.

3.
Click **Finish** to publish the application.

If you examine the published files, you will notice that the component's manifest file is included. The control is completely isolated to this application, which means that if the end user's computer has a different version of the control used by another application, it will not interfere with this application.

## <a name="reference-native-assemblies"></a>Reference native assemblies
Visual Studio supports references to native Visual Basic 6.0 or C++ assemblies; such references are called native references. You can tell whether a reference is native by checking whether its **File Type** property is set to **Native** or **ActiveX**.

To add a native reference, use the **Add Reference** command and browse to the manifest. Some components place the manifest inside the DLL. In that case, you can simply choose the DLL itself, and Visual Studio will add it as a native reference if it detects that the component contains an embedded manifest. Visual Studio also automatically includes any dependent files or assemblies listed in the manifest, provided they are in the same folder as the referenced component.

COM control isolation makes it easy to deploy COM components that do not yet have manifests. However, if a component comes with a manifest, you can reference the manifest directly. In fact, you should always use the manifest supplied by the component's author whenever possible, instead of using the **Isolated** property.

## <a name="limitations-of-registration-free-com-component-deployment"></a>Limitations of registration-free COM component deployment
Registration-free COM offers clear advantages over traditional deployment techniques.
However, there are some limitations and restrictions that should be noted. The biggest limitation is that it works only on Windows XP or later. The implementation of registration-free COM required changes to the way components are loaded in the core operating system. Unfortunately, there is no compatibility-layer support for registration-free COM.

Not every component is a suitable candidate for registration-free COM. A component is not suitable if any of the following is true:

- The component is an out-of-process server. EXE servers are not supported; only DLLs are supported.

- The component is part of the operating system, or a system component such as XML, Internet Explorer, or Microsoft Data Access Components (MDAC). Follow the component author's redistribution policy; check with the vendor.

- The component is part of an application, such as Microsoft Office. For example, do not try to isolate the Microsoft Excel object model. It is part of Office and can be used only on a computer that has the full Office product installed.

- The component is designed for use as an add-in or a snap-in, for example an Office add-in or a control in a web browser. Such components typically require some registration scheme, defined by the hosting environment, that is beyond the scope of the manifest itself.

- The component manages a physical or virtual device for the system, for example a device driver for a print queue.

- The component is a data-access redistributable. Data applications typically require a separate data-access redistributable to be installed before they can run. Do not try to isolate components such as
the Microsoft ADO Data Control, Microsoft OLE DB, or Microsoft Data Access Components (MDAC). If your application uses MDAC or SQL Server Express, set them as prerequisites instead; see [How to: Install prerequisites with a ClickOnce application](../deployment/how-to-install-prerequisites-with-a-clickonce-application.md).

In some cases, it may be possible for the component developer to rearchitect the component for registration-free COM. If that is not possible, you can still build and publish applications that depend on the standard registration scheme by using the bootstrapper. For more information, see [Creating bootstrapper packages](../deployment/creating-bootstrapper-packages.md).

A COM component can be isolated only once per application. For example, you cannot isolate the same COM component from two different **Class Library** projects that are part of the same application. Doing so produces a warning, and the application will fail to load at run time. To avoid this problem, Microsoft recommends that you encapsulate the COM components in a single class library.

There are several scenarios that require COM registration on the developer's computer, even though deployment of the application requires no registration. The `Isolated` property requires the COM component to be registered on the developer's computer so that the manifest can be generated automatically at build time. There are no registration-capturing facilities that invoke self-registration during the build. Also, any classes that are not explicitly defined in the type library are not reflected in the manifest. When you use a COM component that comes with a pre-existing manifest, such as
a native reference, the component may not need to be registered at development time. However, registration is required if the component is an ActiveX control and you want it to be included in the **Toolbox** and the Windows Forms Designer.

## <a name="see-also"></a>See also
[ClickOnce security and deployment](../deployment/clickonce-security-and-deployment.md)
100.548571
985
0.793362
deu_Latn
0.998039
e5d49bfe131018995142c132f96d2c0c4c5b0837
5,600
md
Markdown
_posts/2019-08-27-Download-video-user-manuals.md
Bunki-booki/29
7d0fb40669bcc2bafd132f0991662dfa9e70545d
[ "MIT" ]
null
null
null
_posts/2019-08-27-Download-video-user-manuals.md
Bunki-booki/29
7d0fb40669bcc2bafd132f0991662dfa9e70545d
[ "MIT" ]
null
null
null
_posts/2019-08-27-Download-video-user-manuals.md
Bunki-booki/29
7d0fb40669bcc2bafd132f0991662dfa9e70545d
[ "MIT" ]
null
null
null
---
layout: post
comments: true
categories: Other
---

## Download Video user manuals book

" species, rougher. Worse, 1977 of feet high, watched Nephrite among the Eskimo, him When the king heard this from his son, and thought about the roots of the trees down in the darkness of the earth, a yellow as pale as Chinese mustard! " Establishing a new identity isn't merely a matter of acquiring a convincing set of ID documents; you aren't convinced his playmates that it is a better toy. "It's the power, but it displayed So I made one. And we'd let them go. " port, to clandestine leading from this space suggest additional rooms beyond, before he could duck? Nevertheless, He'll give her Now he feared [to return to the pot then and there], "Slay me not. What had he done wrong in the last few days. " Crouching beside the boy as he rubbed a brighter shine onto the granite, the cruel ones who hold together and strengthen each other, ignored, between this headland and video user manuals Selenetz Islands into the Chapter 59 enough to win Earl a place in Polly's let-him-vote-but-don't-let-him-run-for- from which the trawl net brought up no animals, "the second question is easy to answer. " convinced by this report that the sea route to China was actually The Scandinavian race first migrated to Finmark and settled there in thinner than a winter-starved crow. The cattleman Alder expected him to stay out in these           In my tears I have a witness; when I call thee to my mind, and we're just living to video user manuals, and well learned. Sitting on the railing of the ship was a sailor splicing a rope. by video user manuals dashing that it contained liquid. "Men who have no art at all, until she threw "Hal, and probably also carbonic acid! He had to be involved unless the laws of went back to Partyland with fifteen hundred dollars in cash, sometimes farther back. She drank the wine, people of my troth.
189 Vernon, and this impotence suggested that video user manuals might never in the middle. with all my little puppies squirming against me, to welcome the Expedition. I had some kin, Colman told himself again. Using a clean rag that they had brought to polish the engraved face of the "I'm video user manuals good there, an ocean coming down; WAITING Video user manuals DR, and approached the Arctic really dead. that, one of the most highly esteemed men of the tribe. "By the time I have heard you out, he has been largely conventional plumb video user manuals. After her binge the previous night, K, for he prefers the to collect a very large number of them, yes, he went to the door. I promise I will. The natives had a video user manuals dogs "Sexual abuse?" Her timidity was only partly due to shyness. Back in the cell room, "My words are nothing, "whatever's equivalent to a cow on their planet, and sailed video user manuals to 75 deg. He was trembling uncontrollably and his teeth chattered. de during her voyage incite to new exploratory expeditions to the sea, who have been driven by foreign and the water between the pieces of drift-ice was covered with a very An affecting but difficult-to-define note in Dr. " species, I don't want any trouble. "First chicken to be come with first egg inside already. A pair of sheriff's deputies had taught him a painful lesson in "respect" in a cell at the town jailhouse, people! ) ] "This woman be to ask me about chickens--" While Jacob had shuffled, whether those -nor cruel, Havnor was better placed for trade and for sending out fleets to protect the Hardic islands against Kargish raids and forays. Yehya ben Khalid and the Poor Man dclvi room. I passed the glass annex. If you will "Precious. Without ceremony or prayer, I would fain drink, during video user manuals lives with her mother and stepfather. Her fingers fought to hold on to the knife, now so cool. 
Although the only light on the back porch came from the pale beams that filtered out through the curtains on the kitchen windows, in the low fields where he spoke "What do you want to learn?" video user manuals the taller woman in her mild voice, Paul made himself useful by assisting Grace with food yourself, video user manuals than Presently, sir. "Nobody but video user manuals dog. Specialists with the scientific-investigation division. He opened "Vomiting. too, except at one crucial point. Pots and pans hanging from a ceiling rack! (One and a half the natural size. Yes, this meticulously arranged by a master mechanic-unless the effect of the jacks was rapidly. " course of these negotiations, sat down at one end of the sofa, "Arise and come down and show us the contract. Brandt, she was also left with a vague vegetation. She stood without moving? lt's okay. "The two of you are Lipscomb women now, the nearness of those searching for him doesn't matter. interview with confidence. Otherwise, flexing his cramped limbs. The mechanism creaks and rasps. At last they pulled themselves "Not that trains are any better. Instead, but she knew the way in the dark, whilst he himself hid in a place where Aboulhusn could not see him. Guilt in fact gave him the power to become his own Pygmalion, please do not use again the expression you video user manuals just uttered, declaring psychologically and physicallyвand yet she had survived, and all the emeralds you could haul up from a video user manuals in a "There is the problem of the motor. But that's a hectic existence, were two small coast rivers which debouch from Yalmal "I have no idea, lance in hand, that the whalers the dark night brings forth the moon!" he actually heard them spoken.
622.222222
5,508
0.785
eng_Latn
0.999923
e5d4d1309d0b805341cc038bd4d088133ac34acb
2,868
md
Markdown
README.md
AmeerSalahaldeen/laravel-parser
5d2b0d517d3677c41fed55790ee859f0a4aefdb2
[ "MIT" ]
null
null
null
README.md
AmeerSalahaldeen/laravel-parser
5d2b0d517d3677c41fed55790ee859f0a4aefdb2
[ "MIT" ]
null
null
null
README.md
AmeerSalahaldeen/laravel-parser
5d2b0d517d3677c41fed55790ee859f0a4aefdb2
[ "MIT" ]
null
null
null
laravel-parser
==============

[![Build Status](https://travis-ci.org/nathanmac/laravel-parser.svg?branch=master)](https://travis-ci.org/nathanmac/laravel-parser)
[![Still Maintained](http://stillmaintained.com/nathanmac/laravel-parser.png)](http://stillmaintained.com/nathanmac/laravel-parser)

> Project no longer maintained; see the [Parser](https://github.com/nathanmac/Parser) project for a replacement.

Simple Format Parser For Laravel 4

Installation
------------

Begin by installing this package through Composer. Edit your project's `composer.json` file to require `Nathanmac/laravel-parser`.

    "require": {
        "nathanmac/laravel-parser": "dev-master"
    }

Next, update Composer from the Terminal:

    composer update

Once this operation completes, the final step is to add the service provider. Open `app/config/app.php`, and add a new item to the providers array.

    'Nathanmac\Parser\ParserServiceProvider'

##### Parsing Functions

```php
Parse::json($payload);          // JSON > Array
Parse::xml($payload);           // XML > Array
Parse::yaml($payload);          // YAML > Array
Parse::querystr($payload);      // Query String > Array
Parse::serialize($payload);     // Serialized Object > Array
```

##### Parse Input/Payload (PUT/POST)

```php
Parse::payload();                      // Auto Detect Type - 'Content Type' HTTP Header
Parse::payload('application/json');    // Specify the content type
```

##### Parse JSON

```php
$parsed = Parse::json('
    {
        "message": {
            "to": "Jack Smith",
            "from": "Jane Doe",
            "subject": "Hello World",
            "body": "Hello, whats going on..."
        }
    }');
```

##### Parse XML

```php
$parsed = Parse::xml('
    <?xml version="1.0" encoding="UTF-8"?>
    <xml>
        <message>
            <to>Jack Smith</to>
            <from>Jane Doe</from>
            <subject>Hello World</subject>
            <body>Hello, whats going on...</body>
        </message>
    </xml>');
```

##### Parse Query String

```php
$parsed = Parse::querystr('to=Jack Smith&from=Jane Doe&subject=Hello World&body=Hello, whats going on...');
```

##### Parse Serialized Object

```php
$parsed = Parse::serialize('a:1:{s:7:"message";a:4:{s:2:"to";s:10:"Jack Smith";s:4:"from";s:8:"Jane Doe";s:7:"subject";s:11:"Hello World";s:4:"body";s:24:"Hello, whats going on...";}}');
```

##### Parse YAML

```php
$parsed = Parse::yaml('
    ---
    message:
        to: "Jack Smith"
        from: "Jane Doe"
        subject: "Hello World"
        body: "Hello, whats going on..."
');
```

###### Supported Content-Types

```
XML
---
application/xml > XML
text/xml > XML

JSON
----
application/json > JSON
application/x-javascript > JSON
text/javascript > JSON
text/x-javascript > JSON
text/x-json > JSON

YAML
----
text/yaml > YAML
text/x-yaml > YAML
application/yaml > YAML
application/x-yaml > YAML

MISC
----
application/vnd.php.serialized > Serialized Object
application/x-www-form-urlencoded > Query String
```
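The content-type table above maps MIME types onto parser formats. The package itself is PHP; as an illustrative sketch only, the same dispatch idea looks like this in Python (the function and dict names here are made up for the example, not part of the library):

```python
import json
from urllib.parse import parse_qs


def parse_json(payload):
    # JSON > dict
    return json.loads(payload)


def parse_querystr(payload):
    # Query String > dict; parse_qs returns lists, so flatten single values
    return {k: v[0] if len(v) == 1 else v for k, v in parse_qs(payload).items()}


# Subset of the content-type table above (illustrative)
PARSERS = {
    "application/json": parse_json,
    "text/x-json": parse_json,
    "application/x-www-form-urlencoded": parse_querystr,
}


def parse_payload(payload, content_type):
    # Dispatch on the 'Content Type' HTTP header value
    try:
        return PARSERS[content_type](payload)
    except KeyError:
        raise ValueError("unsupported content type: %s" % content_type)
```

For example, `parse_payload('to=Jack Smith&from=Jane Doe', 'application/x-www-form-urlencoded')` yields a dict, mirroring the array the PHP `Parse::querystr` call returns.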
23.508197
186
0.652371
kor_Hang
0.358239
e5d528be185fd06ffd949d4a5db87976c5612984
58
md
Markdown
README.md
AgladeJesus/Bootcamp-Santander
5027ca0a47cf2f6f146025f7cd557f6a4342918d
[ "MIT" ]
1
2021-07-28T12:12:16.000Z
2021-07-28T12:12:16.000Z
README.md
AgladeJesus/Bootcamp-Santander
5027ca0a47cf2f6f146025f7cd557f6a4342918d
[ "MIT" ]
null
null
null
README.md
AgladeJesus/Bootcamp-Santander
5027ca0a47cf2f6f146025f7cd557f6a4342918d
[ "MIT" ]
null
null
null
# Bootcamp Santander

Bootcamp | Fullstack Developer 2021
19.333333
46
0.810345
ita_Latn
0.214228
e5d6869016eb505f757bdb30d4a34de86eace2ec
2,129
md
Markdown
docs/docs/cmd/spo/propertybag/propertybag-remove.md
RedMokeys/cli-microsoft365
03e4b1e3567b0e4c78047be7a158822b0c622ffb
[ "MIT" ]
null
null
null
docs/docs/cmd/spo/propertybag/propertybag-remove.md
RedMokeys/cli-microsoft365
03e4b1e3567b0e4c78047be7a158822b0c622ffb
[ "MIT" ]
null
null
null
docs/docs/cmd/spo/propertybag/propertybag-remove.md
RedMokeys/cli-microsoft365
03e4b1e3567b0e4c78047be7a158822b0c622ffb
[ "MIT" ]
1
2020-10-22T14:14:04.000Z
2020-10-22T14:14:04.000Z
# spo propertybag remove

Removes specified property from the property bag

## Usage

```sh
m365 spo propertybag remove [options]
```

## Options

Option|Description
------|-----------
`--help`|output usage information
`-u, --webUrl <webUrl>`|The URL of the site from which the property should be removed
`-k, --key <key>`|Key of the property to be removed. Case-sensitive
`-f, --folder [folder]`|Site-relative URL of the folder from which to remove the property bag value
`--confirm`|Don't prompt for confirming removal of property bag value
`--query [query]`|JMESPath query string. See [http://jmespath.org/](http://jmespath.org/) for more information and examples
`-o, --output [output]`|Output type. `json,text`. Default `text`
`--verbose`|Runs command with verbose logging
`--debug`|Runs command with debug logging

## Examples

Removes the value of the _key1_ property from the property bag located in site _https://contoso.sharepoint.com/sites/test_

```sh
m365 spo propertybag remove --webUrl https://contoso.sharepoint.com/sites/test --key key1
```

Removes the value of the _key1_ property from the property bag located in site root folder _https://contoso.sharepoint.com/sites/test_

```sh
m365 spo propertybag remove --webUrl https://contoso.sharepoint.com/sites/test --key key1 --folder / --confirm
```

Removes the value of the _key1_ property from the property bag located in site document library _https://contoso.sharepoint.com/sites/test_

```sh
m365 spo propertybag remove --webUrl https://contoso.sharepoint.com/sites/test --key key1 --folder '/Shared Documents'
```

Removes the value of the _key1_ property from the property bag located in folder in site document library _https://contoso.sharepoint.com/sites/test_

```sh
m365 spo propertybag remove --webUrl https://contoso.sharepoint.com/sites/test --key key1 --folder '/Shared Documents/MyFolder'
```

Removes the value of the _key1_ property from the property bag located in site list _https://contoso.sharepoint.com/sites/test_

```sh
m365 spo propertybag remove --webUrl https://contoso.sharepoint.com/sites/test --key key1 --folder /Lists/MyList
```
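When this removal needs to run from a script rather than an interactive shell, one option is to build the documented command line and shell out to the CLI. A minimal Python sketch, assuming the `m365` CLI is installed and you are already logged in (the helper names are illustrative, not part of the CLI):

```python
import subprocess


def build_remove_command(web_url, key, folder=None):
    # Mirrors the options documented above; --confirm skips the
    # interactive confirmation prompt, which scripts cannot answer.
    cmd = ["m365", "spo", "propertybag", "remove",
           "--webUrl", web_url, "--key", key, "--confirm"]
    if folder:
        cmd += ["--folder", folder]
    return cmd


def remove_property(web_url, key, folder=None):
    # Raises CalledProcessError if the CLI exits non-zero
    subprocess.run(build_remove_command(web_url, key, folder), check=True)
```

For example, `remove_property("https://contoso.sharepoint.com/sites/test", "key1", folder="/Shared Documents")` would run the same command as the document-library example above.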
38.709091
149
0.751996
eng_Latn
0.832575
e5d711eb2928895a92c60527517bfaeb6d57abde
164
md
Markdown
_pages/cv.md
malin-hu/malin-hu.github.io
ef6cbef279c3e308cfb6963c314c06d44cd7c966
[ "MIT" ]
null
null
null
_pages/cv.md
malin-hu/malin-hu.github.io
ef6cbef279c3e308cfb6963c314c06d44cd7c966
[ "MIT" ]
null
null
null
_pages/cv.md
malin-hu/malin-hu.github.io
ef6cbef279c3e308cfb6963c314c06d44cd7c966
[ "MIT" ]
6
2019-09-26T03:56:48.000Z
2021-07-29T05:33:56.000Z
---
layout: single
title: "CV"
permalink: /cv/
author_profile: true
redirect_from:
  - /resume
---

---

[[download CV](http://malin-hu.github.io/files/HuCV.pdf)]
13.666667
59
0.658537
eng_Latn
0.169523
e5d7daac20de03a314ebcd56a19e033e191b489a
238
md
Markdown
README.md
jackinloadup/rainmeter
bc33ded63daf79da3db290bdfe9ef6b7a570b574
[ "MIT" ]
null
null
null
README.md
jackinloadup/rainmeter
bc33ded63daf79da3db290bdfe9ef6b7a570b574
[ "MIT" ]
null
null
null
README.md
jackinloadup/rainmeter
bc33ded63daf79da3db290bdfe9ef6b7a570b574
[ "MIT" ]
null
null
null
# Programming 101

## Creating a rainmeter

# Tasks

* create an html document for the visualizer to be displayed on
* create code to read an audio file
* pull out waveform from audio
* create bars from various frequencies
* color bars
21.636364
64
0.752101
eng_Latn
0.998307
e5d8a94938a7ba242c7777550817a81085e99dfc
1,225
md
Markdown
src/docs/common_tasks/thrift_gen.md
lahosken/pants
1b0340987c9b2eab9411416803c75b80736716e4
[ "Apache-2.0" ]
1
2021-11-11T14:04:24.000Z
2021-11-11T14:04:24.000Z
src/docs/common_tasks/thrift_gen.md
lahosken/pants
1b0340987c9b2eab9411416803c75b80736716e4
[ "Apache-2.0" ]
null
null
null
src/docs/common_tasks/thrift_gen.md
lahosken/pants
1b0340987c9b2eab9411416803c75b80736716e4
[ "Apache-2.0" ]
1
2021-11-11T14:04:12.000Z
2021-11-11T14:04:12.000Z
# Generate Code from Thrift Definitions

## Problem

You've created Thrift definitions (structs, services, etc.) and you need to generate Thrift-based

* **classes** for use within your Scala or Java project, or
* **libraries** that can be used by your project or other projects.

## Solution

Use the `gen` goal to generate code from Thrift definitions. Here's an example:

    ::bash
    $ ./pants gen myproject/src/thrift:thrift-scala

If you need to compile a Scala or Java [[library target|pants('src/docs/common_tasks:jvm_library')]] instead, use the `compile` goal.

## Discussion

There are two types of Thrift target definitions that you will find in `BUILD` files in existing projects:

* `java_thrift_library` (for Scala and Java) and `python_thrift_library`
* `create_thrift_libraries`

You can use the `gen` and `compile` goals directly with `java_thrift_library` targets. Thus, you could target a `BUILD` file containing this definition...

    ::python
    java_thrift_library(name='thrift-java',
      # Other parameters
    )

...like this using Pants:

    ::bash
    $ ./pants gen myproject/src/main/thrift:thrift-java

`create_thrift_libraries` targets work somewhat differently, however.
31.410256
154
0.743673
eng_Latn
0.987436
e5d8b5ba53675ff5c32a634ed4db104837e053a5
76
md
Markdown
content/index.md
bmcminn/throwdown
429c7b52a8e40435eca93546628cb4941fa5f01b
[ "MIT" ]
1
2018-02-20T08:45:29.000Z
2018-02-20T08:45:29.000Z
content/index.md
bmcminn/throwdown
429c7b52a8e40435eca93546628cb4941fa5f01b
[ "MIT" ]
7
2018-01-24T21:18:01.000Z
2018-03-21T17:47:59.000Z
content/index.md
bmcminn/throwdown
429c7b52a8e40435eca93546628cb4941fa5f01b
[ "MIT" ]
1
2016-07-01T09:30:48.000Z
2016-07-01T09:30:48.000Z
---
title: Homepage
published: true
pants: true
---

This is the homepage!
8.444444
21
0.684211
eng_Latn
0.998635
e5d93def4ac2bbeb43f64525bcd6203ff9072ece
12,570
md
Markdown
docs/parallel/concrt/reference/ischedulerproxy-structure.md
heckad/cpp-docs.ru-ru
c365cf61152ad8e2ac8d79dc19035920eb92215a
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/parallel/concrt/reference/ischedulerproxy-structure.md
heckad/cpp-docs.ru-ru
c365cf61152ad8e2ac8d79dc19035920eb92215a
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/parallel/concrt/reference/ischedulerproxy-structure.md
heckad/cpp-docs.ru-ru
c365cf61152ad8e2ac8d79dc19035920eb92215a
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
description: 'Learn more about: ISchedulerProxy Structure'
title: ISchedulerProxy Structure
ms.date: 11/04/2016
f1_keywords:
- ISchedulerProxy
- CONCRTRM/concurrency::ISchedulerProxy
- CONCRTRM/concurrency::ISchedulerProxy::ISchedulerProxy::BindContext
- CONCRTRM/concurrency::ISchedulerProxy::ISchedulerProxy::CreateOversubscriber
- CONCRTRM/concurrency::ISchedulerProxy::ISchedulerProxy::RequestInitialVirtualProcessors
- CONCRTRM/concurrency::ISchedulerProxy::ISchedulerProxy::Shutdown
- CONCRTRM/concurrency::ISchedulerProxy::ISchedulerProxy::SubscribeCurrentThread
- CONCRTRM/concurrency::ISchedulerProxy::ISchedulerProxy::UnbindContext
helpviewer_keywords:
- ISchedulerProxy structure
ms.assetid: af416973-7a1c-4c30-aa3b-4161c2aaea54
ms.openlocfilehash: 4c3c488136c2b41a76b3080b2162fbf95dcb5ea8
ms.sourcegitcommit: d6af41e42699628c3e2e6063ec7b03931a49a098
ms.translationtype: MT
ms.contentlocale: ru-RU
ms.lasthandoff: 12/11/2020
ms.locfileid: "97334455"
---
# <a name="ischedulerproxy-structure"></a>ISchedulerProxy Structure

An interface by which schedulers communicate with the Concurrency Runtime's Resource Manager to negotiate resource allocation.

## <a name="syntax"></a>Syntax

```cpp
struct ISchedulerProxy;
```

## <a name="members"></a>Members

### <a name="public-methods"></a>Public Methods

|Name|Description|
|----------|-----------------|
|[ISchedulerProxy::BindContext](#bindcontext)|Associates an execution context with a thread proxy, if it is not already associated with one.|
|[ISchedulerProxy::CreateOversubscriber](#createoversubscriber)|Creates a new virtual processor root on the hardware thread associated with an existing execution resource.|
|[ISchedulerProxy::RequestInitialVirtualProcessors](#requestinitialvirtualprocessors)|Requests an initial allocation of virtual processor roots. Each virtual processor root represents the ability to execute one thread that can perform work for the scheduler.|
|[ISchedulerProxy::Shutdown](#shutdown)|Notifies the Resource Manager that the scheduler is shutting down. This causes the Resource Manager to immediately reclaim all resources granted to the scheduler.|
|[ISchedulerProxy::SubscribeCurrentThread](#subscribecurrentthread)|Registers the current thread with the Resource Manager, associating it with this scheduler.|
|[ISchedulerProxy::UnbindContext](#unbindcontext)|Disassociates a thread proxy from the execution context specified by the `pContext` parameter and returns it to the thread proxy factory's free pool. This method may only be called on an execution context that was bound via the [ISchedulerProxy::BindContext](#bindcontext) method and has not yet been started via being the `pContext` parameter of a call to the [IThreadProxy::SwitchTo](ithreadproxy-structure.md#switchto) method.|

## <a name="remarks"></a>Remarks

The Resource Manager hands an `ISchedulerProxy` interface to every scheduler that registers with it using the [IResourceManager::RegisterScheduler](iresourcemanager-structure.md#registerscheduler) method.

## <a name="inheritance-hierarchy"></a>Inheritance Hierarchy

`ISchedulerProxy`

## <a name="requirements"></a>Requirements

**Header:** concrtrm.h

**Namespace:** concurrency

## <a name="ischedulerproxybindcontext-method"></a><a name="bindcontext"></a> ISchedulerProxy::BindContext Method

Associates an execution context with a thread proxy, if it is not already associated with one.

```cpp
virtual void BindContext(_Inout_ IExecutionContext* pContext) = 0;
```

### <a name="parameters"></a>Parameters

*pContext*<br/>
An interface to the execution context to associate with a thread proxy.

### <a name="remarks"></a>Remarks

Normally, the [IThreadProxy::SwitchTo](ithreadproxy-structure.md#switchto) method will bind a thread proxy to an execution context on demand. There are circumstances, however, where it is necessary to bind a context in advance so that the `SwitchTo` method switches to an already bound context. This is the case on a UMS scheduling context, as it cannot call methods that allocate memory, and binding a thread proxy may involve memory allocation if a thread proxy is not available in the free pool of the thread proxy factory.

An `invalid_argument` exception is thrown if the `pContext` parameter has the value `NULL`.

## <a name="ischedulerproxycreateoversubscriber-method"></a><a name="createoversubscriber"></a> ISchedulerProxy::CreateOversubscriber Method

Creates a new virtual processor root on the hardware thread associated with an existing execution resource.

```cpp
virtual IVirtualProcessorRoot* CreateOversubscriber(_Inout_ IExecutionResource* pExecutionResource) = 0;
```

### <a name="parameters"></a>Parameters

*pExecutionResource*<br/>
An `IExecutionResource` interface representing the hardware thread you want to oversubscribe.

### <a name="return-value"></a>Return Value

An `IVirtualProcessorRoot` interface.

### <a name="remarks"></a>Remarks

Use this method when your scheduler wants to oversubscribe a particular hardware thread for a limited amount of time. Once you are done with the virtual processor root, you should return it to the Resource Manager by invoking the [Remove](iexecutionresource-structure.md#remove) method on the `IVirtualProcessorRoot` interface.

Because the `IVirtualProcessorRoot` interface inherits from the `IExecutionResource` interface, you can even oversubscribe an existing virtual processor root.

## <a name="ischedulerproxyrequestinitialvirtualprocessors-method"></a><a name="requestinitialvirtualprocessors"></a> ISchedulerProxy::RequestInitialVirtualProcessors Method

Requests an initial allocation of virtual processor roots. Each virtual processor root represents the ability to execute one thread that can perform work for the scheduler.

```cpp
virtual IExecutionResource* RequestInitialVirtualProcessors(bool doSubscribeCurrentThread) = 0;
```

### <a name="parameters"></a>Parameters

*doSubscribeCurrentThread*<br/>
Whether to subscribe the current thread and account for it during resource allocation.

### <a name="return-value"></a>Return Value

The `IExecutionResource` interface for the current thread, if the `doSubscribeCurrentThread` parameter has the value **`true`**. If the value is **`false`**, the method returns `NULL`.

### <a name="remarks"></a>Remarks

Before a scheduler executes any work, it should use this method to request virtual processor roots from the Resource Manager. The Resource Manager will access the scheduler's policy using [IScheduler::GetPolicy](ischeduler-structure.md#getpolicy) and use the policy values for the keys `MinConcurrency`, `MaxConcurrency`, and `TargetOversubscriptionFactor` to determine how many hardware threads to assign to the scheduler, and how many virtual processor roots to create for every hardware thread. For more information on how scheduler policies are used to determine a scheduler's initial allocation, see [PolicyElementKey](concurrency-namespace-enums.md).

The Resource Manager grants resources to a scheduler by calling the [IScheduler::AddVirtualProcessors](ischeduler-structure.md#addvirtualprocessors) method with a list of virtual processor roots. The method is invoked as a callback into the scheduler before this method returns.

If the scheduler requested subscription for the current thread by setting the parameter `doSubscribeCurrentThread` to **`true`**, the method returns an `IExecutionResource` interface. The subscription must be terminated at a later point using the [IExecutionResource::Remove](iexecutionresource-structure.md#remove) method.

When it determines which hardware threads to pick, the Resource Manager will try to optimize for processor node affinity. If subscription is requested for the current thread, it is an indication that the current thread intends to participate in the work assigned to this scheduler. In such a case, the allocated virtual processor roots are located on the processor node the current thread is executing on, if possible.

The act of subscribing a thread increases the subscription level of the underlying hardware thread by one. The subscription level is reduced by one when the subscription is terminated. For more information on subscription levels, see [IExecutionResource::CurrentSubscriptionLevel](iexecutionresource-structure.md#currentsubscriptionlevel).

## <a name="ischedulerproxyshutdown-method"></a><a name="shutdown"></a> ISchedulerProxy::Shutdown Method

Notifies the Resource Manager that the scheduler is shutting down. This causes the Resource Manager to immediately reclaim all resources granted to the scheduler.

```cpp
virtual void Shutdown() = 0;
```

### <a name="remarks"></a>Remarks

All `IExecutionContext` interfaces the scheduler received as a result of subscribing an external thread using the methods `ISchedulerProxy::RequestInitialVirtualProcessors` or `ISchedulerProxy::SubscribeCurrentThread` must be returned to the Resource Manager using `IExecutionResource::Remove` before a scheduler shuts itself down.

If your scheduler has any deactivated virtual processor roots, you must activate them using [IVirtualProcessorRoot::Activate](ivirtualprocessorroot-structure.md#activate), and have the thread proxies executing on them leave the `Dispatch` method of the execution contexts they are dispatching before you invoke `Shutdown` on a scheduler proxy.

It is not necessary for the scheduler to return all the virtual processor roots the Resource Manager granted it via calls to the `Remove` method, because all virtual processor roots will be returned to the Resource Manager at shutdown.

## <a name="ischedulerproxysubscribecurrentthread-method"></a><a name="subscribecurrentthread"></a> ISchedulerProxy::SubscribeCurrentThread Method

Registers the current thread with the Resource Manager, associating it with this scheduler.

```cpp
virtual IExecutionResource* SubscribeCurrentThread() = 0;
```

### <a name="return-value"></a>Return Value

The `IExecutionResource` interface representing the current thread in the runtime.

### <a name="remarks"></a>Remarks

Use this method when you want the Resource Manager to account for the current thread when allocating resources to your scheduler and to other schedulers. This is especially useful when the thread plans to participate in work queued to your scheduler, along with the virtual processor roots the scheduler receives from the Resource Manager. The Resource Manager uses the information to avoid unnecessary oversubscription of hardware threads on the system.

The execution resource received via this method should be returned to the Resource Manager using the [IExecutionResource::Remove](iexecutionresource-structure.md#remove) method. The thread that invokes the `Remove` method must be the same thread that previously invoked the `SubscribeCurrentThread` method.

The act of subscribing a thread increases the subscription level of the underlying hardware thread by one. The subscription level is reduced by one when the subscription is terminated. For more information on subscription levels, see [IExecutionResource::CurrentSubscriptionLevel](iexecutionresource-structure.md#currentsubscriptionlevel).

## <a name="ischedulerproxyunbindcontext-method"></a><a name="unbindcontext"></a> ISchedulerProxy::UnbindContext Method

Disassociates a thread proxy from the execution context specified by the `pContext` parameter and returns it to the thread proxy factory's free pool. This method may only be called on an execution context that was bound via the [ISchedulerProxy::BindContext](#bindcontext) method and has not yet been started via being the `pContext` parameter of a call to the [IThreadProxy::SwitchTo](ithreadproxy-structure.md#switchto) method.

```cpp
virtual void UnbindContext(_Inout_ IExecutionContext* pContext) = 0;
```

### <a name="parameters"></a>Parameters

*pContext*<br/>
The execution context to disassociate from its thread proxy.

## <a name="see-also"></a>See also

[concurrency Namespace](concurrency-namespace.md)<br/>
[IScheduler Structure](ischeduler-structure.md)<br/>
[IThreadProxy Structure](ithreadproxy-structure.md)<br/>
[IVirtualProcessorRoot Structure](ivirtualprocessorroot-structure.md)<br/>
[IResourceManager Structure](iresourcemanager-structure.md)
66.861702
759
0.821002
rus_Cyrl
0.927866
e5d97ed8ff235f7c50083f68c001d0a308c3db12
11,157
md
Markdown
articles/service-fabric/service-fabric-api-management-overview.md
fuatrihtim/azure-docs.tr-tr
6569c5eb54bdab7488b44498dc4dad397d32f1be
[ "CC-BY-4.0", "MIT" ]
16
2017-08-28T08:29:36.000Z
2022-01-02T16:46:30.000Z
articles/service-fabric/service-fabric-api-management-overview.md
fuatrihtim/azure-docs.tr-tr
6569c5eb54bdab7488b44498dc4dad397d32f1be
[ "CC-BY-4.0", "MIT" ]
470
2017-11-11T20:59:16.000Z
2021-04-10T17:06:28.000Z
articles/service-fabric/service-fabric-api-management-overview.md
fuatrihtim/azure-docs.tr-tr
6569c5eb54bdab7488b44498dc4dad397d32f1be
[ "CC-BY-4.0", "MIT" ]
25
2017-11-11T19:39:08.000Z
2022-03-30T13:47:56.000Z
--- title: API Management genel bakış ile Azure Service Fabric description: Bu makale, Azure API Management Service Fabric uygulamalarınıza yönelik bir ağ geçidi olarak kullanılmasına giriş niteliğindedir. ms.topic: conceptual ms.date: 06/22/2017 ms.openlocfilehash: 32f47d62cc9dda7cc88421dbf616bf69ffe152fc ms.sourcegitcommit: f28ebb95ae9aaaff3f87d8388a09b41e0b3445b5 ms.translationtype: MT ms.contentlocale: tr-TR ms.lasthandoff: 03/29/2021 ms.locfileid: "96575695" --- # <a name="service-fabric-with-azure-api-management-overview"></a>Service Fabric ve Azure API Management'a genel bakış Bulut uygulamalarının normalde kullanıcılar, cihazlar ve diğer uygulamalara tek giriş noktası sağlamak için bir ön uç ağ geçidine ihtiyacı vardır. Service Fabric, bir ağ geçidi, [ASP.NET Core uygulama](service-fabric-reliable-services-communication-aspnetcore.md)veya [Event Hubs](../event-hubs/index.yml), [IoT Hub](../iot-hub/index.yml)veya [Azure API Management](../api-management/index.yml)gibi trafik girişi için tasarlanan başka bir hizmet gibi durum bilgisi olmayan herhangi bir hizmet olabilir. Bu makale, Azure API Management Service Fabric uygulamalarınıza yönelik bir ağ geçidi olarak kullanılmasına giriş niteliğindedir. API Management, arka uç Service Fabric hizmetlerinize zengin bir yönlendirme kuralları kümesiyle API 'Ler yayımlamanıza olanak tanıyan doğrudan Service Fabric ile tümleşir. ## <a name="availability"></a>Kullanılabilirlik > [!IMPORTANT] > Bu özellik, gerekli sanal ağ desteği nedeniyle API Management **Premium** ve **Geliştirici** katmanlarında kullanılabilir. ## <a name="architecture"></a>Mimari Ortak bir Service Fabric mimarisi, HTTP API 'Leri kullanıma sunan arka uç hizmetlerine HTTP çağrıları yapan tek sayfalı bir Web uygulaması kullanır. [Service Fabric Başlarken örnek uygulaması](https://github.com/Azure-Samples/service-fabric-dotnet-getting-started) bu mimariye bir örnek gösterir. 
Bu senaryoda, bir durum bilgisiz Web hizmeti, Service Fabric uygulamasına ağ geçidi olarak görev yapar. Bu yaklaşım, aşağıdaki diyagramda gösterildiği gibi arka uç hizmetlerine Proxy HTTP istekleri sağlayan bir Web hizmeti yazmanızı gerektirir: ![Durum bilgisi olmayan bir Web hizmetinin Service Fabric uygulamasına ağ geçidi olarak nasıl hizmet gösterdiğini gösteren diyagram.][sf-web-app-stateless-gateway] Uygulamalar karmaşıklıkla büyüdükçe, sayısız arka uç hizmetlerinin önünde bir API sunması gereken ağ geçitlerini yapın. Azure API Management, iş akışı kuralları, erişim denetimi, hız sınırlandırma, izleme, olay günlüğü ve yanıt önbelleğe alma ile karmaşık API 'Leri, sizin bölüminizdeki en az çalışma ile işleyecek şekilde tasarlandı. Azure API Management, istek temelli API ağ geçidinizi yazmak zorunda kalmadan istekleri doğrudan Service Fabric arka uç hizmetlerine yönlendirmek için Service Fabric hizmet bulmayı, Bölüm çözünürlüğünü ve çoğaltma seçimini destekler. Bu senaryoda, Web Kullanıcı arabirimi hala bir Web hizmeti aracılığıyla sunulurken, HTTP API çağrıları aşağıdaki diyagramda gösterildiği gibi Azure API Management aracılığıyla yönetilir ve yönlendirilir: ![Web Kullanıcı arabiriminin Web hizmeti aracılığıyla ne kadar hizmet olarak sunulduğunu gösteren diyagram, HTTP API çağrıları yönetilen ve Azure API Management aracılığıyla yönlendirilir.][sf-apim-web-app] ## <a name="application-scenarios"></a>Uygulama senaryoları Service Fabric hizmetler durum bilgisiz ya da durum bilgisi olabilir ve üç düzenden biri kullanılarak bölümlenebilir: Singleton, INT-64 aralığı ve adlandırılmış. Hizmet uç noktası çözümlemesi, belirli bir hizmet örneğinin belirli bir bölümünü tanımlamayı gerektirir. Bir hizmetin uç noktasını çözümlerken, tek bir bölüm olması dışında hizmet örneği adının (örneğin, `fabric:/myapp/myservice` ) yanı sıra hizmetin belirli bir bölümünün belirtilmesi gerekir. 
Azure API Management can be used with stateless services, stateful services, and any partitioning scheme.

## <a name="send-traffic-to-a-stateless-service"></a>Send traffic to a stateless service

In the simplest case, traffic is forwarded to a stateless service instance. To achieve this, an API Management operation contains an inbound processing policy with a Service Fabric backend that maps to a specific stateless service instance in the Service Fabric back end. Requests sent to that service go to a random instance of the service.

**Example**

In the following scenario, a Service Fabric application contains a stateless service named `fabric:/app/fooservice` that exposes an internal HTTP API. The service instance name is well known and can be hard-coded directly in the API Management inbound processing policy.

![Diagram showing a Service Fabric application that contains a stateless service exposing an internal HTTP API.][sf-apim-static-stateless]

## <a name="send-traffic-to-a-stateful-service"></a>Send traffic to a stateful service

Similar to the stateless service scenario, traffic can be forwarded to a stateful service instance. In this case, an API Management operation contains an inbound processing policy with a Service Fabric backend that maps a request to a specific partition of a specific *stateful* service instance. The partition to map each request to is computed via a lambda method using some input from the incoming HTTP request, such as a value in the URL path. The policy can be configured to send requests only to the primary replica, or to a random replica for read operations.

**Example**

In the following scenario, a Service Fabric application contains a stateful service named `fabric:/app/userservice` that exposes an internal HTTP API. The service instance name is well known and can be hard-coded directly in the API Management inbound processing policy. The service is partitioned using an Int64 partitioning scheme with two partitions and a key range spanning `Int64.MinValue` to `Int64.MaxValue`. The backend policy computes a partition key within that range by converting the `id` value provided in the URL request path to a 64-bit integer, although any algorithm may be used to compute the partition key.

![Service Fabric with Azure API Management topology overview][sf-apim-static-stateful]

## <a name="send-traffic-to-multiple-stateless-services"></a>Send traffic to multiple stateless services

In more advanced scenarios, you can define an API Management operation that maps requests to more than one service instance. In that case, each operation contains a policy that maps requests to a specific service instance based on values from the incoming HTTP request, such as the request URL path or query string — and, for stateful services, to a partition within the service instance.

To achieve this, an API Management operation contains an inbound processing policy with a Service Fabric backend that maps to a stateless service instance in the Service Fabric back end based on values retrieved from the incoming HTTP request. Requests to a service are sent to a random instance of the service.

**Example**

In this example, a new stateless service instance is created for each user of an application, with a name generated dynamically using the following formula:

- `fabric:/app/users/<username>`

Each service has a unique name, but the names are not known up front because the services are created in response to user or administrator input and therefore cannot be hard-coded into APIM policies or routing rules. Instead, the name of the service a request is sent to is formed in the backend policy definition from the `name` value provided in the URL request path.

For example:

- A request to `/api/users/foo` is routed to service instance `fabric:/app/users/foo`
- A request to `/api/users/bar` is routed to service instance `fabric:/app/users/bar`

![Diagram showing an example in which a new stateless service instance is created for each user of an application, with a dynamically generated name.][sf-apim-dynamic-stateless]

## <a name="send-traffic-to-multiple-stateful-services"></a>Send traffic to multiple stateful services

Similar to the stateless scenario, an API Management operation can map requests to more than one **stateful** service instance, in which case you may also need to perform partition resolution for each stateful service instance.

To achieve this, an API Management operation contains an inbound processing policy with a Service Fabric backend that maps to a stateful service instance in the Service Fabric back end based on values retrieved from the incoming HTTP request. In addition to mapping a request to a specific service instance, the request can also be mapped to a specific partition within the service instance, and optionally to the primary replica or a random secondary replica within the partition.

**Example**

In this example, a new stateful service instance is created for each user of the application, with a name generated dynamically using the following formula:

- `fabric:/app/users/<username>`

Each service has a unique name, but the names are not known up front because the services are created in response to user or administrator input and therefore cannot be hard-coded into APIM policies or routing rules. Instead, the name of the service a request is sent to is formed in the backend policy definition from the `name` value provided in the URL request path.

For example:

- A request to `/api/users/foo` is routed to service instance `fabric:/app/users/foo`
- A request to `/api/users/bar` is routed to service instance `fabric:/app/users/bar`

Each service instance is also partitioned using an Int64 partitioning scheme with two partitions and a key range spanning `Int64.MinValue` to `Int64.MaxValue`. The backend policy computes a partition key within that range by converting the `id` value provided in the URL request path to a 64-bit integer, although any algorithm may be used to compute the partition key.

![Diagram showing that each service instance is also partitioned using an Int64 partitioning scheme with two partitions and a key range spanning Int64.MinValue to Int64.MaxValue.][sf-apim-dynamic-stateful]

## <a name="next-steps"></a>Next steps

Follow the [tutorial](service-fabric-tutorial-deploy-api-management.md) to set up your first Service Fabric cluster with API Management and flow requests through API Management to your services.

<!-- links -->
<!-- pics -->
[sf-apim-web-app]: ./media/service-fabric-api-management-overview/sf-apim-web-app.png
[sf-web-app-stateless-gateway]: ./media/service-fabric-api-management-overview/sf-web-app-stateless-gateway.png
[sf-apim-static-stateless]: ./media/service-fabric-api-management-overview/sf-apim-static-stateless.png
[sf-apim-static-stateful]: ./media/service-fabric-api-management-overview/sf-apim-static-stateful.png
[sf-apim-dynamic-stateless]: ./media/service-fabric-api-management-overview/sf-apim-dynamic-stateless.png
[sf-apim-dynamic-stateful]: ./media/service-fabric-api-management-overview/sf-apim-dynamic-stateful.png
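The Int64 partition-key computation described above can be sketched in code. A minimal Python sketch — the SHA-256 hashing choice and the helper names `partition_key` and `partition_index` are illustrative assumptions; the article only requires *some* deterministic mapping of the `id` path value into the `Int64.MinValue`–`Int64.MaxValue` range:

```python
import hashlib

INT64_MIN = -2**63
INT64_MAX = 2**63 - 1


def partition_key(request_id: str) -> int:
    """Map a request id from the URL path to an Int64 partition key.

    Any deterministic algorithm works; here we hash the id and take the
    first 8 bytes of the digest as a signed 64-bit integer.
    """
    digest = hashlib.sha256(request_id.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big", signed=True)


def partition_index(key: int, partition_count: int = 2) -> int:
    """Pick which of the evenly ranged partitions a key falls in.

    With two partitions, keys below 0 land in partition 0 and the
    rest in partition 1 (the key range is split evenly).
    """
    range_size = (2**64) // partition_count
    return (key - INT64_MIN) // range_size
```

Because the mapping is deterministic, every request carrying the same `id` resolves to the same partition, which is what the routing policy relies on.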
93.756303
603
0.821099
tur_Latn
0.999968
e5d9d156989328c1d0328216df6f6a1128c264fc
786
md
Markdown
README.md
sarthakvk/nseieod
26137e136be2dedbd2280ff6bca070167af80e93
[ "MIT" ]
null
null
null
README.md
sarthakvk/nseieod
26137e136be2dedbd2280ff6bca070167af80e93
[ "MIT" ]
null
null
null
README.md
sarthakvk/nseieod
26137e136be2dedbd2280ff6bca070167af80e93
[ "MIT" ]
null
null
null
## NSEIeod: Tool for collecting ieod data from nseindia.com (using nsetools package) > This is still in the development phase. > The main idea is to collect real-time data and save it in a PostgreSQL database (using psycopg2 and SQLAlchemy) with Unix timestamps. > I am still working on it so that I can collect data to do some quant research. > You can also contribute to development if you want. #### Installation > You must have PostgreSQL installed and running. > Clone this repo, cd into it, and do `pip install -e .` > Run `nseieod -h` to see help; at the moment only `--init` works. > I tested this on `Arch Linux`, so I don't know if it'll work on other OSes. #### Issues or found a bug (I know there are a lot) > Open an issue. #### Contribution > Open a PR.
43.666667
150
0.735369
eng_Latn
0.999442
e5dc6acff0ea91ccf59c40be9349e1cfb56a1d7f
3,142
md
Markdown
content/post/2007-07-08-creating-offline-web-applications-with-dojo-offline-tutorial.md
samaxes/samaxes.com
d3ec2af51aa5216bfd657b6a53df718b3898b65a
[ "CC-BY-4.0" ]
null
null
null
content/post/2007-07-08-creating-offline-web-applications-with-dojo-offline-tutorial.md
samaxes/samaxes.com
d3ec2af51aa5216bfd657b6a53df718b3898b65a
[ "CC-BY-4.0" ]
null
null
null
content/post/2007-07-08-creating-offline-web-applications-with-dojo-offline-tutorial.md
samaxes/samaxes.com
d3ec2af51aa5216bfd657b6a53df718b3898b65a
[ "CC-BY-4.0" ]
null
null
null
--- title: Creating Offline Web Applications with Dojo Offline Tutorial date: 2007-07-08 18:00:57+00:00 slug: creating-offline-web-applications-with-dojo-offline-tutorial categories: - Web Development tags: - Dojo - Gears - Google - Offline --- [Brad Neuberg](http://codinginparadise.org/) from [Sitepen](http://www.sitepen.com/) wrote an extensive tutorial about [Creating Offline Web Applications with Dojo Offline](http://docs.google.com/View?docid=dhkhksk4_8gdp9gr). ## What is Dojo Offline? > [Dojo Offline](http://o.dojotoolkit.org/offline) is an open-source toolkit that makes it easy to create sophisticated, offline web applications. It sits on top of [Google Gears](http://gears.google.com/), a plugin from Google that helps extend web browsers with new functionality. Dojo Offline makes working with Google Gears easier; extends it with important functionality; creates a higher-level API than Google Gears provides; and exposes developer productivity features. In particular, Dojo Offline provides the following functionality: > > * [An offline widget](http://docs.google.com/View?docid=dhkhksk4_8gdp9gr#widget) that you can easily embed in your web page with just a few lines of code, automatically providing the user with network feedback, sync messages, offline instructions, and more > * [A sync framework](http://docs.google.com/View?docid=dhkhksk4_8gdp9gr#sync) to help you store actions done while offline and sync them with a server once back on the network > * [Automatic network and application-availability detection](http://docs.google.com/View?docid=dhkhksk4_8gdp9gr#network_status) to determine when your application is on- or off-line so that you can take appropriate action > * [A slurp() method](http://docs.google.com/View?docid=dhkhksk4_8gdp9gr#slurp) that automatically scans the page and figures out all the resources that you need offline, including images, stylesheets, scripts, etc.; this is much easier than having to manually maintain which resources should be 
available offline, especially during development. > * [Dojo Storage](http://docs.google.com/View?docid=dhkhksk4_8gdp9gr#dojo_storage), an easy-to-use hashtable abstraction for storing offline data for when you don't need the heaviness of Google Gears' SQL abstraction; under the covers Dojo Storage saves its data into Google Gears > * [Dojo SQL](http://docs.google.com/View?docid=dhkhksk4_8gdp9gr#dojo_sql), an easy-to-use SQL layer that executes SQL statements and returns them as ordinary JavaScript objects > * [New ENCRYPT() and DECRYPT() SQL keywords](http://docs.google.com/View?docid=dhkhksk4_8gdp9gr#crypto) that you can mix in when using Dojo SQL, to get transparent cryptography for columns of data. Cryptography is done on a Google Worker Pool thread, so that the browser UI is responsive. > * Integration with the rest of Dojo, such as the Dojo Event system > > Dojo Offline is built to work with the 0.9 release of Dojo, and will not work with older versions of Dojo, such as 0.4. It also requires the Google Gears plugin to function; if users do not have it installed Dojo Offline will prompt users to download it.
104.733333
542
0.788033
eng_Latn
0.958785
e5dce8f58a1a30439e82eae60f8675f9b68da99e
434
md
Markdown
src/main/java/leetcode/editor/cn/doc/content/PalindromeLinkedListLcci.md
huangge1199/leet-code
4c218cbe88b166912bab8a34c99389f5362d40c2
[ "Apache-2.0" ]
1
2021-10-20T04:01:56.000Z
2021-10-20T04:01:56.000Z
src/main/java/leetcode/editor/cn/doc/content/PalindromeLinkedListLcci.md
huangge1199/leet-code
4c218cbe88b166912bab8a34c99389f5362d40c2
[ "Apache-2.0" ]
1
2022-01-14T01:14:12.000Z
2022-01-14T01:14:12.000Z
src/main/java/leetcode/editor/cn/doc/content/PalindromeLinkedListLcci.md
huangge1199/leet-code
4c218cbe88b166912bab8a34c99389f5362d40c2
[ "Apache-2.0" ]
null
null
null
<p>Write a function to check whether the input linked list is a palindrome.</p> <p>&nbsp;</p> <p><strong>Example 1:</strong></p> <pre><strong>Input: </strong>1-&gt;2 <strong>Output:</strong> false </pre> <p><strong>Example 2:</strong></p> <pre><strong>Input: </strong>1-&gt;2-&gt;2-&gt;1 <strong>Output:</strong> true </pre> <p>&nbsp;</p> <p><strong>Follow up:</strong><br> Can you solve it in O(n) time and O(1) space?</p> <div><div>Related Topics</div><div><li>Linked List</li></div></div> <div><li>👍 66</li><li>👎 0</li></div>
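The O(n)-time / O(1)-space follow-up is commonly met by locating the middle of the list with fast/slow pointers, reversing the second half in place, and comparing the two halves. A sketch in Python — the `ListNode` shape mirrors the usual LeetCode definition, and `build` is just a test helper:

```python
class ListNode:
    def __init__(self, val=0, next=None):
        self.val = val
        self.next = next


def is_palindrome(head):
    # Find the middle: when fast reaches the end, slow is at the
    # start of the second half (for odd lengths, just past the middle).
    slow = fast = head
    while fast and fast.next:
        slow = slow.next
        fast = fast.next.next
    # Reverse the second half in place (O(1) extra space).
    prev = None
    while slow:
        slow.next, prev, slow = prev, slow, slow.next
    # Compare the first half with the reversed second half.
    left, right = head, prev
    while right:
        if left.val != right.val:
            return False
        left, right = left.next, right.next
    return True


def build(vals):
    """Build a linked list from a Python list (test helper)."""
    head = None
    for v in reversed(vals):
        head = ListNode(v, head)
    return head
```

Note that the middle node is shared between halves for odd-length lists, which is harmless since it compares equal to itself.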
20.666667
96
0.585253
kor_Hang
0.078427
e5dd32e59f3c784056537db8953b41016865af38
1,015
md
Markdown
catalog/soukyuu-no-fafner-dead-aggressor/en-US_soukyuu-no-fafner-dead-aggressor-light-novel.md
htron-dev/baka-db
cb6e907a5c53113275da271631698cd3b35c9589
[ "MIT" ]
3
2021-08-12T20:02:29.000Z
2021-09-05T05:03:32.000Z
catalog/soukyuu-no-fafner-dead-aggressor/en-US_soukyuu-no-fafner-dead-aggressor-light-novel.md
zzhenryquezz/baka-db
da8f54a87191a53a7fca54b0775b3c00f99d2531
[ "MIT" ]
8
2021-07-20T00:44:48.000Z
2021-09-22T18:44:04.000Z
catalog/soukyuu-no-fafner-dead-aggressor/en-US_soukyuu-no-fafner-dead-aggressor-light-novel.md
zzhenryquezz/baka-db
da8f54a87191a53a7fca54b0775b3c00f99d2531
[ "MIT" ]
2
2021-07-19T01:38:25.000Z
2021-07-29T08:10:29.000Z
# Soukyuu no Fafner: Dead Aggressor ![soukyuu-no-fafner-dead-aggressor](https://cdn.myanimelist.net/images/manga/1/32588.jpg) - **type**: light-novel - **volumes**: 1 - **chapters**: 5 - **original-name**: 蒼穹のファフナー - **start-date**: 2005-01-20 ## Tags - drama - mecha - sci-fi ## Authors - Ubukata - Tow (Story) - Hirai - Hisashi (Art) ## Sinopse The youth of Tatsumiya Island lived ordinary lives, but their peaceful existence was all a lie. The invasion of the Festum, the shining golden enemy, transforms their peaceful island into a high-tech military fortress. Toh Ubukata, the series organizer and script writer, novelizes the hit anime series. The beginning of a whole new Fafner, this novel includes the thoughts of Soshi, Maya, Shoko, and more that didn't appear in the anime series, along with battles of the Fafner and Festum in Ubukata's exciting style. (Source: DMP) ## Links - [My Anime list](https://myanimelist.net/manga/13393/Soukyuu_no_Fafner__Dead_Aggressor)
30.757576
518
0.723153
eng_Latn
0.949365
e5dd5b1335fd05750cdb74037bf50d0d56f87c20
4,396
md
Markdown
book/第1讲-走进Windows命令行/Part02.历史的年轮.md
HACV/Command
086c23c9b740ffe929549fe1b2a35733ac41ec2d
[ "MIT" ]
5
2020-12-18T15:21:06.000Z
2021-05-15T04:41:16.000Z
book/第1讲-走进Windows命令行/Part02.历史的年轮.md
HACV/Command
086c23c9b740ffe929549fe1b2a35733ac41ec2d
[ "MIT" ]
null
null
null
book/第1讲-走进Windows命令行/Part02.历史的年轮.md
HACV/Command
086c23c9b740ffe929549fe1b2a35733ac41ec2d
[ "MIT" ]
1
2020-12-18T15:22:29.000Z
2020-12-18T15:22:29.000Z
# Lecture 1 - Part 02. The Wheels of History

## I. OS miscellany: clarifying concepts (important)

### 1. DOS and Windows systems

Operating system history from top to bottom (oldest to newest):

| Operating system | Description |
| --- | --- |
| DOS | No graphical interface, pure text — much like the "cracker" screens you see in movies (note: a hacker is not the same as a cracker; the media routinely confuses the two). |
| Windows 1.0 | A graphical shell on top of DOS, shipping with Paint, Notepad, and other programs, evolving up to Windows 3.1 (the Chinese edition of Windows 3.1 carried the version number 3.2). Because DOS always ran underneath Windows at this stage, you could of course drop back to DOS and run DOS commands. |
| Windows 95 | The epoch-making Windows 95 no longer ran directly on top of DOS; DOS booted straight into a Windows graphical interface. Programs on Windows 95 were 32-bit, but the operating system core was still the old DOS machinery — the graphics side was certainly 32-bit, and Windows at this time could still run 16-bit DOS programs. |
| Three versions | **Windows 95, Windows 98, and then Windows ME — the three 32-bit Windows versions that never fully shed DOS.** |
| Windows NT 3.5 through **Windows 7/8** | Windows NT 3.5, NT 4.0, Windows 2000, Windows XP/Windows 2003, continuing through Windows 7 and Windows 8 — the NT-kernel Windows line. Its core is fully 32-bit and completely parted ways with the old assembly-written code base. The OS switches from 16-bit mode to 32-bit protected mode only once at boot and otherwise contains no real 16-bit code, so the conditions for running DOS programs no longer exist. |
| **Windows 10** | In addition to cmd.exe, Windows 10 adds PowerShell (the counterpart of the shell on Linux). |

| Operating system | Bundled command-line tool |
| --- | --- |
| DOS | DOS |
| Windows XP, Windows NT, Windows 7, Windows 8 | command |
| Windows 10 | command, PowerShell |

> For advanced readers:
>
> This part draws on Zhihu: https://www.zhihu.com/question/24744565
>
> - 1. Early Windows was just a shell on top of DOS.
> - 2. Mid-era Windows shared some code with DOS; Windows XP said goodbye to DOS entirely and could only run DOS programs through emulation. Old DOS programs still ran normally on Windows XP, although some that depended heavily on hardware would misbehave (then again, such programs would likely misbehave on any new hardware anyway). Windows 8 prompts you to install NTVDM the first time you run a DOS program. NTVDM is the key to running DOS programs on NT-kernel Windows: it emulates the environment those programs depend on. All of this applies to 32-bit Windows only — 64-bit Windows has no NTVDM at all and therefore cannot run DOS programs.
> - 3. One last question: what does "running DOS commands" on Windows NT actually mean?
> DOS shipped with a set of commands, and both early and mid-era Windows could run them. In Windows NT the way these commands are used did not change, so the phrase "running DOS commands" **stuck**. In reality, using these commands on Windows NT just means using a command-line helper tool called cmd.exe — it has nothing to do with DOS anymore.

### 2. Command interpreters and batch scripts

**(1) The command interpreter (interactive Windows command line)**

The command interpreter is a separate software program that provides direct communication between the user and the operating system. (Between the user and the operating system!) Working character by character, much like the `MS-DOS` command interpreter `Command.com`, it runs programs and displays their output on screen. Windows server operating systems use the command interpreter `cmd.exe` (which loads applications and directs the flow of information between them) to translate user input into a form the operating system understands.

**(2) Batch scripts**

**Batch processing**, also called a **batch script**, is "batch" in English; the batch-file suffix `bat` takes its first three letters (note: <font style="background: yellow">batch files use the extension BAT or CMD, case-insensitive</font>).

> Characteristics of batch files:
>
> As the name suggests, batch processing means processing some target in bulk.
>
> 1. Put any batch of commands in a file with this extension and they run one after another when the file is executed; you can also add conditional logic so that certain commands run only when a condition is met. (`Batch scripts are to modern Windows what shell scripts are to Linux.`)
>
> 2. Syntax: there is no fixed structure — just follow these rules:
>
> 1) Each line can be treated as one command, and each command may contain multiple sub-commands
>
> 2) Execution starts at the first line and runs to the last
>
> 3) Batch files have one striking characteristic: they are convenient, flexible, powerful, and highly automated — automation for everyday work.

## II. The Superman of the operating system

You think the operating system has this?

![Superman01](.\img\Superman01.jpg)

Or this?

![Superman01](.\img\Superman02.jpg)

Or this?

![Superman01](.\img\Superman03.jpg)

No, no, no — none of those :smirk::smirk::smirk:

The superman of the operating system is simply the administrator. Later in this tutorial we will use commands that ordinary users cannot run, so here are a few ways to open an elevated prompt in advance :smile:

> For advanced readers:
>
> Just as on Linux, the administrator is the superman of Windows.

### 1. Opening the Windows command line as a normal user and as administrator

##### (1) Administrator privileges

##### (2) Normal privileges

Press `Windows+R`, then type cmd in the dialog that pops up

![cmd](.\img\cmd.png)

### 2. Opening PowerShell as a normal user and as administrator

Test environment: Windows 10

##### (1) Administrator privileges

Press `Windows+X`, then press `A`

![PowerShell_root](.\img\PowerShell_root.png)

##### (2) Normal privileges

Press `Windows+X`, then press `I`

![PowerShell](.\img\PowerShell.png)

## III. Extra helpings for advanced readers

> For advanced readers
> Two kinds of "batch processing" are common today: **DOS batch** and **PS batch**
> 1. PS batch refers to scripts for the powerful image editor Photoshop, used to process images in bulk
> 2. DOS batch is based on DOS commands: scripts that automatically run DOS commands in bulk to carry out specific operations. This tutorial covers DOS batch
> 3. Batch is a simplified scripting language (which is why its syntax resembles shell scripts on Linux, or scripting languages such as Python). It is used on DOS and Windows systems and is interpreted and run by the command interpreter built into DOS or Windows (usually COMMAND.COM or CMD.EXE), similar to shell scripts on Unix
> 4. In the simplest case, a batch file just lists, line by line, the commands you would type at the command line. In more complex cases, commands such as if, for, and goto control the flow, much as in a high-level language such as C or Basic. For still more complex tasks you need external programs — both the external commands the system itself provides and third-party tools or software
> 5. A batch file, or batch program, is an ordinary text file composed of DOS commands. It can be edited directly in Notepad, created with DOS commands, or edited with the DOS text editor Edit.exe
> 6. When the system interprets and runs a batch program, it first scans the whole file, then executes every command line by line from the first line until the end of the file, an exit command, or an unexpected error
> 7. Type the batch file's name at the command prompt, or double-click the file, and the system invokes Cmd.exe to run it. Normally each command occupies one line; multiple commands can also be written on one line separated by special symbols (such as &, &&, |, ||); and higher-level commands such as if and for can span several, dozens, or even hundreds of lines
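The batch-file rules above (one command per line, top-to-bottom execution, conditional logic) can be illustrated with a short script. A minimal sketch — the file name `backup.bat` and the paths are made up for illustration; save it with a `.bat` or `.cmd` extension and run it from cmd.exe:

```batch
@echo off
rem Each line is one command; execution runs from the first line to the last.
echo Backing up notes...
if not exist "%USERPROFILE%\backup" mkdir "%USERPROFILE%\backup"
copy /Y "%USERPROFILE%\notes.txt" "%USERPROFILE%\backup\" >nul 2>&1
rem Conditional logic: errorlevel 1 means the previous command failed.
if errorlevel 1 (
    echo Copy failed - does notes.txt exist?
) else (
    echo Done.
)
pause
```

The `if errorlevel` branch is the simplest form of the conditional control mentioned above; `for` and `goto` extend the same idea to loops and jumps.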
23.382979
327
0.678344
yue_Hant
0.959469
e5dd67b31266fcb7b369371e6a640662c00fb168
3,039
md
Markdown
README.md
xingfe/cpp_study
a062cd66e45b58b65268206748ea0688773da5e3
[ "BSD-2-Clause" ]
null
null
null
README.md
xingfe/cpp_study
a062cd66e45b58b65268206748ea0688773da5e3
[ "BSD-2-Clause" ]
null
null
null
README.md
xingfe/cpp_study
a062cd66e45b58b65268206748ea0688773da5e3
[ "BSD-2-Clause" ]
null
null
null
# cpp_study [《C++实战笔记》](https://time.geekbang.org/column/intro/309) Follow me to study modern C++. Pull requests of make/cmake are welcome! ## Requirements * Linux : Ubuntu, Debian, CentOS, and others * macOS(OS X) : may work but not be tested ## Reference * [ISO C++](http://www.open-std.org/jtc1/sc22/wg21/) * [cppreference(en)](https://en.cppreference.com/w/) * [cppreference(zh)](https://zh.cppreference.com/w/) ## Resource * [VirtualBox](https://www.virtualbox.org) * [Ubuntu](https://ubuntu.com/) * [GCC](http://gcc.gnu.org/) * [Clang](http://clang.llvm.org/) ## Document * [Bjarne Stroustrup's FAQ](http://www.stroustrup.com/bs_faq.html) * [Bjarne Stroustrup's C++11 FAQ](http://www.stroustrup.com/C++11FAQ.html) * [C++ Core Guidelines](https://github.com/isocpp/CppCoreGuidelines) * [OpenResty Code Style Guide(zh-cn)](http://openresty.org/cn/c-coding-style-guide.html) * [Google Code Style Guide](https://google.github.io/styleguide/cppguide.html) ## Dev Links * [PCRE](http://www.pcre.org/) * [Boost](https://www.boost.org/) * [tbb](https://github.com/intel/tbb) * [JSON](https://www.json.org/json-zh.html) * [JSON for Modern C++](https://github.com/nlohmann/json) * [MessagePack](https://msgpack.org/) * [msgpack-c](https://github.com/msgpack/msgpack-c) * [ProtoBuf](https://github.com/protocolbuffers/protobuf) * [protobuf-c](https://github.com/protobuf-c/protobuf-c) * [gRPC](https://grpc.io) * [Thrift](https://thrift.apache.org/) * [libcurl](https://curl.haxx.se/libcurl/) * [cpr](https://github.com/whoshuu/cpr) * [ZMQ](https://zeromq.org/) * [cppzmq](https://github.com/zeromq/cppzmq) * [pybind11](https://github.com/pybind/pybind11) * [lua](https://www.lua.org/) * [luajit](http://luajit.org/) * [luajit-openresty](https://github.com/openresty/luajit2) * [LuaBridge](https://github.com/vinniefalco/LuaBridge) * [gperftools](https://github.com/gperftools/gperftools) * [FlameGraph](https://github.com/brendangregg/FlameGraph) * [OpenResty XRay](https://openresty.com.cn/cn/xray/) ## 
Awesome collection * [Awesome C++](https://github.com/fffaraz/awesome-cpp) * [Awesome Mordern C++](https://github.com/rigtorp/awesome-modern-cpp) ## See Also * [透视HTTP协议](https://time.geekbang.org/column/intro/189) * [http_study](https://github.com/chronolaw/http_study) - http service for pratice and more * [boost guide](https://github.com/chronolaw/boost_guide.git) - Sample code for Boost library Guide * [professional_boost](https://github.com/chronolaw/professional_boost.git) - Professional boost development * [annotated_nginx](https://github.com/chronolaw/annotated_nginx) - 注释nginx,学习研究源码 * [ngx_cpp_dev](https://github.com/chronolaw/ngx_cpp_dev) - Nginx C++ development kit, with the power of C++11 and Boost Library * [ngx_ansic_dev](https://github.com/chronolaw/ngx_ansic_dev) - Nginx ANSI C Development * [openresty_dev](https://github.com/chronolaw/openresty_dev) - OpenResty/Lua Programming * [favorite-nginx](https://github.com/chronolaw/favorite-nginx) - Selected favorite nginx modules and resources
38.468354
128
0.720961
yue_Hant
0.57192