hexsha stringlengths 40 40 | size int64 5 1.04M | ext stringclasses 6 values | lang stringclasses 1 value | max_stars_repo_path stringlengths 3 344 | max_stars_repo_name stringlengths 5 125 | max_stars_repo_head_hexsha stringlengths 40 78 | max_stars_repo_licenses listlengths 1 11 | max_stars_count int64 1 368k ⌀ | max_stars_repo_stars_event_min_datetime stringlengths 24 24 ⌀ | max_stars_repo_stars_event_max_datetime stringlengths 24 24 ⌀ | max_issues_repo_path stringlengths 3 344 | max_issues_repo_name stringlengths 5 125 | max_issues_repo_head_hexsha stringlengths 40 78 | max_issues_repo_licenses listlengths 1 11 | max_issues_count int64 1 116k ⌀ | max_issues_repo_issues_event_min_datetime stringlengths 24 24 ⌀ | max_issues_repo_issues_event_max_datetime stringlengths 24 24 ⌀ | max_forks_repo_path stringlengths 3 344 | max_forks_repo_name stringlengths 5 125 | max_forks_repo_head_hexsha stringlengths 40 78 | max_forks_repo_licenses listlengths 1 11 | max_forks_count int64 1 105k ⌀ | max_forks_repo_forks_event_min_datetime stringlengths 24 24 ⌀ | max_forks_repo_forks_event_max_datetime stringlengths 24 24 ⌀ | content stringlengths 5 1.04M | avg_line_length float64 1.14 851k | max_line_length int64 1 1.03M | alphanum_fraction float64 0 1 | lid stringclasses 191 values | lid_prob float64 0.01 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
a9055b023251b541f212a42e72fd6a986c80651d | 9,995 | md | Markdown | TEDx/Titles_starting_A_to_O/Creating_people_safe_roads_Jaimison_Sloboden_TEDxJacksonville.md | gt-big-data/TEDVis | 328a4c62e3a05c943b2a303817601aebf198c1aa | [
"MIT"
] | 91 | 2018-01-24T12:54:48.000Z | 2022-03-07T21:03:43.000Z | cleaned_tedx_data/Titles_starting_A_to_O/Creating_people_safe_roads_Jaimison_Sloboden_TEDxJacksonville.md | nadaataiyab/TED-Talks-Nutrition-NLP | 4d7e8c2155e12cb34ab8da993dee0700a6775ff9 | [
"MIT"
] | null | null | null | cleaned_tedx_data/Titles_starting_A_to_O/Creating_people_safe_roads_Jaimison_Sloboden_TEDxJacksonville.md | nadaataiyab/TED-Talks-Nutrition-NLP | 4d7e8c2155e12cb34ab8da993dee0700a6775ff9 | [
"MIT"
] | 18 | 2018-01-24T13:18:51.000Z | 2022-01-09T01:06:02.000Z |
[Applause]
I just did one of the most dangerous
things you can do in Jacksonville I rode
my bike to this event
the City of Jacksonville has been ranked
the fourth worst city in the United
States for pedestrian and bicycle
fatalities per capita according to Smart
Growth America dangerous by design
report as a traffic engineer I took a
deeper dive into that issue and the
problem is much worse than that
the map you see behind me is pedestrian
and bicycle crashes that have occurred
in the last 10 years that's 7,269
and counting most of these crashes
result in some type of injury many of
them result in significant injury and
even death what I'm here to tell you
today is that a lot of what has to do
with this problem has to do with our
values and that what we need to decide
moving forward is that people matter
more than cars to further illustrate I'm
going to talk about two tragic events
that have occurred the first one only
happened less than two months ago a
bicyclist was on his way to work riding
along when he swayed into traffic he was
struck and critically injured where was
he supposed to ride this is that
location this is the traffic kind of
condition that was going on at the time
there's no extra space for that
bicyclist to be the second event
happened this summer miss Angie Sanders
aged 68 was on her way home from work
and she stopped at Mayport Road at the
Mayport Plaza to do some
shopping
there's no crosswalk at that location
when you step out she looked to the
right looked to the left and the safe
place to cross is a quarter mile in each
direction so for her to make the safe
journey
a quarter mile is about a ten minute
walk
stop wait for the light cross and then a
10-minute walk back to where she really
wanted to be Mayport road is a six-lane
Road with a real narrow median so
there's six lanes to cross this day
she's very tired
she looked up either way and made the
choice to cross at that location she got
through the first lane the second lane
and then by the third lane she didn't
make it
these events are happening very
frequently in our city over 7,000
crashes with this potential why our
values and I'm going to demonstrate
examples of how our values of how we've
placed to build our space has
contributed to this problem we need to
decide that people matter more than cars
so one thing we know is we have a
problem how did we get here it really
goes back to post-World War II the
automobile came along the promise of
this American Dream the freedom that it
afforded us the suburban lifestyle it
was great freedom of movement
independence all that it was wonderful
so we started setting about building our
cities around that system in Florida it
came a little bit later than post-World
War II because we needed this how many
of us would really be living here if we
didn't have that by 1960 eighty percent
of all the cars in Florida had air
conditioning our houses were air-conditioned
our buildings were air-conditioned our
cars were air-conditioned we had this
lifestyle of Independence and freedom
and mobility and so we never had to get
out and walk around so our economy
our highway departments got really good
at giving us that designing roads our
highway departments set about giving us
a first-class highway system for cars I
relate to this really well because as a
civil engineer I was trained to give you
that okay it's really hard you think you
might have more cars add some more lanes
if you really want to make sure that
people can go fast and continue on their
trip provide no interruption to that
facility no crosswalks no lights almost
like a freeway so we have valued that
lifestyle for a very long time until we
kind of reach this tipping point where
we have this crisis of crashes and
fatalities so it's not just about the
roads we need to design our communities
a little bit differently as well so
there is kind of that land component
that we have to make in concert gonna
take a little bit of a segue and talk
about in a parallel example of how we
value and our values kind of relate 20
years ago a young traffic engineer
in Minnesota worked for the city of
Arden Hills and they had a new land
development happening and they were
going to put a road through a parkland
and this City Council was very concerned
about the animal species that were in
that Park so they wanted to make sure
that we took care of all the animals and
I learned all about Florida well before
I moved here that we are great in this
state at taking care of animals
environmentalists have figured out
understood that we need to consolidate
all of the lands and habitat so that the
animals have a place that naturally
takes them to a place where they're
gonna cross the road and then we spend a
lot of money building a lot of crossings
like this panther crossing we value the
panther as we should there's only 200 of them
left in the state and they are precious
to us and so we take care of them we
take care of the bear we take care of
snakes we
take care of turtles and we even take
care of the little bitty Perdido Key beach
mouse okay
just learn this a couple weeks ago we
actually put PVC pipes underneath the
road to make sure that they can get
across another example of how we show
our values and this is a something that
really hit me in the last two years I
was working for the Jacksonville
Transportation Authority on providing
safe streets around the bus stops it was
a big program called Complete Streets
and as we were going out and doing
community meetings and talking to people
I was paying a little bit more direct
attention to what was going on in the
media and what I saw it was an article
that looked like this and this has been
repeated you know periodically but the
Jacksonville Sheriff's Office they've
seen the same media report about us
being one of the worst cities in the
country so they're doing their part to
try and fix the problem and and the
sheriff's office duty and role is to
enforce so they set about enforcing
jaywalking but ironically at the same
time I came across this article that the
City of St. Paul they were addressing
pedestrian safety issues and they were
out there enforcing vehicles that were
not yielding to pedestrians in the
crosswalk and that was their campaign so
they said that walking is important and
they value that perhaps it would be good
if JSO had more crosswalks to enforce
we're working on this problem it's
simple
there's kind of four main areas one we
start with the width of our road for
automobiles while there's some roads
that we'll need and continue to have to
carry a lot of cars we have many streets
in the city of Jacksonville that no
longer need that number of lanes reduce
them
the other thing we can do is minimize
the width of these lanes to make the
crosswalks a shorter distance and easier
to travel so you don't have to be a
track star to make it across we also
need to modernize our bike system
actually have one as a start
but what's really transformed in our
design community and its global it's not
just United States but we're seeing more
examples that it's necessary to provide
kind of protected bike lanes not just
that three foot strip of pavement that
you're not sure if it's a shoulder or a
place you should be riding your bike my
five year old daughter should be able to
ride her bike on that on that trail on
that system on that road and then
finally we cannot ignore kind of the
land use around there because it's not
just a road problem it's a value problem
but it's also how we create the space
the Mayport Plaza strip mall kind of not
connected to anything not really
promoting walkability the things I need
to educate you on is about my industry
we try and make things easier for
everyone to intuitively understand what
we mean and I'm often reminded by my
family and friends that they have no
idea what I'm talking about
so my education to you is we in our
industry are always seeking public
comment input and interest so what
you need to do for us to really help
move this along is look for a complete
Street project advertisement Lane
elimination study a road diet project a
context-sensitive design or even a
streetscape project all these projects
kind of we coin these different terms
but they all kind of mean the same thing
we're out there trying to reinvent the
space for better walkability and and
bike bikeability
and safety but don't ignore and you
should really the big red flag to put
out there is if you see a capacity
project the red flag on that is the
capacity may be needed
you know the additional capacity may be
needed but the way we're designing our
space now is we are not doing capacity
at the exclusion of people walking and
biking so those first border that I
mentioned you should be really active
and supportive behind and then the
capacity type projects you should be a
little bit skeptical and ready to go
after it if need be this is a political
will problem that we face the
agencies are really making an effort
City of Jacksonville just recently
adopted what I think is a modern and
very good bicycle and pedestrian master
plan the Jacksonville Transportation
Authority has just made Complete Streets
their number one priority to make sure
that they're taking care of their
pedestrian and their transit customers
the Florida Department of Transportation
took great notice because the state of
Florida is far and away the worst state
in the country in this area and so their
way of addressing it is a policy change
and a complete rewrite of all the sacred
Scrolls of design manuals and all that
so they have become a Complete Streets
agency these are all great plans and
great ideas but they're not going to
happen unless we hear get behind that
and make it happen
so finally we the people of Jacksonville
people matter more than cars thank you
you
| 36.881919 | 47 | 0.801901 | eng_Latn | 0.999997 |
a906540522189932b41049d133fb4519c8fa6ae4 | 49 | md | Markdown | README.md | epfl-dlab/laughing-head | 80aa0b88814508fcb7d3f7bd7cb16da08e0188d5 | [
"Apache-2.0"
] | 1 | 2021-12-09T07:30:09.000Z | 2021-12-09T07:30:09.000Z | README.md | epfl-dlab/laughing-head | 80aa0b88814508fcb7d3f7bd7cb16da08e0188d5 | [
"Apache-2.0"
] | null | null | null | README.md | epfl-dlab/laughing-head | 80aa0b88814508fcb7d3f7bd7cb16da08e0188d5 | [
"Apache-2.0"
] | null | null | null | # laughing-head
Code for the laughing head paper
| 16.333333 | 32 | 0.795918 | eng_Latn | 0.995734 |
a906bf773f6e7dad1935b97db02289a1bbeb28d7 | 189 | md | Markdown | README.md | MayanMisfit/eleventy-taunt | 7f794ce126b0ce477bb5c5f59d03e748e629b075 | [
"MIT"
] | null | null | null | README.md | MayanMisfit/eleventy-taunt | 7f794ce126b0ce477bb5c5f59d03e748e629b075 | [
"MIT"
] | null | null | null | README.md | MayanMisfit/eleventy-taunt | 7f794ce126b0ce477bb5c5f59d03e748e629b075 | [
"MIT"
] | null | null | null | # Eleventy Taunt Theme
Custom 11ty portfolio theme for designer & developer blogs.
A live demo can be found here: <a href="https://taunt11ty.netlify.com/">taunt11ty.netlify.com/</a>
| 31.5 | 99 | 0.73545 | eng_Latn | 0.655181 |
a9079ed786a2effec90bfef6613d8d7fa3a47ad0 | 7,159 | md | Markdown | README.md | hjk22/smallLinearSolverBatched | 3d83dd5ce5cd6ce95c3a4d34aeade861729729ed | [
"BSD-3-Clause"
] | null | null | null | README.md | hjk22/smallLinearSolverBatched | 3d83dd5ce5cd6ce95c3a4d34aeade861729729ed | [
"BSD-3-Clause"
] | null | null | null | README.md | hjk22/smallLinearSolverBatched | 3d83dd5ce5cd6ce95c3a4d34aeade861729729ed | [
"BSD-3-Clause"
] | null | null | null | # GPU Linear Solver Small Batched
This project contains functions to solve large quantities of small square linear systems (NxN with N<32, single precision, dense), on GPU though the CUDA programming model.
## Building
The software is written for Linux and built in Release mode through the use of make files (debug building is not supported).
The software requires a number of dependencies to be installed to build:
* gcc version 6.1.0+, lower versions might work but have not been tested.
* CUDA toolkit version 9 or more; lower might work but has not been tested. For optimal performance use CUDA 10+.
A guide on installing CUDA on linux is available here: https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html
Particular care should be given to post installation actions https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#post-installation-actions which are required to make the cuda compiler and libraries visible.
For the tester part (not for the actual solver functions), the following are also needed (code stubbing is required to avoid these dependencies):
* Lapack 3.8.0+, lower versions might work but have not been tested.
* Blas 3.8.0+, lower versions might work but have not been tested.
* gfortran (if using lapack and blas)
* OpenMP. This can easily be avoided by commenting out the few lines of code where it's used.
After the dependencies are installed the following files will need to be edited to match the system configurations:
* Release/objects.mk Here the Lapack, blas and intel64 (if using the Intel version of Lapack and BLAS) library paths are to be redefined. In general, the libraries are defined here, so if something changes regarding them this file needs to be edited accordingly.
* Release/makefile This file needs to be edited to select the correct Nvidia target architecture. Specifically, in line 61, arch= and code= need to be changed according to these guidelines:
https://docs.nvidia.com/cuda/cuda-compiler-driver-nvcc/index.html#options-for-steering-gpu-code-generation .
If OpenMP is to be removed, then also remove the -Xcompiler -fopenmp options from the line (this passes the -fopenmp option to the gcc compiler after nvcc completes compilation of kernels).
* Release/src/subdir.mk This file contains all the files that need to be built and their dependencies, as well as the nvcc compilation invocations. Lapack library paths need to be properly defined here.
The Nvidia target architecture needs to be defined here as well.
To build the program cd to Release and execute
`make clean`
followed by
`make`
The binary will show up as Release/gpulinearsolversmallbatched .
## Using the current tester
The binary can be compiled in two configurations: manual and automatic tester.
In manual mode usage is
`gpulinearsolversmallbatched <matrix size> <number of linear systems> <number of openMP threads for CPU test>`
eg:
`gpulinearsolversmallbatched 4 10000 8`
In automatic mode the tester will generate a file results.csv with all the test results. The test parameters need to be configured from code before build.
## Usage on Galileo supercomputer.
The program needs some modules to be loaded:
`module load autoload git cuda gnu lapack blas`
Clone the repository or copy it someway and cd to Release
`cd Release`
To build use
`make clean`
`make`
To run the program on the GPU nodes use the following command:
`srun --nodes=1 --ntasks-per-node=1 --cpus-per-task=1 --mem=6G --time=20 --partition=gll_usr_gpuprod --gres=gpu:kepler:1 ./gpulinearsolversmallbatched`
adding command line parameters at the end if in manual mode.
Change --cpus-per-task= to increase the number of physical CPU cores available for OpenMP to use.
Change --mem= to increase the maximum usable memory (GPU is limited to a little less than 12G); currently the program is limited by int memory pointers.
Change --time= to set the time limit in minutes of the test. For the autotester 20-30 minutes might be needed. For manual mode 3 minutes is already plenty; this is mostly to ensure that the program does not go into an infinite loop while on a work node.
Change --gres=gpu:kepler: to change the number of GPUs requested (in case of adding multi gpu support).
To perform a profiling run, add `cudaProfilerStart()` and `cudaProfilerStop()` before and after the section of interest, rebuild, and use the following command:
`srun --nodes=1 --ntasks-per-node=1 --cpus-per-task=1 --mem=6G --time=3 --partition=gll_usr_gpuprod --gres=gpu:kepler:1 nvprof --export-profile timeline.prof -f --profile-from-start off --cpu-profiling off ./gpulinearsolversmallbatched 16 1000 1`
This will export the profiling data to ./timeline.prof, which can be imported into NVIDIA Visual Profiler. Visual Profiler can import the file even if running on a Windows machine (use scp to get the file).
## Code structure and configurations
`main` is situated in `testing_sgesv_batched.cpp`, where also all the testing code is situated.
This is the file that needs to be edited to remove lapack, blas and openmp dependencies.
To change the mode to manual tester, the macro MANUAL_TEST needs to be defined in testing_sgesv_batched.cpp; to use the automatic tester, comment out the macro definition.
Various macros with comments are present at the beginning of the file to change the manual test behavior.
The manual test is performed by the function `gpuLinearSolverBatched_tester` while the autotester is performed by `gpuCSVTester` which is the function that needs to be edited to change the parameters of the autotest.
To perform GPU solution the user needs prepare the linear systems in host memory and call `gpuLinearSolverBatched` which is in `linearSolverSLU_batched.cpp`.
This function allocates the device memory and transfers the data, then calls `linearSolverSLU_batched` (same file), which takes pointers to device memory to start executing the different phases.
To perform LU decomposition the function `linearDecompSLU_batched` (in file `linearDecompSLU_batched.cpp`) is called, which in turn calls `magma_sgetrf_batched_smallsq_shfl` (`tinySLUfactorization_batched.cu`).
Only now `magma_sgetrf_batched_smallsq_shfl` executes the kernel `sgetrf_batched_smallsq_shfl_kernel` which contains the main calculations of the program to do the LU factorization.
After the factorization is complete, control returns to `linearSolverSLU_batched`, which then calls `linearSolverFactorizedSLU_batched` (linearSolverFactorizedSLU_batched.cpp).
This function uses various inexpensive helper functions to manipulate the factorized data and obtain the solution of the linear systems, using forward and backward substitutions.
These functions are located in `set_pointer.cu`, `strsv_batched.cu`, `linearSolverFactorizedSLUutils.cu`
For the remaining files, we have `operation_batched.h`, which contains the declarations of most host batched functions listed above; `utils.cpp`, `utilscu.cuh`, and `utils.h` contain utility functions that are used throughout the code.
`testing.h`, `flops.h`, `magma_types.h` instead contain important magma definitions that are used throughout the code.
| 57.272 | 265 | 0.789077 | eng_Latn | 0.997471 |
a909e7bed823c200dba492a7aeb65a7824e4d508 | 3,203 | md | Markdown | includes/virtual-machines-common-classic-agents-and-extensions.md | allanfann/azure-docs.zh-tw | c66e7b6d1ba48add6023a4c08cc54085e3286aa3 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | includes/virtual-machines-common-classic-agents-and-extensions.md | allanfann/azure-docs.zh-tw | c66e7b6d1ba48add6023a4c08cc54085e3286aa3 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | includes/virtual-machines-common-classic-agents-and-extensions.md | allanfann/azure-docs.zh-tw | c66e7b6d1ba48add6023a4c08cc54085e3286aa3 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
author: cynthn
ms.service: virtual-machines
ms.topic: include
ms.date: 10/26/2018
ms.author: cynthn
ms.openlocfilehash: 9158e6bfe07fc5d06b0685d77eff26644b594a8b
ms.sourcegitcommit: 3102f886aa962842303c8753fe8fa5324a52834a
ms.translationtype: MT
ms.contentlocale: zh-TW
ms.lasthandoff: 04/23/2019
ms.locfileid: "61485263"
---
VM extensions can help you:

* Modify security and identity features, such as resetting account values and using antimalware
* Start, stop, or configure monitoring and diagnostics
* Reset or install connectivity features, such as RDP and SSH
* Diagnose, monitor, and manage your VM

There are many other features as well. New VM extensions are released regularly. This article describes the Azure VM agents for Windows and Linux and how they support VM extensions. For a list of VM extensions by feature, see [Azure VM extensions and features](../articles/virtual-machines/extensions/features-windows.md).
## <a name="azure-vm-agents-for-windows-and-linux"></a>Azure VM agents for Windows and Linux
The Azure virtual machine agent (VM agent) is a secure, lightweight process that installs, configures, and removes VM extensions on Azure virtual machine instances. The VM agent acts as the secure local control service for the Azure VM. The extensions that the agent loads provide specific features to increase your productivity when using the instance.

Two Azure VM agents exist: one for Windows VMs and one for Linux VMs.

If you want a virtual machine instance to use one or more VM extensions, the instance must have the VM agent installed. Virtual machine images created by using the Azure portal and images from the **Marketplace** install the VM agent automatically during the creation process. If a virtual machine instance doesn't have the VM agent, you can install it after the instance is created, or you can include the agent in a custom VM image that you upload later.
> [!IMPORTANT]
> These VM agents are very lightweight services that enable secure management of virtual machine instances. There might be cases where you don't want to use the VM agent. In that case, be sure to create the VM without installing the VM agent by using the Azure CLI or PowerShell. Although it's physically possible to remove the VM agent, the behavior of VM extensions on the instance is then undefined, so removing an installed VM agent isn't supported.
>
The VM agent is enabled in the following situations:
* When you create a VM instance by using the Azure portal and selecting an image from the **Marketplace**.
* When you create a VM instance by using the [New-AzureVM](https://msdn.microsoft.com/library/azure/dn495254.aspx) or [New-AzureQuickVM](https://msdn.microsoft.com/library/azure/dn495183.aspx) cmdlet. You can create a VM without the VM agent by adding the **–DisableGuestAgent** parameter to the [Add-AzureProvisioningConfig](https://msdn.microsoft.com/library/azure/dn495299.aspx) cmdlet.
* When you manually download and install the VM agent on an existing VM instance and set the **ProvisionGuestAgent** value to **true**. You can use this technique for both the Windows and the Linux agent, through PowerShell commands or REST calls. (If you don't set the **ProvisionGuestAgent** value after manually installing the VM agent, the added VM agent isn't detected correctly.) The following code example shows how to do this with PowerShell, where the `$svc` and `$name` arguments have already been determined:
$vm = Get-AzureVM –ServiceName $svc –Name $name
$vm.VM.ProvisionGuestAgent = $TRUE
Update-AzureVM –Name $name –VM $vm.VM –ServiceName $svc
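The reverse case, creating a classic VM without the agent by using the **–DisableGuestAgent** switch mentioned above, looks roughly like this. This is a sketch only; the image, cloud service, and credential names are placeholders:

```powershell
# Hypothetical names; substitute your own image, cloud service, and credentials
New-AzureVMConfig -Name 'MyVM' -InstanceSize 'Small' -ImageName $imageName |
    Add-AzureProvisioningConfig -Windows -AdminUsername $user -Password $password -DisableGuestAgent |
    New-AzureVM -ServiceName 'MyService' -Location 'West US'
```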
* When you create a VM image that includes an installed VM agent. After an image with the VM agent exists, you can upload that image to Azure. For a Windows VM, download the [Windows VM Agent .msi file](https://go.microsoft.com/fwlink/?LinkID=394789) and install the VM agent. For a Linux VM, install the VM agent from the GitHub repository at <https://github.com/Azure/WALinuxAgent>. For more information about how to install the VM agent on Linux, see the [Azure Linux VM Agent user guide](../articles/virtual-machines/extensions/agent-linux.md).
> [!NOTE]
> In PaaS, the VM agent is called **WindowsAzureGuestAgent**, and it's available in both web and worker role VMs. (For more information, see [Azure role architecture](https://blogs.msdn.com/b/kwill/archive/2011/05/05/windows-azure-role-architecture.aspx).) The VM agent for role VMs can now add extensions to cloud service VMs in the same way that it does for persistent virtual machines. The biggest difference between VM extensions on role VMs and persistent VMs is when the VM extension is added. With role VMs, extensions are added first to the cloud service and then to the deployments within that cloud service.
>
> Use the [Get-AzureServiceAvailableExtension](https://msdn.microsoft.com/library/azure/dn722498.aspx) cmdlet to list all available role VM extensions.
>
>
## <a name="find-add-update-and-remove-vm-extensions"></a>Find, add, update, and remove VM extensions
For more information about these tasks, see [Add, find, update, and remove Azure VM extensions](../articles/virtual-machines/windows/classic/manage-extensions.md?toc=%2fazure%2fvirtual-machines%2fwindows%2fclassic%2ftoc.json).
| 57.196429 | 370 | 0.778021 | yue_Hant | 0.994542 |
a90a796ab5536b8a7eba042a90e64c9492c94401 | 496 | md | Markdown | README.md | vitaly-t/dev-distribution | b12123ee0d41b44c580c2a8630c7bfd99be5aa93 | [
"MIT"
] | null | null | null | README.md | vitaly-t/dev-distribution | b12123ee0d41b44c580c2a8630c7bfd99be5aa93 | [
"MIT"
] | null | null | null | README.md | vitaly-t/dev-distribution | b12123ee0d41b44c580c2a8630c7bfd99be5aa93 | [
"MIT"
] | null | null | null | Calculate distribution rate of [Dev]('https://devtoken.rocks/')
[](https://travis-ci.org/frame00/dev-distribution)
# Getting Started
How to install:
```bash
git clone git@github.com:frame00/dev-distribution.git
cd dev-distribution
npm i
```
# How to Calculate
Distribution rate calculation:
```bash
npm run calc 2018-07-20 2018-08-19 1000000
```
`npm run calc <start date> <end date> <total distributions>`
| 20.666667 | 131 | 0.741935 | eng_Latn | 0.326334 |
a90a8542abc9e8e7d95bedeaacdc157bd759f218 | 4,782 | md | Markdown | Instructions/Labs/LAB_05_Making_changes_by_using_CIM_and_WMI.md | MicrosoftLearning/AZ-040T00-Automating-Administration-with-PowerShell.ja-jp | cfe7e0c5cbd0c3ad7f73b68cc874a46c3ac9eee3 | [
"MIT"
] | null | null | null | Instructions/Labs/LAB_05_Making_changes_by_using_CIM_and_WMI.md | MicrosoftLearning/AZ-040T00-Automating-Administration-with-PowerShell.ja-jp | cfe7e0c5cbd0c3ad7f73b68cc874a46c3ac9eee3 | [
"MIT"
] | 2 | 2022-03-23T03:31:34.000Z | 2022-03-31T02:15:21.000Z | Instructions/Labs/LAB_05_Making_changes_by_using_CIM_and_WMI.md | MicrosoftLearning/AZ-040T00-Automating-Administration-with-PowerShell.ja-jp | cfe7e0c5cbd0c3ad7f73b68cc874a46c3ac9eee3 | [
"MIT"
] | null | null | null | ---
lab:
title: 'ラボ: WMI と CIM を使って情報についてのクエリを実行する'
module: 'Module 5: Querying management information by using CIM and WMI'
ms.openlocfilehash: 29a429a264ef6b6b61ca69ce9a3ca90644e89f32
ms.sourcegitcommit: a95a9bb3a7919b785df0574c3407f4b6c3bea9f5
ms.translationtype: HT
ms.contentlocale: ja-JP
ms.lasthandoff: 11/08/2021
ms.locfileid: "132116732"
---
# <a name="lab-querying-information-by-using-wmi-and-cim"></a>Lab: Querying information by using WMI and CIM

## <a name="scenario"></a>Scenario

You need to query management information on multiple computers. You'll begin by querying the local computer and one test computer in your environment.

## <a name="objectives"></a>Objectives

After completing this lab, you'll be able to:

- Query information by using Windows Management Instrumentation (WMI) commands.
- Query information by using Common Information Model (CIM) commands.
- Invoke methods by using WMI and CIM commands.

## <a name="estimated-time-45-minutes"></a>Estimated time: 45 minutes

## <a name="lab-setup"></a>Lab setup

Virtual machines: **AZ-040T00A-LON-DC1**, **AZ-040T00A-LON-CL1**

User name: **Adatum\\Administrator**

Password: **Pa55w.rd**

For this lab, you'll use the available virtual machine environment. Before you begin the lab, complete the following steps:

1. Open **LON-DC1**, and then sign in as **Adatum\\Administrator** with the password **Pa55w.rd**.
1. Repeat step 1 for **LON-CL1**.
## <a name="exercise-1-querying-information-by-using-wmi"></a>Exercise 1: Querying information by using WMI

### <a name="scenario-1"></a>Scenario 1

In this exercise, you'll discover repository classes and query them by using WMI commands.

The main tasks for this exercise are to:

1. Query IP addresses.
1. Query operating system version information.
1. Query computer system hardware information.
1. Query service information.

### <a name="task-1-query-ip-addresses"></a>Task 1: Query IP addresses

1. On **LON-CL1**, start Windows PowerShell as administrator.
1. IP addresses are part of a network adapter's configuration. Use the keyword **configuration** to find a repository class that lists the IP addresses in use on the local computer.
1. Using WMI commands and the class you discovered in the previous step, display the list of statically configured IP addresses.
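One possible solution for this task is sketched below; the class you discover may differ, but on most Windows systems it's `Win32_NetworkAdapterConfiguration`:

```powershell
# Search the repository for classes whose names mention "configuration"
Get-WmiObject -List *configuration*

# Display the configured IP addresses on the local computer
Get-WmiObject -Class Win32_NetworkAdapterConfiguration -Filter "IPEnabled = 'True'" |
    Select-Object -Property Description, IPAddress
```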
### <a name="task-2-query-operating-system-version-information"></a>Task 2: Query operating system version information

1. Use the keyword **operating** to find a repository class that lists operating system version information. Sort it by the **name** property.
1. Display a list of the properties of the class you discovered in the previous step.
1. Note the properties that contain the operating system version, the service pack major version, and the operating system build number.
1. Using WMI commands and the class you found in step 1, display the local operating system version, service pack major version, and operating system build number.

### <a name="task-3-query-computer-system-hardware-information"></a>Task 3: Query computer system hardware information

1. Use the keyword **system** to find a repository class that contains computer system information.
1. Display a list of the properties and property values of the class you discovered in the previous step.
1. Using the property list and WMI commands, display the manufacturer, model, and total physical memory of the local computer. Label the total physical memory column **RAM**.

### <a name="task-4-query-service-information"></a>Task 4: Query service information

1. Use the keyword **service** to find a repository class that contains service information.
1. Display a list of the properties and property values of the class you discovered in the previous step.
1. Using the property list and WMI commands, display the service name, state (running or stopped), and sign-in name of all services whose names begin with **S**.
### <a name="scenario-2"></a>シナリオ 2
この演習では、新しいリポジトリ クラスを検出し、CIM コマンドを使用してクエリを実行します。
この演習の主なタスクは次のとおりです。
1. ユーザー アカウントのクエリを実行する。
1. BIOS 情報のクエリを実行する。
1. ネットワーク アダプターの構成情報のクエリを実行する。
1. ユーザー グループ情報のクエリを実行する。
### <a name="task-1-query-user-accounts"></a>タスク 1: ユーザー アカウントのクエリを実行する
1. CIM コマンドとキーワード **user** を使用して、ユーザー アカウントを一覧表示するリポジトリ クラスを検索します。
1. CIM コマンドを使って、前の手順で検出したクラスのプロパティの一覧を表示します。
1. CIM コマンドとプロパティ リストを使用して、テーブル内のユーザー アカウントの一覧を表示します。 アカウント キャプション、ドメイン、セキュリティ ID、フル ネーム、名前の列を含めます。 一部またはすべてのアカウントで、フル ネームの列が空白になる場合があります。
### <a name="task-2-query-bios-information"></a>タスク 2: BIOS 情報のクエリを実行する
1. キーワード **bios** と CIM コマンドを使用して、BIOS 情報を含むリポジトリ クラスを検索します。
1. CIM コマンドと、前の手順で検出したクラスを使用して、使用可能なすべての BIOS 情報の一覧を表示します。
### <a name="task-3-query-network-adapter-configuration-information"></a>タスク 3: ネットワーク アダプターの構成情報のクエリを実行する
1. CIM コマンドを使用して、`Win32_NetworkAdapterConfiguration` クラスのすべてのローカル インスタンスを表示します。
1. CIM コマンドを使用して、**LON-DC1** に存在する `Win32_NetworkAdapterConfiguration` クラスのすべてのインスタンスを表示します。
### <a name="task-4-query-user-group-information"></a>タスク 4: ユーザー グループ情報のクエリを実行する
1. CIM コマンドとキーワード **group** を使用して、ユーザー グループを一覧表示するクラスを検索します。
1. CIM コマンドを使用して、**LON-DC1** に存在するユーザー グループの一覧を表示します。
## <a name="exercise-3-invoking-methods"></a>Exercise 3: Invoking methods

### <a name="scenario-3"></a>Scenario 3

In this exercise, you'll invoke methods on repository objects by using WMI and CIM commands.

The main tasks for this exercise are to:

1. Invoke a CIM method.
1. Invoke a WMI method.

### <a name="task-1-invoke-a-cim-method"></a>Task 1: Invoke a CIM method

- Using CIM commands and the `Reboot` method of `Win32_OperatingSystem`, restart **LON-DC1** remotely from **LON-CL1**.

### <a name="task-2-invoke-a-wmi-method"></a>Task 2: Invoke a WMI method

1. Use the `Get-Service` cmdlet to check the **StartType** property of the **WinRM** service.
1. Using WMI commands and the `ChangeStartMode` method of `Win32_Service`, change the start mode of the **WinRM** service to **Automatic**.
1. Use the `Get-Service` cmdlet to verify that the **StartType** property of the **WinRM** service has been updated to **Automatic**.
a90a86d4c411eddbce6e9c8a5c3cd6b0b8e84c0d | 321 | md | Markdown | packages/cli/readme.md | pelletier197/magidocql | b73bad46f39793b1b230b86eccca97823f5736eb | [
"MIT"
] | 2 | 2022-02-26T15:19:20.000Z | 2022-02-27T00:08:47.000Z | packages/cli/readme.md | pelletier197/magidocql | b73bad46f39793b1b230b86eccca97823f5736eb | [
"MIT"
] | null | null | null | packages/cli/readme.md | pelletier197/magidocql | b73bad46f39793b1b230b86eccca97823f5736eb | [
"MIT"
] | null | null | null | # Magidoc CLI
Magidoc CLI is a NodeJS CLI application that can be used to generate beautiful and fully customizable GraphQL documentation websites from scratch.
## Documentation
Read our [documentation](https://magidoc-org.github.io/magidoc/introduction/welcome) for more details about how to get started with Magidoc. | 64.2 | 147 | 0.813084 | eng_Latn | 0.971805 |
a90b42d53e0cca4d8ce31efc945308698a341b11 | 641 | md | Markdown | reactor/README.md | spy16/fusion | f50f2731eed85b33d78cf08272c59b064f62ca5e | [
"MIT"
] | 17 | 2020-07-21T13:52:14.000Z | 2021-06-30T20:38:34.000Z | reactor/README.md | spy16/fusion | f50f2731eed85b33d78cf08272c59b064f62ca5e | [
"MIT"
] | null | null | null | reactor/README.md | spy16/fusion | f50f2731eed85b33d78cf08272c59b064f62ca5e | [
"MIT"
] | 1 | 2020-12-25T09:40:23.000Z | 2020-12-25T09:40:23.000Z | # Reactor
`reactor` is a stream processing tool built using [💥 fusion](https://github.com/spy16/fusion).
## Usage
1. Install `reactor` using `go get -u -v github.com/spy16/fusion/reactor`
2. Create a config file `my_config.json` by referring to config files in [samples](./samples)
3. Run `reactor -config my_config.json`. When you run this, `reactor` will:
   1. Parse the proto message definition and create a message descriptor.
   2. Connect to the Kafka cluster and subscribe to the given topic name.
   3. Parse every message body as protobuf using the descriptor created in step 1, and log the
      JSON-formatted version to `stdout`.
| 42.733333 | 113 | 0.733229 | eng_Latn | 0.975876 |
a90bd788412953ab8551f7ef585d179a0343a88a | 1,370 | md | Markdown | langs/ko-kr/tutorials/stores_nested_reactivity/lesson.md | lechuckroh/solid-docs | de63c17ee666537f644520ac6cf99acd74356af0 | [
"MIT"
] | null | null | null | langs/ko-kr/tutorials/stores_nested_reactivity/lesson.md | lechuckroh/solid-docs | de63c17ee666537f644520ac6cf99acd74356af0 | [
"MIT"
] | null | null | null | langs/ko-kr/tutorials/stores_nested_reactivity/lesson.md | lechuckroh/solid-docs | de63c17ee666537f644520ac6cf99acd74356af0 | [
"MIT"
] | null | null | null | Solid에서 세분화된 반응성을 제공할 수 있는 이유 중 하나는 중첩된 업데이트를 독립적으로 처리할 수 있기 때문입니다.
사용자 리스트가 있고 이 중의 한 사용자 이름을 업데이트한다고 했을 때, Solid는 리스트 자체의 내용을 비교하지 않으면서 DOM 에 있는 이름의 위치만 업데이트합니다. UI 프레임워크 중에서 이런게 가능한 프레임워크는 거의 없습니다. 심지어 리액티프 프레임워크라고 하더라도 말이죠.
하지만 어떻게 이를 가능하게 했을까요? 이 예제에서는 todo 리스트를 가진 시그널이 있습니다. todo를 완료 상태로 설정하기 위해, todo의 복제를 사용해 기존 항목을 교체합니다. 대부분의 프레임워크에서 이런 방법을 사용하지만, `console.log`에서 볼 수 있듯이 리스트를 비교해서 DOM 엘리먼트를 다시 생성해야하기 때문에 비효율적입니다.
```js
const toggleTodo = (id) => {
setTodos(
todos().map((todo) => (todo.id !== id ? todo : { ...todo, completed: !todo.completed })),
);
};
```
In contrast, a fine-grained library like Solid initializes the data with nested signals, like this:
```js
const addTodo = (text) => {
const [completed, setCompleted] = createSignal(false);
setTodos([...todos(), { id: ++todoId, text, completed, setCompleted }]);
};
```
Now we can update the completed state by calling `setCompleted`, with no extra diffing. This is possible because we have moved the complexity from the view into the data, so Solid knows exactly how the data changes.
```js
const toggleTodo = (id) => {
const index = todos().findIndex((t) => t.id === id);
const todo = todos()[index];
if (todo) todo.setCompleted(!todo.completed())
}
```
Now, if you change the remaining `todo.completed` references to `todo.completed()`, the example's `console.log` runs only on creation, not when a todo is toggled.
This approach requires manual mapping, and until now it was the only option available. But thanks to proxies, most of this work can now be done behind the scenes without manual intervention. We'll see how in the next tutorial.
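To see why the nested-signal version avoids re-running any list-level work, here is a stand-alone toy sketch of a signal and an effect. This is illustrative only — it is **not** Solid's actual implementation:

```javascript
// Toy sketch of a fine-grained signal — illustrative only, NOT Solid's real implementation.
let currentObserver = null;

function createSignal(value) {
  const subscribers = new Set();
  const read = () => {
    if (currentObserver) subscribers.add(currentObserver);
    return value;
  };
  const write = (next) => {
    value = next;
    subscribers.forEach((fn) => fn());
  };
  return [read, write];
}

function createEffect(fn) {
  currentObserver = fn;
  fn(); // first run subscribes the effect to whatever signals it reads
  currentObserver = null;
}

// Each todo's completed state is its own signal:
const [completed, setCompleted] = createSignal(false);

let effectRuns = 0;
createEffect(() => {
  effectRuns++;
  completed(); // this effect depends only on this one signal
});

setCompleted(true); // re-runs just this effect — no list diffing anywhere
console.log(effectRuns); // 2 (initial run + one update)
```

Only the effect that read `completed()` re-runs when `setCompleted` is called; nothing else in the "list" would need to be touched.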
| 39.142857 | 197 | 0.681022 | kor_Hang | 1.00001 |
a90c412b542d436219be42cd6a551acbd7f37a42 | 750 | md | Markdown | README.md | runzezhang/React-Native-App-with-Redux-Axios-React-Navigation | 4bd8136e22244080885ca90dee2b48909dd3f231 | [
"MIT"
] | null | null | null | README.md | runzezhang/React-Native-App-with-Redux-Axios-React-Navigation | 4bd8136e22244080885ca90dee2b48909dd3f231 | [
"MIT"
] | null | null | null | README.md | runzezhang/React-Native-App-with-Redux-Axios-React-Navigation | 4bd8136e22244080885ca90dee2b48909dd3f231 | [
"MIT"
] | null | null | null | # Complete React Native App Demo
This is a complete demo app showing developers how to use Redux, Axios, and React Navigation to build a React Native app with Expo.
# Functions
1. Register / Login / Logout
2. CRUD functions
3. Multi-input forms: text, textarea, single select, multi select, checkbox, etc.
4. Pull-to-Refresh List
5. Signature, Image View / Upload
6. Map View
7. Tab / Stack Navigation
8. Axios HTTP Requests
9. Redux State Management
10. Multi-Language
11. Rating
12. Payment
# Components
1. React Native
2. React Navigation
3. Redux
4. Axios
5. react-native-calendars
6. react-native-datepicker
7. react-native-easy-toast
8. react-native-elements
9. react-native-picker-select
10. react-native-ratings
11. react-navigation-collapsible
12. tipsi-stripe
| 25.862069 | 125 | 0.772 | eng_Latn | 0.629043 |
a90c7fafc2eff3b6f542aa0cd1957aa3058f66c3 | 2,595 | md | Markdown | README.md | cncjs/cncjs-pendant-raspi-gpio | 42a535aece710fecccbd9d7e2a27abbf937f2aa2 | [
"MIT"
] | 8 | 2018-04-14T21:38:47.000Z | 2021-08-21T00:38:54.000Z | README.md | cncjs/cncjs-pendant-raspi-gpio | 42a535aece710fecccbd9d7e2a27abbf937f2aa2 | [
"MIT"
] | 1 | 2021-04-26T21:54:48.000Z | 2021-04-26T21:54:48.000Z | README.md | cncjs/cncjs-pendant-raspi-gpio | 42a535aece710fecccbd9d7e2a27abbf937f2aa2 | [
"MIT"
] | 8 | 2018-02-26T01:00:50.000Z | 2021-08-01T18:36:37.000Z | # cncjs-pendant-raspi-gpio
Simple Raspberry Pi GPIO Pendant control for CNCjs.
[](https://nodei.co/npm/cncjs-pendant-raspi-gpio/)

## Installation
#### NPM Install (local)
```
npm install cncjs-pendant-raspi-gpio
```
#### NPM Install (global) [Recommended]
```
sudo npm install -g cncjs-pendant-raspi-gpio@latest --unsafe-perm --build-from-source
```
#### Manual Install
```
# Clone Repository
cd ~/
#wget https://github.com/cncjs/cncjs-pendant-raspi-gpio/archive/master.zip
#unzip master.zip
git clone https://github.com/cncjs/cncjs-pendant-raspi-gpio.git
cd cncjs-pendant-raspi-gpio*
npm install
```
## Usage
Run `bin/cncjs-pendant-raspi-gpio` to start. Pass --help to `cncjs-pendant-raspi-gpio` for more options.
Examples:
```
bin/cncjs-pendant-raspi-gpio --help
node bin/cncjs-pendant-raspi-gpio --port /dev/ttyUSB0
```
#### Auto Start
###### Install [Production Process Manager [PM2]](http://pm2.io)
```
# Install PM2
sudo npm install -g pm2
# Setup PM2 Startup Script
# sudo pm2 startup # To Start PM2 as root
pm2 startup # To start PM2 as pi / current user
#[PM2] You have to run this command as root. Execute the following command:
sudo env PATH=$PATH:/usr/bin /usr/lib/node_modules/pm2/bin/pm2 startup systemd -u pi --hp /home/pi
# Start CNCjs (on port 8000, /w Tinyweb mount point) with PM2
## pm2 start ~/.cncjs/cncjs-pendant-raspi-gpio/bin/cncjs-pendant-raspi-gpio -- --port /dev/ttyUSB0
pm2 start $(which cncjs-pendant-raspi-gpio) -- --port /dev/ttyUSB0
# Set current running apps to startup
pm2 save
# Get list of PM2 processes
pm2 list
```
#### Button Presses
1. G-Code: M9
2. G-Code: M8
3. G-Code: M7
4. G-Code: $X "Unlock"
5. G-Code: $X "Unlock"
6. G-Code: $SLP "Sleep"
7. G-Code: $SLP "Sleep"
8. G-Code: $H "Home"
#### Press & Hold
- 3 Sec: sudo poweroff "Shutdown"
## Wiring
See the [fivdi/onoff](https://www.npmjs.com/package/onoff) Raspberry Pi GPIO NodeJS repository for more infomation.





| 29.488636 | 120 | 0.720617 | kor_Hang | 0.172414 |
a90d0519a5037409cdbe35b37e9b9992b64f8524 | 9,335 | md | Markdown | articles/event-grid/move-system-topics-across-regions.md | niklasloow/azure-docs.sv-se | 31144fcc30505db1b2b9059896e7553bf500e4dc | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/event-grid/move-system-topics-across-regions.md | niklasloow/azure-docs.sv-se | 31144fcc30505db1b2b9059896e7553bf500e4dc | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/event-grid/move-system-topics-across-regions.md | niklasloow/azure-docs.sv-se | 31144fcc30505db1b2b9059896e7553bf500e4dc | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Move Azure Event Grid system topics to another region
description: This article shows you how to move Azure Event Grid system topics from one region to another.
ms.topic: how-to
ms.custom: subject-moving-resources
ms.date: 08/28/2020
ms.openlocfilehash: eb6029b206e7d47789371ee81e75c4e05c69ee65
ms.sourcegitcommit: 656c0c38cf550327a9ee10cc936029378bc7b5a2
ms.translationtype: MT
ms.contentlocale: sv-SE
ms.lasthandoff: 08/28/2020
ms.locfileid: "89087201"
---
# <a name="move-azure-event-grid-system-topics-to-another-region"></a>Move Azure Event Grid system topics to another region
You might want to move your resources to another region for a number of reasons. For example, to take advantage of a new Azure region, to meet internal policy and governance requirements, or in response to capacity-planning requirements.
Here are the high-level steps covered in this article:
- **Export the resource group** that contains the Azure Storage account and the associated system topic to a Resource Manager template. You can also export a template for the system topic only. If you take this route, remember to move the Azure event source (in this example, an Azure Storage account) to the other region before moving the system topic. Then, in the exported template for the system topic, update the external ID of the storage account in the target region.
- **Modify the template** to add the `endpointUrl` property pointing to a webhook that subscribes to the system topic. When the system topic is exported, its subscription (in this case, a webhook) is exported to the template as well, but the `endpointUrl` property isn't included. So you must update it to point to the endpoint that subscribes to the topic. Also update the value of the `location` property to the new location or region. For other types of event handlers, you only need to update the location.
- **Use the template to deploy resources** to the target region. You specify names for the storage account and the system topic to be created in the target region.
- **Verify the deployment**. Verify that the webhook is invoked when you upload a file to the blob storage in the target region.
- **To complete the move**, delete the resources (the event source and the system topic) from the source region.
## <a name="prerequisites"></a>Prerequisites
- Complete the [Quickstart: Route Blob storage events to a web endpoint with the Azure portal](blob-event-quickstart-portal.md) in the source region. This step is **optional**; do it if you want to test the steps in this article. Keep the storage account in a resource group separate from the App Service and App Service plan.
- Make sure that the Event Grid service is available in the target region. See [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=event-grid&regions=all).
## <a name="prepare"></a>Prepare
To get started, export a Resource Manager template for the resource group that contains the system event source (the Azure Storage account) and the associated system topic.
1. Sign in to the [Azure portal](https://portal.azure.com).
1. Select **Resource groups** on the left menu. Then select the resource group that contains the event source for which the system topic was created. In the following example, it's the **Azure Storage** account. The resource group contains the storage account and the associated system topic.
    :::image type="content" source="./media/move-system-topics-across-regions/resource-group-page.png" alt-text="Resource group page":::
3. On the left menu, select **Export template** under **Settings**, and then select **Download** on the toolbar.
    :::image type="content" source="./media/move-system-topics-across-regions/export-template-menu.png" alt-text="Storage account - export template":::
5. Locate the **.zip** file that you downloaded from the portal, and unzip the file to a folder of your choice. This zip file contains the template and parameters JSON files.
1. Open **template.json** in an editor of your choice.
1. The URL of the webhook isn't exported to the template. So, follow these steps:
    1. In the template file, search for **webhook**.
    1. In the **properties** section, add a comma (`,`) character at the end of the last line. In this example, it's `"preferredBatchSizeInKilobytes": 64`.
    1. Add the `endpointUrl` property with its value set to your webhook URL, as shown in the following example.
```json
"destination": {
"properties": {
"maxEventsPerBatch": 1,
"preferredBatchSizeInKilobytes": 64,
"endpointUrl": "https://mysite.azurewebsites.net/api/updates"
},
"endpointType": "WebHook"
}
```
> [!NOTE]
> For other types of event handlers, all the properties are exported to the template. You only need to update the `location` property to the target region, as shown in the next step.
7. Update the `location` of the **storage account** resource to the target region or location. To get location codes, see [Azure locations](https://azure.microsoft.com/global-infrastructure/locations/). The code for a region is the region name with no spaces; for example, `West US` is equal to `westus`.
```json
"type": "Microsoft.Storage/storageAccounts",
"apiVersion": "2019-06-01",
"name": "[parameters('storageAccounts_spegridstorage080420_name')]",
"location": "westus",
```
8. Repeat the step to update the `location` of the **system topic** resource in the template.
```json
"type": "Microsoft.EventGrid/systemTopics",
"apiVersion": "2020-04-01-preview",
"name": "[parameters('systemTopics_spegridsystopic080420_name')]",
"location": "westus",
```
1. **Save** the template.
## <a name="recreate"></a>Recreate
Deploy the template to create a storage account and a system topic for the storage account in the target region.
1. In the Azure portal, select **Create a resource**.
2. In **Search the Marketplace**, type **template deployment**, and then press **ENTER**.
3. Select **Template deployment**.
4. Select **Create**.
5. Select **Build your own template in the editor**.
6. Select **Load file**, and then follow the instructions to load the **template.json** file that you downloaded in the last section.
7. Select **Save** to save the template.
8. On the **Custom deployment** page, follow these steps.
    1. Select an Azure **subscription**.
    1. Select an existing **resource group** in the target region, or create one.
    1. For **Region**, select the target region. If you selected an existing resource group, this setting is read-only.
    1. For **System Topic Name**, enter a name for the system topic to be associated with the storage account.
    1. For **Storage Account Name**, enter a name for the storage account to be created in the target region.
        :::image type="content" source="./media/move-system-topics-across-regions/deploy-template.png" alt-text="Deploy Resource Manager template":::
5. Select **Review + create** at the bottom of the page.
1. On the **Review + create** page, review the settings, and then select **Create**.
## <a name="verify"></a>Verify
1. After the deployment succeeds, select **Go to resource group**.
1. On the **Resource group** page, verify that the event source (in this example, an Azure Storage account) and the system topic are created.
1. Upload a file to a container in the Azure Blob storage, and verify that the webhook has received the event. For more information, see [Send an event to your endpoint](blob-event-quickstart-portal.md#send-an-event-to-your-endpoint).
## <a name="discard-or-clean-up"></a>Ta bort eller rensa
Ta bort resurs gruppen som innehåller lagrings kontot och det associerade system avsnittet i käll regionen för att slutföra flyttningen.
Om du vill börja om tar du bort resurs gruppen i mål regionen och upprepar stegen i avsnitten [förbereda](#prepare) och [Återskapa](#recreate) i den här artikeln.
Så här tar du bort en resurs grupp (källa eller mål) med hjälp av Azure Portal:
1. I fönstret Sök högst upp i Azure Portal, Skriv **resurs grupper**och välj **resurs grupper** från Sök resultat.
2. Välj den resurs grupp som ska tas bort och välj **ta bort** från verktygsfältet.
:::image type="content" source="./media/move-system-topics-across-regions/delete-resource-group-button.png" alt-text="Ta bort resursgrupp":::
3. På sidan bekräftelse anger du namnet på resurs gruppen och väljer **ta bort**.
## <a name="next-steps"></a>Nästa steg
Du har lärt dig hur du flyttar en Azure Event-källa och dess associerade system-avsnitt från en region till en annan region. I följande artiklar finns information om hur du flyttar anpassade ämnen, domäner och partners namn områden i olika regioner.
- [Flytta anpassade ämnen mellan regioner](move-custom-topics-across-regions.md).
- [Flytta domäner mellan regioner](move-domains-across-regions.md).
- [Flytta namn rymder för partner över regioner](move-partner-namespaces-across-regions.md).
Mer information om hur du flyttar resurser mellan regioner och haveri beredskap i Azure finns i följande artikel: [Flytta resurser till en ny resurs grupp eller prenumeration](../azure-resource-manager/management/move-resource-group-and-subscription.md).
| 75.282258 | 523 | 0.749652 | swe_Latn | 0.999175 |
a90ddf0d9ad4e5caf369b256369b93a90982e82c | 4,260 | md | Markdown | bio.md | julbinb/julbinb.github.io | f3906955a9bd808772638d995f29d9db0630c06f | [
"MIT"
] | null | null | null | bio.md | julbinb/julbinb.github.io | f3906955a9bd808772638d995f29d9db0630c06f | [
"MIT"
] | null | null | null | bio.md | julbinb/julbinb.github.io | f3906955a9bd808772638d995f29d9db0630c06f | [
"MIT"
] | null | null | null | ---
layout: category
title: Bio
---
## Education
I received both BS and MS from the {{site.data.links.mdlinks.sfedu}}
(former Rostov State University), Rostov-on-Don, Russia.
My department, Faculty of Mathematics, Mechanics and Computer Science (MMCS),
is now called [I. I. Vorovich Institute for Mathematics, Mechanics and Computer Science]({{site.data.links.places.mmcs.link}}).
* **MS in Computer Science (2012–2014)**
Thesis: Модель концептов в императивном языке программирования (A model of concepts for an imperative programming language).
{% include link-button.html name="thesis PDF (in Russian)" link="files/thesis/belyakova-MS-2014_net-concepts.pdf" small="true" %}
{% include link-button.html name="slides PDF (in Russian)" link="files/thesis/belyakova-MS-2014_net-concepts-slides.pdf" small="true" %}
* **BS in Computer Science (2008–2012)**
Thesis: Автоматическое построение ограничений в модельном языке программирования с шаблонами функций и автовыводом типов (Automatic constraints collection in
a programming language with generic functions and type inference).
{% include link-button.html name="thesis PDF (in Russian)" link="files/thesis/belyakova-BS-2012_PollyTL.pdf" small="true" %}
{% include link-button.html name="slides PDF (in Russian)" link="files/thesis/belyakova-BS-2012_PollyTL-slides.pdf" small="true" %}
## Employment
* *Research Assistant.*
Faculty of Information Technology ([FIT]({{site.data.links.places.fitcvut.link}})),
Czech Technical University in Prague ([CVUT]({{site.data.links.places.cvut.link}})).
September 2017−July 2018.
* *Research Assistant.*
Khoury College of Computer Sciences ([Khoury]({{site.data.links.places.khoury.link}})),
Northeastern University ([NEU]({{site.data.links.places.neu.link}})).
January−July 2017.
* *Teaching Assistant, Lecturer.*
[Department of Computer Science and Computational Experiment](http://sfedu.ru/www/rsu$elements$.info?p_es_id=2001100000000),
I. I. Vorovich Institute for Mathematics, Mechanics and Computer Science ([MMCS]({{site.data.links.places.mmcs.link}})),
Southern Federal University ([SFedU]({{site.data.links.places.sfedu.link}})).
2014−2016.
* *Part Time Programmer.* Angstrem-SFedU Laboratory.
2012−2013.
## Bio
> My actual name in Russian is _Юлия_, transliterated as _Yulia_.
> I have been using _Julia_ as my professional name because
> it seems easier to pronounce, but I equally like both.
---
I was born in 1991 in the city of [Rostov-on-Don](https://en.wikipedia.org/wiki/Rostov-on-Don), Russia, where I grew up and spent most of my life so far.
In 2008, I finished high school and started a CS undergraduate program at the
{{site.data.links.mdlinks.mmcs}} ({{site.data.links.mdlinks.sfedu}}).
During my undergrad, I got involved in programming languages research
and teaching, thanks to my advisor
[Stanislav Mikhalkovich]({{site.data.links.people.ssm.link}}).
(Stanislav is also the main person behind the teaching programming language
and IDE [PascalABC.NET]({{site.data.links.websites.pascalabc}}),
which is not to be confused with the old Pascal.)
I was lucky to have studied and then worked with several great faculty
who were interested in PL. Special thanks to
[Vitaly Bragilevsky]({{site.data.links.people.vitaly.link}}),
{{site.data.links.mdlinks.artem}},
and [Stanislav Mikhalkovich]({{site.data.links.people.ssm.link}}).
From 2012 to 2016, I was happily teaching undergraduate CS courses at my alma mater,
{{site.data.links.mdlinks.mmcs}}.
While teaching half-time, I also entered a PhD program, although I
later started anew at [Northeastern]({{site.data.links.places.neu.link}}).
(Feel free to ask me about this.)
At the [ECOOP conference in 2016](https://2016.ecoop.org/), I met
{{site.data.links.mdlinks.janvitek}}, who later became my PhD advisor.
I did a research internship with him in Boston in 2017 and then spent a year in
[Prague](https://en.wikipedia.org/wiki/Prague) as a researcher at the
[Czech Technical University]({{site.data.links.places.cvut.link}}).
In September 2018, I started a PhD in Computer Science
at {{site.data.links.mdlinks.neu}},
and I have been living in Boston since then.
---
* [CV of failures](failures)
* [Personal](personal)
| 48.409091 | 159 | 0.75 | eng_Latn | 0.864934 |
a90e086ef680c01aac6d84d9683c1010fe043382 | 207 | md | Markdown | playbooks/README.md | jtudelag/casl-ansible | a114e7baf207e9ac937c6e882b244952b13f6ef8 | [
"Apache-2.0"
] | 137 | 2016-10-26T23:01:24.000Z | 2022-03-19T19:37:49.000Z | playbooks/README.md | darthlukan/advanced-openshift-deployment-homework | 94593c51d206cc05bb65ef6360814afa94728f4e | [
"Apache-2.0"
] | 272 | 2016-10-12T19:56:42.000Z | 2020-08-27T19:52:59.000Z | playbooks/README.md | darthlukan/advanced-openshift-deployment-homework | 94593c51d206cc05bb65ef6360814afa94728f4e | [
"Apache-2.0"
] | 116 | 2016-10-12T19:30:24.000Z | 2021-07-12T12:53:06.000Z | # The CASL Ansible playbooks
## openshift-cluster-seed.yml (openshift-applier)
This playbook (and supporting components) have been moved to a separate repo: https://github.com/redhat-cop/openshift-applier
| 34.5 | 125 | 0.792271 | eng_Latn | 0.949594 |
a90e20a3ccea4800ab576e49b3fa9fae229a00c1 | 2,922 | md | Markdown | learn-bizapps-pr/power-bi/ai-visuals-power-bi/includes/4-decomposition-tree.md | caoyongxu/learn-bizapps-pr-test | d92ce2adf403add16c2c426d6e62a56c84f08eba | [
"CC-BY-4.0",
"MIT"
] | null | null | null | learn-bizapps-pr/power-bi/ai-visuals-power-bi/includes/4-decomposition-tree.md | caoyongxu/learn-bizapps-pr-test | d92ce2adf403add16c2c426d6e62a56c84f08eba | [
"CC-BY-4.0",
"MIT"
] | null | null | null | learn-bizapps-pr/power-bi/ai-visuals-power-bi/includes/4-decomposition-tree.md | caoyongxu/learn-bizapps-pr-test | d92ce2adf403add16c2c426d6e62a56c84f08eba | [
"CC-BY-4.0",
"MIT"
] | 3 | 2022-03-31T08:41:43.000Z | 2022-03-31T09:19:05.000Z | The **Decomposition Tree** visual automatically aggregates your data and lets you drill down into your dimensions so that you can view your data across multiple dimensions. Because **Decomposition Tree** is an AI visual, you can use it for improvised exploration and conducting root cause analysis.
In this example, you've built visuals for the Supply Chain team, but the visuals do not answer all the team's questions. In particular, the team wants to be able to analyze the percentage of products that the organization has on back order, in other words, the percentage of products that are out of stock. The **Decomposition Tree** visual can help you accomplish that task.
Add the **Decomposition Tree** visual to your report by selecting the **Decomposition Tree** icon on the **Visualization** pane. Then, in the **Analyze** field well, add the measure or aggregate that you want to analyze. In the **Explain by** field well, add the dimension(s) that you want to drill down into. In this case, you want to analyze the **Sales** field by drilling down into a number of dimensions, such as **Country**, **City**, and **Product**, as illustrated in the following image.
> [!div class="mx-imgBorder"]
> [](../media/4-use-decomposition-tree-visual-ss.png#lightbox)
The visual updates according to the fields that you added and displays the analysis summary result. In this case, the value of sales is USD 13,499,680.00. You can select the plus (**+**) sign, which will present the drill-down options that you have added. You can select any of the fields in the drop-down list to drill down into the data and see how it contributed to the overall result.
At the top of the list of dimensions that you added are two additional options that are marked with lightbulb icons. These options are referred to as *AI splits*, and they'll automatically find high and low values in the data for you.
> [!div class="mx-imgBorder"]
> [](../media/4-ai-split-options-ss.png#lightbox)
AI splits work by considering all available fields and determining which one to drill into to get the highest/lowest value of the measure that is being analyzed. You can use the results of these splits to find out where you should look next in the data. The following image illustrates the result of selecting the **High value** AI split.
> [!div class="mx-imgBorder"]
> [](../media/4-apply-ai-split-decomposition-tree-ss.png#lightbox)
For more information, see [Create and view decomposition tree visuals in Power BI](https://docs.microsoft.com/power-bi/visuals/power-bi-visualization-decomposition-tree/?azure-portal=true). | 132.818182 | 496 | 0.770021 | eng_Latn | 0.998593 |
a90ee21f345ef2c140e326a16441b14172c6b07e | 73 | md | Markdown | README.md | kintoe2e/node-hello-world | f194cab3ce761189c42bcc9d9609f5ba1381f817 | [
"MIT"
] | null | null | null | README.md | kintoe2e/node-hello-world | f194cab3ce761189c42bcc9d9609f5ba1381f817 | [
"MIT"
] | null | null | null | README.md | kintoe2e/node-hello-world | f194cab3ce761189c42bcc9d9609f5ba1381f817 | [
"MIT"
] | 1 | 2019-12-16T11:02:52.000Z | 2019-12-16T11:02:52.000Z | # node-hello-world
Hello World with Node.js
## Execute
`node server.js` | 12.166667 | 24 | 0.726027 | nld_Latn | 0.205436 |
a90f559587a9126566059ddfead105c6115f401a | 2,512 | md | Markdown | play/roles/rbenv/README.md | marcusramberg/dotfiles | 07a0576eaa0a233a2125ddcb2bb04a5fe5033674 | [
"MIT"
] | 1 | 2020-10-14T00:06:54.000Z | 2020-10-14T00:06:54.000Z | play/roles/rbenv/README.md | marcusramberg/dotfiles | 07a0576eaa0a233a2125ddcb2bb04a5fe5033674 | [
"MIT"
] | null | null | null | play/roles/rbenv/README.md | marcusramberg/dotfiles | 07a0576eaa0a233a2125ddcb2bb04a5fe5033674 | [
"MIT"
] | 2 | 2015-08-06T07:45:48.000Z | 2017-01-04T17:47:16.000Z | rbenv
========
Role for installing [rbenv](https://github.com/sstephenson/rbenv).
Role ready status
------------
[](https://travis-ci.org/zzet/ansible-rbenv-role)
Requirements
------------
none
Role Variables
--------------
Default variables are:
rbenv:
env: system
version: v0.4.0
ruby_version: 2.0.0-p247
rbenv_repo: "git://github.com/sstephenson/rbenv.git"
rbenv_plugins:
- { name: "rbenv-vars",
repo: "git://github.com/sstephenson/rbenv-vars.git",
version: "v1.2.0" }
- { name: "ruby-build",
repo: "git://github.com/sstephenson/ruby-build.git",
version: "v20131225.1" }
- { name: "rbenv-default-gems",
repo: "git://github.com/sstephenson/rbenv-default-gems.git",
version: "v1.0.0" }
- { name: "rbenv-installer",
repo: "git://github.com/fesplugas/rbenv-installer.git",
version: "8bb9d34d01f78bd22e461038e887d6171706e1ba" }
- { name: "rbenv-update",
repo: "git://github.com/rkh/rbenv-update.git",
version: "32218db487dca7084f0e1954d613927a74bc6f2d" }
- { name: "rbenv-whatis",
repo: "git://github.com/rkh/rbenv-whatis.git",
version: "v1.0.0" }
- { name: "rbenv-use",
repo: "git://github.com/rkh/rbenv-use.git",
version: "v1.0.0" }
rbenv_root: "{% if rbenv.env == 'system' %}/usr/local/rbenv{% else %}~/.rbenv{% endif %}"
rbenv_users: []
Description:
- ` rbenv.env ` - Type of rbenv installation. Allows 'system' or 'user' values
- ` rbenv.version ` - Version of rbenv to install (tag from [rbenv releases page](https://github.com/sstephenson/rbenv/releases))
- ` rbenv.ruby_version ` - Version of ruby to install as global rbenv ruby
- ` rbenv_repo ` - Repository with source code of rbenv to install
- ` rbenv_plugins ` - Array of Hashes with information about plugins to install
- ` rbenv_root ` - Install path
- ` rbenv_users ` - Array of Hashes with users for multiuser install. User must be present in the system
Example:
rbenv_users:
- { name: "user", home: "/home/user/", comment: "Deploy user" }
Dependencies
------------
none
License
-------
MIT
Author Information
------------------
[Andrew Kumanyaev](http://github.com/zzet)
[](https://bitdeli.com/free "Bitdeli Badge")
| 27.010753 | 133 | 0.624204 | kor_Hang | 0.124502 |
a910193514e303f42dd61f310ad3a02bae497e03 | 373 | md | Markdown | JS/20. Array.find() and Array.findIndex().md | AbdallahHemdan/TIL | 0a94d4897ed5ee21e5618fe0791a606772c99701 | [
"MIT"
] | 32 | 2020-04-26T16:08:25.000Z | 2022-01-07T06:55:00.000Z | JS/20. Array.find() and Array.findIndex().md | Rowida46/TIL | 0a94d4897ed5ee21e5618fe0791a606772c99701 | [
"MIT"
] | null | null | null | JS/20. Array.find() and Array.findIndex().md | Rowida46/TIL | 0a94d4897ed5ee21e5618fe0791a606772c99701 | [
"MIT"
] | 8 | 2020-06-16T00:12:42.000Z | 2021-10-31T15:02:41.000Z | # Array.find() and Array.findIndex()



| 46.625 | 110 | 0.801609 | yue_Hant | 0.13959 |
a9109262bf908921ff25223cce9684e821ae8c56 | 3,568 | md | Markdown | data/readme_files/maK-.scantastic-tool.md | DLR-SC/repository-synergy | 115e48c37e659b144b2c3b89695483fd1d6dc788 | [
"MIT"
] | 5 | 2021-05-09T12:51:32.000Z | 2021-11-04T11:02:54.000Z | data/readme_files/maK-.scantastic-tool.md | DLR-SC/repository-synergy | 115e48c37e659b144b2c3b89695483fd1d6dc788 | [
"MIT"
] | null | null | null | data/readme_files/maK-.scantastic-tool.md | DLR-SC/repository-synergy | 115e48c37e659b144b2c3b89695483fd1d6dc788 | [
"MIT"
] | 3 | 2021-05-12T12:14:05.000Z | 2021-10-06T05:19:54.000Z | # scantastic-tool
## It's bloody scantastic
If you like this and are feeling a bit(coin) generous - 1JdSGqg2zGTbpFMJPLbWoXg7Nng3z1Qp58
It works for me: http://makthepla.net/scantastichax.png
- Dependencies: (DIY - I ain't supportin shit)
- Masscan - https://github.com/robertdavidgraham/masscan
- Nmap - https://nmap.org/download.html
- ElasticSearch - http://www.elasticsearch.org/guide/en/elasticsearch/guide/current/_installing_elasticsearch.html
- Kibana - http://www.elasticsearch.org/overview/kibana/installation/
This tool can be used to store masscan or nmap data in Elasticsearch
(the scantastic plugin shown in the image is not included here).
It also performs distributed directory brute-forcing.
All your base are belong to us. I might maintain or improve this over time. MIGHT.
## Quickstart
### Example usage
Run and import a scan of home /24 network
```
./scantastic.py -s -H 192.168.1.0/24 -p 80,443 -x homescan.xml (with masscan)
./scantastic.py -ns -H 192.168.1.0/24 -p 80,443 -x homescan.xml (with nmap)
```
Export homescan to a list of urls
```
./scantastic.py -eurl -x homescan.xml > urlist (with masscan)
./scantastic.py -nurl -x homescan.xml > urlist (with nmap)
```
Brute-force the URL list using a wordlist and put the results into the index `homescan`,
using 10 threads (by default it uses 1 thread)
```
./scantastic.py -d -u urlist -w some_wordlist -i homescan -t 10
```
```
root@ubuntu:~/scantastic-tool# ./scantastic.py -h
usage: scantastic.py [-h] [-v] [-d] [-s] [-noes] [-sl] [-in] [-e] [-eurl]
[-del] [-H HOST] [-p PORTS] [-x XML] [-w WORDS] [-u URLS]
[-t THREADS] [-esh ESHOST] [-esp PORT] [-i INDEX]
[-a AGENT]
optional arguments:
-h, --help show this help message and exit
-v, --version Version information
-d, --dirb Run directory brute force. Requires --urls & --words
-s, --scan Run masscan on single range. Specify --host & --ports
& --xml
-ns, --nmap Run Nmap on a single range specify -H & -p
-noes, --noelastics Run scan without elasticsearch insertion
-sl, --scanlist Run masscan on a list of ranges. Requires --host &
--ports & --xml
-nsl, --nmaplist Run Nmap on a list of ranges -H & -p & -x
-in, --noinsert Perform a scan without inserting to elasticsearch
-e, --export Export a scan XML into elasticsearch. Requires --xml
-eurl, --exporturl Export urls to scan from XML file. Requires --xml
-nurl, --exportnmap Export urls from nmap XML, requires -x
-del, --delete Specify an index to delete.
-H HOST, --host HOST Scan this host or list of hosts
-p PORTS, --ports PORTS
Specify ports in masscan format. (ie.0-1000 or
80,443...)
-x XML, --xml XML Specify an XML file to store output in
-w WORDS, --words WORDS
Wordlist to be used with --dirb
-u URLS, --urls URLS List of Urls to be used with --dirb
-t THREADS, --threads THREADS
Specify the number of threads to use.
-esh ESHOST, --eshost ESHOST
Specify the elasticsearch host
-esp PORT, --port PORT
Specify ElasticSearch port
-i INDEX, --index INDEX
Specify the ElasticSearch index
-a AGENT, --agent AGENT
Specify a User Agent for requests
```
Use the -noes and -in flags to run scans without importing the results into Elasticsearch upon completion.
_pages/TrainingOverview.md | 18F/formservice-trainingmaterials | CC0-1.0

---
layout: training
sidenav: false
title: Training Overview
parent: contact
redirect_from:
- /documentation/training
- /training
- /docs/help/
---
Some stuff for the training overview
articles/cognitive-services/Content-Moderator/Review-Tool-User-Guide/human-in-the-loop.md | changeworld/azure-docs.cs-cz | CC-BY-4.0, MIT

---
title: Learn Review tool concepts - Content Moderator
titleSuffix: Azure Cognitive Services
description: Learn about the Content Moderator Review tool, a website that coordinates combined machine and human review of content.
services: cognitive-services
author: PatrickFarley
manager: mikemcca
ms.date: 03/15/2019
ms.service: cognitive-services
ms.subservice: content-moderator
ms.topic: conceptual
ms.author: pafarley
ms.openlocfilehash: a23e6d46ee6e79fd7a5cabf4434c561f7d83b31b
ms.sourcegitcommit: 2ec4b3d0bad7dc0071400c2a2264399e4fe34897
ms.translationtype: MT
ms.contentlocale: cs-CZ
ms.lasthandoff: 03/28/2020
ms.locfileid: "76169504"
---
# <a name="content-moderator-review-tool"></a>Content Moderator Review tool
Azure Content Moderator provides services for combining machine-learning content moderation with human review, and the [Review tool](https://contentmoderator.cognitive.microsoft.com) website is a user-friendly front end that gives you detailed access to these services.

## <a name="what-it-does"></a>What it does
The [Review tool](https://contentmoderator.cognitive.microsoft.com), when used together with the moderation APIs, lets you perform the following tasks in the content moderation process:
- Use a single set of tools to moderate content in multiple formats (text, image, and video).
- Automate the creation of human [reviews](../review-api.md#reviews) as moderation API results come in.
- Assign or escalate content reviews to multiple review teams, organized by content category or experience level.
- Use default or custom [logic filters (workflows)](../review-api.md#workflows) to sort and track content without writing code.
- Use [connectors](./configure.md#connectors) to process content with Microsoft PhotoDNA, Text Analytics, and Face services in addition to the Content Moderator APIs.
- Build your own connector to create workflows for any API or business process.
- Get key performance metrics for your content moderation processes.
## <a name="review-tool-dashboard"></a>Review tool dashboard
On the **Dashboard** tab, you can see the key metrics for content reviews done within the tool. See the number of total, complete, and pending reviews for image, text, and video content. You can also see a breakdown of the users and teams that completed reviews, as well as the moderation tags that were applied.

## <a name="review-tool-credentials"></a>Review tool credentials
When you sign up with the [Review tool](https://contentmoderator.cognitive.microsoft.com), you will be prompted to select an Azure region for your account. This is because the [Review tool](https://contentmoderator.cognitive.microsoft.com) generates a free trial key for the Azure Content Moderator services; you will need this key to access any of the services from a REST call or client SDK. You can view your key and API endpoint URL by selecting **Settings** > **Credentials**.

## <a name="next-steps"></a>Next steps
For information on how to access the Review tool resources and change settings, see [Configure the Review tool](./configure.md).
README.md | d605-upjv/veille-concurrentielle | MIT

# veille-concurrentielle
The goal of this project is to build a software tool that collects the sale prices of a list of products across a set of shops/marketplaces. Such a tool makes it possible, for example, to find out whether competitors sell the products offered in your own online shop at higher or lower prices, and to adjust your own prices accordingly.
README.md | minhkhoa92/signal_U_fiddle | FSFAP

## Synopsis
This code is unfinished. It is meant to demonstrate signals between processes on UNIX-like operating systems. The specification of the program is more or less settled, and there are probably no new elements to be introduced.
The code uses POSIX functions and, to a much lesser extent, SysV functions.
## Code Example
This is from test\_signal.c
```
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

void handler(int signo);

int main(int argc, char **argv)
{
	printf("%d\n", getppid());
	printf("%d\n", getpid());
	if (signal(SIGUSR1, handler) == SIG_ERR)
	{
		exit(EXIT_FAILURE);
	}
	else
		pause();	/* block until a signal is delivered */
	return 0;
}

void handler(int signo)
{
	/* printf is not async-signal-safe; acceptable in a demo only */
	printf("see if this reaches\n");
}
```
The essential calls are pause() and signal(SIGUSR1, func\_name). The former suspends execution until a signal arrives, and the latter installs the handler function func\_name for the signal number SIGUSR1.
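The same install-then-wait pattern can also be exercised from Python's signal module, which wraps these POSIX calls. In this sketch the process sends SIGUSR1 to itself with os.kill instead of pausing for another process (UNIX-like systems only):

```python
import os
import signal

received = []

def handler(signum, frame):
    # runs when SIGUSR1 is delivered to this process
    received.append(signum)

# install the handler, like signal(SIGUSR1, handler) in C
signal.signal(signal.SIGUSR1, handler)
# deliver SIGUSR1 to ourselves, like the kill(1) command would
os.kill(os.getpid(), signal.SIGUSR1)

print(received == [signal.SIGUSR1])  # True: the handler has already run
```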
## Motivation
Because I solve all my concurrency problems with semaphores, and handle all my messaging with mechanisms like shared memory, message queues, pipes and sockets, I am adding something as small as signals to my repertoire.
## Installation
Some of this code is not supposed to compile.
server\_and\_monitor\_posix\_sem\_shm.c is supposed to be used like this:
usage: command <no>|reader <no>|reader <no>|reader <no>|reader [...]
## Contributors
Copied from https://www.softprayog.in/programming/interprocess-communication-using-posix-shared-memory-in-linux
Copyright © 2007-2017 SoftPrayog.in. All Rights Reserved.
Suggestion to use malloc\_stats() from www.linuxjournal.com/article/6390
And me, Luu Minh Khoa Ngo
## License
This is purely for educational purposes. Feel free to copy if you need to.
## Final words
At some point in my life, I will go back and finish the thoughts in this one.
If you find points where I could improve my code, send me a pull request or a message. Thank you very much.
articles/active-directory/app-provisioning/provision-on-demand.md | gencomp/azure-docs.de-de | CC-BY-4.0, MIT

---
title: On-demand provisioning of users in Azure Active Directory
description: Force synchronization of a user
services: active-directory
author: msmimart
manager: CelesteDG
ms.service: active-directory
ms.subservice: app-provisioning
ms.workload: identity
ms.topic: how-to
ms.date: 06/23/2020
ms.author: mimart
ms.reviewer: arvinh
ms.openlocfilehash: 78a56b6a848139c47d7934a47decb126afe00b7a
ms.sourcegitcommit: 877491bd46921c11dd478bd25fc718ceee2dcc08
ms.translationtype: HT
ms.contentlocale: de-DE
ms.lasthandoff: 07/02/2020
ms.locfileid: "85297527"
---
# <a name="on-demand-provisioning"></a>On-demand provisioning
With on-demand provisioning, you can provision a user into an application within seconds. You can use this capability to quickly troubleshoot configuration problems, validate expressions you have defined, test scoping filters, and more.
## <a name="how-to-use-on-demand-provisioning"></a>How to use on-demand provisioning
1. Sign in to the **Azure portal**.
2. Navigate to **Enterprise applications**.
3. Select your application and go to its provisioning configuration page.
4. Configure provisioning by providing your administrator credentials.
5. Select **Provision on demand**.
6. Search for a user by first name, last name, display name, or email address.
7. Select "Provision" at the bottom of the page.
## <a name="understanding-the-provisioning-steps"></a>Understanding the provisioning steps
The on-demand provisioning feature tries to show the steps that the provisioning service performs when it provisions a user. Provisioning a user typically consists of five steps, and one or more of those steps are shown in the on-demand provisioning experience.
### <a name="step-1-test-connection"></a>Step 1: Test connection
The provisioning service attempts to authorize access to the target application by issuing a request for a "test user". The provisioning service expects a response indicating that it is authorized to continue with the provisioning steps. This step is shown only when it fails. When the step succeeds, it is not displayed in the on-demand provisioning experience.
**Troubleshooting tips**
* Make sure that you have provided valid credentials (such as the secret token and tenant URL) for the target application. The required credentials depend on the application. You can find detailed configuration tutorials [here](https://docs.microsoft.com/azure/active-directory/saas-apps/tutorial-list).
* Make sure that the target application supports filtering on the matching attributes (defined on the attribute-mapping page). You might need to check the API documentation provided by the application developer to find out which filters are supported.
* For SCIM applications, you can use a tool such as Postman to make sure that the application answers authorization requests in the way the Azure AD provisioning service expects. You can find an example request [here](https://docs.microsoft.com/azure/active-directory/app-provisioning/use-scim-to-provision-users-and-groups#request-3).
### <a name="step-2-import-user"></a>Step 2: Import user
Next, the provisioning service retrieves the user from the source system. The user attributes retrieved by the service are later used to evaluate whether the user is in scope for provisioning. The target system is also checked for an existing user, and the service determines which user attributes should be exported to the target system.
**View details**
The View details section shows the properties of the user that were imported from the source system (for example, Azure AD).
**Troubleshooting tips**
* Importing the user can fail when the matching attribute is missing on the user object in the target system. You can resolve this error by updating the user object with a value for the matching attribute, or by changing the matching attribute in your provisioning configuration.
* If an attribute you expected is missing from the imported list, make sure that the attribute has a value on the user object in the target system. The provisioning service does not currently support attributes with null values.
* Make sure that the attribute-mapping page of your provisioning configuration contains the expected attribute.
### <a name="step-3-determine-if-user-is-in-scope"></a>Step 3: Determine if user is in scope
Next, the provisioning service determines whether the user is in [scope](https://docs.microsoft.com/azure/active-directory/app-provisioning/how-provisioning-works#scoping) for provisioning. The service checks whether the user is assigned to the application and whether scope is set to "Sync only assigned users and groups" or "Sync all". It also takes into account the scoping filters defined in your provisioning configuration.
**View details**
The View details section shows the scoping conditions that were evaluated. At least one of the following properties is shown:
* **Active in source system**: The user has the "Is active" property set to "True" in Azure AD.
* **Assigned to application**: The user is assigned to the application in Azure AD.
* **Scope sync all**: This scope setting allows all users and groups in the tenant.
* **User has required role**: The user has the roles required for provisioning into the application.
* **Scoping filters**: This property is also shown if you have defined scoping filters for your application. The filter is displayed in the following format: {scoping filter title} {scoping filter attribute} {scoping filter operator} {scoping filter value}.
**Troubleshooting tips**
* Make sure that you have defined a valid scoping role. For example, avoid the [Greater than](https://docs.microsoft.com/azure/active-directory/app-provisioning/define-conditional-rules-for-provisioning-user-accounts#create-a-scoping-filter) operator with a non-integer value.
* If the user does not have the necessary role, review the tips described [here](https://docs.microsoft.com/azure/active-directory/app-provisioning/application-provisioning-config-problem-no-users-provisioned#provisioning-users-assigned-to-the-default-access-role).
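Conceptually, each scoping filter clause is just a predicate over the user's attribute values. The following Python sketch only illustrates the {attribute} {operator} {value} idea; it is not the actual service logic, and the operator names are simplified:

```python
def clause_matches(user, attribute, operator, value):
    """Evaluate one scoping-filter clause against a user's attributes."""
    actual = user.get(attribute)
    if actual is None:
        return False  # missing attribute: the user falls out of scope
    if operator == "EQUALS":
        return actual == value
    if operator == "NOT EQUALS":
        return actual != value
    if operator == "GREATER THAN":
        # 'Greater than' only makes sense for integer values
        return int(actual) > int(value)
    raise ValueError(f"unsupported operator: {operator}")

user = {"department": "Sales", "employeeId": "1042"}
print(clause_matches(user, "department", "EQUALS", "Sales"))       # True
print(clause_matches(user, "employeeId", "GREATER THAN", "1000"))  # True
```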
### <a name="step-4-match-user-between-source-and-target"></a>Step 4: Match user between source and target
In this step, the service attempts to match the user retrieved in the import step with a user in the target system.
**View details**
The View details page shows the properties of the user or users that were matched in the target system. The properties displayed in the context pane vary as follows:
* If there is no matching user in the target system, no properties are shown.
* If there is one matching user in the target system, the properties of that matched user in the target system are shown.
* If there are multiple matching users, the properties of those matched users are shown.
* If your attribute mappings contain multiple matching attributes, the matching attributes are evaluated one after another, and the matched users are shown.
**Troubleshooting details**
* The provisioning service may be unable to unambiguously match a user in the source system to a user in the target system. You can resolve this problem by making sure that the matching attribute is unique.
* Make sure that the target application supports filtering on the attribute that is defined as the matching attribute.
### <a name="step-5-perform-action"></a>Step 5: Perform action
Finally, the provisioning service performs an action, such as creating, updating, deleting, or skipping the user.
**View details**
The View details section shows the attributes that were changed in the target application. This display represents the final result of the provisioning service's activity and the attributes that were exported. If this step fails, the attributes shown are the ones that the provisioning service attempted to change.
**Troubleshooting tips**
* The errors that occur when exporting changes can vary widely. Check the [documentation](https://docs.microsoft.com/azure/active-directory/reports-monitoring/concept-provisioning-logs#error-codes) for the provisioning logs for common errors.
## <a name="frequently-asked-questions"></a>Frequently asked questions
**Do you have to disable provisioning to use on-demand provisioning?** For applications that use a long-lived bearer token or a username/password combination for authorization, no additional steps are required. For applications that use OAuth for authorization, the provisioning job currently has to be paused before on-demand provisioning can be used. Applications such as G Suite, Box, Workplace by Facebook, and Slack fall into this category. We are working to enable on-demand provisioning for all applications without pausing the provisioning job.
**How long does on-demand provisioning take?** It generally takes less than 30 seconds.
## <a name="known-limitations"></a>Known limitations
There are currently a few known limitations. Post on [UserVoice](https://aka.ms/appprovisioningfeaturerequest) to help us better prioritize upcoming improvements. Note that these limitations apply only to the on-demand provisioning feature. For details on whether an application supports provisioning groups, deletions, and so on, see the tutorial for that application.
* The Workday, AWS, and SuccessFactors applications do not support on-demand provisioning.
* On-demand provisioning of groups and roles is not supported.
* Disabling or deleting users and groups is not supported.
## <a name="next-steps"></a>Next steps
* [Troubleshoot provisioning problems](https://docs.microsoft.com/azure/active-directory/app-provisioning/application-provisioning-config-problem)
license-util3/README.md | Bhaskers-Blu-Org1/cp4d-deployment | Apache-2.0

# Program to Activate License on a CPD Trial Cluster
## Compile
* On MacOSX, run:
```
cd lutil3 && go build
```
* To compile for a Linux distribution, download dependencies using `dep init` and use the build script:
```
cd lutil3 && dep init
cd build && ./build.sh lutil3
```
## Usage
* Run the binary:
```
./lutil3
```
* Read and accept the license.
articles/rest-api/bot-framework-rest-direct-line-1-1-send-message.md | yanxiaodi/bot-docs.zh-cn | CC-BY-4.0, MIT

---
title: Send a message to the bot | Microsoft Docs
description: Learn how to send a message to the bot by using Direct Line API v1.1.
author: RobStand
ms.author: kamrani
manager: kamrani
ms.topic: article
ms.service: bot-service
ms.subservice: sdk
ms.date: 12/13/2017
ms.openlocfilehash: 360ec3a6a6c9a3be16370aaf445f24a237a702e3
ms.sourcegitcommit: b78fe3d8dd604c4f7233740658a229e85b8535dd
ms.translationtype: HT
ms.contentlocale: zh-CN
ms.lasthandoff: 10/24/2018
ms.locfileid: "49998010"
---
# <a name="send-a-message-to-the-bot"></a>Send a message to the bot
> [!IMPORTANT]
> This article describes how to send a message to a bot by using Direct Line API v1.1. If you are creating a new connection between your client application and your bot, use [Direct Line API 3.0](bot-framework-rest-direct-line-3-0-send-activity.md) instead.
Using the Direct Line 1.1 protocol, clients can exchange messages with bots. These messages are converted to the schema that the bot supports (Bot Framework v1 or Bot Framework v3). A client may send a single message per request.
## <a name="send-a-message"></a>Send a message
To send a message to the bot, the client must create a [Message](bot-framework-rest-direct-line-1-1-api-reference.md#message-object) object to define the message, and then issue a `POST` request to `https://directline.botframework.com/api/conversations/{conversationId}/messages`, specifying the Message object in the body of the request.
The following snippets provide an example of the Send Message request and response.
### <a name="request"></a>Request
```http
POST https://directline.botframework.com/api/conversations/abc123/messages
Authorization: Bearer RCurR_XV9ZA.cwA.BKA.iaJrC8xpy8qbOF5xnR2vtCX7CZj0LdjAPGfiCpg4Fv0
[other headers]
```
```json
{
"text": "hello",
"from": "user1"
}
```
### <a name="response"></a>Response
When the message is delivered to the bot, the service responds with an HTTP status code that reflects the bot's status code. If the bot generates an error, an HTTP 500 response ("Internal Server Error") is returned to the client in response to its Send Message request. If the POST succeeds, the service returns an HTTP 204 status code. No data is returned in the body of the response. The client's message and any messages from the bot can be obtained by [polling](bot-framework-rest-direct-line-1-1-receive-messages.md).
```http
HTTP/1.1 204 No Content
[other headers]
```
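Assembled in code, the request above is just a bearer Authorization header plus a small JSON body. A minimal Python sketch that builds (but does not send) the pieces; the helper name is my own, and any HTTP client can then POST the result:

```python
import json

def build_send_message_request(conversation_id, text, from_user, secret_or_token):
    """Assemble URL, headers, and body for a Direct Line 1.1 Send Message call."""
    url = ("https://directline.botframework.com/api/conversations/"
           f"{conversation_id}/messages")
    headers = {
        "Authorization": f"Bearer {secret_or_token}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"text": text, "from": from_user})
    return url, headers, body

url, headers, body = build_send_message_request("abc123", "hello", "user1", "TOKEN")
print(url)  # https://directline.botframework.com/api/conversations/abc123/messages
```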
### <a name="total-time-for-the-send-message-requestresponse"></a>Total time for the Send Message request/response
The total time to post a message to a Direct Line conversation is the sum of the following:
- Transit time for the HTTP request to travel from the client to the Direct Line service
- Internal processing time within Direct Line (typically less than 120 ms)
- Transit time from the Direct Line service to the bot
- Processing time within the bot
- Transit time for the HTTP response to travel back to the client
## <a name="send-attachments-to-the-bot"></a>Send attachments to the bot
In some situations, a client may need to send attachments to the bot, such as images or documents. A client can send attachments to the bot either by [specifying the URL](#send-by-url) of the attachment within the [Message](bot-framework-rest-direct-line-1-1-api-reference.md#message-object) object that it sends by using `POST /api/conversations/{conversationId}/messages`, or by [uploading the attachment](#upload-attachments) by using `POST /api/conversations/{conversationId}/upload`.
## <a id="send-by-url"></a>Send an attachment by URL
To send one or more attachments as part of a [Message](bot-framework-rest-direct-line-1-1-api-reference.md#message-object) object by using `POST /api/conversations/{conversationId}/messages`, specify the attachment URLs within the message's `images` array and/or `attachments` array.
## <a id="upload-attachments"></a>Send attachments by upload
Often, a client may have an image or document on the device to send to the bot, but no URL that corresponds to the file. In this situation, a client can issue a `POST /api/conversations/{conversationId}/upload` request to send attachments to the bot by upload. The format and contents of the request depend on whether the client is [sending a single attachment](#upload-one-attachment) or [sending multiple attachments](#upload-multiple-attachments).
### <a id="upload-one-attachment"></a>Send a single attachment by upload
To send a single attachment by upload, issue this request:
```http
POST https://directline.botframework.com/api/conversations/{conversationId}/upload?userId={userId}
Authorization: Bearer SECRET_OR_TOKEN
Content-Type: TYPE_OF_ATTACHMENT
Content-Disposition: ATTACHMENT_INFO
[other headers]
[file content]
```
In this request URI, replace {conversationId} with the conversation ID and {userId} with the ID of the user that is sending the message. In the request headers, set `Content-Type` to specify the attachment's type and set `Content-Disposition` to specify the attachment's filename.
The following snippets provide an example of the Send (single) Attachment request and response.
#### <a name="request"></a>Request
```http
POST https://directline.botframework.com/api/conversations/abc123/upload?userId=user1
Authorization: Bearer RCurR_XV9ZA.cwA.BKA.iaJrC8xpy8qbOF5xnR2vtCX7CZj0LdjAPGfiCpg4Fv0
Content-Type: image/jpeg
Content-Disposition: name="file"; filename="badjokeeel.jpg"
[other headers]
[JPEG content]
```
#### <a name="response"></a>Response
If the request succeeds, a message is sent to the bot when the upload completes, and the service returns an HTTP 204 status code.
```http
HTTP/1.1 204 No Content
[other headers]
```
### <a id="upload-multiple-attachments"></a>Send multiple attachments by upload
To send multiple attachments by upload, `POST` a multipart request to the `/api/conversations/{conversationId}/upload` endpoint. Set the `Content-Type` header of the request to `multipart/form-data`, and include a `Content-Type` header and a `Content-Disposition` header for each part to specify each attachment's type and filename. In the request URI, set the `userId` parameter to the ID of the user that is sending the message.
You can include a [Message](bot-framework-rest-direct-line-1-1-api-reference.md#message-object) object within the request by adding a part that specifies the `Content-Type` header value `application/vnd.microsoft.bot.message`. This allows the client to customize the message that contains the attachments. If the request includes a Message, the attachments specified by the other parts of the payload are added as attachments to that Message before it is sent.
The following snippets provide an example of the Send (multiple) Attachments request and response. In this example, the request sends a message that contains some text and a single image attachment. Additional parts can be added to the request to include multiple attachments in the message.
#### <a name="request"></a>Request
```http
POST https://directline.botframework.com/api/conversations/abc123/upload?userId=user1
Authorization: Bearer RCurR_XV9ZA.cwA.BKA.iaJrC8xpy8qbOF5xnR2vtCX7CZj0LdjAPGfiCpg4Fv0
Content-Type: multipart/form-data; boundary=----DD4E5147-E865-4652-B662-F223701A8A89
[other headers]
----DD4E5147-E865-4652-B662-F223701A8A89
Content-Type: image/jpeg
Content-Disposition: form-data; name="file"; filename="badjokeeel.jpg"
[other headers]
[JPEG content]
----DD4E5147-E865-4652-B662-F223701A8A89
Content-Type: application/vnd.microsoft.bot.message
[other headers]
{
"text": "Hey I just IM'd you\n\nand this is crazy\n\nbut here's my webhook\n\nso POST me maybe",
"from": "user1"
}
----DD4E5147-E865-4652-B662-F223701A8A89
```
#### <a name="response"></a>Response
If the request succeeds, a message is sent to the bot when the upload completes, and the service returns an HTTP 204 status code.
```http
HTTP/1.1 204 No Content
[other headers]
```
## <a name="additional-resources"></a>Additional resources
- [Key concepts](bot-framework-rest-direct-line-1-1-concepts.md)
- [Authentication](bot-framework-rest-direct-line-1-1-authentication.md)
- [Start a conversation](bot-framework-rest-direct-line-1-1-start-conversation.md)
- [Receive messages from the bot](bot-framework-rest-direct-line-1-1-receive-messages.md)
a914accd4c9091b6676d01849df5ee38b56902c3 | 956 | md | Markdown | devhub-content/cloud-pathfinder-services/pathfinder.md | ActionAnalytics/cloud-pathfinder-technology-and-ux | ac8f35f324104d24663521d7ba7e11aab0a78e13 | [
"CC0-1.0"
] | 1 | 2021-10-13T21:52:25.000Z | 2021-10-13T21:52:25.000Z | devhub-content/cloud-pathfinder-services/pathfinder.md | ActionAnalytics/cloud-pathfinder-technology-and-ux | ac8f35f324104d24663521d7ba7e11aab0a78e13 | [
"CC0-1.0"
] | 1,103 | 2020-12-04T01:50:18.000Z | 2022-03-30T22:50:20.000Z | devhub-content/cloud-pathfinder-services/pathfinder.md | ActionAnalytics/cloud-pathfinder-technology-and-ux | ac8f35f324104d24663521d7ba7e11aab0a78e13 | [
"CC0-1.0"
] | 5 | 2021-08-04T19:43:54.000Z | 2022-03-03T19:25:01.000Z | ---
title: Cloud Pathfinder
description: Cloud Pathfinder is an OCIO initiative to explore cloud services to inform strategy, policy, and related activities within BC Gov.
author: sheaphillips
resourceType: Documentation
---
# What is Pathfinder
Technology is moving faster than ever and it’s not showing any signs of slowing down. As an ongoing OCIO initiative, Pathfinder projects are testbeds for government to explore the potential of these emerging technology solutions, and to be ready to capture the benefits. They deliver business value through a “learn by doing” process, matching known business problems with these emerging technologies. What is learned is shared, so other projects have a path to follow. To facilitate speed and innovation, Pathfinder projects are provided greater latitude when adhering to existing standards with the understanding that the learning gathered will inform any necessary standards/revisions going forward.
| 86.909091 | 702 | 0.819038 | eng_Latn | 0.999709 |
a915e457847665fe0e133655a7b279e11662073c | 75 | md | Markdown | _build/html/_sources/chapters/extended/extended_index.md | ssm-jax/ssm-book | f3bfa29a1c474b7dc85792a563df0f29736a44c6 | [
"MIT"
] | 10 | 2022-03-22T21:28:03.000Z | 2022-03-29T17:42:06.000Z | chapters/extended/extended_index.md | ssm-jax/ssm-book | f3bfa29a1c474b7dc85792a563df0f29736a44c6 | [
"MIT"
] | null | null | null | chapters/extended/extended_index.md | ssm-jax/ssm-book | f3bfa29a1c474b7dc85792a563df0f29736a44c6 | [
"MIT"
] | 1 | 2022-03-23T02:15:23.000Z | 2022-03-23T02:15:23.000Z |
(ch:extended)=
# Extended (linearized) methods
```{tableofcontents}
``` | 10.714286 | 32 | 0.666667 | eng_Latn | 0.656396 |
a916a342d2d9d22361448ff5b26e6889b77ebbd2 | 964 | md | Markdown | content/xkcd/0225.md | whatifrussian/xkcdbird | e21cd93b6f45ef04a46328cdd0feececfdd58984 | [
"Unlicense"
] | null | null | null | content/xkcd/0225.md | whatifrussian/xkcdbird | e21cd93b6f45ef04a46328cdd0feececfdd58984 | [
"Unlicense"
] | null | null | null | content/xkcd/0225.md | whatifrussian/xkcdbird | e21cd93b6f45ef04a46328cdd0feececfdd58984 | [
"Unlicense"
] | null | null | null | Title: Open Source
Slug: 225
Category: xkcd
Date: 2013-03-19 08:39:26
SourceNum: 225
SourceTitle: Open Source
Image: /comics/0225.png
MicroImage: /comics/0225_micro.png
MiniImage: /comics/0225_mini.png
Description: Then we'll dress up as thugs from Big Oil and attack Ralph Nader.
[Richard Stallman is lying in bed, snoring. Two ninjas break through the ceiling.]
Ninja 1: Richard Stallman! Your viral free licenses have grown too powerful. The GPL must be stopped. At the root. Eh?
[Stallman wakes up and, with a clang, draws a sword from under the bed.]
Stallman: Ha, Microsoft minions! So it has finally come. The bloody night I have waited for so long. No matter whether I die or you do, free software will live on! For the victory of GNU! For freedom! ...Hey, guys, where are you going?
[The ninjas flee out the window, shedding their costumes.]
Ninja 1: Damn, you're right, he'll never get tired of this.
Ниндзя 2: Пойдём тогда к Эрику Рэймонду.
Ниндзя 1: Или к Линусу Торвальдсу. Я слышал, он спит с нунчаками. | 50.736842 | 207 | 0.766598 | rus_Cyrl | 0.961273 |
] | 1 | 2020-04-09T21:38:11.000Z | 2020-04-09T21:38:11.000Z | # StackOverflow Top ReactJS Questions
## [Why use Redux over Facebook Flux?](https://stackoverflow.com/questions/32461229/why-use-redux-over-facebook-flux)
**961 Votes**, Volodymyr Bakhmatiuk
Redux author here!
Redux is not that different from Flux. Overall it has same architecture, but Redux is able to cut some complexity corners by using functional composition where Flux uses callback registration.
There is not a fundamental difference in Redux, but I find it makes certain abstractions easier, or at least possible to implement, that would be hard or impossible to implement in Flux.
### Reducer Composition
Take, for example, pagination. My Flux + React Router example handles pagination, but the code for that is awful. One of the reasons it's awful is that Flux makes it unnatural to reuse functionality across stores. If two stores need to handle pagination in response to different actions, they either need to inherit from a common base store (bad! you're locking yourself into a particular design when you use inheritance), or call a function from the handler, which will need to somehow operate on the Flux store's private state. The whole thing is messy (although definitely in the realm of possible).
On the other hand, with Redux pagination is natural thanks to reducer composition. It's reducers all the way down, so you can write a reducer factory that generates pagination reducers and then use it in your reducer tree. The key to why it's so easy is because in Flux, stores are flat, but in Redux, reducers can be nested via functional composition, just like React components can be nested.
This pattern also enables wonderful features like no-user-code undo/redo. Can you imagine plugging Undo/Redo into a Flux app being two lines of code? Hardly. With Redux, it is (again, thanks to the reducer composition pattern). I need to highlight there's nothing new about it; this is the pattern pioneered and described in detail in Elm Architecture which was itself influenced by Flux.
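To make the pattern concrete, here is a minimal plain-JavaScript sketch of such a reducer factory; the action types and state shape are made up for illustration and are not taken from the linked example:

```javascript
// A sketch of the reducer-factory pattern described above; the action
// shapes and names here are illustrative, not part of the Redux API.
function createPaginator(actionType) {
  // Returns a reducer that only reacts to its own action type.
  return function paginator(state, action) {
    if (state === undefined) {
      state = { ids: [], page: 0 };
    }
    if (action.type !== actionType) {
      return state;
    }
    return {
      ids: state.ids.concat(action.ids),
      page: state.page + 1
    };
  };
}

// Two independent paginated lists, composed into one root reducer.
const usersPagination = createPaginator('FETCH_USERS_SUCCESS');
const postsPagination = createPaginator('FETCH_POSTS_SUCCESS');

function rootReducer(state = {}, action) {
  return {
    users: usersPagination(state.users, action),
    posts: postsPagination(state.posts, action)
  };
}
```

Dispatching `{ type: 'FETCH_USERS_SUCCESS', ids: [1, 2] }` advances only the `users` slice; the `posts` slice is untouched.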
### Server Rendering
People have been rendering on the server fine with Flux, but seeing that we have 20 Flux libraries each attempting to make server rendering easier, perhaps Flux has some rough edges on the server. The truth is Facebook doesn't do much server rendering, so they haven't been very concerned about it, and rely on the ecosystem to make it easier.
In traditional Flux, stores are singletons. This means it's hard to separate the data for different requests on the server. Not impossible, but hard. This is why most Flux libraries (as well as the new Flux Utils) now suggest you use classes instead of singletons, so you can instantiate stores per request.
There are still the following problems that you need to solve in Flux (either yourself or with the help of your favorite Flux library such as Flummox or Alt):
If stores are classes, how do I create and destroy them with dispatcher per request? When do I register stores?
How do I hydrate the data from the stores and later rehydrate it on the client? Do I need to implement special methods for this?
Admittedly Flux frameworks (not vanilla Flux) have solutions to these problems, but I find them overcomplicated. For example, Flummox asks you to implement `serialize()` and `deserialize()` in your stores. Alt solves this more nicely by providing `takeSnapshot()` that automatically serializes your state in a JSON tree.
Redux just goes further: since there is just a single store (managed by many reducers), you don't need any special API to manage the (re)hydration. You don't need to flush or hydrate stores; there's just a single store, and you can read its current state, or create a new store with a new state. Each request gets a separate store instance. Read more about server rendering with Redux.
Again, this is a case of something possible both in Flux and Redux, but Flux libraries solve this problem by introducing a ton of API and conventions, and Redux doesn't even have to solve it because it doesn't have that problem in the first place thanks to conceptual simplicity.
### Developer Experience
I didn't actually intend Redux to become a popular Flux library; I wrote it as I was working on my ReactEurope talk on hot reloading with time travel. I had one main objective: make it possible to change reducer code on the fly or even change the past by crossing out actions, and see the state being recalculated.
I haven't seen a single Flux library that is able to do this. React Hot Loader also doesn't let you do this; in fact it breaks if you edit Flux stores because it doesn't know what to do with them.
When Redux needs to reload the reducer code, it calls `replaceReducer()`, and the app runs with the new code. In Flux, data and functions are entangled in Flux stores, so you can't just replace the functions. Moreover, you'd have to somehow re-register the new versions with the Dispatcher, something Redux doesn't even have.
### Ecosystem
Redux has a rich and fast-growing ecosystem. This is because it provides a few extension points such as middleware. It was designed with use cases such as logging, support for Promises, Observables, routing, immutability dev checks, persistence, etc, in mind. Not all of these will turn out to be useful, but it's nice to have access to a set of tools that can be easily combined to work together.
### Simplicity
Redux preserves all the benefits of Flux (recording and replaying of actions, unidirectional data flow, dependent mutations) and adds new benefits (easy undo-redo, hot reloading) without introducing Dispatcher and store registration.
Keeping it simple is important because it keeps you sane while you implement higher-level abstractions.
Unlike most Flux libraries, Redux API surface is tiny. If you remove the developer warnings, comments, and sanity checks, it's 99 lines. There is no tricky async code to debug.
You can actually read it and understand all of Redux.
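As a rough illustration of why that surface is so small, the core of a Redux-like store can be sketched in a few lines; this is a toy version for intuition, not the actual Redux source:

```javascript
// A toy sketch of the core idea (not the real Redux implementation):
// a store is just current state plus a dispatch loop over a reducer.
function createStore(reducer, preloadedState) {
  let state = preloadedState;
  let listeners = [];

  function getState() {
    return state;
  }

  function dispatch(action) {
    // Compute the next state and notify subscribers.
    state = reducer(state, action);
    listeners.forEach(listener => listener());
    return action;
  }

  function subscribe(listener) {
    listeners.push(listener);
    // Returns an unsubscribe function, mirroring the real API shape.
    return function unsubscribe() {
      listeners = listeners.filter(l => l !== listener);
    };
  }

  // Populate the initial state from the reducer's default.
  dispatch({ type: '@@INIT' });

  return { getState, dispatch, subscribe };
}
```

The real implementation adds sanity checks, `replaceReducer()`, and more, but the dispatch loop above is the essence.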
See also my answer on downsides of using Redux compared to Flux.
## [Programmatically navigate using react router](https://stackoverflow.com/questions/31079081/programmatically-navigate-using-react-router)
**710 Votes**, George Mauer
React Router v4
With v4 of React Router, there are three approaches that you can take to programmatic routing within components.
1. Use the `withRouter` higher-order component.
2. Use composition and render a `<Route>`.
3. Use the `context`.
React Router is mostly a wrapper around the `history` library. `history` handles interaction with the browser's `window.history` for you with its browser and hash histories. It also provides a memory history which is useful for environments that don't have a global history. This is particularly useful in mobile app development (`react-native`) and unit testing with Node.
A `history` instance has two methods for navigating: `push` and `replace`. If you think of the `history` as an array of visited locations, `push` will add a new location to the array and `replace` will replace the current location in the array with the new one. Typically you will want to use the `push` method when you are navigating.
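That mental model can be sketched in a few lines of plain JavaScript; this is a toy stand-in for intuition, not the actual `history` package:

```javascript
// A toy model of the mental picture above (not the history package itself):
// an array of visited entries plus an index for the current location.
function createMemoryHistory(initialEntry) {
  const entries = [initialEntry || '/'];
  let index = 0;

  return {
    get location() {
      return entries[index];
    },
    push(path) {
      // Drop any "forward" entries, then append the new location.
      entries.splice(index + 1);
      entries.push(path);
      index += 1;
    },
    replace(path) {
      // Swap out the current entry in place.
      entries[index] = path;
    },
    goBack() {
      if (index > 0) index -= 1;
    }
  };
}
```

Here `push('/users')` grows the array, while `replace('/users/42')` overwrites the current entry, which is why going back after a `replace` skips the replaced-over location entirely.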
In earlier versions of React Router, you had to create your own `history` instance, but in v4 the `<BrowserRouter>`, `<HashRouter>`, and `<MemoryRouter>` components will create browser, hash, and memory instances for you. React Router makes the properties and methods of the `history` instance associated with your router available through the context, under the `router` object.
1. Use the `withRouter` higher-order component
The `withRouter` higher-order component will inject the `history` object as a prop of the component. This allows you to access the `push` and `replace` methods without having to deal with the `context`.
```reactjs
import { withRouter } from 'react-router-dom'
// this also works with react-router-native
const Button = withRouter(({ history }) => (
<button
type='button'
onClick={() => { history.push('/new-location') }}
>
Click Me!
</button>
))
```
2. Use composition and render a `<Route>`
The `<Route>` component isn't just for matching locations. You can render a pathless route and it will always match the current location. The `<Route>` component passes the same props as `withRouter`, so you will be able to access the `history` methods through the `history` prop.
```reactjs
import { Route } from 'react-router-dom'
const Button = () => (
<Route render={({ history}) => (
<button
type='button'
onClick={() => { history.push('/new-location') }}
>
Click Me!
</button>
)} />
)
```
3. Use the context*
*But you probably should not
The last option is one that you should only use if you feel comfortable working with React's context model. Although context is an option, it should be stressed that context is an unstable API and React has a section Why Not To Use Context in their documentation. So use at your own risk!
```reactjs
const Button = (props, context) => (
<button
type='button'
onClick={() => {
// context.history.push === history.push
context.history.push('/new-location')
}}
>
Click Me!
</button>
)
// you need to specify the context type so that it
// is available within the component
Button.contextTypes = {
history: React.PropTypes.shape({
push: React.PropTypes.func.isRequired
})
}
```
1 and 2 are the simplest choices to implement, so for most use cases they are your best bets.
## [Loop inside React JSX](https://stackoverflow.com/questions/22876978/loop-inside-react-jsx)
**677 Votes**, Ben Roberts
Think of it like you're just calling JavaScript functions. You can't put a `for` loop inside a function call:
```reactjs
return tbody(
for (var i = 0; i < numrows; i++) {
ObjectRow()
}
)
```
But you can make an array, and then pass that in:
```reactjs
var rows = [];
for (var i = 0; i < numrows; i++) {
rows.push(ObjectRow());
}
return tbody(rows);
```
You can use basically the same structure when working with JSX:
```reactjs
var rows = [];
for (var i = 0; i < numrows; i++) {
// note: we add a key prop here to allow react to uniquely identify each
// element in this array. see: https://reactjs.org/docs/lists-and-keys.html
rows.push(<ObjectRow key={i} />);
}
return <tbody>{rows}</tbody>;
```
Incidentally, my JavaScript example is almost exactly what that example of JSX transforms into. Play around with Babel REPL to get a feel for how JSX works.
## [How to pass props to {this.props.children}](https://stackoverflow.com/questions/32370994/how-to-pass-props-to-this-props-children)
**518 Votes**, plus-
You can use React.Children to iterate over the children, and then clone each element with new props (shallow merged) using React.cloneElement e.g:
```reactjs
const Child = ({ doSomething, value }) => (
<div onClick={() => doSomething(value)}>Click Me</div>
);
class Parent extends React.PureComponent {
doSomething = (value) => {
console.log('doSomething called by child with value:', value);
}
render() {
const { children } = this.props;
const childrenWithProps = React.Children.map(children, child =>
React.cloneElement(child, { doSomething: this.doSomething }));
return <div>{childrenWithProps}</div>
}
};
ReactDOM.render(
<Parent>
<Child value="1" />
<Child value="2" />
</Parent>,
document.getElementById('container')
);
```
Fiddle: https://jsfiddle.net/2q294y43/2/
## [What is the difference between using constructor vs getInitialState in React / React Native?](https://stackoverflow.com/questions/30668326/what-is-the-difference-between-using-constructor-vs-getinitialstate-in-react-r)
**406 Votes**, Nader Dabit
The two approaches are not interchangeable. You should initialize state in the constructor when using ES6 classes, and define the `getInitialState` method when using `React.createClass`.
See the official React doc on the subject of ES6 classes.
```reactjs
class MyComponent extends React.Component {
constructor(props) {
super(props);
this.state = { /* initial state */ };
}
}
```
is equivalent to
```reactjs
var MyComponent = React.createClass({
getInitialState() {
return { /* initial state */ };
},
});
```
## [What do these three dots in React do?](https://stackoverflow.com/questions/31048953/what-do-these-three-dots-in-react-do)
**392 Votes**, Thomas Johansen
Updated Answer (April 2018)
That's property spread notation, being added in ES2018 (proposal here, in the draft specification here), but long-supported in React projects via transpilation (as "JSX spread attributes" even though you could do it elsewhere, too, not just attributes).
`{...this.props}` spreads out the properties in props as discrete properties (attributes) on the `Modal` element you're creating. For instance, if `this.props` contained `a: 1` and `b: 2`, then
```reactjs
<Modal {...this.props} title='Modal heading' animation={false}>
```
would be the same as
```reactjs
<Modal a={this.props.a} b={this.props.b} title='Modal heading' animation={false}>
```
But it's dynamic, so whatever properties are in `props` are included.
Spread notation is handy not only for that use case, but for creating a new object with most (or all) of the properties of an existing object which comes up a lot when you're updating state, since you can't modify state directly:
```reactjs
this.setState(prevState => {
return {foo: {...prevState.foo, a: "updated"}};
});
```
That replaces `this.state.foo` with a new object with all the same properties as `foo` except the `a` property, which becomes `"updated"`:
```reactjs
const obj = {
foo: {
a: 1,
b: 2,
c: 3
}
};
console.log("original", obj.foo);
// Creates a NEW object and assigns it to `obj.foo`
obj.foo = {...obj.foo, a: "updated"};
console.log("updated", obj.foo);
```
Original Answer (July 2015)
Those are JSX spread attributes:
Spread Attributes
If you already have props as an object, and you want to pass it in JSX, you can use `...` as a "spread" operator to pass the whole props object. These two components are equivalent:
```reactjs
function App1() {
return <Greeting firstName="Ben" lastName="Hector" />;
}
function App2() {
const props = {firstName: 'Ben', lastName: 'Hector'};
return <Greeting {...props} />;
}
```
Spread attributes can be useful when you are building generic containers. However, they can also make your code messy by making it easy to pass a lot of irrelevant props to components that don't care about them. We recommend that you use this syntax sparingly.
That documentation used to mention that although this is (for now) a JSX thing, there's a proposal to add Object Rest and Spread Properties to JavaScript itself. (JavaScript has had rest and spread for arrays since ES2015, but not for object properties.) As of November 2017, that proposal is at Stage 3 and has been for some time. Both Chrome's V8 and Firefox's SpiderMonkey now support it, so presumably if the specification language is worked out in time it'll be Stage 4 soon and part of the ES2018 snapshot specification. (More about stages here.) Transpilers have supported it for some time (even separately from JSX).
Side note: Although the JSX quote above talks about a "spread operator," `...` isn't an operator, and can't be. Operators have a single result value. `...` is primary syntax (kind of like the `()` used with `for` aren't the grouping operator, even though they look like it).
## [Why do we need middleware for async flow in Redux?](https://stackoverflow.com/questions/34570758/why-do-we-need-middleware-for-async-flow-in-redux)
**370 Votes**, sbichenko
> What is wrong with this approach? Why would I want to use Redux Thunk or Redux Promise, as the documentation suggests?
There is nothing wrong with this approach. It's just inconvenient in a large application because you'll have different components performing the same actions, you might want to debounce some actions, or keep some local state like auto-incrementing IDs close to action creators, etc. So it is just easier from the maintenance point of view to extract action creators into separate functions.
You can read my answer to How to dispatch a Redux action with a timeout for a more detailed walkthrough.
Middleware like Redux Thunk or Redux Promise just gives you syntax sugar for dispatching thunks or promises, but you don't have to use it.
So, without any middleware, your action creator might look like
```reactjs
// action creator
function loadData(dispatch, userId) { // needs to dispatch, so it is first argument
return fetch(`http://data.com/${userId}`)
.then(res => res.json())
.then(
data => dispatch({ type: 'LOAD_DATA_SUCCESS', data }),
err => dispatch({ type: 'LOAD_DATA_FAILURE', err })
);
}
// component
componentWillMount() {
loadData(this.props.dispatch, this.props.userId); // don't forget to pass dispatch
}
```
But with Thunk Middleware you can write it like this:
```reactjs
// action creator
function loadData(userId) {
return dispatch => fetch(`http://data.com/${userId}`) // Redux Thunk handles these
.then(res => res.json())
.then(
data => dispatch({ type: 'LOAD_DATA_SUCCESS', data }),
err => dispatch({ type: 'LOAD_DATA_FAILURE', err })
);
}
// component
componentWillMount() {
this.props.dispatch(loadData(this.props.userId)); // dispatch like you usually do
}
```
So there is no huge difference. One thing I like about the latter approach is that the component doesn't care that the action creator is async. It just calls `dispatch` normally, it can also use `mapDispatchToProps` to bind such action creator with a short syntax, etc. The components don't know how action creators are implemented, and you can switch between different async approaches (Redux Thunk, Redux Promise, Redux Saga) without changing the components. On the other hand, with the former, explicit approach, your components know exactly that a specific call is async, and needs `dispatch` to be passed by some convention (for example, as a sync parameter).
Also think about how this code will change. Say we want to have a second data loading function, and to combine them in a single action creator.
With the first approach we need to be mindful of what kind of action creator we are calling:
```reactjs
// action creators
function loadSomeData(dispatch, userId) {
return fetch(`http://data.com/${userId}`)
.then(res => res.json())
.then(
data => dispatch({ type: 'LOAD_SOME_DATA_SUCCESS', data }),
err => dispatch({ type: 'LOAD_SOME_DATA_FAILURE', err })
);
}
function loadOtherData(dispatch, userId) {
return fetch(`http://data.com/${userId}`)
.then(res => res.json())
.then(
data => dispatch({ type: 'LOAD_OTHER_DATA_SUCCESS', data }),
err => dispatch({ type: 'LOAD_OTHER_DATA_FAILURE', err })
);
}
function loadAllData(dispatch, userId) {
return Promise.all(
loadSomeData(dispatch, userId), // pass dispatch first: it's async
loadOtherData(dispatch, userId) // pass dispatch first: it's async
);
}
// component
componentWillMount() {
loadAllData(this.props.dispatch, this.props.userId); // pass dispatch first
}
```
With Redux Thunk action creators can `dispatch` the result of other action creators and not even think whether those are synchronous or asynchronous:
```reactjs
// action creators
function loadSomeData(userId) {
return dispatch => fetch(`http://data.com/${userId}`)
.then(res => res.json())
.then(
data => dispatch({ type: 'LOAD_SOME_DATA_SUCCESS', data }),
err => dispatch({ type: 'LOAD_SOME_DATA_FAILURE', err })
);
}
function loadOtherData(userId) {
return dispatch => fetch(`http://data.com/${userId}`)
.then(res => res.json())
.then(
data => dispatch({ type: 'LOAD_OTHER_DATA_SUCCESS', data }),
err => dispatch({ type: 'LOAD_OTHER_DATA_FAILURE', err })
);
}
function loadAllData(userId) {
return dispatch => Promise.all(
dispatch(loadSomeData(userId)), // just dispatch normally!
dispatch(loadOtherData(userId)) // just dispatch normally!
);
}
// component
componentWillMount() {
this.props.dispatch(loadAllData(this.props.userId)); // just dispatch normally!
}
```
With this approach, if you later want your action creators to look into current Redux state, you can just use the second `getState` argument passed to the thunks without modifying the calling code at all:
```reactjs
function loadSomeData(userId) {
// Thanks to Redux Thunk I can use getState() here without changing callers
return (dispatch, getState) => {
if (getState().data[userId].isLoaded) {
return Promise.resolve();
}
fetch(`http://data.com/${userId}`)
.then(res => res.json())
.then(
data => dispatch({ type: 'LOAD_SOME_DATA_SUCCESS', data }),
err => dispatch({ type: 'LOAD_SOME_DATA_FAILURE', err })
);
}
}
```
If you need to change it to be synchronous, you can also do this without changing any calling code:
```reactjs
// I can change it to be a regular action creator without touching callers
function loadSomeData(userId) {
return {
type: 'LOAD_SOME_DATA_SUCCESS',
data: localStorage.getItem('my-data')
}
}
```
So the benefit of using middleware like Redux Thunk or Redux Promise is that components aren't aware of how action creators are implemented, and whether they care about Redux state, whether they are synchronous or asynchronous, and whether or not they call other action creators. The downside is a little bit of indirection, but we believe it's worth it in real applications.
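For intuition about how little magic is involved, the heart of a thunk-style middleware can be sketched roughly like this (a simplified stand-in, not the exact `redux-thunk` source; the fake-store demo wiring is invented purely for illustration):

```javascript
// The essence of a thunk middleware (simplified): if a dispatched action
// is a function, call it with dispatch/getState instead of passing it on.
const thunkMiddleware = ({ dispatch, getState }) => next => action => {
  if (typeof action === 'function') {
    return action(dispatch, getState);
  }
  return next(action);
};

// A hand-wired demo with a fake store, just to show the control flow.
function demo() {
  const dispatched = [];
  const fakeStore = {
    dispatch: action => dispatched.push(action),
    getState: () => ({ loggedIn: false })
  };
  const next = action => dispatched.push(action);
  const dispatch = thunkMiddleware(fakeStore)(next);

  dispatch({ type: 'PLAIN' });              // plain object: forwarded as-is
  dispatch(d => d({ type: 'FROM_THUNK' })); // function: invoked with dispatch
  return dispatched.map(a => a.type);
}
```

With something like this in the chain, `store.dispatch` accepts both plain objects and functions, which is exactly the syntax sugar described above.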
Finally, Redux Thunk and friends are just one possible approach to asynchronous requests in Redux apps. Another interesting approach is Redux Saga which lets you define long-running daemons (sagas) that take actions as they come, and transform or perform requests before outputting actions. This moves the logic from action creators into sagas. You might want to check it out, and later pick what suits you the most.
> I searched the Redux repo for clues, and found that Action Creators were required to be pure functions in the past.
This is incorrect. The docs said this, but the docs were wrong.
Action creators were never required to be pure functions.
We fixed the docs to reflect that.
## [React.js inline style best practices](https://stackoverflow.com/questions/26882177/react-js-inline-style-best-practices)
**359 Votes**, eye_mew
There aren't a lot of "Best Practices" yet. Those of us that are using inline styles for React components are still very much experimenting.
There are a number of approaches that vary wildly: React inline-style lib comparison chart
### All or nothing?
What we refer to as "style" actually includes quite a few concepts:
- Layout: how an element/component looks in relationship to others
- Appearance: the characteristics of an element/component
- Behavior and state: how an element/component looks in a given state
### Start with state-styles
React is already managing the state of your components, this makes styles of state and behavior a natural fit for colocation with your component logic.
Instead of building components to render with conditional state-classes, consider adding state-styles directly:
```reactjs
// Typical component with state-classes
<li
className={classnames({ 'todo-list__item': true, 'is-complete': item.complete })} />
// Using inline-styles for state
<li className='todo-list__item'
style={(item.complete) ? styles.complete : {}} />
```
Note that we're using a class to style appearance but no longer using any `.is-` prefixed class for state and behavior.
We can use `Object.assign` (ES6) or `_.extend` (underscore/lodash) to add support for multiple states:
```reactjs
// Supporting multiple-states with inline-styles
<li className='todo-list__item'
style={Object.assign({}, item.complete && styles.complete, item.due && styles.due )}>
```
### Customization and reusability
Now that we're using `Object.assign` it becomes very simple to make our component reusable with different styles. If we want to override the default styles, we can do so at the call-site with props, like so: `<TodoItem dueStyle={{ fontWeight: "bold" }} />`. Implemented like this:
```reactjs
<li className='todo-list__item'
    style={Object.assign({},
      item.due && styles.due,
      item.due && this.props.dueStyle)}>
```
### Layout
Personally, I don't see compelling reason to inline layout styles. There are a number of great CSS layout systems out there. I'd just use one.
That said, don't add layout styles directly to your component. Wrap your components with layout components. Here's an example.
```reactjs
// This couples your component to the layout system
// It reduces the reusability of your component
<UserBadge
className="col-xs-12 col-sm-6 col-md-8"
firstName="Michael"
lastName="Chan" />
// This is much easier to maintain and change
<div class="col-xs-12 col-sm-6 col-md-8">
<UserBadge
firstName="Michael"
lastName="Chan" />
</div>
```
For layout support, I often try to design components to be `100%` `width` and `height`.
### Appearance
This is the most contentious area of the "inline-style" debate. Ultimately, it's up to the component your designing and the comfort of your team with JavaScript.
One thing is certain, you'll need the assistance of a library. Browser-states (`:hover`, `:focus`), and media-queries are painful in raw React.
I like Radium because the syntax for those hard parts is designed to model that of SASS.
### Code organization
Often you'll see a style object outside of the module. For a todo-list component, it might look something like this:
```reactjs
var styles = {
  root: {
    display: "block"
  },
  item: {
    color: "black"
  },
  complete: {
    textDecoration: "line-through"
  },
  due: {
    color: "red"
  }
}
### getter functions
Adding a bunch of style logic to your template can get a little messy (as seen above). I like to create getter functions to compute styles:
```reactjs
React.createClass({
  getStyles: function () {
    var item = this.props.item;
    return Object.assign(
      {},
      item.complete && styles.complete,
      item.due && styles.due,
      item.due && this.props.dueStyle
    );
  },
render: function () {
return <li style={this.getStyles()}>{this.props.item}</li>
}
});
```
### Further watching
I discussed all of these in more detail at React Europe earlier this year: Inline Styles and when it's best to 'just use CSS'.
I'm happy to help as you make new discoveries along the way :) Hit me up -> @chantastic
## [Pros/cons of using redux-saga with ES6 generators vs redux-thunk with ES2017 async/await](https://stackoverflow.com/questions/34930735/pros-cons-of-using-redux-saga-with-es6-generators-vs-redux-thunk-with-es2017-asy)
**342 Votes**, hampusohlsson
In redux-saga, the equivalent of the above example would be
```reactjs
export function* loginSaga() {
while(true) {
const { user, pass } = yield take(LOGIN_REQUEST)
try {
let { data } = yield call(request.post, '/login', { user, pass });
yield fork(loadUserData, data.uid);
yield put({ type: LOGIN_SUCCESS, data });
} catch(error) {
yield put({ type: LOGIN_ERROR, error });
}
}
}
export function* loadUserData(uid) {
try {
yield put({ type: USERDATA_REQUEST });
let { data } = yield call(request.get, `/users/${uid}`);
yield put({ type: USERDATA_SUCCESS, data });
} catch(error) {
yield put({ type: USERDATA_ERROR, error });
}
}
```
The first thing to notice is that we're calling the api functions using the form `yield call(func, ...args)`. `call` doesn't execute the effect, it just creates a plain object like `{type: 'CALL', func, args}`. The execution is delegated to the redux-saga middleware which takes care of executing the function and resuming the generator with its result.
The main advantage is that you can test the generator outside of Redux using simple equality checks
```reactjs
const iterator = loginSaga()
assert.deepEqual(iterator.next().value, take(LOGIN_REQUEST))
// resume the generator with some dummy action
const mockAction = {user: '...', pass: '...'}
assert.deepEqual(
iterator.next(mockAction).value,
call(request.post, '/login', mockAction)
)
// simulate an error result
const mockError = 'invalid user/password'
assert.deepEqual(
iterator.throw(mockError).value,
put({ type: LOGIN_ERROR, error: mockError })
)
```
Note we're mocking the api call result by simply injecting the mocked data into the `next` method of the iterator. Mocking data is way simpler than mocking functions.
The second thing to notice is the call to `yield take(ACTION)`. Thunks are called by the action creator on each new action (e.g. `LOGIN_REQUEST`). i.e. actions are continually pushed to thunks, and thunks have no control on when to stop handling those actions.
In redux-saga, generators pull the next action. i.e. they have control when to listen for some action, and when to not. In the above example the flow instructions are placed inside a `while(true)` loop, so it'll listen for each incoming action, which somewhat mimics the thunk pushing behavior.
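The pull model is really a property of generators themselves, which a plain-JavaScript toy can illustrate; the `take` descriptor and the driver below are invented for this sketch and are far simpler than the real redux-saga runtime:

```javascript
// Plain-JS illustration of the "pull" model: the generator asks for the
// next action via a take(...) descriptor, and a tiny driver feeds it.
const take = type => ({ type: 'TAKE', actionType: type });

function* greeter() {
  while (true) {
    // Pull: suspend until the driver supplies a matching action.
    const action = yield take('GREET');
    yield { type: 'PUT', payload: `hello ${action.name}` };
  }
}

// A toy driver: walks the generator, matching TAKE effects against an
// action log and collecting PUT effects.
function run(saga, actions) {
  const outputs = [];
  const it = saga();
  let step = it.next();
  for (const action of actions) {
    if (!step.done && step.value.type === 'TAKE' &&
        step.value.actionType === action.type) {
      step = it.next(action); // resume with the pulled action
      while (!step.done && step.value.type === 'PUT') {
        outputs.push(step.value.payload);
        step = it.next();
      }
    }
    // Actions the generator didn't ask for are simply dropped.
  }
  return outputs;
}
```

The generator never sees actions it did not explicitly ask for with `take`; that is the control that the push-based thunk model lacks.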
The pull approach allows implementing complex control flows. Suppose for example we want to add the following requirements
- Handle the `LOGOUT` user action
- Upon the first successful login, the server returns a token which expires in some delay stored in an `expires_in` field. We'll have to refresh the authorization in the background every `expires_in` milliseconds
- Take into account that while waiting for the result of API calls (either initial login or refresh) the user may log out in between
How would you implement that with thunks, while also providing full test coverage for the entire flow? Here is how it may look with Sagas:
```reactjs
function* authorize(credentials) {
const token = yield call(api.authorize, credentials)
yield put( login.success(token) )
return token
}
function* authAndRefreshTokenOnExpiry(name, password) {
let token = yield call(authorize, {name, password})
while(true) {
yield call(delay, token.expires_in)
token = yield call(authorize, {token})
}
}
function* watchAuth() {
while(true) {
try {
const {name, password} = yield take(LOGIN_REQUEST)
yield race([
take(LOGOUT),
call(authAndRefreshTokenOnExpiry, name, password)
])
// user logged out, next while iteration will wait for the
// next LOGIN_REQUEST action
} catch(error) {
yield put( login.error(error) )
}
}
}
```
In the above example, we're expressing our concurrency requirement using `race`. If `take(LOGOUT)` wins the race (i.e. the user clicked a Logout button), the race will automatically cancel the `authAndRefreshTokenOnExpiry` background task. And if `authAndRefreshTokenOnExpiry` was blocked in the middle of a `call(authorize, {token})` call, it'll also be cancelled. Cancellation propagates downward automatically.
You can find a runnable demo of the above flow.
## [Can you force a React component to rerender without calling setState?](https://stackoverflow.com/questions/30626030/can-you-force-a-react-component-to-rerender-without-calling-setstate)
**322 Votes**, Philip Walton
In your component, you can call `this.forceUpdate()` to force a rerender.
Documentation: https://facebook.github.io/react/docs/component-api.html
## [Why is React's concept of Virtual DOM said to be more performant than dirty model checking?](https://stackoverflow.com/questions/21109361/why-is-reacts-concept-of-virtual-dom-said-to-be-more-performant-than-dirty-mode)
**315 Votes**, Daniil
I'm the primary author of a virtual-dom module, so I might be able to answer your questions. There are in fact 2 problems that need to be solved here
1. When do I re-render? Answer: When I observe that the data is dirty.
2. How do I re-render efficiently? Answer: Using a virtual DOM to generate a real DOM patch.
In React, each of your components have a state. This state is like an observable you might find in knockout or other MVVM style libraries. Essentially, React knows when to re-render the scene because it is able to observe when this data changes. Dirty checking is slower than observables because you must poll the data at a regular interval and check all of the values in the data structure recursively. By comparison, setting a value on the state will signal to a listener that some state has changed, so React can simply listen for change events on the state and queue up re-rendering.
The virtual DOM is used for efficient re-rendering of the DOM. This isn't really related to dirty checking your data. You could re-render using a virtual DOM with or without dirty checking. You're right in that there is some overhead in computing the diff between two virtual trees, but the virtual DOM diff is about understanding what needs updating in the DOM and not whether or not your data has changed. In fact, the diff algorithm is a dirty checker itself but it is used to see if the DOM is dirty instead.
We aim to re-render the virtual tree only when the state changes. So using an observable to check if the state has changed is an efficient way to prevent unnecessary re-renders, which would cause lots of unnecessary tree diffs. If nothing has changed, we do nothing.
A virtual DOM is nice because it lets us write our code as if we were re-rendering the entire scene. Behind the scenes we want to compute a patch operation that updates the DOM to look how we expect. So while the virtual DOM diff/patch algorithm is probably not the optimal solution, it gives us a very nice way to express our applications. We just declare exactly what we want and React/virtual-dom will work out how to make your scene look like this. We don't have to do manual DOM manipulation or get confused about previous DOM state. We don't have to re-render the entire scene either, which could be much less efficient than patching it.
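To make the diff step concrete, here is a toy diff over two virtual trees. This is not React's or virtual-dom's actual algorithm; the node shape (`{ tag, props, children }`) and the patch format are invented for illustration:

```javascript
// Toy virtual nodes: { tag, props, children }. The diff walks both trees and
// emits a list of patch descriptions; a real patcher would apply them to the DOM.
function diff(oldNode, newNode, path = 'root') {
  if (oldNode === undefined) return [{ op: 'create', path, node: newNode }];
  if (newNode === undefined) return [{ op: 'remove', path }];
  if (typeof oldNode === 'string' || typeof newNode === 'string') {
    return oldNode === newNode ? [] : [{ op: 'text', path, value: newNode }];
  }
  if (oldNode.tag !== newNode.tag) return [{ op: 'replace', path, node: newNode }];
  const patches = [];
  // Compare props shallowly: this is the "dirty checking" of the virtual tree.
  const keys = new Set([
    ...Object.keys(oldNode.props || {}),
    ...Object.keys(newNode.props || {}),
  ]);
  for (const k of keys) {
    if ((oldNode.props || {})[k] !== (newNode.props || {})[k]) {
      patches.push({ op: 'prop', path, name: k, value: (newNode.props || {})[k] });
    }
  }
  const len = Math.max((oldNode.children || []).length, (newNode.children || []).length);
  for (let i = 0; i < len; i++) {
    patches.push(...diff((oldNode.children || [])[i], (newNode.children || [])[i], `${path}.${i}`));
  }
  return patches;
}
```

Diffing a tree against an updated copy yields only the patches for what actually changed, which is exactly the information a patcher needs to touch the real DOM minimally.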
## [React set focus on input after render](https://stackoverflow.com/questions/28889826/react-set-focus-on-input-after-render)
**313 Votes**, Dave
You should do it in `componentDidMount` and `refs callback` instead. Something like this
```reactjs
componentDidMount(){
this.nameInput.focus();
}
```
```reactjs
class App extends React.Component{
componentDidMount(){
this.nameInput.focus();
}
render() {
return(
<div>
<input
defaultValue="Won't focus"
/>
<input
ref={(input) => { this.nameInput = input; }}
defaultValue="will focus"
/>
</div>
);
}
}
ReactDOM.render(<App />, document.getElementById('app'));
```
```reactjs
<script src="https://cdnjs.cloudflare.com/ajax/libs/react/15.3.1/react.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/react/15.3.1/react-dom.js"></script>
<div id="app"></div>
```
## [Invariant Violation: _registerComponent(): Target container is not a DOM element](https://stackoverflow.com/questions/26566317/invariant-violation-registercomponent-target-container-is-not-a-dom-elem)
**307 Votes**, Dan Abramov
By the time the script is executed, the target `document` element is not available yet, because the `script` itself is in the `head`. While it's a valid solution to keep the `script` in the `head` and render on the `DOMContentLoaded` event, it's even better to put your `script` at the very bottom of the `body` and render the root component into a `div` before it, like this:
```reactjs
<html>
<head>
</head>
<body>
<div id="root"></div>
<script src="/bundle.js"></script>
</body>
</html>
```
and in your code, call
```reactjs
React.render(<App />, document.getElementById('root'));
```
You should always render to a nested `div` instead of `body`. Otherwise all sorts of third-party code (Google Font Loader, browser plugins, whatever) can modify the `body` DOM node when React doesn't expect it, and cause weird errors that are very hard to trace and debug. Read more about this issue.
The nice thing about putting the `script` at the bottom is that it won't block rendering until the script loads, in case you add React server rendering to your project.
## [React js onClick can't pass value to method](https://stackoverflow.com/questions/29810914/react-js-onclick-cant-pass-value-to-method)
**304 Votes**, user1924375
### Easy Way
Use an arrow function:
```reactjs
return (
<th value={column} onClick={() => this.handleSort(column)}>{column}</th>
);
```
This will create a new function that calls `handleSort` with the right params.
### Better Way
Extract it into a sub-component.
The problem with using an arrow function in the render call is it will create a new function every time, which ends up causing unneeded re-renders.
If you create a sub-component, you can pass handler and use props as the arguments, which will then re-render only when the props change (because the handler reference now never changes):
Sub-component
```reactjs
class TableHeader extends Component {
handleClick = () => {
this.props.onHeaderClick(this.props.value);
}
render() {
return (
<th onClick={this.handleClick}>
{this.props.column}
</th>
);
}
}
```
Main component
```reactjs
{this.props.defaultColumns.map((column) => (
  <TableHeader
    key={column}
    value={column}
    onHeaderClick={this.handleSort}
  />
))}
```
### Old Easy Way (ES5)
Use `.bind` to pass the parameter you want:
```reactjs
return (
  <th value={column} onClick={this.handleSort.bind(this, column)}>{column}</th>
);
```
## [Uncaught Error: Invariant Violation: Element type is invalid: expected a string (for built-in components) or a class/function but got: object](https://stackoverflow.com/questions/34130539/uncaught-error-invariant-violation-element-type-is-invalid-expected-a-string)
**301 Votes**, Pankaj Thakur
In my case (using Webpack) it was the difference between:
```reactjs
import {MyComponent} from '../components/xyz.js';
```
vs
```reactjs
import MyComponent from '../components/xyz.js';
```
The second one works while the first caused the error.
## [What's the difference between super() and super(props) in React when using es6 classes?](https://stackoverflow.com/questions/30571875/whats-the-difference-between-super-and-superprops-in-react-when-using-e)
**298 Votes**, Misha Moroshko
There is only one reason to pass `props` to `super()`:
When you want to access `this.props` in constructor.
Passing:
```reactjs
class MyComponent extends React.Component {
constructor(props) {
super(props)
console.log(this.props)
// -> { icon: 'home', }
}
}
```
Not passing:
```reactjs
class MyComponent extends React.Component {
constructor(props) {
super()
console.log(this.props)
// -> undefined
// Props parameter is still available
console.log(props)
// -> { icon: 'home', }
}
render() {
// No difference outside constructor
console.log(this.props)
// -> { icon: 'home', }
}
}
```
Note that passing or not passing `props` to `super` has no effect on later uses of `this.props` outside the `constructor`. That is, `render`, `shouldComponentUpdate`, and event handlers always have access to it.
This is explicitly stated in one of Sophie Alpert's answers to a similar question.
The documentation (State and Lifecycle, Adding Local State to a Class, point 2) recommends:
Class components should always call the base constructor with `props`.
However, no reason is provided. We can speculate it is either because of subclassing or for future compatibility.
(Thanks @MattBrowne for the link)
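The mechanics are easy to reproduce outside React with a plain base class that, like `React.Component`, assigns its constructor argument to `this.props`. This is a simplified sketch; `Base` merely stands in for `React.Component`:

```javascript
// Base mimics React.Component: its constructor stores props on the instance.
class Base {
  constructor(props) { this.props = props; }
}

class WithProps extends Base {
  constructor(props) {
    super(props);                  // Base sets this.props = { icon: 'home' }
    this.seenInCtor = this.props;  // defined
  }
}

class WithoutProps extends Base {
  constructor(props) {
    super();                       // Base sets this.props = undefined
    this.seenInCtor = this.props;  // undefined, though the `props` parameter itself is fine
  }
}
```

React itself assigns `props` onto the instance right after the constructor returns, which is why `this.props` works everywhere else either way.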
## [What is the difference between React Native and React?](https://stackoverflow.com/questions/34641582/what-is-the-difference-between-react-native-and-react)
**288 Votes**, shiva kumar
ReactJS is a JavaScript library for building user interfaces and web applications. It supports rendering both on the front end and on the server.
React Native is a mobile framework for building native mobile applications (iOS, Android, and Windows) in JavaScript. It lets you use ReactJS to build your components and implements ReactJS under the hood; your components are rendered as native app components.
Both are open sourced by Facebook.
## [React-router urls don't work when refreshing or writting manually](https://stackoverflow.com/questions/27928372/react-router-urls-dont-work-when-refreshing-or-writting-manually)
**278 Votes**, DavidDev
Looking at the comments on the accepted answer and the generic nature of this question ('don't work'), I thought this might be a good place for some general explanations about the issues involved here. So this answer is intended as background info / elaboration on the specific use case of the OP. Please bear with me.
### Server-side vs Client-side
The first big thing to understand about this is that there are now 2 places where the URL is interpreted, whereas there used to be only 1 in 'the old days'. In the past, when life was simple, some user sent a request for `http://example.com/about` to the server, which inspected the path part of the URL, determined the user was requesting the about page and then sent back that page.
With client-side routing, which is what React-Router provides, things are less simple. At first, the client does not have any JS code loaded yet. So the very first request will always be to the server. That will then return a page that contains the needed script tags to load React and React Router etc. Only when those scripts have loaded does phase 2 start. In phase 2, when the user clicks on the 'About us' navigation link for example, the URL is changed locally only to `http://example.com/about` (made possible by the History API), but no request to the server is made. Instead, React Router does its thing on the client side, determines which React view to render and renders it. Assuming your about page does not need to make any REST calls, it's done already. You have transitioned from Home to About Us without any server request having fired.
So basically when you click a link, some Javascript runs that manipulates the URL in the address bar, without causing a page refresh, which in turn causes React Router to perform a page transition on the client side.
But now consider what happens if you copy-paste the URL in the address bar and e-mail it to a friend. Your friend has not loaded your website yet. In other words, she is still in phase 1. No React Router is running on her machine yet. So her browser will make a server request to `http://example.com/about`.
And this is where your trouble starts. Until now, you could get away with just placing a static HTML file at the webroot of your server. But that would give `404` errors for all other URLs when requested from the server. Those same URLs work fine on the client side, because there React Router is doing the routing for you, but they fail on the server side unless you make your server understand them.
### Combining server- and client-side routing
If you want the `http://example.com/about` URL to work on both the server- and the client-side, you need to set up routes for it on both the server- and the client side. Makes sense right?
And this is where your choices begin. Solutions range from bypassing the problem altogether, via a catch-all route that returns the bootstrap HTML, to the full-on isomorphic approach where both the server and the client run the same JS code.
### Bypassing the problem altogether: Hash History
With Hash History instead of Browser History, your URL for the about page would look something like this:
`http://example.com/#/about`
The part after the hash (`#`) symbol is not sent to the server. So the server only sees `http://example.com/` and sends the index page as expected. React-Router will pick up the `#/about` part and show the correct page.
Downsides:
- 'ugly' URLs
- Server-side rendering is not possible with this approach. As far as Search Engine Optimization (SEO) is concerned, your website consists of a single page with hardly any content on it.
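Because the fragment never reaches the server, the client alone interprets it. Extracting the route from such a URL is a one-liner; the `hashPath` helper name is made up for this sketch:

```javascript
// Return the client-side route embedded after '#', or '/' when there is none.
function hashPath(url) {
  const i = url.indexOf('#');
  return i === -1 ? '/' : (url.slice(i + 1) || '/');
}
```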
### Catch-all
With this approach you do use Browser History, but just set up a catch-all on the server that sends `/*` to `index.html`, effectively giving you much the same situation as with Hash History. You do have clean URLs however and you could improve upon this scheme later without having to invalidate all your user's favorites.
Downsides:
- More complex to set up
- Still no good SEO
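The server-side half of the catch-all boils down to one routing decision. Here it is sketched as a pure function; the function name and the `/static/` asset prefix are assumptions for illustration:

```javascript
// Decide what the server returns for a requested path: known asset paths are
// served as-is, every other URL falls through to the bootstrap HTML, where
// the client-side router takes over.
function resolveCatchAll(path, assetPrefixes = ['/static/']) {
  if (assetPrefixes.some(prefix => path.startsWith(prefix))) return path;
  return '/index.html';
}
```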
### Hybrid
In the hybrid approach you expand upon the catch-all scenario by adding specific scripts for specific routes. You could make some simple PHP scripts to return the most important pages of your site with content included, so Googlebot can at least see what's on your page.
Downsides:
- Even more complex to set up
- Only good SEO for those routes you give the special treatment
- Duplicating code for rendering content on server and client
### Isomorphic
What if we use Node JS as our server so we can run the same JS code on both ends? Now, we have all our routes defined in a single react-router config and we don't need to duplicate our rendering code. This is 'the holy grail' so to speak. The server sends the exact same markup as we would end up with if the page transition had happened on the client. This solution is optimal in terms of SEO.
Downsides:
- Server must (be able to) run JS. I've experimented with Java in combination with Nashorn, but it did not work for me. In practice it mostly means you must use a Node JS based server.
- Many tricky environmental issues (using `window` on the server side, etc.)
- Steep learning curve
### Which should I use?
Choose the one that you can get away with. Personally I think the catch-all is simple enough to set up that it would be my minimum. This setup allows you to improve on things over time. If you are already using Node JS as your server platform, I'd definitely investigate doing an isomorphic app. Yes, it's tough at first, but once you get the hang of it it's actually a very elegant solution to the problem.
So basically, for me, that would be the deciding factor. If my server runs on Node JS, I'd go isomorphic, otherwise I would go for the Catch-all solution and just expand on it (Hybrid solution) as time progresses and SEO requirements demand it.
If you'd like to learn more on isomorphic (also called 'universal') rendering with React, there are some good tutorials on the subject:
- React to the future with isomorphic apps
- The Pain and the Joy of creating isomorphic apps in ReactJS
- How to Implement Node + React Isomorphic JavaScript & Why it Matters
Also, to get you started, I recommend looking at some starter kits. Pick one that matches your choices for the technology stack (remember, React is just the V in MVC, you need more stuff to build a full app). Start with looking at the one published by Facebook itself:
Create React App
Or pick one of the many by the community. There is a nice site now that tries to index all of them:
Pick your perfect React starter project
I started with these:
- React Isomorphic Starterkit
- React Redux Universal Hot Example
Currently I am using a home-brew version of universal rendering that was inspired by the two starter kits above, but they are out of date now.
Good luck with your quest!
## [ReactJS - Does render get called any time setState is called?](https://stackoverflow.com/questions/24718709/reactjs-does-render-get-called-any-time-setstate-is-called)
**270 Votes**, Brad Parks
Does React re-render all components and sub components every time setState is called?
By default - yes.
There is a method `boolean shouldComponentUpdate(object nextProps, object nextState)`. Each component has this method, and it is responsible for determining "should the component update (run the render function)?" every time you change state or pass new props from the parent component.
You can write your own implementation of `shouldComponentUpdate` for your component, but the default implementation always returns `true`, meaning the render function is always re-run.
Quote from official docs http://facebook.github.io/react/docs/component-specs.html#updating-shouldcomponentupdate
By default, `shouldComponentUpdate` always returns true to prevent subtle bugs when state is mutated in place, but if you are careful to always treat state as immutable and to read only from props and state in `render()` then you can override `shouldComponentUpdate` with an implementation that compares the old props and state to their replacements.
Next part of your question:
If so, why? I thought the idea was that React only rendered as little as needed - when state changed.
There are two steps of what we may call "render":
1. Virtual DOM render: when the `render` method is called, it returns a new virtual DOM structure of the component. As I mentioned before, this `render` method is always called when you call `setState()`, because `shouldComponentUpdate` always returns true by default. So, by default, there is no optimization here in React.
2. Native DOM render: React changes real DOM nodes in your browser only if they were changed in the virtual DOM, and as little as needed - this is the great React feature which optimizes real DOM mutation and makes React fast.
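A hand-written `shouldComponentUpdate` typically short-circuits the first step with a shallow comparison of old and new props/state, the approach `React.PureComponent` later standardized. A plain-JS sketch of that comparison:

```javascript
// Shallow equality: same key set, and each value identical (Object.is).
// Nested objects compare by reference, which is why "treat state as
// immutable" matters for this optimization to be safe.
function shallowEqual(a, b) {
  if (a === b) return true;
  const keysA = Object.keys(a);
  const keysB = Object.keys(b);
  if (keysA.length !== keysB.length) return false;
  return keysA.every(k => Object.is(a[k], b[k]));
}

// Inside a component, shouldComponentUpdate would then return:
//   !shallowEqual(this.props, nextProps) || !shallowEqual(this.state, nextState)
```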
## [What is the difference between state and props in React?](https://stackoverflow.com/questions/27991366/what-is-the-difference-between-state-and-props-in-react)
**267 Votes**, skaterdav85
Props and state are related. The state of one component will often become the props of a child component. Props are passed to the child within the render method of the parent as the second argument to `React.createElement()` or, if you're using JSX, the more familiar tag attributes.
```reactjs
<MyChild name={this.state.childsName} />
```
The parent's state value of `childsName` becomes the child's `this.props.name`. From the child's perspective, the name prop is immutable. If it needs to be changed, the parent should just change its internal state:
```reactjs
this.setState({ childsName: 'New name' });
```
and React will propagate it to the child for you. A natural follow-on question is: what if the child needs to change its name prop? This is usually done through child events and parent callbacks. The child might expose an event called, for example, `onNameChanged`. The parent would then subscribe to the event by passing a callback handler.
```reactjs
<MyChild name={this.state.childsName} onNameChanged={this.handleName} />
```
The child would pass its requested new name as an argument to the event callback by calling, e.g., `this.props.onNameChanged('New name')`, and the parent would use the name in the event handler to update its state.
```reactjs
handleName: function(newName) {
this.setState({ childsName: newName });
}
```
## [Understanding unique keys for array children in React.js](https://stackoverflow.com/questions/28329382/understanding-unique-keys-for-array-children-in-react-js)
**267 Votes**, Brett DeWoody
You should add a key to each child as well as each element inside children.
This way React can handle the minimal DOM change.
In your code, each `<TableRowItem key={item.id} data={item} columns={columnNames}/>` is trying to render some children inside it without a key.
Check this example.
Try removing the `key={i}` from the `<b></b>` element inside the div's (and check the console).
In the sample, if we don't give a key to the `<b>` element and we want to update only the `object.city`, React needs to re-render the whole row vs just the element.
Here is the code:
```reactjs
var data = [{name:'Jhon', age:28, city:'HO'},
{name:'Onhj', age:82, city:'HN'},
{name:'Nohj', age:41, city:'IT'}
];
var Hello = React.createClass({
render: function() {
var _data = this.props.info;
console.log(_data);
return(
<div>
{_data.map(function(object, i){
return <div className={"row"} key={i}>
{[ object.name ,
// remove the key
<b className="fosfo" key={i}> {object.city} </b> ,
object.age
]}
</div>;
})}
</div>
);
}
});
React.render(<Hello info={data} />, document.body);
```
React documentation on the importance of keys in reconciliation: Keys
## [Perform debounce in React.js](https://stackoverflow.com/questions/23123138/perform-debounce-in-react-js)
**261 Votes**, Chetan Ankola
The important part here is to create a single debounced (or throttled) function per component instance. You don't want to recreate the debounce (or throttle) function every time, nor do you want multiple instances to share the same debounced function.
I'm not defining a debouncing function in this answer as it's not really relevant, but this answer will work perfectly fine with `_.debounce` from Underscore or Lodash, as well as any user-provided debouncing function.
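For readers who want to see what such a function does, here is a minimal trailing-edge debounce in the spirit of `_.debounce` (simplified: no leading-edge, `maxWait`, or `cancel` options):

```javascript
// Returns a wrapper that postpones calling fn until `wait` ms have passed
// since the last invocation; a burst of calls collapses into one trailing call.
function debounce(fn, wait) {
  let timer = null;
  return function (...args) {
    clearTimeout(timer);
    timer = setTimeout(() => fn.apply(this, args), wait);
  };
}
```

Note the closure over `timer`: that single piece of state is exactly why each component instance needs its own debounced function, as the rest of this answer explains.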
### NOT a good idea:
```reactjs
var SearchBox = React.createClass({
method: function() {...},
  debouncedMethod: debounce(this.method, 100)
});
```
It won't work because, at the time the class description object is created, `this` is not the instance being created. `this.method` does not return what you expect because the `this` context is not the object itself (which, by the way, does not really exist yet, as it is just being created).
### NOT a good idea:
```reactjs
var SearchBox = React.createClass({
method: function() {...},
debouncedMethod: function() {
var debounced = debounce(this.method,100);
debounced();
},
});
```
This time you are effectively creating a debounced function that calls your `this.method`. The problem is that you are recreating it on every `debouncedMethod` call, so the newly created debounce function does not know anything about former calls! You must reuse the same debounced function over time or the debouncing will not happen.
### NOT a good idea:
```reactjs
var SearchBox = React.createClass({
debouncedMethod: debounce(function () {...},100),
});
```
This is a little bit tricky here.
All the mounted instances of the class will share the same debounced function, and most often this is not what you want! See JsFiddle: 3 instances are producing only 1 log entry globally.
You have to create a debounced function for each component instance, not a single debounced function at the class level shared by every component instance.
### GOOD idea:
Because debounced functions are stateful, we have to create one debounced function per component instance.
ES6 (class property): recommended
```reactjs
class SearchBox extends React.Component {
method = debounce(() => {
...
});
}
```
ES6 (class constructor)
```reactjs
class SearchBox extends React.Component {
constructor(props) {
super(props);
this.method = debounce(this.method,1000);
}
method() { ... }
}
```
ES5
```reactjs
var SearchBox = React.createClass({
method: function() {...},
componentWillMount: function() {
this.method = debounce(this.method,100);
},
});
```
See JsFiddle: 3 instances are producing 1 log entry per instance (that makes 3 globally).
### Take care of React's event pooling
This is related because we often want to debounce or throttle DOM events.
In React, the event objects (i.e., `SyntheticEvent`) that you receive in callbacks are pooled (this is now documented). This means that after the event callback has been called, the SyntheticEvent you receive will be put back in the pool with empty attributes to reduce GC pressure.
So if you access SyntheticEvent properties asynchronously relative to the original callback (as may be the case if you throttle/debounce), the properties you access may have been erased. If you want the event to never be put back in the pool, you can use the `persist()` method.
Without persist (default behavior: pooled event)
```reactjs
onClick = e => {
alert(`sync -> hasNativeEvent=${!!e.nativeEvent}`);
setTimeout(() => {
alert(`async -> hasNativeEvent=${!!e.nativeEvent}`);
}, 0);
};
```
The 2nd (async) will print `hasNativeEvent=false` because the event properties have been cleaned up.
With persist
```reactjs
onClick = e => {
e.persist();
alert(`sync -> hasNativeEvent=${!!e.nativeEvent}`);
setTimeout(() => {
alert(`async -> hasNativeEvent=${!!e.nativeEvent}`);
}, 0);
};
```
The 2nd (async) will print `hasNativeEvent=true` because `persist()` keeps the event from being put back in the pool.
You can test these two behaviors in this JsFiddle.
Read Julen's answer for an example of using `persist()` with a throttle/debounce function.
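The pooling behavior itself can be modeled with a tiny object pool. This is a hypothetical sketch, not React's actual `SyntheticEvent` implementation:

```javascript
// A minimal event pool: released events are wiped and recycled unless the
// consumer called persist() on them first.
class EventPool {
  constructor() { this.free = []; }
  acquire(data) {
    const event = this.free.pop() || {};
    Object.assign(event, data, {
      persisted: false,
      persist() { this.persisted = true; },
    });
    return event;
  }
  release(event) {
    if (event.persisted) return;               // persisted events stay intact
    for (const key of Object.keys(event)) delete event[key]; // wipe attributes
    this.free.push(event);                     // recycle the object
  }
}
```

Accessing a property like `event.type` after release returns `undefined` unless `persist()` was called first, which mirrors the `hasNativeEvent=false` behavior described in this answer.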
## [Show or hide element in React](https://stackoverflow.com/questions/24502898/show-or-hide-element-in-react)
**257 Votes**, user1725382
The key is to update the state of the component in the click handler using `setState`. When the state changes get applied, the `render` method gets called again with the new state:
```reactjs
var Search = React.createClass({
getInitialState: function() {
return { showResults: false };
},
onClick: function() {
this.setState({ showResults: true });
},
render: function() {
return (
<div>
<input type="submit" value="Search" onClick={this.onClick} />
{ this.state.showResults ? <Results /> : null }
</div>
);
}
});
var Results = React.createClass({
render: function() {
return (
<div id="results" className="search-results">
Some Results
</div>
);
}
});
ReactDOM.render(<Search />, document.getElementById('container'));
```
http://jsfiddle.net/kb3gN/15084/
## [Parse Error: Adjacent JSX elements must be wrapped in an enclosing tag](https://stackoverflow.com/questions/31284169/parse-error-adjacent-jsx-elements-must-be-wrapped-in-an-enclosing-tag)
**245 Votes**, user1072337
You should wrap your components in an enclosing tag, which means:
```reactjs
//WRONG!
return (
<Comp1 />
<Comp2 />
)
```
Instead:
```reactjs
//Correct
return (
<div>
<Comp1 />
<Comp2 />
</div>
)
```
## [react-router - pass props to handler component](https://stackoverflow.com/questions/27864720/react-router-pass-props-to-handler-component)
**243 Votes**, Kosmetika
UPDATE: since the new release, it's possible to pass props directly via the `Route` component, without using a wrapper.
For example, by using the `render` prop. Link to React Router: https://reacttraining.com/react-router/web/api/Route/render-func
Code example at codesandbox: https://codesandbox.io/s/z3ovqpmp44
Component
```reactjs
class Greeting extends React.Component {
render() {
const { text, match: { params } } = this.props;
const { name } = params;
return (
<React.Fragment>
<h1>Greeting page</h1>
<p>
{text} {name}
</p>
</React.Fragment>
);
}
}
```
And usage
```reactjs
<Route path="/greeting/:name" render={(props) => <Greeting text="Hello, " {...props} />} />
```
OLD VERSION
My preferred way is to wrap the `Comments` component and pass the wrapper as a route handler.
This is your example with changes applied:
```reactjs
var Dashboard = require('./Dashboard');
var Comments = require('./Comments');
var CommentsWrapper = React.createClass({
render: function () {
return (
<Comments myprop="myvalue" />
);
}
});
var Index = React.createClass({
render: function () {
return (
<div>
<header>Some header</header>
<RouteHandler />
</div>
);
}
});
var routes = (
<Route path="/" handler={Index}>
<Route path="comments" handler={CommentsWrapper}/>
<DefaultRoute handler={Dashboard}/>
</Route>
);
ReactRouter.run(routes, function (Handler) {
React.render(<Handler/>, document.body);
});
```
## [ReactJS Two components communicating](https://stackoverflow.com/questions/21285923/reactjs-two-components-communicating)
**242 Votes**, woutr_be
The best approach would depend on how you plan to arrange those components. A few example scenarios that come to mind right now:
1. `<Filters />` is a child component of `<List />`
2. Both `<Filters />` and `<List />` are children of a parent component
3. `<Filters />` and `<List />` live in separate root components entirely.
There may be other scenarios that I'm not thinking of. If yours doesn't fit within these, then let me know. Here are some very rough examples of how I've been handling the first two scenarios:
### Scenario #1
You could pass a handler from `<List />` to `<Filters />`, which could then be called on the `onChange` event to filter the list with the current value.
JSFiddle for #1
```reactjs
/** @jsx React.DOM */
var Filters = React.createClass({
handleFilterChange: function() {
var value = this.refs.filterInput.getDOMNode().value;
this.props.updateFilter(value);
},
render: function() {
return <input type="text" ref="filterInput" onChange={this.handleFilterChange} placeholder="Filter" />;
}
});
var List = React.createClass({
getInitialState: function() {
return {
listItems: ['Chicago', 'New York', 'Tokyo', 'London', 'San Francisco', 'Amsterdam', 'Hong Kong'],
nameFilter: ''
};
},
handleFilterUpdate: function(filterValue) {
this.setState({
nameFilter: filterValue
});
},
render: function() {
var displayedItems = this.state.listItems.filter(function(item) {
var match = item.toLowerCase().indexOf(this.state.nameFilter.toLowerCase());
return (match !== -1);
}.bind(this));
var content;
if (displayedItems.length > 0) {
var items = displayedItems.map(function(item) {
return <li>{item}</li>;
});
content = <ul>{items}</ul>
} else {
content = <p>No items matching this filter</p>;
}
return (
<div>
<Filters updateFilter={this.handleFilterUpdate} />
<h4>Results</h4>
{content}
</div>
);
}
});
React.renderComponent(<List />, document.body);
```
### Scenario #2
Similar to scenario #1, but the parent component will be the one passing down the handler function to `<Filters />`, and will pass the filtered list to `<List />`. I like this method better since it decouples the `<List />` from the `<Filters />`.
JSFiddle for #2
```reactjs
/** @jsx React.DOM */
var Filters = React.createClass({
handleFilterChange: function() {
var value = this.refs.filterInput.getDOMNode().value;
this.props.updateFilter(value);
},
render: function() {
return <input type="text" ref="filterInput" onChange={this.handleFilterChange} placeholder="Filter" />;
}
});
var List = React.createClass({
render: function() {
var content;
if (this.props.items.length > 0) {
var items = this.props.items.map(function(item) {
return <li>{item}</li>;
});
content = <ul>{items}</ul>
} else {
content = <p>No items matching this filter</p>;
}
return (
<div className="results">
<h4>Results</h4>
{content}
</div>
);
}
});
var ListContainer = React.createClass({
getInitialState: function() {
return {
listItems: ['Chicago', 'New York', 'Tokyo', 'London', 'San Francisco', 'Amsterdam', 'Hong Kong'],
nameFilter: ''
};
},
handleFilterUpdate: function(filterValue) {
this.setState({
nameFilter: filterValue
});
},
render: function() {
var displayedItems = this.state.listItems.filter(function(item) {
var match = item.toLowerCase().indexOf(this.state.nameFilter.toLowerCase());
return (match !== -1);
}.bind(this));
return (
<div>
<Filters updateFilter={this.handleFilterUpdate} />
<List items={displayedItems} />
</div>
);
}
});
React.renderComponent(<ListContainer />, document.body);
```
### Scenario #3
When the components can't communicate between any sort of parent-child relationship, the documentation recommends setting up a global event system.
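Such a global event system is often nothing more than a small publish/subscribe bus. Here is a sketch; the `createBus` name and API are illustrative, not a specific library:

```javascript
// Minimal pub/sub bus: components subscribe with on() and publish with emit().
function createBus() {
  const handlers = {};
  return {
    on(event, fn) {
      (handlers[event] = handlers[event] || []).push(fn);
    },
    emit(event, payload) {
      (handlers[event] || []).forEach(fn => fn(payload));
    },
  };
}
```

In scenario #3, `<Filters />` would `emit('filterChanged', value)` and `<List />` would subscribe in `componentDidMount` (a real bus would also support unsubscribing, for cleanup on unmount).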
## [babel-loader jsx SyntaxError: Unexpected token [duplicate]](https://stackoverflow.com/questions/33460420/babel-loader-jsx-syntaxerror-unexpected-token)
**235 Votes**, Keyu Lin
Add "babel-preset-react"
```reactjs
npm install babel-preset-react
```
and add "presets" option to babel-loader in your webpack.config.js
(or you can add it to your `.babelrc` or `package.json`: http://babeljs.io/docs/usage/babelrc/)
Here is an example webpack.config.js:
```reactjs
{
test: /\.jsx?$/, // Match both .js and .jsx files
exclude: /node_modules/,
loader: "babel",
query:
{
presets:['react']
}
}
```
Recently Babel 6 was released and there was a major change:
https://babeljs.io/blog/2015/10/29/6.0.0
If you are using react 0.14, you should use `ReactDOM.render()` (from `require('react-dom')`) instead of `React.render()`: https://facebook.github.io/react/blog/#changelog
UPDATE 2018
`Rule.query` has already been deprecated in favour of `Rule.options`. Usage in webpack 4 is as follows:
```reactjs
npm install babel-loader babel-preset-react
```
Then in your webpack configuration (as an entry in the `module.rules` array in the `module.exports` object):
```reactjs
{
test: /\.jsx?$/,
exclude: /node_modules/,
use: [
{
loader: 'babel-loader',
options: {
presets: ['react']
}
}
],
}
```
## [How to conditionally add attributes to React components?](https://stackoverflow.com/questions/31163693/how-to-conditionally-add-attributes-to-react-components)
**232 Votes**, Remi Sture
Apparently, for certain attributes, React is intelligent enough to omit the attribute if the value you pass to it is not truthy. For example:
```reactjs
var InputComponent = React.createClass({
render: function() {
var required = true;
var disabled = false;
return (
<input type="text" disabled={disabled} required={required} />
);
}
});
```
will result in:
```reactjs
<input type="text" required data-reactid=".0.0">
```
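React's behavior here can be mimicked with a small attribute serializer that drops falsy boolean attributes, which makes the rule easy to see outside of React (a simplified sketch, not React's real code):

```javascript
// Render an attribute object to an HTML attribute string: entries that are
// false/null/undefined are omitted, and `true` collapses to a bare name.
function renderAttrs(attrs) {
  return Object.entries(attrs)
    .filter(([, value]) => value !== false && value != null)
    .map(([name, value]) => (value === true ? name : `${name}="${value}"`))
    .join(' ');
}
```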
## [React after render code?](https://stackoverflow.com/questions/26556436/react-after-render-code)
**221 Votes**, Oscar Godson
https://facebook.github.io/react/docs/react-component.html#componentdidmount
This method is called once after your component is rendered. So your code would look like so.
```reactjs
var AppBase = React.createClass({
componentDidMount: function() {
var $this = $(ReactDOM.findDOMNode(this));
// set el height and width etc.
},
render: function () {
return (
<div className="wrapper">
<Sidebar />
<div className="inner-wrapper">
<ActionBar title="Title Here" />
<BalanceBar balance={balance} />
<div className="app-content">
<List items={items} />
</div>
</div>
</div>
);
}
});
```
## [React JSX: selecting selected on selected <select> option](https://stackoverflow.com/questions/21733847/react-jsx-selecting-selected-on-selected-select-option)
**219 Votes**, cantera
React automatically understands booleans for this purpose, so you can simply write (note: not recommended)
```reactjs
<option value={option.value} selected={optionsState == option.value}>{option.label}</option>
```
and it will output 'selected' appropriately.
However, React makes this even easier for you. Instead of defining `selected` on each option, you can (and should) simply write `value={optionsState}` on the select tag itself:
```reactjs
<select value={optionsState}>
<option value="A">Apple</option>
<option value="B">Banana</option>
<option value="C">Cranberry</option>
</select>
```
More info at http://facebook.github.io/react/docs/forms.html#why-select-value.
## [Pass props to parent component in React.js](https://stackoverflow.com/questions/22639534/pass-props-to-parent-component-in-react-js)
**218 Votes**, KendallB
Edit: see the end examples for ES6 updated examples.
This answer simply handles the case of a direct parent-child relationship. When parent and child have potentially a lot of intermediaries, check this answer.
Other solutions are missing the point
While they still work fine, other answers are missing something very important.
Is there not a simple way to pass a child's props to its parent using events, in React.js?
The parent already has that child prop! If the child has a prop, it is because its parent provided that prop to the child. Why would you want the child to pass the prop back to the parent, when the parent obviously already has it?
Better implementation
Child: it really does not have to be more complicated than that.
```reactjs
var Child = React.createClass({
render: function () {
return <button onClick={this.props.onClick}>{this.props.text}</button>;
},
});
```
Parent with single child: using the value it passes to the child
```reactjs
var Parent = React.createClass({
getInitialState: function() {
return {childText: "Click me! (parent prop)"};
},
render: function () {
return (
<Child onClick={this.handleChildClick} text={this.state.childText}/>
);
},
handleChildClick: function(event) {
// You can access the prop you pass to the children
// because you already have it!
// Here you have it in state but it could also be
// in props, coming from another parent.
alert("The Child button text is: " + this.state.childText);
// You can also access the target of the click here
// if you want to do some magic stuff
alert("The Child HTML is: " + event.target.outerHTML);
}
});
```
JsFiddle
Parent with list of children: you still have everything you need on the parent and don't need to make the child more complicated.
```reactjs
var Parent = React.createClass({
getInitialState: function() {
return {childrenData: [
{childText: "Click me 1!", childNumber: 1},
{childText: "Click me 2!", childNumber: 2}
]};
},
render: function () {
var children = this.state.childrenData.map(function(childData,childIndex) {
return <Child onClick={this.handleChildClick.bind(null,childData)} text={childData.childText}/>;
}.bind(this));
return <div>{children}</div>;
},
handleChildClick: function(childData,event) {
alert("The Child button data is: " + childData.childText + " - " + childData.childNumber);
alert("The Child HTML is: " + event.target.outerHTML);
}
});
```
JsFiddle
It is also possible to use `this.handleChildClick.bind(null,childIndex)` and then use `this.state.childrenData[childIndex]`
Note we are binding with a `null` context because otherwise React issues a warning related to its autobinding system. Using null means you don't want to change the function context. See also.
About encapsulation and coupling in other answers
To me, this is a bad idea in terms of coupling and encapsulation:
```reactjs
var Parent = React.createClass({
handleClick: function(childComponent) {
// using childComponent.props
// using childComponent.refs.button
// or anything else using childComponent
},
  render: function() {
    return <Child onClick={this.handleClick} />;
  }
});
```
Using props:
As I explained above, you already have the props in the parent so it's useless to pass the whole child component to access props.
Using refs:
You already have the click target in the event, and in most cases this is enough.
Additionally, you could have used a ref directly on the child:
```reactjs
<Child ref="theChild" .../>
```
And access the DOM node in the parent with
```reactjs
React.findDOMNode(this.refs.theChild)
```
For more advanced cases where you want to access multiple refs of the child in the parent, the child could pass all the dom nodes directly in the callback.
The component has an interface (props) and the parent should not assume anything about the inner working of the child, including its inner DOM structure or which DOM nodes it declares refs for. A parent using a ref of a child means that you tightly couple the 2 components.
To illustrate the issue, I'll take this quote about the Shadow DOM, that is used inside browsers to render things like sliders, scrollbars, video players...:
They created a boundary between what you, the Web developer can reach
and what's considered implementation details, thus inaccessible to
you. The browser however, can traipse across this boundary at will.
With this boundary in place, they were able to build all HTML elements
using the same good-old Web technologies, out of the divs and spans
just like you would.
The problem is that if you let the child implementation details leak into the parent, you make it very hard to refactor the child without affecting the parent. As a library author (or as a browser vendor with Shadow DOM) this is very dangerous, because you let the client access too much, making it very hard to upgrade code without breaking backward compatibility.
If Chrome had implemented its scrollbar letting the client access the inner DOM nodes of that scrollbar, clients could simply break that scrollbar, and apps would break more easily when Chrome performs its auto-update after refactoring the scrollbar. Instead, Chrome only gives access to some safe things, like customizing parts of the scrollbar with CSS.
About using anything else
Passing the whole component in the callback is dangerous and may lead novice developers to do very weird things like calling `childComponent.setState(...)` or `childComponent.forceUpdate()`, or assigning it new variables, inside the parent, making the whole app much harder to reason about.
Edit: ES6 examples
As many people now use ES6, here are the same examples for ES6 syntax
The child can be very simple:
```reactjs
const Child = ({
onClick,
text
}) => (
<button onClick={onClick}>
{text}
</button>
)
```
The parent can be either a class (and it could eventually manage the state itself, but here I'm passing the data down as props):
```reactjs
class Parent1 extends React.Component {
handleChildClick(childData,event) {
alert("The Child button data is: " + childData.childText + " - " + childData.childNumber);
alert("The Child HTML is: " + event.target.outerHTML);
}
render() {
return (
<div>
{this.props.childrenData.map(child => (
<Child
key={child.childNumber}
text={child.childText}
onClick={e => this.handleChildClick(child,e)}
/>
))}
</div>
);
}
}
```
But it can also be simplified if it does not need to manage state:
```reactjs
const Parent2 = ({childrenData}) => (
<div>
{childrenData.map(child => (
<Child
key={child.childNumber}
text={child.childText}
onClick={e => {
alert("The Child button data is: " + child.childText + " - " + child.childNumber);
alert("The Child HTML is: " + e.target.outerHTML);
}}
/>
))}
</div>
)
```
JsFiddle
PERF WARNING (applies to ES5/ES6): if you are using `PureComponent` or `shouldComponentUpdate`, the above implementations will not be optimized by default, because using `onClick={e => doSomething()}` (or binding directly during the render phase) creates a new function every time the parent renders. If this is a perf bottleneck in your app, you can pass the data to the children and reinject it inside a "stable" callback (set on the parent class and bound to `this` in the class constructor) so that the `PureComponent` optimization can kick in, or you can implement your own `shouldComponentUpdate` and ignore the callback in the props comparison check.
You can also use the Recompose library, which provides higher-order components to achieve fine-tuned optimisations:
```reactjs
// A component that is expensive to render
const ExpensiveComponent = ({ propA, propB }) => {...}
// Optimized version of same component, using shallow comparison of props
// Same effect as React's PureRenderMixin
const OptimizedComponent = pure(ExpensiveComponent)
// Even more optimized: only updates if specific prop keys have changed
const HyperOptimizedComponent = onlyUpdateForKeys(['propA', 'propB'])(ExpensiveComponent)
```
In this case you could optimize the Child component by using:
```reactjs
const OptimizedChild = onlyUpdateForKeys(['text'])(Child)
```
## [How to have conditional elements and keep DRY with Facebook React's JSX?](https://stackoverflow.com/questions/22538638/how-to-have-conditional-elements-and-keep-dry-with-facebook-reacts-jsx)
**214 Votes**, Jack Allan
Just leave banner as being undefined and it does not get included.
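To make that terse answer concrete, here is a hedged sketch (the component and prop names are invented) of what "leave banner as undefined" looks like in a render method — React simply renders nothing for `undefined` or `null` children:

```reactjs
render: function() {
  // `banner` stays undefined unless the condition is met;
  // React renders nothing for undefined/null children.
  var banner;
  if (this.props.showBanner) {
    banner = <div className="banner">Banner!</div>;
  }
  return (
    <div>
      {banner}
      <p>Rest of the page</p>
    </div>
  );
}
```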
## [What could be the downsides of using Redux instead of Flux](https://stackoverflow.com/questions/32021763/what-could-be-the-downsides-of-using-redux-instead-of-flux)
**214 Votes**, Ivan Wang
Redux author here!
I'd like to say you're going to make the following compromises using it:
You'll need to learn to avoid mutations. Flux is unopinionated about mutating data, but Redux doesn't like mutations and many packages complementary to Redux assume you never mutate the state. You can enforce this with dev-only packages like redux-immutable-state-invariant, use Immutable.js, or trust yourself and your team to write non-mutative code, but it's something you need to be aware of, and this needs to be a conscious decision accepted by your team.
You're going to have to carefully pick your packages. While Flux explicitly doesn't try to solve nearby problems such as undo/redo, persistence, or forms, Redux has extension points such as middleware and store enhancers, and it has spawned a young but rich ecosystem. This means most packages are new ideas and haven't received the critical mass of usage yet. You might depend on something that will be clearly a bad idea a few months later on, but it's hard to tell just yet.
You won't have a nice Flow integration yet. Flux currently lets you do very impressive static type checks which Redux doesn't support yet. We'll get there, but it will take some time.
I think the first is the biggest hurdle for the beginners, the second can be a problem for over-enthusiastic early adopters, and the third is my personal pet peeve. Other than that, I don't think using Redux brings any particular downsides that Flux avoids, and some people say it even has some upsides compared to Flux.
See also my answer on upsides of using Redux.
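To illustrate the first trade-off, here is a minimal plain-JavaScript sketch (no Redux APIs; the state shape and action names are invented) of the non-mutating update style Redux reducers are expected to follow:

```javascript
// Hypothetical reducer: always return new objects/arrays, never mutate.
const initialState = { todos: ["learn flux"] };

function todosReducer(state, action) {
  if (state === undefined) state = initialState;
  switch (action.type) {
    case "ADD_TODO":
      // New object and new array; `state` itself is left untouched.
      return { ...state, todos: [...state.todos, action.payload] };
    default:
      return state;
  }
}

const next = todosReducer(initialState, { type: "ADD_TODO", payload: "learn redux" });
console.log(next.todos);            // ["learn flux", "learn redux"]
console.log(initialState.todos);    // ["learn flux"] — the old state is untouched
console.log(next !== initialState); // true — the change is detectable by reference
```

Because the reducer returns a new reference instead of mutating, Redux (and tools like time-travel debugging) can detect changes with a cheap identity check.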
## [React JSX: Access Props in Quotes](https://stackoverflow.com/questions/21668025/react-jsx-access-props-in-quotes)
**207 Votes**, cantera
React (or JSX) doesn't support variable interpolation inside an attribute value, but you can put any JS expression inside curly braces as the entire attribute value, so this works:
```reactjs
<img className="image" src={"images/" + this.props.image} />
```
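With ES6 template literals (available anywhere Babel transpiles your JSX), the same expression can also be written as:

```reactjs
<img className="image" src={`images/${this.props.image}`} />
```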
## [Correct modification of state arrays in ReactJS](https://stackoverflow.com/questions/26253351/correct-modification-of-state-arrays-in-reactjs)
**205 Votes**, fadedbee
The React docs says:
Treat this.state as if it were immutable.
Your `push` will mutate the state directly, and that could potentially lead to error-prone code, even if you are "resetting" the state again afterwards. For example, it could mean that some lifecycle methods like `componentDidUpdate` won't trigger.
The recommended approach in later React versions is to use an updater function when modifying states to prevent race conditions:
```reactjs
this.setState(prevState => ({
arrayvar: [...prevState.arrayvar, newelement]
}))
```
The memory "waste" is not an issue compared to the errors you might face using non-standard state modifications.
Alternative syntax for earlier React versions
You can use `concat` to get a clean syntax since it returns a new array:
```reactjs
this.setState({
arrayvar: this.state.arrayvar.concat([newelement])
})
```
In ES6 you can use the Spread Operator:
```reactjs
this.setState({
arrayvar: [...this.state.arrayvar, newelement]
})
```
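To see why this matters outside React, here is a plain-JavaScript sketch (no React APIs involved) contrasting the mutating and non-mutating operations:

```javascript
const arrayvar = [1, 2, 3];

// push mutates the array in place — React may not notice the change.
const same = arrayvar;
same.push(4);
console.log(arrayvar);        // [1, 2, 3, 4] — the original was modified

// concat and spread return brand-new arrays and leave the original alone.
const viaConcat = arrayvar.concat([5]);
const viaSpread = [...arrayvar, 6];
console.log(viaConcat);       // [1, 2, 3, 4, 5]
console.log(viaSpread);       // [1, 2, 3, 4, 6]
console.log(viaConcat !== arrayvar); // true — a new reference React can detect
```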
## [Hide keyboard in react-native](https://stackoverflow.com/questions/29685421/hide-keyboard-in-react-native)
**196 Votes**, MrMuetze
The problem with keyboard not dismissing gets more severe if you have `keyboardType='numeric'`, as there is no way to dismiss it.
Replacing View with ScrollView is not a correct solution, because if you have multiple `textInput`s or `button`s, tapping on them while the keyboard is up will only dismiss the keyboard.
The correct way is to wrap the View in `TouchableWithoutFeedback` and call `Keyboard.dismiss()`.
If you have
```reactjs
<View style={styles.container}>
<TextInput keyboardType='numeric'/>
</View>
```
Change it to
```reactjs
import {Keyboard} from 'react-native'
<TouchableWithoutFeedback onPress={Keyboard.dismiss} accessible={false}>
<View style={styles.container}>
<TextInput keyboardType='numeric'/>
</View>
</TouchableWithoutFeedback>
```
EDIT: You can also create a Higher Order Component to dismiss the keyboard.
```reactjs
import React from 'react';
import { TouchableWithoutFeedback, Keyboard } from 'react-native';
const DismissKeyboardHOC = (Comp) => {
return ({ children, ...props }) => (
<TouchableWithoutFeedback onPress={Keyboard.dismiss} accessible={false}>
<Comp {...props}>
{children}
</Comp>
</TouchableWithoutFeedback>
);
};
```
Simply use it like this
```reactjs
const DismissKeyboardView = DismissKeyboardHOC(View)
...
render() {
  return (
    <DismissKeyboardView>
      ...
    </DismissKeyboardView>
  );
}
```
NOTE: the `accessible={false}` is required to make the input form continue to be accessible through VoiceOver. Visually impaired people will thank you!
## [What is mapDispatchToProps?](https://stackoverflow.com/questions/39419237/what-is-mapdispatchtoprops)
**191 Votes**, Code Whisperer
I feel like none of the answers have crystallized why `mapDispatchToProps` is useful.
This can really only be answered in the context of the `container-component` pattern, which I found best understood by first reading:
https://medium.com/@learnreact/container-components-c0e67432e005#.1a9j3w1jl
then
http://redux.js.org/docs/basics/UsageWithReact.html
In a nutshell, your `components` are supposed to be concerned only with displaying stuff. The only place they are supposed to get information from is their props.
Separated out from this is the concern about:
how you get the stuff to display,
and how you handle events.
That is what `containers` are for.
Therefore, a "well designed" `component` in the pattern looks like this:
```reactjs
class FancyAlerter extends Component {
  sendAlert = () => {
    this.props.sendTheAlert()
  }

  render() {
    return (
      <div>
        <h1>Today's Fancy Alert is {this.props.fancyInfo}</h1>
        <Button onClick={this.sendAlert}/>
      </div>
    )
  }
}
```
See how this component gets the info it displays from props (which came from the redux store via `mapStateToProps`) and it also gets its action function from its props: `sendTheAlert()`.
That's where `mapDispatchToProps` comes in: in the corresponding `container`
`FancyButtonContainer.js`
```reactjs
function mapDispatchToProps(dispatch) {
return({
sendTheAlert: () => {dispatch(ALERT_ACTION)}
})
}
function mapStateToProps(state) {
return({fancyInfo: "Fancy this:" + state.currentFunnyString})
}
export const FancyButtonContainer = connect(
mapStateToProps, mapDispatchToProps)(
FancyAlerter
)
```
I wonder if you can see, now that it's the `container` [1] that knows about redux and dispatch and store and state and ... stuff.
The `component` in the pattern, `FancyAlerter`, which does the rendering doesn't need to know about any of that stuff: it gets its method to call at `onClick` of the button, via its props.
And ... `mapDispatchToProps` was the useful means that redux provides to let the container easily pass that function into the wrapped component on its props.
All this looks very like the todo example in docs, and another answer here, but I have tried to cast it in the light of the pattern to emphasize why.
(Note: you can't use `mapStateToProps` for the same purpose as `mapDispatchToProps`, for the basic reason that you don't have access to `dispatch` inside `mapStateToProps`. So you couldn't use `mapStateToProps` to give the wrapped component a method that uses `dispatch`.)
I don't know why they chose to break it into two mapping functions - it might have been tidier to have `mapToProps(state, dispatch, props)`, i.e. one function to do both!
[1] Note that I deliberately explicitly named the container `FancyButtonContainer`, to highlight that it is a "thing" - the identity (and hence existence!) of the container as "a thing" is sometimes lost in the shorthand
`export default connect(...)`
syntax that is shown in most examples
## [Where should ajax request be made in Flux app?](https://stackoverflow.com/questions/26632415/where-should-ajax-request-be-made-in-flux-app)
**188 Votes**, Eniz Glek
I'm a big proponent of putting async write operations in the action creators and async read operations in the store. The goal is to keep the store state modification code in fully synchronous action handlers; this makes them simple to reason about and simple to unit test. In order to prevent multiple simultaneous requests to the same endpoint (for example, double-reading), I'll move the actual request processing into a separate module that uses promises to prevent the multiple requests; for example:
```reactjs
class MyResourceDAO {
  constructor() {
    this.promises = {};
  }

  get(id) {
if (!this.promises[id]) {
this.promises[id] = new Promise((resolve, reject) => {
// ajax handling here...
});
}
return this.promises[id];
}
}
```
While reads in the store involve asynchronous functions, there is an important caveat: the stores don't update themselves in the async handlers; instead, they fire an action only when the response arrives. Handlers for this action end up doing the actual state modification.
For example, a component might do:
```reactjs
getInitialState() {
return { data: myStore.getSomeData(this.props.id) };
}
```
The store would have a method implemented, perhaps, something like this:
```reactjs
class Store {
  constructor() {
    this.cache = {};
  }

  getSomeData(id) {
    if (!this.cache[id]) {
      MyResourceDAO.get(id).then(this.updateFromServer);
this.cache[id] = LOADING_TOKEN;
// LOADING_TOKEN is a unique value of some kind
// that the component can use to know that the
// value is not yet available.
}
return this.cache[id];
}
updateFromServer(response) {
fluxDispatcher.dispatch({
type: "DATA_FROM_SERVER",
payload: {id: response.id, data: response}
});
}
// this handles the "DATA_FROM_SERVER" action
handleDataFromServer(action) {
this.cache[action.payload.id] = action.payload.data;
this.emit("change"); // or whatever you do to re-render your app
}
}
```
## [Programmatically navigate using react router V4](https://stackoverflow.com/questions/42123261/programmatically-navigate-using-react-router-v4)
**187 Votes**, Colin Witkamp
If you are targeting browser environments, you need to use the `react-router-dom` package instead of `react-router`. They follow the same approach React did in order to separate the core (`react`) from the platform-specific code (`react-dom`, `react-native`), with the subtle difference that you don't need to install two separate packages, so the environment packages contain everything you need. You can add it to your project as:
`yarn add react-router-dom`
or
`npm i -S react-router-dom`
The first thing you need to do is to provide a `<BrowserRouter>` as the top most parent component in your application. `<BrowserRouter>` uses the HTML5 `history` API and manages it for you, so you don't have to worry about instantiating it yourself and passing it down to the `<BrowserRouter>` component as a prop (as you needed to do in previous versions).
In V4, for navigating programatically you need to access the `history` object, which is available through React `context`, as long as you have a `<BrowserRouter>` provider component as the top most parent in your application. The library exposes through context the `router` object, that itself contains `history` as a property. The `history` interface offers several navigation methods, such as `push`, `replace` and `goBack`, among others. You can check the whole list of properties and methods here.
### Important Note to Redux/Mobx users
If you are using redux or mobx as your state management library in your application, you may have come across issues with location-aware components that are not re-rendered after the URL updates
That's happening because `react-router` passes `location` to components using the context model.
Both connect and observer create components whose shouldComponentUpdate methods do a shallow comparison of their current props and their next props. Those components will only re-render when at least one prop has changed. This means that in order to ensure they update when the location changes, they will need to be given a prop that changes when the location changes.
The 2 approaches for solving this are:
Wrap your connected component in a pathless `<Route />`. The current `location` object is one of the props that a `<Route>` passes to the component it renders
Wrap your connected component with the `withRouter` higher-order component, that in fact has the same effect and injects `location` as a prop
Setting that aside, there are four ways to navigate programatically, ordered by recommendation:
1.- Using a `<Route>` component. It promotes a declarative style. Prior to v4, `<Route />` components were placed at the top of your component hierarchy, and you had to think of your routes structure beforehand. However, now you can have `<Route>` components anywhere in your tree, allowing you finer control for conditionally rendering depending on the URL. `Route` injects `match`, `location` and `history` as props into your component. The navigation methods (such as `push`, `replace`, `goBack`...) are available as properties of the `history` object.
There are 3 ways to render something with a `Route`, by using either `component`, `render` or `children` props, but don't use more than one in the same `Route`. The choice depends on the use case, but basically the first two options will only render your component if the `path` matches the url location, whereas with `children` the component will be rendered whether the path matches the location or not (useful for adjusting the UI based on URL matching).
If you want to customise your component rendering output, you need to wrap your component in a function and use the `render` option, in order to pass to your component any other props you desire, apart from `match`, `location` and `history`. An example to illustrate:
```reactjs
import { BrowserRouter as Router } from 'react-router-dom'
const ButtonToNavigate = ({ title, history }) => (
<button
type="button"
onClick={() => history.push('/my-new-location')}
>
{title}
</button>
);
const SomeComponent = () => (
<Route path="/" render={(props) => <ButtonToNavigate {...props} title="Navigate elsewhere" />} />
)
const App = () => (
<Router>
    <SomeComponent /> {/* Notice how in v4 we can have any other component interleaved */}
<AnotherComponent />
</Router>
);
```
2.- Using `withRouter` HoC
This higher order component will inject the same props as `Route`. However, it carries along the limitation that you can have only 1 HoC per file.
```reactjs
import { withRouter } from 'react-router-dom'
const ButtonToNavigate = ({ history }) => (
<button
type="button"
onClick={() => history.push('/my-new-location')}
>
Navigate
</button>
);
ButtonToNavigate.propTypes = {
history: React.PropTypes.shape({
push: React.PropTypes.func.isRequired,
}),
};
export default withRouter(ButtonToNavigate);
```
3.- Using a `Redirect` component. Rendering a `<Redirect>` will navigate to a new location. But keep in mind that, by default, the current location is replaced by the new one, like server-side redirects (HTTP 3xx). The new location is provided by the `to` prop, which can be a string (URL to redirect to) or a `location` object. If you want to push a new entry onto the history instead, pass a `push` prop as well and set it to `true`:
```reactjs
<Redirect to="/your-new-location" push />
```
4.- Accessing `router` manually through context. This is a bit discouraged, because context is still an experimental API and is likely to break/change in future releases of React:
```reactjs
const ButtonToNavigate = (props, context) => (
<button
type="button"
onClick={() => context.router.history.push('/my-new-location')}
>
Navigate to a new location
</button>
);
ButtonToNavigate.contextTypes = {
router: React.PropTypes.shape({
history: React.PropTypes.object.isRequired,
}),
};
```
Needless to say there are also other Router components that are meant to be for non browser ecosystems, such as `<NativeRouter>` that replicates a navigation stack in memory and targets React Native platform, available through `react-router-native` package.
For any further reference, don't hesitate to take a look at the official docs. There is also a video made by one of the co-authors of the library that provides a pretty cool introduction to react-router v4, highlighting some of the major changes.
## [Why calling react setState method doesn't mutate the state immediately?](https://stackoverflow.com/questions/30782948/why-calling-react-setstate-method-doesnt-mutate-the-state-immediately)
**184 Votes**, tarrsalah
From React's documentation:
`setState()` does not immediately mutate `this.state` but creates a
pending state transition. Accessing `this.state` after calling this
method can potentially return the existing value. There is no
guarantee of synchronous operation of calls to `setState` and calls may
be batched for performance gains.
If you want a function to be executed after the state change occurs, pass it in as a callback.
```reactjs
this.setState({value: event.target.value}, function () {
console.log(this.state.value);
});
```
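To see why the callback matters, here is a toy model — emphatically not React's real implementation, just an invented sketch — of a component whose `setState` queues updates instead of applying them synchronously:

```javascript
// Toy model (invented, not React internals): setState queues the update;
// the state is only applied later, when the batch is flushed.
class ToyComponent {
  constructor() {
    this.state = { value: 0 };
    this.pending = [];
  }
  setState(partial, callback) {
    this.pending.push({ partial, callback }); // queued, not applied yet
  }
  flush() { // React would do something like this at the end of a batch
    for (const { partial, callback } of this.pending) {
      this.state = { ...this.state, ...partial };
      if (callback) callback.call(this);
    }
    this.pending = [];
  }
}

const c = new ToyComponent();
c.setState({ value: 42 }, function () {
  console.log("callback sees", this.state.value); // 42 — runs after the update
});
console.log(c.state.value); // still 0 — reading right after setState is stale
c.flush();
console.log(c.state.value); // 42
```

The callback runs only after the queued update is applied, which is exactly why the docs tell you to put "after the state change" logic there.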
## [What's the '@' (at symbol) in the Redux @connect decorator?](https://stackoverflow.com/questions/32646920/whats-the-at-symbol-in-the-redux-connect-decorator)
**174 Votes**
The `@` symbol is in fact a JavaScript expression currently proposed to signify decorators:
Decorators make it possible to annotate and modify classes and properties at design time.
Here's an example of setting up Redux without and with a decorator:
Without a decorator
```reactjs
import React from 'react';
import * as actionCreators from './actionCreators';
import { bindActionCreators } from 'redux';
import { connect } from 'react-redux';
function mapStateToProps(state) {
return { todos: state.todos };
}
function mapDispatchToProps(dispatch) {
return { actions: bindActionCreators(actionCreators, dispatch) };
}
class MyApp extends React.Component {
// ...define your main app here
}
export default connect(mapStateToProps, mapDispatchToProps)(MyApp);
```
Using a decorator
```reactjs
import React from 'react';
import * as actionCreators from './actionCreators';
import { bindActionCreators } from 'redux';
import { connect } from 'react-redux';
function mapStateToProps(state) {
return { todos: state.todos };
}
function mapDispatchToProps(dispatch) {
return { actions: bindActionCreators(actionCreators, dispatch) };
}
@connect(mapStateToProps, mapDispatchToProps)
export default class MyApp extends React.Component {
// ...define your main app here
}
```
Both examples above are equivalent; it's just a matter of preference. Also, the decorator syntax isn't built into any JavaScript runtimes yet - it is still experimental and subject to change. If you want to use it, it is available using Babel.
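To demystify what the `@` does, here is a plain-JavaScript sketch of the desugaring: `@connect(a, b)` above a class is (roughly) the same as calling `connect(a, b)` with the class afterwards. `fakeConnect` below is an invented stand-in, not the real react-redux `connect`:

```javascript
// Invented stand-in for connect(): a decorator is just a function that
// receives the class and may annotate or replace it.
function fakeConnect(mapState, mapDispatch) {
  return function (Component) {
    Component.wrapped = true; // annotate the class (could also return a new one)
    return Component;
  };
}

class MyApp {}

// What `@fakeConnect(null, null)` above `class MyApp` would desugar to:
const Decorated = fakeConnect(null, null)(MyApp);

console.log(Decorated === MyApp); // true — same class, now annotated
console.log(Decorated.wrapped);   // true
```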
## [What do multiple arrow functions mean in javascript?](https://stackoverflow.com/questions/32782922/what-do-multiple-arrow-functions-mean-in-javascript)
**171 Votes**, jhamm
That is a curried function
First, examine this function with two parameters
```reactjs
let add = (x,y) => x + y;
add(2,3); //=> 5
```
Here it is again in curried form
```reactjs
let add = x => y => x + y;
```
Here is the same code1 without arrow functions
```reactjs
let add = function (x) {
return function (y) {
return x + y;
};
};
```
Focus on `return`
It might help to visualize it another way. We know that arrow functions work like this; let's pay particular attention to the return value.
```reactjs
let f = someParam => returnValue
```
So our `add` function returns a function; we can use parentheses for added clarity. The parenthesized text below is the return value of our function `add`.
```reactjs
let add = x => (y => x + y)
```
In other words, `add` of some number `x` returns a function
```reactjs
let x = 2;
add (2) // returns (y => 2 + y)
```
Calling curried functions
So in order to use our curried function, we have to call it a bit differently
```reactjs
add(2)(3); // returns 5
```
This is because the first (outer) function call returns a second (inner) function. Only after we call the second function do we actually get the result. This is more evident if we separate the calls on two lines
```reactjs
let add2 = add(2); // returns function(y) { return 2 + y }
add2(3); // returns 5
```
Applying our new understanding to your code
related: What's the difference between binding, partial application, and currying?
OK, now that we understand how that works, let's look at your code
```reactjs
handleChange = field => e => {
e.preventDefault();
/// Do something here
}
```
We'll start by representing it without using arrow functions
```reactjs
handleChange = function(field) {
return function(e) {
e.preventDefault();
// Do something here
// return ...
};
};
```
However, because arrow functions lexically bind `this`, it would actually look more like this
```reactjs
handleChange = function(field) {
return function(e) {
e.preventDefault();
// Do something here
// return ...
}.bind(this);
}.bind(this);
```
Maybe now we can see what this is doing more clearly. The `handleChange` function creates a function for a specified `field`. This is a handy React technique because you're required to set up your own listeners on each input in order to update your application's state. By using the `handleChange` function, we can eliminate all the duplicated code that would result from setting up `change` listeners for each field.
Cool !
1 Here I did not have to lexically bind `this` because the original `add` function does not use any context, so it is not important to preserve it in this case.
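Here is the curried `handleChange` pattern as a runnable sketch, using a plain object in place of a React synthetic event (the field name and event shape are invented for illustration):

```javascript
// Curried handler: the first call fixes `field`, the second receives the event.
const handleChange = field => event => {
  // In a real component you'd call event.preventDefault() and setState here;
  // we just return a string so the behavior is easy to observe.
  return `${field} changed to ${event.target.value}`;
};

const onNameChange = handleChange("name");               // returns event => ...
const result = onNameChange({ target: { value: "Ada" } }); // invoke with a fake event
console.log(result); // "name changed to Ada"
```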
## [Rerender view on browser resize with React](https://stackoverflow.com/questions/19014250/rerender-view-on-browser-resize-with-react)
**156 Votes**, digibake
You can listen in `componentDidMount`, with something like this component which just displays the window dimensions (like `<span>1024 x 768</span>`):
```reactjs
var WindowDimensions = React.createClass({
render: function() {
return <span>{this.state.width} x {this.state.height}</span>;
},
updateDimensions: function() {
this.setState({width: $(window).width(), height: $(window).height()});
},
componentWillMount: function() {
this.updateDimensions();
},
componentDidMount: function() {
window.addEventListener("resize", this.updateDimensions);
},
componentWillUnmount: function() {
window.removeEventListener("resize", this.updateDimensions);
}
});
```
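The same subscribe/unsubscribe discipline can be written framework-agnostically. This is a sketch: `target` stands in for `window`, and the jQuery calls above could likewise be replaced by `window.innerWidth`/`window.innerHeight`:

```javascript
function trackDimensions(target, onChange) {
  const read = () => onChange({ width: target.innerWidth, height: target.innerHeight });
  read();                                  // initial measurement
  target.addEventListener("resize", read); // as in componentDidMount above
  return () => target.removeEventListener("resize", read); // as in componentWillUnmount
}
```

The returned function plays the role of `componentWillUnmount`: forgetting to call it leaks the listener.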
---
title: Enable or Disable a Plan Guide | Microsoft Docs
ms.custom: ''
ms.date: 06/13/2017
ms.prod: sql-server-2014
ms.reviewer: ''
ms.technology: performance
ms.topic: conceptual
helpviewer_keywords:
- plan guides [SQL Server], disabling
- enabling plan guides
- plan guides [SQL Server], enabling
- disabling plan guides
ms.assetid: b00ab550-5308-4cb8-8330-483cd1d25654
author: MikeRayMSFT
ms.author: mikeray
manager: craigg
ms.openlocfilehash: 7c64bf641a6519c42ad0d3a8cdfd578458f84439
ms.sourcegitcommit: 3026c22b7fba19059a769ea5f367c4f51efaf286
ms.translationtype: MT
ms.contentlocale: fr-FR
ms.lasthandoff: 06/15/2019
ms.locfileid: "63150922"
---
# <a name="enable-or-disable-a-plan-guide"></a>Enable or disable a plan guide
  You can disable and enable plan guides in [!INCLUDE[ssCurrent](../../includes/sscurrent-md.md)] by using [!INCLUDE[ssManStudioFull](../../includes/ssmanstudiofull-md.md)] or [!INCLUDE[tsql](../../includes/tsql-md.md)]. You can enable or disable a single plan guide, or all of the plan guides in a database.
  **In this topic**
- **Before you begin:**
     [Limitations and restrictions](#Restrictions)
     [Security](#Security)
- **To disable and enable plan guides, using:**
     [SQL Server Management Studio](#SSMSProcedure)
     [Transact-SQL](#TsqlProcedure)
## <a name="BeforeYouBegin"></a> Before you begin
### <a name="Restrictions"></a> Limitations and restrictions
- If you try to drop or modify a function, stored procedure, or DML trigger that is referenced by a plan guide, whether enabled or disabled, an error occurs. Always check for dependencies before dropping or modifying any of the objects listed above.
- Disabling a plan guide that is already disabled, or enabling one that is already enabled, has no effect and does not raise an error.
### <a name="Security"></a> Security
#### <a name="Permissions"></a> Permissions
 Disabling or enabling a plan guide of type OBJECT requires ALTER permission on the object (for example, a function or stored procedure) that is referenced by the plan guide. All other plan guides require ALTER DATABASE permission.
## <a name="SSMSProcedure"></a> Using SQL Server Management Studio
#### <a name="to-disable-or-enable-a-plan-guide"></a>To disable or enable a plan guide
1. Click the plus sign to expand the database in which you want to disable or enable a plan guide, and then click the plus sign to expand the **Programmability** folder.
2. Click the plus sign to expand the **Plan Guides** folder.
3. Right-click the plan guide you want to disable or enable, and then select **Disable** or **Enable**.
4. In the **Disable Plan Guide** or **Enable Plan Guide** dialog box, verify that the chosen action succeeded, and then click **Close**.
#### <a name="to-disable-or-enable-all-plan-guides-in-a-database"></a>To disable or enable all plan guides in a database
1. Click the plus sign to expand the database in which you want to disable or enable plan guides, and then click the plus sign to expand the **Programmability** folder.
2. Right-click the **Plan Guides** folder, and then select **Enable All** or **Disable All**.
3. In the **Disable All Plan Guides** or **Enable All Plan Guides** dialog box, verify that the chosen action succeeded, and then click **Close**.
## <a name="TsqlProcedure"></a> Using Transact-SQL
#### <a name="to-disable-or-enable-a-plan-guide"></a>To disable or enable a plan guide
1. In **Object Explorer**, connect to an instance of [!INCLUDE[ssDE](../../includes/ssde-md.md)].
2. On the Standard toolbar, click **New Query**.
3. Copy and paste the following example into the query window, and then click **Execute**.
```
--Create a procedure on which to define the plan guide.
IF OBJECT_ID(N'Sales.GetSalesOrderByCountry', N'P') IS NOT NULL
DROP PROCEDURE Sales.GetSalesOrderByCountry;
GO
CREATE PROCEDURE Sales.GetSalesOrderByCountry
(@Country nvarchar(60))
AS
BEGIN
SELECT *
FROM Sales.SalesOrderHeader AS h
INNER JOIN Sales.Customer AS c ON h.CustomerID = c.CustomerID
INNER JOIN Sales.SalesTerritory AS t ON c.TerritoryID = t.TerritoryID
WHERE t.CountryRegionCode = @Country;
END
GO
--Create the plan guide.
EXEC sp_create_plan_guide N'Guide3',
N'SELECT *
FROM Sales.SalesOrderHeader AS h
INNER JOIN Sales.Customer AS c ON h.CustomerID = c.CustomerID
INNER JOIN Sales.SalesTerritory AS t ON c.TerritoryID = t.TerritoryID
WHERE t.CountryRegionCode = @Country',
N'OBJECT',
N'Sales.GetSalesOrderByCountry',
NULL,
N'OPTION (OPTIMIZE FOR (@Country = N''US''))';
--Disable the plan guide.
EXEC sp_control_plan_guide N'DISABLE', N'Guide3';
GO
--Enable the plan guide.
EXEC sp_control_plan_guide N'ENABLE', N'Guide3';
GO
```
#### <a name="to-disable-or-enable-all-plan-guides-in-a-database"></a>To disable or enable all plan guides in a database
1. In **Object Explorer**, connect to an instance of [!INCLUDE[ssDE](../../includes/ssde-md.md)].
2. On the Standard toolbar, click **New Query**.
3. Copy and paste the following example into the query window, and then click **Execute**.
```
--Disable all plan guides in the database.
EXEC sp_control_plan_guide N'DISABLE ALL';
GO
--Enable all plan guides in the database.
EXEC sp_control_plan_guide N'ENABLE ALL';
GO
```
For more information, see [sp_control_plan_guide (Transact-SQL)](/sql/relational-databases/system-stored-procedures/sp-control-plan-guide-transact-sql).
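As a quick sanity check after running either script, the enabled/disabled state of every plan guide in the current database can be read from the standard `sys.plan_guides` catalog view (`is_disabled` is `1` for disabled guides):

```
SELECT name, is_disabled
FROM sys.plan_guides;
```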
---
title: An Efficient Deep Model for Day-Ahead Electricity Load Forecasting with
Stacked Denoising Auto-Encoders
publication_types:
- "2"
authors:
- Chao Tong
- Jun Li
- Chao Lang
- Fanxin Kong
- Jianwei Niu
- and Joel J.P.C. Rodrigues
publication: "Journal of Parallel and Distributed Computing (JPDC), 2017. "
publication_short: ""
abstract: In the real world, it is quite meaningful to forecast the day-ahead
electricity load for an area, which is beneficial to the reduction of
electricity waste and rational arrangement of electric generator units. The
deployment of various sensors strongly pushes this forecasting research into a
“big data” era for a huge amount of information has been accumulated.
Meanwhile, the prosperous development of deep learning (DL) theory provides
powerful tools to handle massive data and often outperforms conventional
machine learning methods in many traditional fields. Inspired by these, we
propose a deep learning-based model which firstly refines features by stacked
denoising auto-encoders (SDAs) from history electricity load data and related
temperature parameters, subsequently trains a support vector regression (SVR)
model to forecast the day-ahead total electricity load. The most significant
contribution of this heterogeneous deep model is that the abstract features
  extracted by SDAs from original electricity load data are proven to describe
and forecast the load tendency more accurately with lower errors. We evaluate
this proposed model by comparing it with plain SVR and artificial neural
networks (ANNs) models, and the experimental results validate its performance
improvements.
draft: false
featured: false
tags:
- Deep learning
- Multi-modal
- Stacked denoising auto-encoders
- Feature extraction
- Support vector regression
image:
filename: featured
focal_point: Smart
preview_only: false
date: 2021-11-01T18:04:47.995Z
---
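As a toy illustration of the pipeline the abstract describes (denoising auto-encoder pretraining, then a downstream regressor), here is a single-layer sketch in NumPy. The shapes, noise level, and hyperparameters are invented for illustration, not taken from the paper, and the final SVR stage is only indicated in a comment:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                 # stand-in for load/temperature features
X_noisy = X + 0.1 * rng.normal(size=X.shape)  # corrupted copy (the "denoising" setup)

# One denoising auto-encoder layer: reconstruct the clean X from the noisy input.
W = 0.1 * rng.normal(size=(8, 4)); b = np.zeros(4)   # encoder weights
V = 0.1 * rng.normal(size=(4, 8)); c = np.zeros(8)   # decoder weights
lr, n = 0.05, len(X)

def mse():
    return float(((np.tanh(X_noisy @ W + b) @ V + c - X) ** 2).mean())

before = mse()
for _ in range(500):
    H = np.tanh(X_noisy @ W + b)        # encoder activations
    err = (H @ V + c) - X               # reconstruction error vs. clean data
    gH = (err @ V.T) * (1.0 - H ** 2)   # backprop through tanh
    V -= lr * (H.T @ err) / n; c -= lr * err.mean(axis=0)
    W -= lr * (X_noisy.T @ gH) / n; b -= lr * gH.mean(axis=0)
after = mse()

features = np.tanh(X @ W + b)  # refined features; these would feed an SVR for forecasting
```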
Groovy : tokenize() vs split() - TuiCool https://www.tuicool.com/articles/IV3I7r
Groovy : tokenize() vs split()
Date: 2013-03-14 20:35:35, Intelligrape Groovy & Grails Blogs
Original: http://www.intelligrape.com/blog/2013/03/14/groovy-tokenize-vs-split/
Topic: Groovy
The split() method returns a String[] instance, and the tokenize() method returns a List instance.
tokenize(), which returns a List, will ignore empty strings (produced when a delimiter appears twice in succession), whereas split() keeps such strings.
String testString = 'hello brother'
assert testString.split() instanceof String[]
assert ['hello','brother']==testString.split() //split with no arguments
assert ['he','','o brother']==testString.split('l')
// split keeps empty string
assert testString.tokenize() instanceof List
assert ['hello','brother']==testString.tokenize() //tokenize with no arguments
assert ['he','o brother']==testString.tokenize('l')
//tokenize ignore empty string
The tokenize() method uses each character of its String argument as a delimiter, whereas split() uses the entire string as the delimiter.
String testString='hello world'
assert ['hel',' world']==testString.split('lo')
assert ['he',' w','r','d']==testString.tokenize('lo')
split() can take a regex as the delimiter, whereas tokenize() cannot.
String testString='hello world 123 herload'
assert ['hello world ',' herload']==testString.split(/\d{3}/)
I hope this helps. Feel free to ask if you have any questions.
# BigDataLecture180618
Big data analysis and processing SW expert course
## Directory Tree:
- example
- entry.js
- message.js
- name.js
- bundler.js
## Approach:
* 1. Dependency analysis:
> webpack starts its dependency analysis from the entry file. It reads the file's contents (a string), converts them into an AST, and builds a data structure (containing filename, dependencies, and so on). We obtain this data structure by passing in a [file path]; here, the function that builds it is called createAsset().
* 2. createAsset: (per-module data structure)
> Given a file path, createAsset() returns a data structure containing: id, filename, dependencies, and code (the module source). The result is then handed to the next function, createGraph(), which traverses the dependencies.
* 3. createGraph: (dependency graph)
> createGraph() loops over each file path (using createAsset()) to obtain all of the file dependencies. After a breadth-first traversal, it returns the data for every file, collected into a single array. Finally, bundle() processes this array into something the browser can run, producing the final output.
* 4. bundle: (builds and returns an anonymous function that can finally run in the browser)
> bundle() iterates over the array collected by createGraph() with forEach(), wires up the dependency relationships, and returns an environment in which all of the bundled JS files can run.
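The four steps above can be sketched end-to-end in a few dozen lines. This is a hypothetical miniature: a regex stands in for the real AST-based parsing, an in-memory map stands in for reading files from disk, and the function names merely mirror the ones described above:

```javascript
// In-memory "files"; real code would read these from disk and parse an AST.
const files = {
  "./entry.js": "const msg = require('./message.js'); module.exports = msg + '!';",
  "./message.js": "module.exports = 'hello ' + require('./name.js');",
  "./name.js": "module.exports = 'world';",
};

let nextId = 0;
function createAsset(filename) {             // step 2: id, filename, dependencies, code
  const code = files[filename];
  const dependencies = [...code.matchAll(/require\('(.+?)'\)/g)].map((m) => m[1]);
  return { id: nextId++, filename, dependencies, code };
}

function createGraph(entry) {                // step 3: breadth-first dependency walk
  const graph = [createAsset(entry)];
  for (const asset of graph) {
    asset.mapping = {};
    for (const dep of asset.dependencies) {
      const child = createAsset(dep);
      asset.mapping[dep] = child.id;
      graph.push(child);
    }
  }
  return graph;
}

function bundle(graph) {                     // step 4: emit a self-contained runtime
  let mods = "";
  for (const m of graph) {
    mods += `${m.id}: [function(require, module, exports) { ${m.code} }, ${JSON.stringify(m.mapping)}],`;
  }
  return `(function(modules) {
    function req(id) {
      const [fn, mapping] = modules[id];
      const module = { exports: {} };
      fn((name) => req(mapping[name]), module, module.exports);
      return module.exports;
    }
    return req(0);
  })({ ${mods} })`;
}

// eval(bundle(createGraph("./entry.js"))) evaluates to "hello world!"
```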
### Usage:
+ $ npm i
+ $ node bundler
+ $ node bundle
---
layout: post
category: blog
title: "Introduction to robot vision"
excerpt: "A short introduction to robot vision including installations
and algorithms."
tags: [learning, opencv, pcl]
comments: true
---
### I-Introduction:
The purpose of robot vision is to locate the robot with respect to its
external environment so that it can safely and efficiently perform its
intended tasks. In mobile robotics, the external environment consists
of the robot's destination, routes, obstacles, etc. Knowing its
position and orientation with respect to these elements enables the
robot to safely navigate towards its destination.
In industrial settings, the external environment mainly consists of
obstacles and workpieces: parts to be assembled, panels to be drilled
on, products to be inspected, etc. The objective of this Chapter is to
present the algorithms and software tools to determine precisely the
location of workpieces with respect to the robot, which will next
enable the robot to perform its intended task–assembly, drilling,
inspection, etc.
{:refdef: style="text-align: center;"}
<figure>
<img src="{{ site.url }}/images/denso-ensenso-object.png">
<figcaption>"Fig.: Locating a workpiece with respect to the camera and to the
robot."</figcaption>
</figure>
{: refdef}
There are two main types of visual data: 2D (images) and 3D (point
clouds). Section 2D vision presents basic algorithms, such as
filtering or feature detection, and associated software tools to deal
with 2D images. Section 3D vision introduces algorithms for object location
using a 3D camera.
### II-Object pose estimation from 2D images:
Since there is already an [excellent OpenCV tutorial](http://docs.opencv.org/3.0-beta/doc/py_tutorials/py_tutorials.html),
we shall not duplicate the effort here. The reader is advised to go
through the whole tutorial, with particular attention to the following
sections:
1. [Canny edge detection](http://docs.opencv.org/3.0-beta/doc/py_tutorials/py_imgproc/py_canny/py_canny.html#canny)
2. [Contours](http://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_contours/py_table_of_contents_contours/py_table_of_contents_contours.html#table-of-content-contours)
3. [Hough line transform](http://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_houghlines/py_houghlines.html#hough-lines)
4. [Feature detection and description](http://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_feature2d/py_table_of_contents_feature2d/py_table_of_contents_feature2d.html#py-table-of-content-feature2d)
We shall see now, through an example, how OpenCV can be used in a
robotics setting.
> #### Example: Finding the 3D position of a hole using stereo vision
> Many robotic applications, such as assembly or riveting, require
> finding the 3D positions of circular holes. Fig. 26 shows a scene
> as captured by a stereo camera. This example demonstrates how to find the
> coordinates of the hole in the 2D images and how to subsequently
> determine its 3D position.
> {:refdef: style="text-align: center;"}
> <figure>
> <img src="{{ site.url }}/images/stereo_image.png">
> <figcaption>"Fig.: Scene captured by a stereo camera (left and right views)."</figcaption>
> </figure>
> {: refdef}
> First, make sure that you have installed
> [OpenCV](../installation/vision.md#installation).
> Define the function that finds the center of a hole in an image
>
> ``` python
> import numpy as np
> import cv2
> def get_hole_center2d(image):
> image_blur = cv2.blur(image, (5,5))
> image_edges = cv2.Canny(image_blur, 60, 120)
> (thresh, image_bw) = cv2.threshold(image_edges, 80, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU)
> if image_bw is None:
> return False, None
> image_contours, contours, hierarchy = cv2.findContours(image_bw, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
> image_contours = cv2.cvtColor(image_contours, cv2.COLOR_GRAY2RGB)
> for i,cnt in enumerate(contours):
> if len(np.squeeze(cnt)) > 5:
> rect = np.array(cv2.minAreaRect(cnt))
> (delta_u, delta_v) = rect[1]
> diameter_pixel = max(delta_u,delta_v)
> circularity = delta_u/delta_v if delta_v > delta_u else delta_v/delta_u
> good_circularity = circularity > 0.8
> good_diameter = diameter_pixel > 30
> if good_circularity and good_diameter:
> cv2.drawContours(image_contours, [cnt], 0, [0,0,255], 2)
> cv2.imwrite('image_contours.png',image_contours)
> return True, rect
> return False, None
> limage = cv2.imread('left_image.png', 0)
> success, lres = get_hole_center2d(limage)
> rimage = cv2.imread('right_image.png', 0)
> success, rres = get_hole_center2d(rimage)
> ```
>
> {:refdef: style="text-align: center;"}
> <figure>
> <img src="{{ site.url }}/images/image_contours.png">
> <figcaption>"Fig.: The hole contour is detected in the right image."</figcaption>
> </figure>
> {: refdef}
>
> Once the hole positions in the left and right camera views have been
> detected, one can use the camera information to reconstruct the 3D
> position by stereo vision. The theory for stereo vision can be found
> [here](http://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html).
>
> ``` python
> # Load camera info from yaml files
> import yaml
> left_calib_data = yaml.load( open("left_camera_info.yaml", "r"))
> left_cam_matrix = left_calib_data["P"]
> right_calib_data = yaml.load( open("right_camera_info.yaml", "r"))
> right_cam_matrix = right_calib_data["P"]
> # Compute projection matrix Q
> Tx = right_cam_matrix[3]
> fx = right_cam_matrix[0]
> B = (-Tx / fx)
> lCx = left_cam_matrix[2]
> lCy = left_cam_matrix[6]
> rCx = right_cam_matrix[2]
> rCy = right_cam_matrix[6]
> Q = np.zeros((4,4))
> Q[3,2] = 1./B
> Q[0,3] = -lCx
> Q[1,3] = -lCy
> Q[2,3] = fx
> Q[3,3] = (rCx-lCx)/B
> # Reproject pixel point into 3D coordinates
> lhcenter = lres[0]
> rhcenter = rres[0]
> disparity = lhcenter[0] - rhcenter[0]
> XYZ = np.dot(Q, np.array([lhcenter[0], lhcenter[1], disparity, 1]))
> XYZ /= XYZ[-1]
> print "3D coordinates of the hole: ", XYZ[:3]
> ```
>
> 3D coordinates of the hole: [-0.22933565 -0.2300843 0.64514768]
### III-Processing 3D point clouds using PCL
Contrary to conventional 2D cameras, which provide 2D images of the
world, 3D cameras provide 3D information in the form of point
clouds. A point cloud is a collection of points described by their
X-Y-Z coordinates.
PCL is a good library that provides a number of functionalities to
manipulate point clouds. Unfortunately, contrary to OpenCV, the Python
bindings to PCL are very limited. This tutorial will therefore deal
with the C++ library.
Since there is already an [excellent PCL tutorial](http://pointclouds.org/documentation/tutorials/), we shall not
duplicate the effort here. The reader is advised to go through the
whole tutorial, with particular attention to the following
sections:
1. [Basic usage](http://pointclouds.org/documentation/tutorials/#basic-usage)
2. [I/O](http://pointclouds.org/documentation/tutorials/#i-o)
3. [Filtering](http://pointclouds.org/documentation/tutorials/#filtering-tutorial)
4. [Features](http://pointclouds.org/documentation/tutorials/#features-tutorial)
5. [Recoginition](http://pointclouds.org/documentation/tutorials/#recognition-tutorial)
6. [Registration](http://pointclouds.org/documentation/tutorials/#registration-tutorial)
We shall see now, through an example, how PCL can be
used in a robotics setting.
> #### Example: Object pose estimation in PCL
> Many robotic applications require finding the pose (rotation
> and translation) of an object in the scene. A nice tutorial can be
> found at
> [Correspondence Grouping](http://pointclouds.org/documentation/tutorials/alignment_prerejective.php#alignment-prerejective).
>It's important to note that with different models and scenes, some parameter values might need to be adjusted. You **should** play around with them to see how they influence the final result.
---
title: 'Matrix Remodeling Accompanies In Vitro Articular Cartilage Shaping'
author: Dom Grisafe
date: '2013-06-26'
slug: matrix-remodeling
categories:
- Conference Abstract
tags:
- Tissue Engineering
- Articular Cartilage
subtitle: ''
summary: 'Articular cartilage (AC) supports and distributes loads in synovial joints while maintaining a nearly frictionless surface. The objective of this study was to determine the presence, magnitude and depth-dependence of collagen (COL) and glycosaminoglycans (GAG) remodeling that accompany the shape change of AC.'
authors:
- Nathan Balcom
- admin
- Juan Gutierrez-Franco
- Daniel Crawford
- Chris Raub
- Esther Cory
- Albert Chen
- Scott Hazelwood
- Stephen Klisch
- Robert Sah
lastmod: '2019-11-07T22:16:06-08:00'
featured: no
publication: In *American Society of Mechanical Engineers 2013 Summer Bioengineering Conference*
publication_short: In *ASME '13*
image:
caption: ''
focal_point: ''
preview_only: no
projects: []
links:
- name: Conference Abstract, ASME '13
url: https://asmedigitalcollection.asme.org/SBC/proceedings/SBC2013/55614/V01BT62A001/287591
url_pdf: 'https://drive.google.com/file/d/17hZYex4sUMlIdTPyzZjG7AWMC-Taz29l/view?usp=sharing'
url_code: ''
url_dataset: ''
url_poster: ''
url_project: ''
url_slides: ''
url_source: ''
url_video: ''
---
**Abstract**
Articular cartilage (AC) supports and distributes loads in synovial joints while maintaining a nearly frictionless surface. Successful replacement of large AC defects with an osteochondral graft requires an appropriate geometrical match with the defect region [1]. In AC, collagen (COL) provides tensile support to the tissue, and glycosaminoglycans (GAG) provide a fixed negative charge that produces swelling and contributes to the compressive properties of the tissue [2]. Previous studies (Fig. 1) have shown that 4 days of bending can reshape immature AC, but without a change in the total COL and GAG concentrations [3]. We hypothesized that more localized COL and/or GAG remodeling occurs during AC reshaping and may support the shape change. The objective of this study was to determine the presence, magnitude and depth-dependence of COL and GAG remodeling that accompany the shape change of AC.
---
title: Go-To-Market services - Commercial marketplace benefits | Azure
description: This section describes the go-to-market services - Microsoft resources available to publishers.
author: dsindona
ms.service: marketplace
ms.subservice: partnercenter-marketplace-publisher
ms.topic: conceptual
ms.date: 02/08/2020
ms.author: dsindona
ms.openlocfilehash: d34fb989e7f656df93faae63ef4dc42ed0fe5abe
ms.sourcegitcommit: 2ec4b3d0bad7dc0071400c2a2264399e4fe34897
ms.translationtype: MT
ms.contentlocale: ko-KR
ms.lasthandoff: 03/28/2020
ms.locfileid: "80286237"
---
# <a name="your-commercial-marketplace-benefits"></a>Your commercial marketplace benefits
After you publish to the marketplace, you want your offer to succeed. We provide sales, technical, and marketing benefits that help accelerate your offer's growth.
Once your offer goes live, the Marketplace Rewards team will contact you and begin working with you on your eligible benefits. Marketplace Rewards benefits accrue based on your commercial marketplace engagement and sales: the more you engage, the more you get back.
## <a name="marketplace-rewards"></a>Marketplace Rewards
Marketplace Rewards is designed to support you at your specific stage of growth, starting with awareness activities that help you get your first customers. As you grow through the marketplace, you unlock new benefits designed to help you convert customers and close deals.
The program creates a positive feedback loop: the benefits at each stage of growth are designed to advance you to the next stage, so you can grow your business to Microsoft's customers, with Microsoft's field, and through Microsoft's channel, using the commercial marketplace as your platform.
Benefits are differentiated based on whether your offer is [List, Trial, or Transact](https://docs.microsoft.com/azure/marketplace/determine-your-listing-type#choose-a-publishing-option).
A member of the Rewards team will contact you, based on your eligibility, once your offer goes live.
For Transact partners, growing the billed sales you generate through the marketplace platform unlocks greater benefits per tier.
The minimum requirement to publish in the storefronts is an MPN ID, so these benefits are available to all partners, regardless of MPN competency status or partner type. Every partner can grow their business through the marketplace as a platform.
You can get help understanding the resources available to you and implementing best practices, which you can also [review on your own](https://partner.microsoft.com/asset/collection/azure-marketplace-and-appsource-publisher-toolkit#/).
A detailed description of all program benefits can be found in the [Marketplace Rewards program deck](https://aka.ms/marketplacerewards).
Here are the steps to get started:
1. Publish your offer on Microsoft AppSource or Azure Marketplace.
2. Our team contacts the "owner" or "primary contact" for each marketplace offer. We recommend entering an individual, or an alias such as info@company.com, that is checked regularly.
>[!Note]
>If your offer has been live for more than four weeks and you have not received a message, check within your organization who owns the offer in question in the Cloud Partner Portal or Partner Center. They should have the communication and next steps. <br> <br> If you cannot identify the owner, or if the owner has left your company, you can raise a support ticket at https://aka.ms/marketplacepublishersupport.
As you expand your offerings in the marketplace, the scope of activities available to you expands as well. All listings receive a base level of optimization recommendations and promotion as part of a self-serve email of resources and best practices.
## <a name="list-trial-and-consulting-benefits"></a>List, Trial, and Consulting benefits
If you publish a Trial, or a consulting Proof of Concept, Implementation, or Workshop, an engagement manager is assigned to you for three months to guide you through a successful go-to-market plan. You can repeat this three-month engagement each time you publish a new offer, helping each new offer get off to a successful start.
The table below summarizes the eligibility requirements for List and Trial offers:

A detailed description of all these benefits can be found in the [Marketplace Rewards program deck](https://aka.ms/marketplacerewards).
## <a name="marketplace-rewards-for-transact-partners"></a>Marketplace Rewards for Transact partners
Once your transactable offer goes live on Azure Marketplace or Microsoft AppSource, you can access additional benefits as you increase the billed sales transactions or seats sold through the commercial marketplace.
>[!Note]
>The seats-sold threshold applies only to Microsoft 365 apps (SaaS applications integrated with Microsoft Teams, Office, Outlook, or SharePoint), and the benefits must be completed by June 30, 2020.
These benefits are designed to support your marketing, sales, and technical activities, so you can attract more visitors, receive more leads, and convert more business.
All partners with live offers can work with a dedicated engagement manager to choose the activities that are most valuable for their portfolio of marketplace offers. This engagement is evergreen per offer, so you can choose the activities, and their timing, to match the timing of your company's broader marketing and sales strategy.



\*The seats-sold threshold applies only to Microsoft 365 apps (SaaS applications integrated with Microsoft Teams, Office, Outlook, or SharePoint), and the benefits must be completed by June 30, 2020.
A detailed description of all these benefits can be found in the [Marketplace Rewards program deck](https://aka.ms/marketplacerewards).
In addition to the Rewards benefits, Dynamics offers published to Microsoft AppSource and Business Applications partners have additional programming available through [ISV Connect](https://partner.microsoft.com/solutions/business-applications/isv-overview). This includes technical, marketing, and sales support specialized for the needs of Business Applications partners.
## <a name="marketplace-rewards-requirements-and-restrictions"></a>Marketplace Rewards requirements and restrictions
### <a name="publisher-agreement"></a>Publisher agreement
All of the activities described on this page are governed by the [Marketplace Publisher Agreement](https://go.microsoft.com/fwlink/?LinkID=699560) and are covered by the Commercial Benefits Program Addendum.
### <a name="cancellation-policy"></a>Cancellation policy
[List and Trial](https://docs.microsoft.com/azure/marketplace/determine-your-listing-type) publishers can opt in or opt out of activities per offer publication. Partners can opt out of engagement at any time.
Microsoft reserves the right to cancel and terminate Marketplace Rewards benefits for publishers who:
* Engage in illegal activity by means of their marketplace listing.
* Are delisted from the commercial marketplace.
* Use the offer to display marketing or other content that violates copyright or trademark law.
* Violate the policies of the [Azure Sponsorship program](https://azure.microsoft.com/offers/ms-azr-0036p/), including, but not limited to, using Azure Sponsorship funds for their own internal operations or for bitcoin mining.
### <a name="offer-availability"></a>Offer availability
This offer is conducted in English for all partners with live offers on Azure Marketplace or Microsoft AppSource.
Transactions proven to be fraudulent are not counted toward a publisher's [billed sales program tier](https://aka.ms/marketplacepublisherrewards), as stated in the [List, Trial, and Consulting](#list-trial-and-consulting-benefits) section, the [Transact partner benefits](#marketplace-rewards-for-transact-partners) section, and the detailed [program deck](https://aka.ms/marketplacepublisherrewards). Microsoft assigns partners to eligibility tiers based on actual billed sales after fraud is removed.
## <a name="next-steps"></a>Next steps
If you created your offer in the [Cloud Partner Portal](https://cloudpartner.azure.com), sign in there to create or configure your offer.
If you created your offer in [Partner Center](https://partner.microsoft.com/en-us/dashboard/commercial-marketplace/overview), sign in there to create or configure your offer.
Review the [self-serve resources available to you](https://partner.microsoft.com/asset/collection/azure-marketplace-and-appsource-publisher-toolkit#/).
Sign up for the [Microsoft AppSource and Azure Marketplace community forum](https://www.microsoftpartnercommunity.com/t5/Azure-Marketplace-and-AppSource/bd-p/2222) to learn about relevant topics and join discussions.
---
# nimdb
Nim relational database API.
At the moment only sqlite3 is supported.
## [API Documentation](https://mikra01.github.io/nimdb/nimdb_sqlite3.html)
### Remarks
At least native sqlite3 library version 3.26.0 is required.
### Dependencies
sqlite3 library and the standard library only
### Tests
WIP
Comments, bug reports and PRs are always welcome.
] | null | null | null | # Practicas-de-gramificacion
Gamification practice exercises
# Lisp
[SICP](http://sarabander.github.io/sicp/html/index.xhtml) and [Practical Common Lisp](http://www.gigamonkeys.com/book/) are great books.
## Notes
- [What did Alan Kay mean by, "Lisp is the greatest single programming language ever designed"?](https://www.quora.com/What-did-Alan-Kay-mean-by-Lisp-is-the-greatest-single-programming-language-ever-designed/answer/Alan-Kay-11)
## Links
- [Racket documentation](https://docs.racket-lang.org/)
- [Lisp-like DSL for Rust language](https://github.com/JunSuzukiJapan/macro-lisp)
- [Carp](https://github.com/carp-lang/Carp) - Statically typed lisp, without a GC, for real-time applications.
- [How Lisp Became God's Own Programming Language](https://twobithistory.org/2018/10/14/lisp.html) ([HN](https://news.ycombinator.com/item?id=18225870)) ([HN 2](https://news.ycombinator.com/item?id=23163596))
- [ELS 2018 Keynote: This Old Lisp](https://www.youtube.com/watch?v=MgVuqPgKJQc)
- [Parinfer](https://github.com/shaunlebron/parinfer) - Let's simplify the way we write Lisp.
- [Build me a LISP](https://kirit.com/Build%20me%20a%20LISP) ([HN](https://news.ycombinator.com/item?id=19121828))
- [Wasp](https://github.com/wasplang/wasp) - Lisp programming language for extremely performant and concise web assembly modules.
- [Lisp Koans](https://github.com/google/lisp-koans) - Language learning exercise in the same vein as the ruby koans, python koans and others. ([HN](https://news.ycombinator.com/item?id=19313850))
- [I Built a Lisp Compiler (2019)](https://mpov.timmorgan.org/i-built-a-lisp-compiler/) ([Lobsters](https://lobste.rs/s/rp0xy0/i_built_lisp_compiler))
- [g-fu](https://github.com/codr7/g-fu) - Pragmatic Lisp developed and embedded in Go.
- [Land of Lisp](http://landoflisp.com/) ([HN](https://news.ycombinator.com/item?id=19677292))
- [Anarki](https://github.com/arclanguage/anarki) - Community-managed fork of the Arc dialect of Lisp.
- [Lisp Machine Manual](https://hanshuebner.github.io/lmman/frontpage.html)
- [C-Mera](https://github.com/kiselgra/c-mera) - Next-level syntax for C-like languages.
- [LISP Reference Manual](http://www.softwarepreservation.net/projects/LISP/starlisp/starlisp-reference-manual-version-5-0.pdf)
- [femtolisp](https://github.com/JeffBezanson/femtolisp) - Lightweight, robust, scheme-like lisp implementation. ([HN](https://news.ycombinator.com/item?id=22094722))
- [Performance and Evaluation of Lisp Systems (1985)](http://rpgpoet.com/Files/Timrep.pdf)
- [Formula One](https://github.com/iwillspeak/formula-one) - Experiment in ways to ergonomically build syntax trees and transformations in Rust.
- [Bel](http://paulgraham.com/bel.html) - Spec for a new dialect of Lisp, written in itself. ([HN](https://news.ycombinator.com/item?id=21231208))
- [Let Over Lambda -- 50 Years of Lisp book](https://letoverlambda.com/)
- [William Byrd on "The Most Beautiful Program Ever Written" (2017)](https://www.youtube.com/watch?v=OyfBQmvr2Hc)
- [Awesome Lisp Languages](https://github.com/dundalek/awesome-lisp-languages)
- [Programming Algorithms book: Dynamic Programming](http://lisp-univ-etc.blogspot.com/2019/12/programming-algorithms-dp.html)
- [LISP programmer's manual (1960)](http://history.siam.org/sup/Fox_1960_LISP.pdf)
- [femto](https://github.com/peeley/femto) - Minimal Lisp interpreter in Haskell.
- [Rhine](https://github.com/artagnon/rhine-ml) - Clojure-inspired Lisp on LLVM JIT featuring variable-length untyped arrays, first-class functions, closures, and macros.
- [arpilisp](https://github.com/marcpaq/arpilisp) - Lisp interpreter for Raspberry Pi implemented in a single ARM assembly file.
- [Lisp: Good News, Bad News, How to Win Big](http://www.dreamsongs.com/WIB.html)
- [Lisping at JPL (2002)](http://flownet.com/gat/jpl-lisp.html) ([HN](https://news.ycombinator.com/item?id=22087419))
- [Small minimalistic LISP interpreter in Node](https://github.com/mafintosh/minilisp)
- [Understanding the Power of LISP (2020)](https://joshbradley.me/understanding-the-power-of-lisp/)
- [GLISP](https://github.com/baku89/glisp) - LISP-based graphic design tool.
- [Programming Algorithms in Lisp](https://leanpub.com/progalgs)
- [What is the best way to learn Lisp in 2020?](https://news.ycombinator.com/item?id=22913750)
- [hy](https://github.com/hylang/hy) - Dialect of Lisp that's embedded in Python.
- [Simple lisp interpreter written from scratch in TS](https://github.com/christianscott/lisp)
- [Why is Lisp not as popular as Python? (2020)](https://lobste.rs/s/f0rlcw/why_is_lisp_not_as_popular_as_python)
- [Janet](https://janet-lang.org/) - Lightweight, expressive and modern Lisp. ([HN](https://news.ycombinator.com/item?id=23164614)) ([Code](https://github.com/janet-lang/janet)) ([Awesome](https://github.com/ahungry/awesome-janet)) ([Why I am Janet (2021)](https://pan.earth/posts/why-i-am-janet.html)) ([Lobsters](https://lobste.rs/s/pwkit0/why_i_am_janet))
- [History of Lisp - John McCarthy (1979)](http://jmc.stanford.edu/articles/lisp/lisp.pdf) ([Web](http://jmc.stanford.edu/articles/lisp.html)) ([HN](https://news.ycombinator.com/item?id=23201888))
- [Quasiquote - Literal Magic (2020)](https://weinholt.se/articles/quasiquote-literal-magic/) ([Lobsters](https://lobste.rs/s/dqhszz/quasiquote_literal_magic))
- [Ronin](https://100r.co/site/ronin.html) - Lisp-based image processing tool. ([HN](https://news.ycombinator.com/item?id=23211273))
- [Ask HN: Production Lisp in 2020?](https://news.ycombinator.com/item?id=23231701)
- [Reading Lisp code: parentheses and indentation](https://nl.movim.eu/?blog/phoe%40movim.eu/cd3577f6-fb1d-45f5-b881-7b9a68ee822e)
- [Hissp](https://github.com/gilch/hissp) - Modular Lisp implementation that compiles to a functional subset of Python—Syntactic macro metaprogramming with full access to the Python ecosystem.
- [Acid Lisp](https://github.com/dymynyc/acidlisp) - Lisp that compile to web assembly.
- [slip](https://github.com/sp4ghet/slip) - Lisp interpreter implemented in C.
- [A baseline compiler for guile (2020)](http://wingolog.org/archives/2020/06/03/a-baseline-compiler-for-guile)
- [Kalyn](https://github.com/raxod502/kalyn) - Self-hosting compiler from a Haskell-like Lisp directly to x86-64, from scratch.
- [SedLisp](https://github.com/shinh/sedlisp) - Lisp implementation in sed.
- [LispMicrocontroller](https://github.com/jbush001/LispMicrocontroller) - Microcontroller that natively executes a simple LISP dialect.
- [Toy Lisp 1.5 interpreter in Go by Rob Pike](https://github.com/robpike/lisp)
- [Lisp as the Maxwell’s equations of software (2012)](http://www.michaelnielsen.org/ddi/lisp-as-the-maxwells-equations-of-software/) ([HN](https://news.ycombinator.com/item?id=9038505))
- [Closos: Specification of a Lisp operating system (2013)](http://metamodular.com/closos.pdf) ([HN](https://news.ycombinator.com/item?id=23730107))
- [uLisp](http://www.ulisp.com/) - Lisp for microcontrollers. Lisp for Arduino, Adafruit M0/M4, Micro:bit, ESP8266/32, and RISC-V boards. ([HN](https://news.ycombinator.com/item?id=27036317)) ([Code](https://github.com/technoblogy/ulisp))
- [Lisp Badge](http://www.ulisp.com/show?2L0C) - Single-board computer that you can program in uLisp. ([HN](https://news.ycombinator.com/item?id=23729970))
- [MIT CADR Lisp Machine Emulation](http://www.unlambda.com/lisp/cadr.page)
- [LambdaDelta](https://github.com/dseagrav/ld) - Emulator of the LMI Lambda Lisp Machine.
- [Meroko](http://www.unlambda.com/lisp/meroko.page) - Lisp machine emulator.
- [Typed Lisp, A Primer (2019)](https://alhassy.github.io/TypedLisp.html) ([HN](https://news.ycombinator.com/item?id=23878612))
- [The Many Faces of an Undying Programming Language (2020)](http://jakob.space/blog/thoughts-on-lisps.html) ([Lobsters](https://lobste.rs/s/chamtu/many_faces_undying_programming_language))
- [Interface Builder's Alternative Lisp timeline (2013)](https://paulhammant.com/2013/03/28/interface-builders-alternative-lisp-timeline/) ([Lobsters](https://lobste.rs/s/qcyzt0/interface_builder_s_alternative_lisp))
- [Embeddable lisp/scheme interpreter written in C](https://github.com/justinmeiners/lisp-interpreter)
- [Boring Benefits of Lisp (2020)](https://justinmeiners.github.io/boring-benefits-of-lisp/)
- [Review of Paul Graham's Bel, Chris Granger's Eve, and a Silly VR Rant](https://gist.github.com/wtaysom/7e5fda6d65807073c3fa6b92b1e25a32) ([HN](https://news.ycombinator.com/item?id=24162703))
- [Sild](https://github.com/jfo/sild) - Lisp Dialect.
- [Mal](https://github.com/kanaka/mal) - Make a Clojure inspired Lisp interpreter. ([HN](https://news.ycombinator.com/item?id=26924344))
- [Compiling a Lisp: Overture](https://bernsteinbear.com/blog/compiling-a-lisp-0/) ([Lobsters](https://lobste.rs/s/hwekzx/compiling_lisp_overture))
- [Compiling a Lisp: Primitive unary functions](https://bernsteinbear.com/blog/compiling-a-lisp-4/) ([HN](https://news.ycombinator.com/item?id=24386826))
- [newLISP](http://www.newlisp.org/) - Lisp-like, general-purpose scripting language.
- [Structure and Interpretation of Computer Programs](https://sarabander.github.io/sicp/html/index.xhtml) ([Code](https://github.com/sarabander/sicp)) ([Racket SICP](https://docs.racket-lang.org/sicp-manual/index.html#%28part._.Installation%29)) ([HN](https://news.ycombinator.com/item?id=24428907))
- [A micro-manual for LISP Implemented in C (2010)](https://nakkaya.com/2010/08/24/a-micro-manual-for-lisp-implemented-in-c/)
- [Lisp Operating System (2013)](http://metamodular.com/Common-Lisp/lispos.html) ([Lobsters](https://lobste.rs/s/8seq7v/lisp_operating_system_2013))
- [Little Bits of Lisp video series](https://www.youtube.com/playlist?list=PL2VAYZE_4wRJi_vgpjsH75kMhN4KsuzR_)
- [What Made Lisp Different (2001)](http://www.paulgraham.com/diff.html)
- [Successful Lisp Book Contents](https://dept-info.labri.fr/~strandh/Teaching/MTP/Common/David-Lamkins/contents.html)
- [Lisp and Haskell (2015)](https://markkarpov.com/post/lisp-and-haskell.html) ([HN](https://news.ycombinator.com/item?id=24712207))
- [LISP From Nothing](http://t3x.org/lfn/index.html) ([Lobsters](https://lobste.rs/s/xojcvn/lisp_from_nothing)) ([HN](https://news.ycombinator.com/item?id=24809293))
- [How are Lisp REPLs different from Python or Ruby REPLs? (2020)](https://lisp-journey.gitlab.io/blog/how-are-lisp-repls-different-from-python-or-ruby-repls/)
- [An Intuition for Lisp Syntax](https://stopa.io/post/265) ([HN](https://news.ycombinator.com/item?id=24892297)) ([Lobsters](https://lobste.rs/s/pg30t6/intuition_for_lisp_syntax))
- [The Nature of Lisp (2006)](http://www.defmacro.org/ramblings/lisp.html)
- [awklisp](https://github.com/darius/awklisp) - Lisp interpreter written in Awk.
- [Pixie](https://github.com/pixie-lang/pixie) - Lightweight lisp suitable for both general use as well as shell scripting.
- [Awesome Lisp Companies](https://github.com/azzamsa/awesome-lisp-companies) - Curated list of companies that use Lisp extensively in their stack.
- [LISP – Notes on its past and future, by John McCarthy (1980)](http://jmc.stanford.edu/articles/lisp20th/lisp20th.pdf)
- [Klisp](https://github.com/thesephist/klisp) - Minimal LISP written in about 200 lines of Ink. ([Article](https://dotink.co/posts/klisp/))
- [baremetalisp](https://github.com/ytakano/baremetalisp)
- [Between two Lisps (2020)](https://ane.github.io/2020/10/05/between-two-lisps.html) ([HN](https://news.ycombinator.com/item?id=25313311))
- [Sugar – a typed lispy language targeting webasm/wat (2020)](https://ph1lter.bitbucket.io/blog/2020-12-06-sugar-compiler.html) ([HN](https://news.ycombinator.com/item?id=25322596))
- [Ebisp](https://github.com/tsoding/ebisp) - Embedded Lisp.
- [Zuko](https://github.com/ravern/zuko) - Basic Lisp-like programming language.
- [Lisp Books](https://www.pinterest.co.uk/vseloved/lisp-books/)
- [Ask HN: I want to start learning Lisp. Where do I begin? (2020)](https://news.ycombinator.com/item?id=25441664)
- [Getting started with Lisp in 2019](https://smalldata.tech/blog/2019/08/16/getting-started-with-lisp-in-2019) ([HN](https://news.ycombinator.com/item?id=25493495))
- [Lisp Hackers](https://leanpub.com/lisphackers/read) - Interviews with 100x More Productive Programmers.
- [Wisp](https://github.com/adam-mcdaniel/wisp) - Light lisp written in C++. ([HN](https://news.ycombinator.com/item?id=25559291))
- [Maru](https://github.com/attila-lendvai/maru) - Tiny self-hosting lisp dialect. ([Web](https://www.piumarta.com/software/maru/))
- [ToriLisp – an ersatz Lisp for tiny birds (2020)](http://blog.fogus.me/2020/12/22/torilisp-an-ersatz-lisp-for-tiny-birds/) ([Code](https://github.com/fogus/tori-lisp))
- [On repl-driven programming (2020)](http://mikelevins.github.io/posts/2020-12-18-repl-driven/) ([Lobsters](https://lobste.rs/s/0dvrpg/on_repl_driven_programming)) ([HN](https://news.ycombinator.com/item?id=25620256))
- [Fleck](https://github.com/chr15m/flk) - Clojure-like LISP that runs wherever Bash is.
- [A rabbit hole full of Lisp (2021)](https://www.murilopereira.com/a-rabbit-hole-full-of-lisp/) ([HN](https://news.ycombinator.com/item?id=25760381))
- [lexpr-rs](https://github.com/rotty/lexpr-rs) - Rust Lisp expression parser and serializer.
- [Lisp, Jazz, Aikido – Three Expressions of a Single Essence (2018)](https://arxiv.org/ftp/arxiv/papers/1804/1804.00485.pdf)
- [REPL as a Service (2021)](https://speechcode.com/blog/repl-as-service)
- [LispE](https://github.com/naver/lispe) - Version of Lisp that is ultra-minimal but contains all the basic instructions of the language. ([HN](https://news.ycombinator.com/item?id=25940439))
- [Why I still Lisp (2021)](https://mendhekar.medium.com/why-i-still-lisp-and-you-should-too-18a2ae36bd8) ([HN](https://news.ycombinator.com/item?id=25978190))
- [A Lisp REPL as my main shell](https://ambrevar.xyz/lisp-repl-shell/index.html) ([HN](https://news.ycombinator.com/item?id=26059023))
- [Ask HN: Why should we learn Lisp? (2021)](https://news.ycombinator.com/item?id=26162522)
- [My experience of writing Lisp in Pony (2020)](https://stereobooster.com/posts/my-experience-of-writing-lisp-in-pony/)
- [Bootstrapping LISP in a Boot Sector](https://github.com/jart/sectorlisp)
- [Fancy defines](https://idiomdrottning.org/fancy-defines) ([Lobsters](https://lobste.rs/s/mgfnix/fancy_defines))
- [Swift LispKit](https://github.com/objecthub/swift-lispkit) - Framework for building Lisp-based extension and scripting languages for macOS and iOS applications.
- [Datalisp: Overview of design decisions (2021)](https://cloudflare-ipfs.com/ipfs/Qmeg9cAPVC18bdGuQtGJKtP7VcRQErnCApbcbgn1FaSq9T/datalisp.pdf)
- [Lisp as an Alternative to Java (2000)](https://norvig.com/java-lisp.html) ([HN](https://news.ycombinator.com/item?id=26720403))
- [Spaik](https://github.com/snyball/spaik) - Lisp compiler/VM with a moving GC written in Rust.
- [Orion](https://github.com/Wafelack/orion) - High level, purely functional Lisp dialect written in Rust.
- [Joxa](https://github.com/joxa/joxa) - Modern Lisp for the Erlang VM.
- [Lets LISP like it's 1959 (2019)](https://www.youtube.com/watch?v=hGY3uBHVVr4)
- [LIPS](https://github.com/jcubic/lips) - Scheme based powerful lisp interpreter written in JavaScript.
- [Original Hacker News Source Code (2009)](https://github.com/wting/hackernews) ([HN](https://news.ycombinator.com/item?id=27452276))
- [Parentheses are Just Typechecking (2021)](https://adam.nels.onl/blog/parentheses-are-just-typechecking/)
- [BLisp](https://github.com/ytakano/blisp) - Statically Typed Lisp Like Language. ([Docs](https://ytakano.github.io/blisp/)) ([HN](https://news.ycombinator.com/item?id=27640984))
---
title: Create a procurement catalog
description: This topic shows how to create a procurement catalog.
author: Henrikan
ms.date: 07/19/2019
ms.topic: business-process
ms.prod: ''
ms.technology: ''
ms.search.form: ProcCategoryHierarchyManagement, CatProcureCatalogListPage, CatProcureCatalogCreate, CatProcureCatalogEdit, SysPolicyListPage, SysPolicy, CatCatalogPolicyRule, PurchReqTableListPage, PurchReqCreate, PurchReqTable, PurchReqAddItem
audience: Application User
ms.reviewer: kamaybac
ms.search.region: Global
ms.author: henrikan
ms.search.validFrom: 2016-06-30
ms.dyn365.ops.version: AX 7.0.0
ms.openlocfilehash: ef3747874d43143925bd08dbecc2d60f4e38701a
ms.sourcegitcommit: 3b87f042a7e97f72b5aa73bef186c5426b937fec
ms.translationtype: HT
ms.contentlocale: fr-FR
ms.lasthandoff: 09/29/2021
ms.locfileid: "7565421"
---
# <a name="create-a-procurement-catalog"></a>Create a procurement catalog
[!include [banner](../../includes/banner.md)]
This topic shows how to create a procurement catalog. This task is usually performed by a procurement professional. You will also learn how employees can use the catalog when they create a requisition. Before you can create a catalog, a procurement category hierarchy must exist in your system. The hierarchy is inherited by the new catalog, together with all the products that are in the hierarchy. You can follow this guide in the demo data of the company USMF, where a procurement category hierarchy is available, as are the examples used in the procedure steps.
## <a name="ensure-that-a-procurement-category-hierarchy-exists"></a>Ensure that a procurement category hierarchy exists
1. Go to **Navigation pane > Modules > Procurement and sourcing > Procurement categories**. A procurement category hierarchy is available in the fictitious company USMF, and products have been added to the **Office machines/Computers** category. If you are running this procedure as a task guide, you might have to unlock the guide to browse the category. If a hierarchy is not available, you must create one by clicking **New**. This can only be done once.
2. Close the page.
## <a name="create-a-catalog"></a>Create a catalog
1. Go to **Navigation pane > Modules > Procurement and sourcing > Catalogs > Procurement catalogs**.
2. Select **New procurement catalog** to open the dialog box.
3. Type a value in the **Name** field.
4. Click **OK**.
5. In the tree, expand **CORPORATE PROCUREMENT CATEGORIES**.
6. In the tree, expand **OFFICE MACHINES**.
7. In the tree, select **Computers**.
    - The products in the procurement catalog are shown in the list. If you want to add a product to the category, you must do so on the **Procurement category hierarchy** page or on the **Item details** page.
    - The **Default** update type determines whether new products that have been added to the procurement category hierarchy are immediately visible in the catalog. If the update type is set to **Dynamic**, changes are visible immediately. If the update type is **Static**, new products only become visible to users of the catalog once it has been republished. The **Publish** action is available on the Action Pane at the top of the page. If products are removed from the procurement category hierarchy, the change is immediately visible, regardless of the value of the **Default** update type field.
8. On the Action Pane, select **Category navigation** and make sure that **Activate** is selected.
9. Select **Activate catalog**.
10. Close the page.
## <a name="make-the-catalog-visible"></a>Make the catalog visible
1. Go to **Navigation pane > Modules > Procurement and sourcing > Setup > Policies > Purchasing policies**.
2. Select the **USMF procurement policy**. You must choose the purchasing policy of the legal entity in which the worker associated with your user profile is allowed to order products. In the demo data of the fictitious company USMF, the Admin user is linked to the worker named **Julia Funderburk**, and she orders products in USMF by default.
3. Select the catalog that you just created.
4. Click **OK**.
## <a name="use-the-catalog"></a>Use the catalog
1. Go to **Navigation pane > Modules > Procurement and sourcing > Purchase requisitions > All purchase requisitions**.
2. Select **New**.
3. Type a value in the **Name** field.
4. Click **OK**.
5. Select **Add products**.
6. In the list, find and select the desired record. You can use the category hierarchy on the left side or the filter at the top of the list to filter products.
7. Select **Add to lines**.
8. Click **OK**.
[!INCLUDE[footer-include](../../../includes/footer-banner.md)] | 80.323529 | 744 | 0.789088 | fra_Latn | 0.976562 |
a91d718b60d3178735a278cfdf4c3ea8602d76fd | 2,005 | md | Markdown | clai/server/plugins/gitbot/README.md | cohmoti/clai | 3215e3676b4a0857a56a1e126a052f089be5ff03 | [
"MIT"
] | 391 | 2019-12-08T03:34:39.000Z | 2022-03-04T12:14:01.000Z | clai/server/plugins/gitbot/README.md | Pycomet/clai | 4d8e661f1335ce35fd077ad812b56da361565d57 | [
"MIT"
] | 74 | 2020-01-28T16:53:00.000Z | 2022-03-12T00:48:26.000Z | clai/server/plugins/gitbot/README.md | Pycomet/clai | 4d8e661f1335ce35fd077ad812b56da361565d57 | [
"MIT"
] | 73 | 2020-02-06T14:46:13.000Z | 2022-03-04T12:46:29.000Z | # gitbot
`NLP` `Support` `Automation`
This skill lets you manage and organize your github repository in natural language.
It also lets you use natural language commands to issue popular git commands.
## Implementation
The skill demonstrates hooks into two interesting design patterns:
+ Similar to the [`nlc2cmd`](../nlc2cmd/) skill, it demonstrates natural language to
command patterns. However, in contrast to the nlc2cmd implementation, here we demonstrate
how to use a natural language classifier local to the machine -- using [RASA](https://rasa.com/) --
instead of calling an external service like [Watson Assistant](https://www.ibm.com/cloud/watson-assistant/).
+ This skill also demonstrates instances of workflow automation in the context of code
development by using the [GitHub Actions API](https://github.com/features/actions).
Similar to the `nlc2cmd` and `ibmcloud` skills, this skill is also merely illustrative
of integration of natural language and automation in code management through GitHub.
Contributions are welcome to improve the accuracy of the natural language interpretation,
the breadth of the use cases covered for workflow automation, or new features!
Before trying it out:
> Fill out `sample_config.json` and rename it to `config.json`
> run `./run_rasa.sh 5556`
## Example Usage

## [xkcd](https://uni.xkcd.com/)
If that doesn't fix it, git.txt contains the phone number of a friend of mine who understands git. Just wait through a few minutes of 'It's really pretty simple, just think of branches as...' and eventually you'll learn the commands that will fix everything.

| 50.125 | 313 | 0.774564 | eng_Latn | 0.995337 |
a91e33820199ab0aa50f7c9a2aec4802c1cf9bd8 | 419 | md | Markdown | lib/cmake/README.md | code2love/SpaceNavigation | 7991c807b8dd767f4419a420020731234de6ae24 | [
"MIT"
] | null | null | null | lib/cmake/README.md | code2love/SpaceNavigation | 7991c807b8dd767f4419a420020731234de6ae24 | [
"MIT"
] | 1 | 2021-11-15T15:27:13.000Z | 2021-11-15T15:27:13.000Z | lib/cmake/README.md | code2love/SpaceNavigation | 7991c807b8dd767f4419a420020731234de6ae24 | [
"MIT"
] | null | null | null | # `cmake`
contains all `cmake` files to find used libraries (`C++` libraries like `LAPACKE` or `MKL`). Maybe needs a change for other tasks, depending on the needed `C++` libraries
<br/><br/>
-------
## <a href='FindLAPACKE.cmake' target='_blank'>`FindLAPACKE.cmake`</a>
`cmake` file to find `LAPACKE`
<br/><br/>
-------
## <a href='FindMKL.cmake' target='_blank'>`FindMKL.cmake`</a>
`cmake` file to find `MKL` | 22.052632 | 170 | 0.646778 | eng_Latn | 0.709414 |
---
title: React Router
localeTitle: React Router
---
# React Router for Beginners
# Installation
React Router has been split into three packages: `react-router`, `react-router-dom`, and `react-router-native`.
You should never have to install `react-router` directly. That package provides the core routing components and functions for React Router applications. The other two provide environment-specific components (for the browser and for react-native), but they both also re-export all of react-router's exports.
We are building a website (something that will run in browsers), so we will install `react-router-dom`.
`npm install --save react-router-dom`
# The Router
When starting a new project, you need to determine which type of router to use. For browser-based projects, there are the `<BrowserRouter>` and `<HashRouter>` components. `<BrowserRouter>` should be used when you have a server that will handle dynamic requests (it knows how to respond to any possible URI), while `<HashRouter>` should be used for static websites (where the server can only respond to requests for files that it knows about).
It is usually preferable to use `<BrowserRouter>`, but if your website is hosted on a server that only serves static files, then `<HashRouter>` is a good solution.
We will assume that the website is backed by a dynamic server, so our router component of choice is `<BrowserRouter>`.
# The Import Statement
`import { BrowserRouter as Router, Switch, Route, Link } from 'react-router-dom';
`
## IndexRoute and Links
Now, let's add navigation to get us between pages.
To do this, we use the `<Link>` component. Using `<Link>` is similar to using an html anchor tag.
From the docs:
Provides declarative, accessible navigation around your application. It will render a fully accessible anchor tag with the proper href. To do this, let's first create a Nav component. Our Nav component will contain `<Link>` components, and it will look like this:
`const Nav = () => (
<div>
<Link to='/'>Home</Link>
<Link to='/address'>Address</Link>
</div>
)
`
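The import statement above also brings in `Switch` and `Route`, which pair with the `Nav` component to decide what each URL renders. A minimal sketch of wiring them together — the `Home` and `Address` page components are placeholders and not part of the original guide:

```javascript
const App = () => (
  <Router>
    <div>
      <Nav />
      {/* Switch renders only the first Route whose path matches the current URL */}
      <Switch>
        <Route exact path='/' component={Home} />
        <Route path='/address' component={Address} />
      </Switch>
    </div>
  </Router>
)
```

Without `exact`, the `/` route would also match `/address`, so both pages would render at once.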
a91ef5bb6cf0b85233475ea37a3a2b169cd11358 | 1,005 | md | Markdown | src/safe-guides/coding_practice/data-type/enum/G.TYP.Enum.05.md | SparrowLii/rust-coding-guidelines-zh | 63188322888bbca124b87f51d7d3eeaad59765d1 | [
"MIT"
] | 527 | 2021-04-02T05:38:25.000Z | 2022-03-31T04:16:24.000Z | src/safe-guides/coding_practice/data-type/enum/G.TYP.Enum.05.md | SparrowLii/rust-coding-guidelines-zh | 63188322888bbca124b87f51d7d3eeaad59765d1 | [
"MIT"
] | 18 | 2021-11-01T11:39:27.000Z | 2022-02-22T03:04:17.000Z | src/safe-guides/coding_practice/data-type/enum/G.TYP.Enum.05.md | SparrowLii/rust-coding-guidelines-zh | 63188322888bbca124b87f51d7d3eeaad59765d1 | [
"MIT"
## G.TYP.Enum.05 Publicly exported enums should add the `#[non_exhaustive]` attribute
**[Level]** Recommended
**[Description]**
For an enum that is exported publicly, the `#[non_exhaustive]` attribute should be used to preserve stability, so that future changes to the enum's variants do not break downstream users.
**[Bad Example]**
Before the `#[non_exhaustive]` attribute was stabilized, the community had a conventional idiom for preventing downstream code from exhaustively handling an enum. The `manual_non_exhaustive` lint can detect this kind of idiom.
```rust
enum E {
A,
B,
#[doc(hidden)]
    _C, // A variant prefixed with an underscore serves as a hidden variant that is not shown publicly
}
// Users cannot exhaustively match this enum, which keeps the public enum stable.
```
**[Good Example]**
```rust
#[non_exhaustive]
enum E {
A,
B,
}
```
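To see the effect from the consumer's side: outside the defining crate, a `match` on a non-exhaustive enum must include a wildcard arm, which is what lets the library add variants later without a breaking change. A minimal sketch — the `Error` enum and `describe` function are illustrative, not part of the guideline:

```rust
#[non_exhaustive]
pub enum Error {
    NotFound,
    PermissionDenied,
}

// In a downstream crate, this match compiles only because of the `_` arm;
// the defining crate may add new variants without breaking this code.
pub fn describe(e: &Error) -> &'static str {
    match e {
        Error::NotFound => "not found",
        Error::PermissionDenied => "permission denied",
        _ => "unknown error",
    }
}

fn main() {
    println!("{}", describe(&Error::NotFound)); // prints "not found"
}
```

Inside the defining crate itself the wildcard arm is not required; the restriction applies only to external consumers.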
**[Lint Detection]**
| lint name | Detectable by Clippy | Detectable by Rustc | Lint Group | level |
| ------------------------------------------------------------ | ------------- | ------------ | ----------- | ----- |
| [exhaustive_enums](https://rust-lang.github.io/rust-clippy/master/#exhaustive_enums) | yes | no | restriction | allow |
| [manual_non_exhaustive](https://rust-lang.github.io/rust-clippy/master/#manual_non_exhaustive) | yes | no | style | warn |
a91f39fac4142b7a0455d74577a37575b9bb46bb | 1,609 | md | Markdown | README.md | ElTuna/dbd.ts-documentation | 7a89433d0d5515835ff78223f9912e94b86aa631 | [
"MIT"
] | 3 | 2021-10-31T11:49:01.000Z | 2021-12-19T18:33:50.000Z | README.md | ElTuna/dbd.ts-documentation | 7a89433d0d5515835ff78223f9912e94b86aa631 | [
"MIT"
] | 1 | 2021-11-08T20:33:07.000Z | 2021-11-08T20:33:07.000Z | README.md | ElTuna/dbd.ts-documentation | 7a89433d0d5515835ff78223f9912e94b86aa631 | [
"MIT"
] | 3 | 2021-11-08T15:35:04.000Z | 2021-12-19T18:33:52.000Z | # ✨ dbd.ts
[](https://discord.gg/HMUfMXDQsV)
[](https://www.npmjs.com/package/dbd.ts)
[](https://www.npmjs.com/package/dbd.ts)\
DBD.TS is a simple feature-rich package for creating Discord bots.
<br />
<p>
<a href="https://discord.gg/HMUfMXDQsV"><img src="https://cdn.discordapp.com/attachments/838976217561563197/869269773589049374/68747470733a2f2f63646e2e646973636f72646170702e636f6d2f6174746163686d656e74732f3830343530353333353339.png" alt="dbd.ts" /></a>
</p>
### ⚙️ Installation
**Node.JS 16.6.0 or newer is required.**
```sh-session
npm install dbd.ts
```
### 🛠️ Main File
Once DBD.TS has been installed, you can paste in and modify this example in your `index.js` file.
```javascript
const dbd = require("dbd.ts")
const bot = new dbd.Bot({
intents: ["GUILDS", "GUILD_MESSAGES"],
prefix: "PREFIX"
})
bot.addEvent([
"onMessage",
"onInteraction"
])
bot.commands.add({
type: "basicCommand",
name: "ping",
code: `🏓 Pong! $pingms`
})
bot.commands.add({
type: "basicCommand",
name: "eval",
code: `$onlyForIDs[$botOwnerID;]
$eval[$message]`
})
bot.login("TOKEN")
```
> 'PREFIX' must be replaced with a prefix. 'TOKEN' must be replaced with your bot's token.
### 🔧 Support
If you need support or have questions, you can join our [Discord Server](https://discord.gg/HMUfMXDQsV). We are happy to help!
a91f990d07d9859aa8e58ee5efed325a730d3cf9 | 5,898 | md | Markdown | _posts/2015-05-21-intersection-of-two-sets.md | likejazz/likejazz.github.io | 4e3d384c9cf81417fcfe52c7700940f4d2b9b8f8 | [
"MIT"
] | 28 | 2017-02-07T22:06:37.000Z | 2021-01-18T09:10:26.000Z | _posts/2015-05-21-intersection-of-two-sets.md | likejazz/likejazz.github.io | 4e3d384c9cf81417fcfe52c7700940f4d2b9b8f8 | [
"MIT"
] | 21 | 2017-11-02T04:19:24.000Z | 2022-03-10T14:45:40.000Z | _posts/2015-05-21-intersection-of-two-sets.md | likejazz/likejazz.github.io | 4e3d384c9cf81417fcfe52c7700940f4d2b9b8f8 | [
"MIT"
] | 4 | 2018-03-22T19:44:40.000Z | 2020-04-30T21:24:25.000Z | ---
layout: post
title: Optimizing the Intersection of Two Sets in Java
tags: ["Software Engineering"]
---
<div class="message">
Comparing 250 documents against each other for deduplication requires a total of 31,125 comparisons. The CPU cost of this is quite high, so this post explores ways to optimize and improve it.
</div>
<small>
*First draft written on May 19, 2015*
</small>
<!-- TOC -->
- [Introduction](#introduction)
- [Experiments and Evaluation](#experiments-and-evaluation)
- [Straight up Java(JDK)](#straight-up-javajdk)
- [Guava](#guava)
- [No intermediate HashSet(NIH)](#no-intermediate-hashsetnih)
- [Conclusion](#conclusion)
<!-- /TOC -->
## Introduction
The [w-shingling algorithm](http://en.wikipedia.org/wiki/W-shingling), which computes the similarity of two documents, expresses the similarity of documents A and B with a simple intersection and union over their shingle sets, as shown below.

This is identical to the definition of the Jaccard coefficient, so computing the intersection quickly is the key to improving performance. This part is a hot spot that performs more than 30,000 comparisons, and even a small per-call improvement yields a large overall performance gain.
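As a concrete sketch (added for illustration, not code from the original post), the Jaccard similarity of two shingle sets can be computed from the intersection count alone, since |A ∪ B| = |A| + |B| - |A ∩ B|:

```java
import java.util.Set;

public class Jaccard {
    // Count |A ∩ B| without building an intermediate set:
    // iterate the smaller set and probe the larger one.
    static int intersectionSize(Set<Integer> s1, Set<Integer> s2) {
        Set<Integer> small = s1.size() <= s2.size() ? s1 : s2;
        Set<Integer> large = (small == s1) ? s2 : s1;
        int count = 0;
        for (Integer e : small) {
            if (large.contains(e)) count++;
        }
        return count;
    }

    // Jaccard = |A ∩ B| / |A ∪ B|, with |A ∪ B| = |A| + |B| - |A ∩ B|.
    static double jaccard(Set<Integer> a, Set<Integer> b) {
        if (a.isEmpty() && b.isEmpty()) return 1.0;
        int inter = intersectionSize(a, b);
        return (double) inter / (a.size() + b.size() - inter);
    }

    public static void main(String[] args) {
        Set<Integer> a = Set.of(1, 2, 3, 4);
        Set<Integer> b = Set.of(3, 4, 5, 6);
        System.out.println(jaccard(a, b)); // 2 / 6 ≈ 0.333
    }
}
```

Note that the union size is derived arithmetically, so no union set is ever materialized.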
There are [several ways to compute the intersection of two Sets in Java](http://www.leveluplunch.com/java/examples/intersection-of-two-sets/); we will compute the intersection in various ways and measure their performance.
For performance measurement, we take the median rather than the average to exclude outliers. The median method was written as follows, based on [code from Stack Overflow](http://stackoverflow.com/a/11955900/3513266).
{% highlight java %}
private static double median(long[] arr) {
Arrays.sort(arr);
double median;
if (arr.length % 2 == 0)
median = ((double) arr[arr.length / 2] +
(double) arr[arr.length / 2 - 1]
) / 2;
else
median = (double) arr[arr.length / 2];
return median;
}
{% endhighlight %}
Timing was measured in the following form, running each case 100 times and taking the median.
{% highlight java %}
for (int i = 0;i<100;i++) {
long startTimeMillis = System.currentTimeMillis();
// do stuff
...
long lastTime = System.currentTimeMillis();
elapsed[i] = lastTime - startTimeMillis;
}
System.out.println("median(elapsed) = " + median(elapsed));
{% endhighlight %}
## Experiments and Evaluation
### Straight up Java(JDK)
The code using the standard JDK library is as follows.
{% highlight java %}
// JDK standard library
Set<Integer> intersection = new HashSet<Integer>(preSet);
intersection.retainAll(curSet);
intersectionSize = intersection.size();
{% endhighlight %}
The `retainAll()` method is a standard JDK method provided by `java.util.Collection` that keeps only the matching elements of a Set. In other words, it keeps only the elements in the intersection and mutates the original, so to avoid losing the original you must create a separate HashSet to hold the result, as shown above.
This standard method was used originally, and over 100 runs it produced a median response time of 109ms.
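The following standalone sketch (not from the original benchmark) shows why the copy is needed: `retainAll()` mutates the set it is called on.

```java
import java.util.HashSet;
import java.util.Set;

public class RetainAllDemo {
    public static void main(String[] args) {
        Set<Integer> preSet = new HashSet<>(Set.of(1, 2, 3));
        Set<Integer> curSet = Set.of(2, 3, 4);

        // Copy first: retainAll() mutates the set it is called on.
        Set<Integer> intersection = new HashSet<>(preSet);
        intersection.retainAll(curSet);

        System.out.println(intersection.size()); // 2 (elements 2 and 3)
        System.out.println(preSet.size());       // 3 -- the original is intact
    }
}
```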
### Guava
{% highlight java %}
// Guava
Set<Integer> intersection = Sets.intersection(curSet, preSet);
intersectionSize = intersection.size();
{% endhighlight %}
Guava provides a good intersection method called `Sets.intersection`, which performs better when the smaller of the two Sets is passed first.
> Note: The returned view performs slightly better when set1 is the smaller of the two sets. If you have reason to believe one of your sets will generally be smaller than the other, pass it first.
Likewise, the median over 100 runs was 58ms. Features provided by Guava generally show good performance, and indeed this is more than a 1.8x improvement over the original.
### No intermediate HashSet(NIH)
The response time was now much faster than before, but while wondering whether there was a way to squeeze out more performance, I found a good question on Stack Overflow.
[Efficiently compute Intersection of two Sets in Java?](http://stackoverflow.com/questions/7574311/efficiently-compute-intersection-of-two-sets-in-java)
Moreover, the author kindly ran a performance test and presented the results below.
{% highlight bash %}
Running tests for 1x1
IntersectTest$PostMethod@6cc2060e took 13.9808544 count=1000000
IntersectTest$MyMethod1@7d38847d took 2.9893732 count=1000000
IntersectTest$MyMethod2@9826ac5 took 7.775945 count=1000000
Running tests for 1x10
IntersectTest$PostMethod@67fc9fee took 12.4647712 count=734000
IntersectTest$MyMethod1@7a67f797 took 3.1567252 count=734000
IntersectTest$MyMethod2@3fb01949 took 6.483941 count=734000
Running tests for 1x100
IntersectTest$PostMethod@16675039 took 11.3069326 count=706000
IntersectTest$MyMethod1@58c3d9ac took 2.3482693 count=706000
IntersectTest$MyMethod2@2207d8bb took 4.8687103 count=706000
Running tests for 1x1000
IntersectTest$PostMethod@33d626a4 took 10.28656 count=729000
IntersectTest$MyMethod1@3082f392 took 2.3478658 count=729000
IntersectTest$MyMethod2@65450f1f took 4.109205 count=729000
...
{% endhighlight %}
- `PostMethod`: the `retainAll()` method provided by the JDK
- `MyMethod1`: the No intermediate HashSet (NIH) approach
- `MyMethod2`: the With intermediate HashSet (WIH) approach
Looking at the results, `retainAll()` is the slowest in every case, and the NIH approach is about twice as fast as WIH. WIH is slower by exactly the cost of the added intermediate step.
According to this, the key to the performance gain was eliminating the intermediate step (No intermediate). That is, if we skip creating a Set to hold the intersection and simply compare the sets to extract a count, the response time is more than 3x faster than `retainAll()` and more than 2x faster than building the intermediate set. As mentioned earlier, this may be negligible for a single comparison, but this code is a hot spot that performs more than 30,000 comparisons, and a small per-call improvement yields a large overall gain.
Furthermore, the w-shingling algorithm only cares about the number of elements in the intersection, not which elements they are, so it is enough to compute just the intersection count as an int, with no intermediate step.
The code for `MyMethod1` is as follows.
{% highlight java %}
// No intermediate HashSet
public static int MyMethod1(Set<Integer> set1, Set<Integer> set2) {
Set<Integer> a;
Set<Integer> b;
if (set1.size() <= set2.size()) {
a = set1;
b = set2;
} else {
a = set2;
b = set1;
}
int count = 0;
for (Integer e : a) {
if (b.contains(e)) {
count++;
}
}
return count;
}
{% endhighlight %}
Likewise, over 100 runs the median was only 32ms, the fastest of the three.
Also, as with Guava, performance is better when the smaller Set comes first. When simple assignment was used without the size comparison, it was slightly slower at 41ms. Calling `size()` several times has no real performance impact, so always compare sizes and put the smaller Set first.
## Conclusion
By switching the intersection computation from the existing JDK approach to the NIH approach, we obtained a **3.4x performance improvement over the original**.
<table>
<thead>
<tr>
<th>Method</th>
<th>Documents / comparisons</th>
<th>Response time</th>
</tr>
</thead>
<tbody>
<tr>
<td>JDK</td>
<td>250 / 31,125</td>
<td>109ms</td>
</tr>
<tr>
<td>Guava</td>
<td>250 / 31,125</td>
<td>58ms</td>
</tr>
<tr>
<td>NIH</td>
<td>250 / 31,125</td>
<td><strong>32ms</strong></td>
</tr>
</tbody>
</table>
| 30.71875 | 261 | 0.680231 | kor_Hang | 0.999949 |
a91fa71186be08a5a50a66abd57f3445730cde22 | 34 | md | Markdown | README.md | izzydoesit/parkBot | 52615a8d62d14a8ee7f1e3212ca847ceb77d564b | [
"Apache-2.0"
] | null | null | null | README.md | izzydoesit/parkBot | 52615a8d62d14a8ee7f1e3212ca847ceb77d564b | [
"Apache-2.0"
] | null | null | null | README.md | izzydoesit/parkBot | 52615a8d62d14a8ee7f1e3212ca847ceb77d564b | [
"Apache-2.0"
] | null | null | null | # parkBot
parking spot-sharing AI
| 11.333333 | 23 | 0.794118 | eng_Latn | 0.707217 |
a9203e33280dc30af1cf19c16ee5d18cb1c022b3 | 1,350 | md | Markdown | README.md | photo/mobile-android | 24d1d105629403ec6fc19aa1f3cb4aa45c80d391 | [
"Apache-2.0"
] | 27 | 2015-01-17T16:14:52.000Z | 2020-09-05T22:18:12.000Z | README.md | photo/mobile-android | 24d1d105629403ec6fc19aa1f3cb4aa45c80d391 | [
"Apache-2.0"
] | 2 | 2015-10-02T15:08:53.000Z | 2017-04-06T08:38:18.000Z | README.md | photo/mobile-android | 24d1d105629403ec6fc19aa1f3cb4aa45c80d391 | [
"Apache-2.0"
] | 23 | 2015-01-14T13:29:15.000Z | 2022-01-19T07:34:21.000Z | # Test coverage :
cd <inside the app directory>
android update project --path .
android update test-project -m <full path to app directory (not just .)> -p ../test/
cd ../test/
ant coverage
# Code Formatting :
- Please use the following XML file as the formatter in Eclipse and configure it to be applied whenever a file is saved.
https://github.com/android/platform_development/raw/master/ide/eclipse/android-formatting.xml
# Facebook SDK :
Install Facebook SDK by cloning the GitHub repository! [GitHub repository: git clone git://github.com/facebook/facebook-android-sdk.git]
# Fake Account Credentials for Trovebox :
The file `FakeAccountTroveboxApi.java` is a fake implementation for the interface `IAccountTroveboxApi.java`.
This fake implementation will return credentials to the site http://apigee.trovebox.com
# Environment :
- To make the environment work, we need to **Add Support Library** in the project `ActionBarSherlock`.
These are the steps on Eclipse:
1. Click with right button in the project `ActionBarSherlock`
2. Select _Android Tools_
3. Select _Add Support Library_
4. Accept and Finish.
# Advanced Environment Setup instructions :
https://github.com/photo/mobile-android/wiki/Eclipse-build-environment-setup
| 40.909091 | 141 | 0.753333 | eng_Latn | 0.795233 |
a920fb00cedb778e804463fab63cb2a655c262fb | 10,433 | md | Markdown | articles/service-fabric/service-fabric-reliable-services-diagnostics.md | sommerfn/azure-docs | ecdf8c2475e910a8ff28a2f9fb8e6515b0cd6d1f | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/service-fabric/service-fabric-reliable-services-diagnostics.md | sommerfn/azure-docs | ecdf8c2475e910a8ff28a2f9fb8e6515b0cd6d1f | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/service-fabric/service-fabric-reliable-services-diagnostics.md | sommerfn/azure-docs | ecdf8c2475e910a8ff28a2f9fb8e6515b0cd6d1f | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Azure Service Fabric Stateful Reliable Services diagnostics | Microsoft Docs
description: Diagnostic functionality for Stateful Reliable Services in Azure Service Fabric
services: service-fabric
documentationcenter: .net
author: dkkapur
manager: chackdan
editor: ''
ms.assetid: ae0e8f99-69ab-4d45-896d-1fa80ed45659
ms.service: service-fabric
ms.devlang: dotnet
ms.topic: conceptual
ms.tgt_pltfrm: NA
ms.workload: NA
ms.date: 8/24/2018
ms.author: dekapur
---
# Diagnostic functionality for Stateful Reliable Services
The Azure Service Fabric Stateful Reliable Services StatefulServiceBase class emits [EventSource](https://msdn.microsoft.com/library/system.diagnostics.tracing.eventsource.aspx) events that can be used to debug the service, provide insights into how the runtime is operating, and help with troubleshooting.
## EventSource events
The EventSource name for the Stateful Reliable Services StatefulServiceBase class is "Microsoft-ServiceFabric-Services." Events from this event source appear in the
[Diagnostics Events](service-fabric-diagnostics-how-to-monitor-and-diagnose-services-locally.md#view-service-fabric-system-events-in-visual-studio) window when the service is being [debugged in Visual Studio](service-fabric-debugging-your-application.md).
Examples of tools and technologies that help in collecting and/or viewing EventSource events are [PerfView](https://www.microsoft.com/download/details.aspx?id=28567),
[Azure Diagnostics](../cloud-services/cloud-services-dotnet-diagnostics.md), and the
[Microsoft TraceEvent Library](https://www.nuget.org/packages/Microsoft.Diagnostics.Tracing.TraceEvent).
## Events
| Event name | Event ID | Level | Event description |
| --- | --- | --- | --- |
| StatefulRunAsyncInvocation |1 |Informational |Emitted when the service RunAsync task is started |
| StatefulRunAsyncCancellation |2 |Informational |Emitted when the service RunAsync task is canceled |
| StatefulRunAsyncCompletion |3 |Informational |Emitted when the service RunAsync task is finished |
| StatefulRunAsyncSlowCancellation |4 |Warning |Emitted when the service RunAsync task takes too long to complete cancellation |
| StatefulRunAsyncFailure |5 |Error |Emitted when the service RunAsync task throws an exception |
## Interpret events
StatefulRunAsyncInvocation, StatefulRunAsyncCompletion, and StatefulRunAsyncCancellation events are useful to the service writer to understand the lifecycle of a service, as well as the timing for when a service starts, cancels, or finishes. This information can be useful when debugging service issues or understanding the service lifecycle.
Service writers should pay close attention to StatefulRunAsyncSlowCancellation and StatefulRunAsyncFailure events because they indicate issues with the service.
StatefulRunAsyncFailure is emitted whenever the service RunAsync() task throws an exception. Typically, an exception thrown indicates an error or bug in the service. Additionally, the exception causes the service to fail, so it is moved to a different node. This operation can be expensive and can delay incoming requests while the service is moved. Service writers should determine the cause of the exception and, if possible, mitigate it.
StatefulRunAsyncSlowCancellation is emitted whenever a cancellation request for the RunAsync task takes longer than four seconds. When a service takes too long to complete cancellation, it affects
the ability of the service to be quickly restarted on another node. This scenario might affect the overall availability of the service.
## Performance counters
The Reliable Services runtime defines the following performance counter categories:
| Category | Description |
| --- | --- |
| Service Fabric Transactional Replicator |Counters specific to the Azure Service Fabric Transactional Replicator |
| Service Fabric TStore |Counters specific to the Azure Service Fabric TStore |
The Service Fabric Transactional Replicator is used by the [Reliable State Manager](service-fabric-reliable-services-reliable-collections-internals.md) to replicate transactions within a given set of [replicas](service-fabric-concepts-replica-lifecycle.md).
The Service Fabric TStore is a component used in [Reliable Collections](service-fabric-reliable-services-reliable-collections-internals.md) for storing and retrieving key-value pairs.
The [Windows Performance Monitor](https://technet.microsoft.com/library/cc749249.aspx) application that is available by default in the Windows operating system can be used to collect and view performance counter data. [Azure Diagnostics](../cloud-services/cloud-services-dotnet-diagnostics.md) is another option for collecting performance counter data and uploading it to Azure tables.
### Performance counter instance names
A cluster that has a large number of reliable services or reliable service partitions will have a large number of transactional replicator performance counter instances. This is also the case for TStore performance counters, but is also multiplied by the number of Reliable Dictionaries and Reliable Queues used. The performance counter instance names can help in identifying the specific [partition](service-fabric-concepts-partitioning.md), service replica, and state provider in the case of TStore, that the performance counter instance is associated with.
#### Service Fabric Transactional Replicator category
For the category `Service Fabric Transactional Replicator`, the counter instance names are in the following format:
`ServiceFabricPartitionId:ServiceFabricReplicaId`
*ServiceFabricPartitionId* is the string representation of the Service Fabric partition ID that the performance counter instance is associated with. The partition ID is a GUID, and its string representation is generated through [`Guid.ToString`](https://msdn.microsoft.com/library/97af8hh4.aspx) with format specifier "D".
*ServiceFabricReplicaId* is the ID associated with a given replica of a reliable service. Replica ID is included in the performance counter instance name to ensure its uniqueness and avoid conflict with other performance counter instances generated by the same partition. Further details about replicas and their role in reliable services can be found [here](service-fabric-concepts-replica-lifecycle.md).
The following counter instance name is typical for a counter under the `Service Fabric Transactional Replicator` category:
`00d0126d-3e36-4d68-98da-cc4f7195d85e:131652217797162571`
In the preceding example, `00d0126d-3e36-4d68-98da-cc4f7195d85e` is the string representation of the Service Fabric partition ID, and `131652217797162571` is the replica ID.
#### Service Fabric TStore category
For the category `Service Fabric TStore`, the counter instance names are in the following format:
`ServiceFabricPartitionId:ServiceFabricReplicaId:ServiceFabricStateProviderId_PerformanceCounterInstanceDifferentiator`
*ServiceFabricPartitionId* is the string representation of the Service Fabric partition ID that the performance counter instance is associated with. The partition ID is a GUID, and its string representation is generated through [`Guid.ToString`](https://msdn.microsoft.com/library/97af8hh4.aspx) with format specifier "D".
*ServiceFabricReplicaId* is the ID associated with a given replica of a reliable service. Replica ID is included in the performance counter instance name to ensure its uniqueness and avoid conflict with other performance counter instances generated by the same partition. Further details about replicas and their role in reliable services can be found [here](service-fabric-concepts-replica-lifecycle.md).
*ServiceFabricStateProviderId* is the ID associated with a state provider within a reliable service. State Provider ID is included in the performance counter instance name to differentiate a TStore from another.
*PerformanceCounterInstanceDifferentiator* is a differentiating ID associated with a performance counter instance within a state provider. This differentiator is included in the performance counter instance name to ensure its uniqueness and avoid conflict with other performance counter instances generated by the same state provider.
The following counter instance name is typical for a counter under the `Service Fabric TStore` category:
`00d0126d-3e36-4d68-98da-cc4f7195d85e:131652217797162571:142652217797162571_1337`
In the preceding example, `00d0126d-3e36-4d68-98da-cc4f7195d85e` is the string representation of the Service Fabric partition ID, `131652217797162571` is the replica ID, `142652217797162571` is the state provider ID, and `1337` is the performance counter instance differentiator.
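As an illustration (this helper is not part of the original article, and its names are hypothetical), the fixed instance-name formats above can be split programmatically:

```java
public class CounterInstanceName {
    // "PartitionId:ReplicaId:StateProviderId_Differentiator" (TStore format).
    static String[] parseTStore(String name) {
        String[] parts = name.split(":");    // partition, replica, "provider_diff"
        String[] tail = parts[2].split("_"); // state provider id, differentiator
        return new String[] { parts[0], parts[1], tail[0], tail[1] };
    }

    // "PartitionId:ReplicaId" (Transactional Replicator format).
    static String[] parseReplicator(String name) {
        return name.split(":");
    }

    public static void main(String[] args) {
        String[] fields = parseTStore(
            "00d0126d-3e36-4d68-98da-cc4f7195d85e:131652217797162571:142652217797162571_1337");
        System.out.println(fields[3]); // 1337
    }
}
```

Splitting on `:` is safe here because partition IDs are GUIDs, which contain only hyphens and hex digits.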
### Transactional Replicator performance counters
The Reliable Services runtime emits the following counters under the `Service Fabric Transactional Replicator` category:
| Counter name | Description |
| --- | --- |
| Begin Txn Operations/sec | The number of new write transactions created per second.|
| Txn Operations/sec | The number of add/update/delete operations performed on reliable collections per second.|
| Incoming Bytes/sec | The number of bytes being flushed to the disk by the Transactional Replicator per second |
| Throttled Operations/sec | The number of operations rejected every second by the Transactional Replicator due to throttling. |
| Avg. Transaction ms/Commit | Average commit latency per transaction in milliseconds |
| Avg. Flush Latency (ms) | Average duration of disk flush operations initiated by the Transactional Replicator in milliseconds |
### TStore performance counters
The Reliable Services runtime emits the following counters under the `Service Fabric TStore` category:
| Counter name | Description |
| --- | --- |
| Item Count | The number of items in the store.|
| Disk Size | The total disk size, in bytes, of checkpoint files for the store.|
| Checkpoint File Write Bytes/sec | The number of bytes written per second for the most recent checkpoint file.|
| Copy Disk Transfer Bytes/sec | The number of disk bytes read (on the primary replica) or written (on a secondary replica) per second during a store copy.|
## Next steps
[EventSource providers in PerfView](https://blogs.msdn.microsoft.com/vancem/2012/07/09/introduction-tutorial-logging-etw-events-in-c-system-diagnostics-tracing-eventsource/)
| 82.149606 | 560 | 0.806288 | eng_Latn | 0.991699 |
a92116048932c18adb2de2d71a29c0192609956a | 59 | md | Markdown | README.md | telminov/cram-word | 786a285adb1fafb0a430ffa97682ac3e84fca0f9 | [
"MIT"
] | null | null | null | README.md | telminov/cram-word | 786a285adb1fafb0a430ffa97682ac3e84fca0f9 | [
"MIT"
] | 14 | 2019-09-06T13:27:07.000Z | 2021-06-10T18:16:44.000Z | README.md | telminov/cram-word | 786a285adb1fafb0a430ffa97682ac3e84fca0f9 | [
"MIT"
] | null | null | null | # cram-word
Memory flash cards for cramming foreign words
| 19.666667 | 46 | 0.813559 | rus_Cyrl | 0.946784 |
a92124e1da5e306e1096865748b6b5f9d8fe3a17 | 1,442 | md | Markdown | fr/_posts/2016-03-29-128-algorithmes.md | jilljenn/tryalgo.org | 5d62c2457d52817de2f189c71fe73410b2cd5a97 | [
"MIT"
] | 18 | 2016-03-03T18:33:25.000Z | 2021-02-06T10:53:48.000Z | fr/_posts/2016-03-29-128-algorithmes.md | o-ikne/tryalgo.org | efeb139ff9711bbedcd6d72d58e42f0e8e1bf9aa | [
"MIT"
] | 7 | 2016-02-04T02:44:54.000Z | 2021-02-11T18:26:49.000Z | fr/_posts/2016-03-29-128-algorithmes.md | o-ikne/tryalgo.org | efeb139ff9711bbedcd6d72d58e42f0e8e1bf9aa | [
"MIT"
] | 12 | 2016-11-24T22:23:50.000Z | 2021-01-26T12:49:30.000Z | ---
layout: fr
title: 128 algorithmes
excerpt_separator: <!--more-->
---
Here are the slides from our [talk](https://paris.numa.co/Evenements/128-algorithmes) on March 29, 2016 at NUMA.
<!--more-->
1. [Introduction](/static/128algos/intro.pdf)
1. [Which data structure should you pick for your grocery shopping?](/static/128algos/structures.pdf)
1. [How to match riders and taxi drivers efficiently?](/static/128algos/taxis.pdf)
1. [tryalgo: from mazes to Paris](/static/128algos/graphes.pdf)
1. [Dynamic programming in ElasticSearch](/static/128algos/elasticsearch.pdf)
1. [Other algorithmic problems](/static/128algos/extras.pdf)
To play with the graph of Paris, it's here:
- [Interactively](http://mybinder.org/repo/jilljenn/128algos), thanks to [Jupyter Notebook](http://jupyter.org) and [Binder](http://mybinder.org)
- [Statically](http://nbviewer.jupyter.org/github/jilljenn/128algos/blob/master/TryAlgo%20in%20Paris.ipynb), which loads faster.
Apart from that, the [Google Code Jam](https://code.google.com/codejam) takes place on Saturday, April 9!
## Other resources
- [Pledge's algorithm](https://interstices.info/jcms/c_46065/l-algorithme-de-pledge) for getting out of a maze every time
- [ElasticSearch's algorithm](https://www.elastic.co/guide/en/elasticsearch/guide/current/fuzziness.html), based on a Levenshtein automaton
| 49.724138 | 149 | 0.74896 | fra_Latn | 0.549646 |
a922c120f262b1c7ad6461e7467acda1d48e1a8b | 3,010 | md | Markdown | README.md | gitawego/ChromecastCordova | 5ded1d0b0e02a03a1a5c4cf2ccfb9550d113c351 | [
"MIT"
] | null | null | null | README.md | gitawego/ChromecastCordova | 5ded1d0b0e02a03a1a5c4cf2ccfb9550d113c351 | [
"MIT"
] | null | null | null | README.md | gitawego/ChromecastCordova | 5ded1d0b0e02a03a1a5c4cf2ccfb9550d113c351 | [
"MIT"
] | 1 | 2019-09-26T17:22:31.000Z | 2019-09-26T17:22:31.000Z | Chromecast - Cordova
=================
A Chromecast plugin for Cordova Android applications.
You can use the Chromecast JavaScript API in a WebView inside a Cordova/PhoneGap application, just as you would in a desktop Chrome application.
## Installation
* prepare libs and tools with this [tutorial](https://developers.google.com/cast/docs/android_sender)
* copy adt_path/sdk/extras/android/support/v7/mediarouter, adt_path/sdk/extras/android/support/v7/appcompat and adt_path/sdk/extras/google/google_play_services/libproject/google-play-services_lib to a custom folder.
* go to each lib folder via terminal, and update the project:
```
android update lib-project -p .
```
* **mediarouter** depends on **appcompat**, so in the **mediarouter** folder, edit **project.properties** and add appcompat as a dependency:
```
android.library.reference.1=path/to/appcompat
```
* after adding the Android platform, edit your_project/platforms/android/project.properties; the content should look like this:
```
android.library.reference.1=CordovaLib
android.library.reference.2=path/to/appcompat
android.library.reference.3=path/to/mediarouter
# Project target.
target=android-19
```
* you can use an after_prepare hook as well to achieve this: just copy hooks/after_prepare/add_libs_to_project.properties.js to your_project/hooks/after_prepare/,
and copy hooks/after_prepare/dependencies.json to your_project/config/
* make sure all the Android build targets are the same (in the project.properties files). For the mediarouter lib, the target must be higher than 18
```
target=android-19
```
## Now you can use Cordova to build the project.
About [SesamTV](http://www.sesamtv.com)
==========
Since 2003, Sesam TV has pioneered home media convergence with its famous SesamTV Media Center software and more than 2 millions users. Bringing innovation, experience and products to the connected home, SesamTV empowers leading TV Operators and actors in the industry.
Licence :
Copyright (C) <2013> <SesamTV>
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
| 54.727273 | 464 | 0.778738 | eng_Latn | 0.806866 |
a922f82fa68f39709038eb87d3dcfa1f99ee4c8b | 1,125 | md | Markdown | README.md | dbostr/weidu-syntax | 60e938bb9c85955f4f73796d95346314b6ed6671 | [
"MIT"
] | null | null | null | README.md | dbostr/weidu-syntax | 60e938bb9c85955f4f73796d95346314b6ed6671 | [
"MIT"
] | 1 | 2018-12-22T09:09:50.000Z | 2018-12-28T14:37:07.000Z | README.md | dbostr/weidu-syntax | 60e938bb9c85955f4f73796d95346314b6ed6671 | [
"MIT"
] | null | null | null | # WeiDU syntax highlighting README
Syntax highlighting for WeiDU code (.tp2, .d, .2da, .tra) for Visual Studio Code.
## Features
### Highlighting

### Smart brackets
- ~~
- /* */
## Requirements
VS Code
## Installation
Clone to .vscode/extensions folder (default location for Windows is %userprofile%/.vscode/extensions).
- Folder may need to be named 'weidu'.
- Requires restart of VS Code.
- Changes to settings locally require Ctrl-Shift-P >Reload Window.
TODO: Make available from VS Code extensions.
## Extension Settings
None for now.
## Known Issues
- Some WeiDU functions are not highlighted
- Highlighting has been selected more for the colors it produces in Dark+ than for the proper syntax type, e.g. translation @2 is set to variable.other.less, and %variable% is set to entity.other.attribute-name
- TODO: Determine which functions should be highlighted and which should not
## Release Notes
### 0.0.1
Initial release. Rudimentary highlighting based on Dark+.
-----------------------------------------------------------------------------------------------------------
| 23.93617 | 193 | 0.672 | eng_Latn | 0.972209 |
a92366814eb23c86b65577dd17ce1b8b0ed053d6 | 3,126 | md | Markdown | README.md | Denis-leparteleg/oneminutepitch | 183e330e945b79234fa724d04d89b38b51e066f1 | [
"MIT"
] | null | null | null | README.md | Denis-leparteleg/oneminutepitch | 183e330e945b79234fa724d04d89b38b51e066f1 | [
"MIT"
] | null | null | null | README.md | Denis-leparteleg/oneminutepitch | 183e330e945b79234fa724d04d89b38b51e066f1 | [
"MIT"
] | null | null | null | # oneminutepitch
This is an application that helps users use that one minute wisely. Users submit their one-minute pitches, and other users vote on them and leave comments to give feedback.
## Specifications
+ Users can see the pitches other people have posted.
+ Users can vote on pitches they like, giving an upvote or a downvote.
+ Users must be signed in to leave a comment.
+ Users receive a welcome email once they sign up.
+ Users can view the pitches they have created on their profile page.
+ Users can comment on the different pitches and leave feedback.
+ Users can submit a pitch in any category.
+ Users can view the different categories.
## Setup/Installation Requirements
* With the latest web browser installed on your computer or tablet, you can access the site from anywhere. The only installation needed is a web browser such as Chrome, Internet Explorer, or Firefox.
## Known Bugs
There are no unresolved issues in regards to this code that I know of.
## Prerequisites
To be able to run this web application, you will need to have a web browser, preferably Google Chrome.
Just open the url link deployed on GitHub and run it.
## Technologies Used
I worked on this code on Linux. I used Python as my primary language and the Flask framework to build the application. I worked on the code using VS Code, Git, PostgreSQL, and Heroku.
## Support and contact details
If there are any issues on how the code runs, concerns, questions or ideas, kindly reach out to me on my email address;
denislepartelegke@gmail.com.
## <a href="https://denis-oneminutepitch.herokuapp.com/">live link to site</a>
## Authors
* **Denis Leparteleg** - [Denis-leparteleg]()
## License
This project is licensed under the MIT License -
MIT License
Copyright (c) 2021 Denis Leparteleg
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
Copyright (c) 2021 **Denis Leparteleg**
## Acknowledgments
* A big up to Moringa a School for the lessons on how to program.
* Many thanks to my classmates and Technical mentors at Moringa School for working together
with me and assisting where the code didn't work properly.
| 45.970588 | 203 | 0.78247 | eng_Latn | 0.992003 |
a9237410374f385271b0f63a36df05f9e3feef10 | 1,888 | md | Markdown | README.md | mustafa01991/6502CPUEmulator | 7cf6ff95e7e1c8d2d003e17d2fffe34281ecf581 | [
"MIT"
] | null | null | null | README.md | mustafa01991/6502CPUEmulator | 7cf6ff95e7e1c8d2d003e17d2fffe34281ecf581 | [
"MIT"
] | null | null | null | README.md | mustafa01991/6502CPUEmulator | 7cf6ff95e7e1c8d2d003e17d2fffe34281ecf581 | [
"MIT"
] | null | null | null | # 6502 CPU Emulator
A fully working [6502 CPU](https://en.wikipedia.org/wiki/MOS_Technology_6502) emulator written in C# with .NET Core 3.0.
## Getting Started
Example of setting up the CPU with 65,535 bytes of RAM and a program that increases register X indefinitely.
```csharp
static void Main(string[] args) {
var cpu = new Cpu();
// This will make the cpu output its state to the console each step
cpu.debug = true;
// Create bus with memory of size: 0xFFFF = 65,535 bytes
var bus = new Bus(cpu, 0xFFFF);
// Start reading code at address 0
bus.Write(0xFFFC, 0); // The cpu will start by jumping to the address at 0xFFFC
bus.Write(0xFFFD, 0);
// Example Code of adding to X
// Increment register X by 1
bus.Write(0x0000, 0xE8); // INX
// Jump to address 0
bus.Write(0x0001, 0x4C); // JMP
bus.Write(0x0002, 0x00); // 0 (abs address)
// Run with each Instruction step taking 1000 milliseconds
bus.Run(1000);
}
```
This will output:
```
=== CPU State
A=$00 X=$00 Y=$00 SP=$FF PC=$01
NV-BDIZC
00100000
=== Next instruction
INX Implied
=== CPU State
A=$00 X=$01 Y=$00 SP=$FF PC=$02
NV-BDIZC
00100000
=== Next instruction
JMP Absolute 004C
=== CPU State
A=$00 X=$01 Y=$00 SP=$FF PC=$4D
NV-BDIZC
00100000
=== Next instruction
BRK Implied
=== CPU State
NV-BDIZC
00110000
=== Next instruction
INX Implied
=== CPU State
A=$00 X=$02 Y=$00 SP=$FC PC=$02
NV-BDIZC
00110000
```
The CPU state shows the CPU registers:
* A // accumulator
* X // X index
* Y // Y index
* SP // Stack pointer
* PC // Program counter
* NV-BDIZC // The CPU Flags
To add IO devices to the system, override or change the `Write()` and `Read()` methods in the `Bus` class.
## Authors
* **Mustafa Al-Ghayeb** - [mustafa01991](https://github.com/mustafa01991)
## License
This project is licensed under the MIT License.
# flutterapp
flutterapp
# 90-Minute PyTorch Blitz
A repository with an example for training a basic denoising CNN in PyTorch.
Parts of this tutorial are based on the 60-Minute PyTorch Blitz by Soumith
Chintala, available at
```url
https://pytorch.org/tutorials/beginner/deep_learning_60min_blitz.html
```
## Goals
1) Introduce attendees to PyTorch.
2) Provide code for interacting with Matlab data.
3) Demonstrate coding and training of a basic, 4-layer CNN.
## Installation
First, you'll need to install Anaconda from here:
```url
https://www.anaconda.com/distribution/
```
Anaconda is available for Windows, Mac, and Linux.
Then, you'll need to download this repository. After you download this
repository, open the Anaconda prompt (on Linux or MacOS, this should just be
a terminal. On Windows it's a program called "Anaconda Prompt") and navigate
to the directory where you downloaded the repository. Then, type the following:
```sh
conda create --name pytorch_tutorial
conda activate pytorch_tutorial
bash anaconda_setup.bash
```
If you are on Windows, you can copy-paste the contents of anaconda_setup.bash
into your Anaconda prompt. You should get a bunch of installation messages.
After this is complete, you should verify your installation. To do this, first
type `python`. Then, once the Python interpreter is running, type
```python
import torch
print(torch.__version__)
```
If you see 1.2.0 or greater for the version, you should be able to run all
examples in this repository.
## Basic Example
### Data
Data is automatically loaded from a ```pytorch_tutorial_data/``` folder in the
root directory of the repository.
Data for this code is provided internally to NYU researchers. For external
researchers, executing the examples in this repository can be done with any set
of multicoil, Cartesian k-space data. One such set of data can be downloaded at
```https://fastmri.med.nyu.edu/``` (with some conversions for the dataloader
in this repository). The included dataloader expects the raw data to be stored
as contiguous, multicoil complex Matlab arrays. Behavior can be inferred by
inspecting ```data/kneedata.py``` and the corresponding transform modules.
### Running the example
Jupyter notebooks can be found in ```tensors_and_autograd_tutorial.ipynb``` and
```neural_networks_tutorial.ipynb``` in the base directory. These examples are
primarily meant to be didactic - to fully explore the code, we recommend
running the main function.
To run the main example, after installing Anaconda with the required packages
you should just have to run
```sh
python denoise_main.py
```
After that, the training will begin. Outputs from the training are logged using
tensorboard into a new `logs/` directory in the folder, along with the best
model. If the training is interrupted, rerunning `denoise_main.py` will search
the `logs/` folder for the previous best model and continue training from
there.
## Pivotal Manager
---
UID: NF:ws2tcpip.getaddrinfo
title: getaddrinfo function (ws2tcpip.h)
description: Provides protocol-independent translation from an ANSI host name to an address.
helpviewer_keywords: ["GetAddrInfoA","_win32_getaddrinfo_2","getaddrinfo","getaddrinfo function [Winsock]","winsock.getaddrinfo_2","ws2tcpip/getaddrinfo"]
old-location: winsock\getaddrinfo_2.htm
tech.root: WinSock
ms.assetid: 7034b866-346e-4a3b-b81b-72816d95b1d6
ms.date: 12/05/2018
ms.keywords: GetAddrInfoA, _win32_getaddrinfo_2, getaddrinfo, getaddrinfo function [Winsock], winsock.getaddrinfo_2, ws2tcpip/getaddrinfo
req.header: ws2tcpip.h
req.include-header:
req.target-type: Windows
req.target-min-winverclnt: Windows 8.1, Windows Vista [desktop apps \| UWP apps]
req.target-min-winversvr: Windows Server 2003 [desktop apps \| UWP apps]
req.kmdf-ver:
req.umdf-ver:
req.ddi-compliance:
req.unicode-ansi:
req.idl:
req.max-support:
req.namespace:
req.assembly:
req.type-library:
req.lib: Ws2_32.lib
req.dll: Ws2_32.dll
req.irql:
targetos: Windows
req.typenames:
req.redist:
ms.custom: 19H1
f1_keywords:
- getaddrinfo
- ws2tcpip/getaddrinfo
dev_langs:
- c++
topic_type:
- APIRef
- kbSyntax
api_type:
- DllExport
api_location:
- Ws2_32.dll
api_name:
- getaddrinfo
---
# getaddrinfo function
## -description
The
<b>getaddrinfo</b> function provides protocol-independent translation from an ANSI host name to an address.
## -parameters
### -param pNodeName [in, optional]
A pointer to a <b>NULL</b>-terminated ANSI string that contains a host (node) name or a numeric host address string. For the Internet protocol, the numeric host address string is a dotted-decimal IPv4 address or an IPv6 hex address.
### -param pServiceName [in, optional]
A pointer to a <b>NULL</b>-terminated ANSI string that contains either a service name or port number represented as a string.
A service name is a string alias for a port number. For example, “http” is an alias for port 80 defined by the Internet Engineering Task Force (IETF) as the default port used by web servers for the HTTP protocol. Possible values for the <i>pServiceName</i> parameter when a port number is not specified are listed in the following file:
<code>%WINDIR%\system32\drivers\etc\services</code>
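
As a minimal sketch of the two forms the service argument can take, the snippet below passes the port first as a decimal string and then as the service name "http". It is shown in the portable POSIX spelling of the call (`netdb.h`); a Windows build would instead include `winsock2.h`/`ws2tcpip.h`, call <b>WSAStartup</b> first, and link with Ws2_32.lib:

```c
#include <arpa/inet.h>
#include <assert.h>
#include <netdb.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

int main(void)
{
    struct addrinfo hints, *result = NULL;
    struct sockaddr_in *sa;

    memset(&hints, 0, sizeof(hints));
    hints.ai_family = AF_INET;
    hints.ai_socktype = SOCK_STREAM;
    hints.ai_flags = AI_NUMERICHOST;    /* node is a literal address; no lookup */

    /* The service argument may be a port number spelled out as a string... */
    int rc = getaddrinfo("127.0.0.1", "8080", &hints, &result);
    assert(rc == 0);
    sa = (struct sockaddr_in *) result->ai_addr;
    printf("port from \"8080\": %d\n", ntohs(sa->sin_port));
    freeaddrinfo(result);

    /* ...or a service name, resolved through the local services database
       (this second lookup depends on an "http" entry being present). */
    rc = getaddrinfo("127.0.0.1", "http", &hints, &result);
    if (rc == 0) {
        sa = (struct sockaddr_in *) result->ai_addr;
        printf("port from \"http\": %d\n", ntohs(sa->sin_port));
        freeaddrinfo(result);
    }
    return 0;
}
```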
### -param pHints [in, optional]
A pointer to an
<a href="/windows/desktop/api/ws2def/ns-ws2def-addrinfoa">addrinfo</a> structure that provides hints about the type of socket the caller supports.
The <b>ai_addrlen</b>, <b>ai_canonname</b>, <b>ai_addr</b>, and <b>ai_next</b> members of the <a href="/windows/desktop/api/ws2def/ns-ws2def-addrinfoa">addrinfo</a> structure pointed to by the <i>pHints</i> parameter must be zero or <b>NULL</b>. Otherwise the <b>getaddrinfo</b> function will fail with <a href="/windows/desktop/WinSock/windows-sockets-error-codes-2">WSANO_RECOVERY</a>.
See the Remarks for more details.
### -param ppResult [out]
A pointer to a linked list of one or more
<a href="/windows/desktop/api/ws2def/ns-ws2def-addrinfoa">addrinfo</a> structures that contains response information about the host.
## -returns
Success returns zero. Failure returns a nonzero Windows Sockets error code, as found in the
<a href="/windows/desktop/WinSock/windows-sockets-error-codes-2">Windows Sockets Error Codes</a>.
Most nonzero error codes returned by the
<b>getaddrinfo</b> function map to the set of errors outlined by Internet Engineering Task Force (IETF) recommendations. The following table lists these error codes and their WSA equivalents. It is recommended that the WSA error codes be used, as they offer familiar and comprehensive error information for Winsock programmers.
<table>
<tr>
<th>Error value</th>
<th>WSA equivalent</th>
<th>Description</th>
</tr>
<tr>
<td>EAI_AGAIN</td>
<td><a href="/windows/desktop/WinSock/windows-sockets-error-codes-2">WSATRY_AGAIN</a></td>
<td>A temporary failure in name resolution occurred.</td>
</tr>
<tr>
<td>EAI_BADFLAGS</td>
<td><a href="/windows/desktop/WinSock/windows-sockets-error-codes-2">WSAEINVAL</a></td>
<td>An invalid value was provided for the <b>ai_flags</b> member of the <i>pHints</i> parameter.</td>
</tr>
<tr>
<td>EAI_FAIL</td>
<td><a href="/windows/desktop/WinSock/windows-sockets-error-codes-2">WSANO_RECOVERY</a></td>
<td>A nonrecoverable failure in name resolution occurred.</td>
</tr>
<tr>
<td>EAI_FAMILY</td>
<td><a href="/windows/desktop/WinSock/windows-sockets-error-codes-2">WSAEAFNOSUPPORT</a></td>
<td>The <b>ai_family</b> member of the <i>pHints</i> parameter is not supported.</td>
</tr>
<tr>
<td>EAI_MEMORY</td>
<td><a href="/windows/desktop/WinSock/windows-sockets-error-codes-2">WSA_NOT_ENOUGH_MEMORY</a></td>
<td>A memory allocation failure occurred.</td>
</tr>
<tr>
<td>EAI_NONAME</td>
<td><a href="/windows/desktop/WinSock/windows-sockets-error-codes-2">WSAHOST_NOT_FOUND</a></td>
<td>The name does not resolve for the supplied parameters or the <i>pNodeName</i> and <i>pServiceName</i> parameters were not provided.</td>
</tr>
<tr>
<td>EAI_SERVICE</td>
<td><a href="/windows/desktop/WinSock/windows-sockets-error-codes-2">WSATYPE_NOT_FOUND</a></td>
<td>The <i>pServiceName</i> parameter is not supported for the specified <b>ai_socktype</b> member of the <i>pHints</i> parameter.</td>
</tr>
<tr>
<td>EAI_SOCKTYPE</td>
<td><a href="/windows/desktop/WinSock/windows-sockets-error-codes-2">WSAESOCKTNOSUPPORT</a></td>
<td>The <b>ai_socktype</b> member of the <i>pHints</i> parameter is not supported.</td>
</tr>
</table>
Use the
<a href="/windows/desktop/api/ws2tcpip/nf-ws2tcpip-gai_strerrora">gai_strerror</a> function to print error messages based on the EAI codes returned by the
<b>getaddrinfo</b> function. The
<b>gai_strerror</b> function is provided for compliance with IETF recommendations, but it is not thread safe. Therefore, use of traditional Windows Sockets functions such as
<a href="/windows/desktop/api/winsock/nf-winsock-wsagetlasterror">WSAGetLastError</a> is recommended.
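
A short sketch of this error-reporting path follows, using the portable POSIX form of the API (on Windows you would include `ws2tcpip.h`, call <b>WSAStartup</b> first, and, per the recommendation above, prefer <b>WSAGetLastError</b> over <b>gai_strerror</b> in multithreaded code):

```c
#include <assert.h>
#include <netdb.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

int main(void)
{
    struct addrinfo hints, *result = NULL;

    memset(&hints, 0, sizeof(hints));
    hints.ai_flags = AI_NUMERICHOST;    /* demand a literal numeric address */

    /* "not-an-address" is not numeric, so the call must fail (EAI_NONAME). */
    int rc = getaddrinfo("not-an-address", NULL, &hints, &result);
    assert(rc != 0);

    /* gai_strerror maps the EAI_* code to a human-readable message. */
    fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(rc));
    printf("done\n");
    return 0;
}
```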
<table>
<tr>
<th>Error code</th>
<th>Meaning</th>
</tr>
<tr>
<td width="40%">
<dl>
<dt><b><a href="/windows/desktop/WinSock/windows-sockets-error-codes-2">WSA_NOT_ENOUGH_MEMORY</a></b></dt>
</dl>
</td>
<td width="60%">
There was insufficient memory to perform the operation.
</td>
</tr>
<tr>
<td width="40%">
<dl>
<dt><b><a href="/windows/desktop/WinSock/windows-sockets-error-codes-2">WSAEAFNOSUPPORT</a></b></dt>
</dl>
</td>
<td width="60%">
An address incompatible with the requested protocol was used. This error is returned if the <b>ai_family</b> member of the
<a href="/windows/desktop/api/ws2def/ns-ws2def-addrinfoa">addrinfo</a> structure pointed to by the <i>pHints</i> parameter is not supported.
</td>
</tr>
<tr>
<td width="40%">
<dl>
<dt><b><a href="/windows/desktop/WinSock/windows-sockets-error-codes-2">WSAEINVAL</a></b></dt>
</dl>
</td>
<td width="60%">
An invalid argument was supplied. This error is returned if an invalid value was provided for the <b>ai_flags</b> member of the
<a href="/windows/desktop/api/ws2def/ns-ws2def-addrinfoa">addrinfo</a> structure pointed to by the <i>pHints</i> parameter.
</td>
</tr>
<tr>
<td width="40%">
<dl>
<dt><b><a href="/windows/desktop/WinSock/windows-sockets-error-codes-2">WSAESOCKTNOSUPPORT</a></b></dt>
</dl>
</td>
<td width="60%">
The support for the specified socket type does not exist in this address family. This error is returned if the <b>ai_socktype</b> member of the
<a href="/windows/desktop/api/ws2def/ns-ws2def-addrinfoa">addrinfo</a> structure pointed to by the <i>pHints</i> parameter is not supported.
</td>
</tr>
<tr>
<td width="40%">
<dl>
<dt><b><a href="/windows/desktop/WinSock/windows-sockets-error-codes-2">WSAHOST_NOT_FOUND</a></b></dt>
</dl>
</td>
<td width="60%">
No such host is known. This error is returned if the name does not resolve for the supplied parameters or the <i>pNodeName</i> and <i>pServiceName</i> parameters were not provided.
</td>
</tr>
<tr>
<td width="40%">
<dl>
<dt><b><a href="/windows/desktop/WinSock/windows-sockets-error-codes-2">WSANO_DATA</a></b></dt>
</dl>
</td>
<td width="60%">
The requested name is valid, but no data of the requested type was found.
</td>
</tr>
<tr>
<td width="40%">
<dl>
<dt><b><a href="/windows/desktop/WinSock/windows-sockets-error-codes-2">WSANO_RECOVERY</a></b></dt>
</dl>
</td>
<td width="60%">
A nonrecoverable error occurred during a database lookup. This error is returned if nonrecoverable error in name resolution occurred.
</td>
</tr>
<tr>
<td width="40%">
<dl>
<dt><b><a href="/windows/desktop/WinSock/windows-sockets-error-codes-2">WSANOTINITIALISED</a></b></dt>
</dl>
</td>
<td width="60%">
A successful
<a href="/windows/desktop/api/winsock/nf-winsock-wsastartup">WSAStartup</a> call must occur before using this function.
</td>
</tr>
<tr>
<td width="40%">
<dl>
<dt><b><a href="/windows/desktop/WinSock/windows-sockets-error-codes-2">WSATRY_AGAIN</a></b></dt>
</dl>
</td>
<td width="60%">
This is usually a temporary error during hostname resolution and means that the local server did not receive a response from an authoritative server. This error is returned when a temporary failure in name resolution occurred.
</td>
</tr>
<tr>
<td width="40%">
<dl>
<dt><b><a href="/windows/desktop/WinSock/windows-sockets-error-codes-2">WSATYPE_NOT_FOUND</a></b></dt>
</dl>
</td>
<td width="60%">
The specified class was not found. The <i>pServiceName</i> parameter is not supported for the specified <b>ai_socktype</b> member of the
<a href="/windows/desktop/api/ws2def/ns-ws2def-addrinfoa">addrinfo</a> structure pointed to by the <i>pHints</i> parameter.
</td>
</tr>
</table>
## -remarks
The <b>getaddrinfo</b> function is the ANSI version of a function that provides protocol-independent translation from host name to address. The Unicode version of this function is <a href="/windows/desktop/api/ws2tcpip/nf-ws2tcpip-getaddrinfow">GetAddrInfoW</a>. Developers are encouraged to use the <b>GetAddrInfoW</b> Unicode function rather than the <b>getaddrinfo</b> ANSI function.
The <b>getaddrinfo</b> function returns results for the <b>NS_DNS</b> namespace. The <b>getaddrinfo</b> function aggregates all responses if more than
one namespace provider returns information. For use with the IPv6 and IPv4 protocols, name resolution can be performed by the Domain Name System (DNS), a local <i>hosts</i> file, or other naming mechanisms for the <b>NS_DNS</b> namespace.
Another name that can be used for the <b>getaddrinfo</b> function is <b>GetAddrInfoA</b>. Macros in the <i>Ws2tcpip.h</i> header file define <b>GetAddrInfoA</b> to <b>getaddrinfo</b>.
Macros in the <i>Ws2tcpip.h</i> header file define a mixed-case function name of <b>GetAddrInfo</b> and a <b>ADDRINFOT</b> structure. This <b>GetAddrInfo</b> function should be called with the <i>pNodeName</i> and <i>pServiceName</i> parameters of a pointer of type <b>TCHAR</b> and the <i>pHints</i> and <i>ppResult</i> parameters of a pointer of type <b>ADDRINFOT</b>. When UNICODE or _UNICODE is not defined, <b>GetAddrInfo</b> is defined to <b>getaddrinfo</b>, the ANSI version of the function, and <b>ADDRINFOT</b> is defined to the <a href="/windows/desktop/api/ws2def/ns-ws2def-addrinfoa">addrinfo</a> structure. When <b>UNICODE</b> or <b>_UNICODE</b> is defined, <b>GetAddrInfo</b> is defined to <a href="/windows/desktop/api/ws2tcpip/nf-ws2tcpip-getaddrinfow">GetAddrInfoW</a>, the Unicode version of the function, and <b>ADDRINFOT</b> is defined to the <a href="/windows/desktop/api/ws2def/ns-ws2def-addrinfow">addrinfoW</a> structure.
The parameter names and parameter types for the <b>getaddrinfo</b> function defined in the <i>Ws2tcpip.h</i> header file in the Platform Software Development Kit (SDK) for Windows Server 2003 and Windows XP were different.
One or both of the <i>pNodeName</i> or <i>pServiceName</i> parameters must point to a <b>NULL</b>-terminated ANSI string; generally both are provided.
Upon success, a linked list of
<a href="/windows/desktop/api/ws2def/ns-ws2def-addrinfoa">addrinfo</a> structures is returned in the <i>ppResult</i> parameter. The list can be processed by following the pointer provided in the <b>ai_next</b> member of each returned
<b>addrinfo</b> structure until a <b>NULL</b> pointer is encountered. In each returned
<b>addrinfo</b> structure, the <b>ai_family</b>, <b>ai_socktype</b>, and <b>ai_protocol</b> members correspond to respective arguments in a
<a href="/windows/desktop/api/winsock2/nf-winsock2-socket">socket</a> or <a href="/windows/desktop/api/winsock2/nf-winsock2-wsasocketa">WSASocket</a> function call. Also, the <b>ai_addr</b> member in each returned
<b>addrinfo</b> structure points to a filled-in socket address structure, the length of which is specified in its <b>ai_addrlen</b> member.
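
The walk over the returned list can be sketched as follows — a minimal example in the portable POSIX form (a Windows version would use the same loop, calling <b>socket</b> or <b>WSASocket</b> after <b>WSAStartup</b>); the numeric host and service avoid any DNS dependency:

```c
#include <assert.h>
#include <netdb.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct addrinfo hints, *result = NULL, *ptr;

    memset(&hints, 0, sizeof(hints));
    hints.ai_family = AF_INET;          /* IPv4 only, to keep the sketch small */
    hints.ai_socktype = SOCK_STREAM;
    hints.ai_flags = AI_NUMERICHOST | AI_NUMERICSERV;   /* no lookups needed */

    int rc = getaddrinfo("127.0.0.1", "80", &hints, &result);
    assert(rc == 0);

    /* Walk the linked list via ai_next; each node carries the exact
       (family, socktype, protocol) triple to hand to socket(). */
    for (ptr = result; ptr != NULL; ptr = ptr->ai_next) {
        int s = socket(ptr->ai_family, ptr->ai_socktype, ptr->ai_protocol);
        assert(s >= 0);
        printf("created a socket for %s\n",
               ptr->ai_family == AF_INET ? "IPv4" : "other");
        close(s);
    }

    freeaddrinfo(result);   /* release the dynamically allocated list */
    printf("done\n");
    return 0;
}
```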
If the <i>pNodeName</i> parameter points to a computer name, all permanent addresses for the computer that can be used as a source address are returned. On Windows Vista and later, these addresses would include all unicast IP addresses returned by the <a href="/windows/desktop/api/netioapi/nf-netioapi-getunicastipaddresstable">GetUnicastIpAddressTable</a> or <a href="/windows/desktop/api/netioapi/nf-netioapi-getunicastipaddressentry">GetUnicastIpAddressEntry</a> functions in which the <b>SkipAsSource</b> member is set to false in the <a href="/windows/desktop/api/netioapi/ns-netioapi-mib_unicastipaddress_row">MIB_UNICASTIPADDRESS_ROW</a> structure.
If the <i>pNodeName</i> parameter points to a string equal to "localhost", all loopback addresses on the local computer are returned.
If the <i>pNodeName</i> parameter contains an empty string, all registered addresses on the local computer are returned.
On Windows Server 2003 and later if the <i>pNodeName</i> parameter points to a string equal to "..localmachine", all registered addresses on the local computer are returned.
If the <i>pNodeName</i> parameter refers to a cluster virtual server name, only virtual server addresses are returned. On Windows Vista and later, these addresses would include all unicast IP addresses returned by the <a href="/windows/desktop/api/netioapi/nf-netioapi-getunicastipaddresstable">GetUnicastIpAddressTable</a> or <a href="/windows/desktop/api/netioapi/nf-netioapi-getunicastipaddressentry">GetUnicastIpAddressEntry</a> functions in which the <b>SkipAsSource</b> member is set to true in the <a href="/windows/desktop/api/netioapi/ns-netioapi-mib_unicastipaddress_row">MIB_UNICASTIPADDRESS_ROW</a> structure. See <a href="/previous-versions/windows/desktop/mscs/windows-clustering">Windows Clustering</a> for more information about clustering.
Windows 7 with Service Pack 1 (SP1) and Windows Server 2008 R2 with Service Pack 1 (SP1) add support to Netsh.exe for setting the SkipAsSource attribute on an IP address. This also changes the behavior such that if the <b>SkipAsSource</b> member in the <a href="/windows/desktop/api/netioapi/ns-netioapi-mib_unicastipaddress_row">MIB_UNICASTIPADDRESS_ROW</a> structure is set to false, the IP address will be registered in DNS. If the <b>SkipAsSource</b> member is set to true, the IP address is not registered in DNS.
A hotfix is available for Windows 7 and Windows Server 2008 R2 that adds support to Netsh.exe for setting the SkipAsSource attribute on an IP address. This hotfix also changes behavior such that if the <b>SkipAsSource</b> member in the <a href="/windows/desktop/api/netioapi/ns-netioapi-mib_unicastipaddress_row">MIB_UNICASTIPADDRESS_ROW</a> structure is set to false, the IP address will be registered in DNS. If the <b>SkipAsSource</b> member is set to true, the IP address is not registered in DNS. For more information, see <a href="https://support.microsoft.com/kb/2386184">Knowledge Base (KB) 2386184</a>.
A similar hotfix is also available for Windows Vista with Service Pack 2 (SP2) and Windows Server 2008 with Service Pack 2 (SP2) that adds support to Netsh.exe for setting the SkipAsSource attribute on an IP address. This hotfix also changes behavior such that if the <b>SkipAsSource</b> member in the <a href="/windows/desktop/api/netioapi/ns-netioapi-mib_unicastipaddress_row">MIB_UNICASTIPADDRESS_ROW</a> structure is set to false, the IP address will be registered in DNS. If the <b>SkipAsSource</b> member is set to true, the IP address is not registered in DNS. For more information, see <a href="https://support.microsoft.com/kb/975808">Knowledge Base (KB) 975808</a>.
Callers of the
<b>getaddrinfo</b> function can provide hints about the type of socket supported through an
<a href="/windows/desktop/api/ws2def/ns-ws2def-addrinfoa">addrinfo</a> structure pointed to by the <i>pHints</i> parameter. When the <i>pHints</i> parameter is used, the following rules apply to its associated
<b>addrinfo</b> structure:
<ul>
<li>A value of <b>AF_UNSPEC</b> for <b>ai_family</b> indicates the caller will accept only the <b>AF_INET</b> and <b>AF_INET6</b> address families. Note that <b>AF_UNSPEC</b> and <b>PF_UNSPEC</b> are the same.</li>
<li>A value of zero for <b>ai_socktype</b> indicates the caller will accept any socket type.</li>
<li>A value of zero for <b>ai_protocol</b> indicates the caller will accept any protocol.</li>
<li>The <b>ai_addrlen</b> member must be set to zero.</li>
<li>The <b>ai_canonname</b> member must be set to <b>NULL</b>.</li>
<li>The <b>ai_addr</b> member must be set to <b>NULL</b>.</li>
<li>The <b>ai_next</b> member must be set to <b>NULL</b>.</li>
</ul>
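
A hints structure that satisfies all of these rules is easiest to obtain by zeroing the whole structure and then setting only the members of interest. The sketch below does that in the portable POSIX spelling; AI_PASSIVE with a <b>NULL</b> node name requests the wildcard address, so no name lookup is involved:

```c
#include <assert.h>
#include <netdb.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

int main(void)
{
    struct addrinfo hints, *result = NULL, *ptr;
    int count = 0;

    /* Zeroing the structure guarantees ai_addrlen is 0 and ai_canonname,
       ai_addr, and ai_next are NULL, as the rules above require. */
    memset(&hints, 0, sizeof(hints));
    hints.ai_family = AF_UNSPEC;      /* accept IPv4 and IPv6 */
    hints.ai_socktype = SOCK_STREAM;  /* but only stream sockets */
    hints.ai_flags = AI_PASSIVE;      /* wildcard address; no name lookup */

    int rc = getaddrinfo(NULL, "0", &hints, &result);
    assert(rc == 0);

    for (ptr = result; ptr != NULL; ptr = ptr->ai_next)
        count++;
    printf("got %d candidate address(es)\n", count);
    assert(count >= 1);

    freeaddrinfo(result);
    printf("done\n");
    return 0;
}
```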
A value of <b>AF_UNSPEC</b> for <b>ai_family</b> indicates the caller will accept any protocol family. This value can be used to return both IPv4 and IPv6 addresses for the host name pointed to by the <i>pNodeName</i> parameter. On Windows Server 2003 and Windows XP, IPv6 addresses are returned only if IPv6 is installed on the local computer.
Other values in the
<a href="/windows/desktop/api/ws2def/ns-ws2def-addrinfoa">addrinfo</a> structure provided in the <i>pHints</i> parameter indicate specific requirements. For example, if the caller handles only IPv4 and does not handle IPv6, the <b>ai_family</b> member should be set to AF_INET. For another example, if the caller handles only TCP and does not handle UDP, the <b>ai_socktype</b> member should be set to <b>SOCK_STREAM</b>.
If the <i>pHints</i> parameter is a <b>NULL</b> pointer, the
<b>getaddrinfo</b> function treats it as if the
<a href="/windows/desktop/api/ws2def/ns-ws2def-addrinfoa">addrinfo</a> structure in <i>pHints</i> were initialized with its <b>ai_family</b> member set to <b>AF_UNSPEC</b> and all other members set to zero.
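
That default behavior can be observed directly — passing <b>NULL</b> for the hints behaves like a zeroed structure with <b>ai_family</b> set to <b>AF_UNSPEC</b> (again shown as a sketch in the portable POSIX form):

```c
#include <assert.h>
#include <netdb.h>
#include <stdio.h>
#include <sys/socket.h>

int main(void)
{
    struct addrinfo *result = NULL, *ptr;
    int count = 0;

    /* NULL hints: any family, any socket type, any protocol. */
    int rc = getaddrinfo("127.0.0.1", NULL, NULL, &result);
    assert(rc == 0);

    /* With no socktype restriction, one IP address typically yields several
       entries (stream, datagram, ...), all carrying the same address. */
    for (ptr = result; ptr != NULL; ptr = ptr->ai_next)
        count++;
    printf("NULL hints produced %d entr%s\n", count, count == 1 ? "y" : "ies");
    assert(count >= 1);

    freeaddrinfo(result);
    printf("done\n");
    return 0;
}
```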
On Windows Vista and later when <b>getaddrinfo</b> is called from a service, if the operation is the result of a user process calling the service, then the service should impersonate the user. This is to allow security to be properly enforced.
The
<b>getaddrinfo</b> function can be used to convert a text string representation of an IP address to an <a href="/windows/desktop/api/ws2def/ns-ws2def-addrinfoa">addrinfo</a> structure that contains a <a href="/windows/desktop/WinSock/sockaddr-2">sockaddr</a> structure for the IP address and other information. To be used in this way, the string pointed to by the <i>pNodeName</i> parameter must contain a text representation of an IP address and the <b>addrinfo</b> structure pointed to by the <i>pHints</i> parameter must have the AI_NUMERICHOST flag set in the <b>ai_flags</b> member. The string pointed to by the <i>pNodeName</i> parameter may contain a text representation of either an IPv4 or an IPv6 address. The text IP address is converted to an <b>addrinfo</b> structure pointed to by the <i>ppResult</i> parameter. The returned <b>addrinfo</b> structure contains a <b>sockaddr</b> structure for the IP address along with addition information about the IP address. For this method to work with an IPv6 address string on Windows Server 2003 and Windows XP, the IPv6 protocol must be installed on the local computer. Otherwise, the <a href="/windows/desktop/WinSock/windows-sockets-error-codes-2">WSAHOST_NOT_FOUND</a> error is returned.
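
The core of this text-to-address conversion can be sketched as follows (POSIX form with `netdb.h`; a Windows build would include `winsock2.h`/`ws2tcpip.h`, call <b>WSAStartup</b> first, and link Ws2_32.lib):

```c
#include <arpa/inet.h>
#include <assert.h>
#include <netdb.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

int main(void)
{
    struct addrinfo hints, *result = NULL;
    char buf[INET6_ADDRSTRLEN];

    memset(&hints, 0, sizeof(hints));
    hints.ai_flags = AI_NUMERICHOST;   /* node must be a literal address */

    /* Convert the text "127.0.0.1" into a ready-to-use sockaddr. */
    int rc = getaddrinfo("127.0.0.1", NULL, &hints, &result);
    assert(rc == 0 && result != NULL);
    assert(result->ai_family == AF_INET);

    /* Round-trip the binary address back to text to show the conversion. */
    struct sockaddr_in *sa = (struct sockaddr_in *) result->ai_addr;
    inet_ntop(AF_INET, &sa->sin_addr, buf, sizeof(buf));
    printf("parsed: %s\n", buf);

    freeaddrinfo(result);
    return 0;
}
```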
<h3><a id="Freeing_Address_Information_from_Dynamic_Allocation"></a><a id="freeing_address_information_from_dynamic_allocation"></a><a id="FREEING_ADDRESS_INFORMATION_FROM_DYNAMIC_ALLOCATION"></a>Freeing Address Information from Dynamic Allocation</h3>
All information returned by the
<b>getaddrinfo</b> function pointed to by the <i>ppResult</i> parameter is dynamically allocated, including all
<a href="/windows/desktop/api/ws2def/ns-ws2def-addrinfoa">addrinfo</a> structures, socket address structures, and canonical host name strings pointed to by
<b>addrinfo</b> structures. Memory allocated by a successful call to this function must be released with a subsequent call to <a href="/windows/desktop/api/ws2tcpip/nf-ws2tcpip-freeaddrinfo">freeaddrinfo</a>.
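
The ownership rule can be made concrete with a small sketch (POSIX form; the call sequence on Windows is the same apart from <b>WSAStartup</b>/<b>WSACleanup</b>): every <b>addrinfo</b> node, socket address, and canonical-name string reachable from the returned pointer is released by the single <b>freeaddrinfo</b> call, and must not be freed piecemeal:

```c
#include <assert.h>
#include <netdb.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

int main(void)
{
    struct addrinfo hints, *result = NULL;

    memset(&hints, 0, sizeof(hints));
    hints.ai_flags = AI_NUMERICHOST | AI_CANONNAME;

    int rc = getaddrinfo("127.0.0.1", NULL, &hints, &result);
    assert(rc == 0);

    /* The canonical-name string, if present, is owned by the list too. */
    if (result->ai_canonname != NULL)
        printf("canonical name: %s\n", result->ai_canonname);

    freeaddrinfo(result);   /* one call releases the entire list */
    result = NULL;          /* avoid reusing a dangling pointer */
    printf("released\n");
    return 0;
}
```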
<h3><a id="Example_Code"></a><a id="example_code"></a><a id="EXAMPLE_CODE"></a>Example Code</h3>
The following code example shows how to use the <b>getaddrinfo</b> function.
```cpp
#undef UNICODE
#include <winsock2.h>
#include <ws2tcpip.h>
#include <stdio.h>
// link with Ws2_32.lib
#pragma comment (lib, "Ws2_32.lib")
int __cdecl main(int argc, char **argv)
{
//-----------------------------------------
// Declare and initialize variables
WSADATA wsaData;
int iResult;
INT iRetval;
DWORD dwRetval;
int i = 1;
struct addrinfo *result = NULL;
struct addrinfo *ptr = NULL;
struct addrinfo hints;
struct sockaddr_in *sockaddr_ipv4;
// struct sockaddr_in6 *sockaddr_ipv6;
LPSOCKADDR sockaddr_ip;
char ipstringbuffer[46];
DWORD ipbufferlength = 46;
// Validate the parameters
if (argc != 3) {
printf("usage: %s <hostname> <servicename>\n", argv[0]);
printf("getaddrinfo provides protocol-independent translation\n");
printf(" from an ANSI host name to an IP address\n");
printf("%s example usage\n", argv[0]);
printf(" %s www.contoso.com 0\n", argv[0]);
return 1;
}
// Initialize Winsock
iResult = WSAStartup(MAKEWORD(2, 2), &wsaData);
if (iResult != 0) {
printf("WSAStartup failed: %d\n", iResult);
return 1;
}
//--------------------------------
// Setup the hints address info structure
// which is passed to the getaddrinfo() function
ZeroMemory( &hints, sizeof(hints) );
hints.ai_family = AF_UNSPEC;
hints.ai_socktype = SOCK_STREAM;
hints.ai_protocol = IPPROTO_TCP;
printf("Calling getaddrinfo with following parameters:\n");
printf("\tnodename = %s\n", argv[1]);
printf("\tservname (or port) = %s\n\n", argv[2]);
//--------------------------------
// Call getaddrinfo(). If the call succeeds,
// the result variable will hold a linked list
// of addrinfo structures containing response
// information
dwRetval = getaddrinfo(argv[1], argv[2], &hints, &result);
if ( dwRetval != 0 ) {
printf("getaddrinfo failed with error: %d\n", dwRetval);
WSACleanup();
return 1;
}
printf("getaddrinfo returned success\n");
// Retrieve each address and print out the hex bytes
for(ptr=result; ptr != NULL ;ptr=ptr->ai_next) {
printf("getaddrinfo response %d\n", i++);
printf("\tFlags: 0x%x\n", ptr->ai_flags);
printf("\tFamily: ");
switch (ptr->ai_family) {
case AF_UNSPEC:
printf("Unspecified\n");
break;
case AF_INET:
printf("AF_INET (IPv4)\n");
sockaddr_ipv4 = (struct sockaddr_in *) ptr->ai_addr;
printf("\tIPv4 address %s\n",
inet_ntoa(sockaddr_ipv4->sin_addr) );
break;
case AF_INET6:
printf("AF_INET6 (IPv6)\n");
// the InetNtop function is available on Windows Vista and later
// sockaddr_ipv6 = (struct sockaddr_in6 *) ptr->ai_addr;
// printf("\tIPv6 address %s\n",
// InetNtop(AF_INET6, &sockaddr_ipv6->sin6_addr, ipstringbuffer, 46) );
// We use WSAAddressToString since it is supported on Windows XP and later
sockaddr_ip = (LPSOCKADDR) ptr->ai_addr;
// The buffer length is changed by each call to WSAAddresstoString
// So we need to set it for each iteration through the loop for safety
ipbufferlength = 46;
iRetval = WSAAddressToString(sockaddr_ip, (DWORD) ptr->ai_addrlen, NULL,
ipstringbuffer, &ipbufferlength );
if (iRetval)
printf("WSAAddressToString failed with %u\n", WSAGetLastError() );
else
printf("\tIPv6 address %s\n", ipstringbuffer);
break;
case AF_NETBIOS:
printf("AF_NETBIOS (NetBIOS)\n");
break;
default:
printf("Other %ld\n", ptr->ai_family);
break;
}
printf("\tSocket type: ");
switch (ptr->ai_socktype) {
case 0:
printf("Unspecified\n");
break;
case SOCK_STREAM:
printf("SOCK_STREAM (stream)\n");
break;
case SOCK_DGRAM:
printf("SOCK_DGRAM (datagram) \n");
break;
case SOCK_RAW:
printf("SOCK_RAW (raw) \n");
break;
case SOCK_RDM:
printf("SOCK_RDM (reliable message datagram)\n");
break;
case SOCK_SEQPACKET:
printf("SOCK_SEQPACKET (pseudo-stream packet)\n");
break;
default:
printf("Other %ld\n", ptr->ai_socktype);
break;
}
printf("\tProtocol: ");
switch (ptr->ai_protocol) {
case 0:
printf("Unspecified\n");
break;
case IPPROTO_TCP:
printf("IPPROTO_TCP (TCP)\n");
break;
case IPPROTO_UDP:
printf("IPPROTO_UDP (UDP) \n");
break;
default:
printf("Other %ld\n", ptr->ai_protocol);
break;
}
        printf("\tLength of this sockaddr: %d\n", (int) ptr->ai_addrlen);
printf("\tCanonical name: %s\n", ptr->ai_canonname);
}
freeaddrinfo(result);
WSACleanup();
return 0;
}
```
<div class="alert"><b>Note</b> Ensure that the development environment targets the newest version of <i>Ws2tcpip.h</i> which includes structure and function definitions for <a href="/windows/desktop/api/ws2def/ns-ws2def-addrinfoa">addrinfo</a> and <b>getaddrinfo</b>, respectively.</div>
<div> </div>
<h3><a id="IDNs"></a><a id="idns"></a><a id="IDNS"></a>Internationalized Domain Names</h3>
Internet host names typically consist of a very restricted set of characters:
<ul>
<li>Upper and lower case ASCII letters from the English alphabet.
</li>
<li>Digits from 0 to 9.</li>
<li>ASCII hyphen characters.
</li>
</ul>
With the growth of the Internet, there is a growing need to identify Internet host names for other languages not represented by the ASCII character set. Identifiers which facilitate this need and allow non-ASCII characters (Unicode) to be represented as special ASCII character strings are known as Internationalized Domain Names (IDNs). A mechanism called
Internationalizing Domain Names in Applications (IDNA) is used to handle
IDNs in a standard fashion. The specifications for IDNs and IDNA are documented in <a href="http://tools.ietf.org/html/rfc3490">RFC 3490</a>, <a href="http://tools.ietf.org/html/rfc5890">RFC 5890</a>, and <a href="http://tools.ietf.org/html/rfc6365">RFC 6365</a> published by the Internet Engineering Task Force (IETF).
On Windows 8 and Windows Server 2012, the <b>getaddrinfo</b> function provides support for Internationalized Domain Name (IDN) parsing applied to the name passed in the <i>pNodeName</i> parameter. Winsock performs Punycode/IDN encoding and conversion. This behavior can be disabled using the <b>AI_DISABLE_IDN_ENCODING</b> flag discussed below.
On Windows 7 and Windows Server 2008 R2 or earlier, the <b>getaddrinfo</b> function does not currently provide support IDN parsing applied to the name passed in the <i>pNodeName</i> parameter. Winsock does not perform any Punycode/IDN conversion. The wide character version of the <b>GetAddrInfo</b> function does not use Punycode to convert an IDN as per <a href="http://tools.ietf.org/html/rfc3490">RFC 3490</a>. The wide character version of the <b>GetAddrInfo</b> function when querying DNS encodes the Unicode name in UTF-8 format, the format used by Microsoft DNS servers in an enterprise environment.
Several functions on Windows Vista and later support conversion between Unicode labels in an IDN to their ASCII equivalents. The resulting representation of each Unicode label contains only ASCII characters and starts with the xn-- prefix if the Unicode label contained any non-ASCII characters. The reason for this is to support existing DNS servers on the Internet, since some DNS tools and servers only support ASCII characters (see <a href="http://tools.ietf.org/html/rfc3490">RFC 3490</a>).
The <a href="/windows/desktop/api/winnls/nf-winnls-idntoascii">IdnToAscii</a> function uses Punycode to convert an IDN to the ASCII representation of the original Unicode string using the standard algorithm defined in <a href="http://tools.ietf.org/html/rfc3490">RFC 3490</a>. The <a href="/windows/desktop/api/winnls/nf-winnls-idntounicode">IdnToUnicode</a> function converts the ASCII form of an IDN to the normal Unicode UTF-16 encoding syntax. For more information and links to related draft standards, see <a href="/windows/desktop/Intl/handling-internationalized-domain-names--idns">Handling Internationalized Domain Names (IDNs)</a>.
The <a href="/windows/desktop/api/winnls/nf-winnls-idntoascii">IdnToAscii</a> function can be used to convert an IDN name to the ASCII form that then can be passed in the <i>pNodeName</i> parameter to the <b>getaddrinfo</b> function. To pass this IDN name to the <b>GetAddrInfo</b> function when the wide character version of this function is used (when UNICODE or _UNICODE is defined), you can use the <a href="/windows/desktop/api/stringapiset/nf-stringapiset-multibytetowidechar">MultiByteToWideChar</a> function to convert the <b>CHAR</b> string into a <b>WCHAR</b> string.
<h3><a id="ai_flags"></a><a id="AI_FLAGS"></a>Use of ai_flags in the pHints parameter</h3>
Flags in the <b>ai_flags</b> member of the optional
<a href="/windows/desktop/api/ws2def/ns-ws2def-addrinfoa">addrinfo</a> structure provided in the <i>pHints</i> parameter modify the behavior of the function.
These flag bits are defined in the <i>Ws2def.h</i> header file on the Microsoft Windows Software Development Kit (SDK) for Windows 7. These flag bits are defined in the <i>Ws2tcpip.h</i> header file on the Windows SDK for Windows Server 2008 and Windows Vista. These flag bits are defined in the <i>Ws2tcpip.h</i> header file on the Platform SDK for Windows Server 2003, and Windows XP.
The flag bits can be a combination of the following:
<table>
<tr>
<th>Flag Bits</th>
<th>Description</th>
</tr>
<tr>
<td width="40%">
<a id="AI_PASSIVE"></a><a id="ai_passive"></a><b>AI_PASSIVE</b>
</td>
<td width="60%">
Setting the <b>AI_PASSIVE</b> flag indicates the caller intends to use the returned socket address structure in a call to the
<a href="/windows/desktop/api/winsock/nf-winsock-bind">bind</a> function. When the <b>AI_PASSIVE</b> flag is set and <i>pNodeName</i> is a <b>NULL</b> pointer, the IP address portion of the socket address structure is set to <b>INADDR_ANY</b> for IPv4 addresses and <b>IN6ADDR_ANY_INIT</b> for IPv6 addresses.
When the <b>AI_PASSIVE</b> flag is not set, the returned socket address structure is ready for a call to the
<a href="/windows/desktop/api/winsock2/nf-winsock2-connect">connect</a> function for a connection-oriented protocol, or ready for a call to either the
<b>connect</b>,
<a href="/windows/desktop/api/winsock/nf-winsock-sendto">sendto</a>, or
<a href="/windows/desktop/api/winsock2/nf-winsock2-send">send</a> functions for a connectionless protocol. If the <i>pNodeName</i> parameter is a <b>NULL</b> pointer in this case, the IP address portion of the socket address structure is set to the loopback address.
</td>
</tr>
<tr>
<td width="40%">
<a id="AI_CANONNAME"></a><a id="ai_canonname"></a><b>AI_CANONNAME</b>
</td>
<td width="60%">
If neither <b>AI_CANONNAME</b> nor <b>AI_NUMERICHOST</b> is used, the
<b>getaddrinfo</b> function attempts resolution. If a literal string is passed
<b>getaddrinfo</b> attempts to convert the string, and if a host name is passed the
<b>getaddrinfo</b> function attempts to resolve the name to an address or multiple addresses.
When the <b>AI_CANONNAME</b> bit is set, the <i>pNodeName</i> parameter cannot be <b>NULL</b>. Otherwise the
<b>getaddrinfo</b> function will fail with <a href="/windows/desktop/WinSock/windows-sockets-error-codes-2">WSANO_RECOVERY</a>.
When the <b>AI_CANONNAME</b> bit is set and the
<b>getaddrinfo</b> function returns success, the <b>ai_canonname</b> member in the <i>ppResult</i> parameter points to a <b>NULL</b>-terminated string that contains the canonical name of the specified node.
<div class="alert"><b>Note</b> The <b>getaddrinfo</b> function can return success when the <b>AI_CANONNAME</b> flag is set, yet the <b>ai_canonname</b> member in the associated
<a href="/windows/desktop/api/ws2def/ns-ws2def-addrinfoa">addrinfo</a> structure is <b>NULL</b>. Therefore, the recommended use of the <b>AI_CANONNAME</b> flag includes testing whether the <b>ai_canonname</b> member in the associated
<b>addrinfo</b> structure is <b>NULL</b>.</div>
<div> </div>
</td>
</tr>
<tr>
<td width="40%">
<a id="AI_NUMERICHOST"></a><a id="ai_numerichost"></a><b>AI_NUMERICHOST</b>
</td>
<td width="60%">
When the <b>AI_NUMERICHOST</b> bit is set, the <i>pNodeName</i> parameter must contain a non-<b>NULL</b> numeric host address string, otherwise the <b>EAI_NONAME</b> error is returned. This flag prevents a name resolution service from being called.
</td>
</tr>
<tr>
<td width="40%">
<a id="AI_NUMERICSERV"></a><a id="ai_numericserv"></a><b>AI_NUMERICSERV</b>
</td>
<td width="60%">
When the <b>AI_NUMERICSERV</b> bit is set, the <i>pServiceName</i> parameter must contain a non-<b>NULL</b> numeric port number, otherwise the <b>EAI_NONAME</b> error is returned. This flag prevents a name resolution service from being called.
The <b>AI_NUMERICSERV</b> flag is defined on Windows SDK for Windows Vista and later. The <b>AI_NUMERICSERV</b> flag is not supported by Microsoft providers.
</td>
</tr>
<tr>
<td width="40%">
<a id="AI_ALL"></a><a id="ai_all"></a><b>AI_ALL</b>
</td>
<td width="60%">
If the <b>AI_ALL</b> bit is set, a request is made for IPv6 addresses and IPv4 addresses with <b>AI_V4MAPPED</b>.
The <b>AI_ALL</b> flag is defined on the Windows SDK for Windows Vista and later. The <b>AI_ALL</b> flag is supported on Windows Vista and later.
</td>
</tr>
<tr>
<td width="40%">
<a id="AI_ADDRCONFIG"></a><a id="ai_addrconfig"></a><b>AI_ADDRCONFIG</b>
</td>
<td width="60%">
If the <b>AI_ADDRCONFIG</b> bit is set, <b>getaddrinfo</b> will resolve only if a global address is configured. If <b>AI_ADDRCONFIG</b> flag is specified, IPv4 addresses shall be
returned only if an IPv4 address is configured on the local system,
and IPv6 addresses shall be returned only if an IPv6 address is
configured on the local system. The IPv4 or IPv6 loopback address is not
considered a valid global address.
The <b>AI_ADDRCONFIG</b> flag is defined on the Windows SDK for Windows Vista and later. The <b>AI_ADDRCONFIG</b> flag is supported on Windows Vista and later.
</td>
</tr>
<tr>
<td width="40%">
<a id="AI_V4MAPPED"></a><a id="ai_v4mapped"></a><b>AI_V4MAPPED</b>
</td>
<td width="60%">
If the <b>AI_V4MAPPED</b> bit is set and a request for IPv6 addresses fails, a name service request is made for IPv4 addresses and these addresses are converted to IPv4-mapped IPv6 address format.
The <b>AI_V4MAPPED</b> flag is defined on the Windows SDK for Windows Vista and later. The <b>AI_V4MAPPED</b> flag is supported on Windows Vista and later.
</td>
</tr>
<tr>
<td width="40%">
<a id="AI_NON_AUTHORITATIVE"></a><a id="ai_non_authoritative"></a><b>AI_NON_AUTHORITATIVE</b>
</td>
<td width="60%">
If the <b>AI_NON_AUTHORITATIVE</b> bit is set, the <b>NS_EMAIL</b> namespace provider returns both authoritative and non-authoritative results. If the <b>AI_NON_AUTHORITATIVE</b> bit is not set, the <b>NS_EMAIL</b> namespace provider returns only authoritative results.
The <b>AI_NON_AUTHORITATIVE</b> flag is defined on the Windows SDK for Windows Vista and later. The <b>AI_NON_AUTHORITATIVE</b> flag is supported on Windows Vista and later and applies only to the <b>NS_EMAIL</b> namespace.
</td>
</tr>
<tr>
<td width="40%">
<a id="AI_SECURE"></a><a id="ai_secure"></a><b>AI_SECURE</b>
</td>
<td width="60%">
If the <b>AI_SECURE</b> bit is set, the <b>NS_EMAIL</b> namespace provider will return results that were obtained with enhanced security to minimize possible spoofing.
The <b>AI_SECURE</b> flag is defined on the Windows SDK for Windows Vista and later. The <b>AI_SECURE</b> flag is supported on Windows Vista and later and applies only to the <b>NS_EMAIL</b> namespace.
</td>
</tr>
<tr>
<td width="40%">
<a id="AI_RETURN_PREFERRED_NAMES"></a><a id="ai_return_preferred_names"></a><b>AI_RETURN_PREFERRED_NAMES</b>
</td>
<td width="60%">
If the <b>AI_RETURN_PREFERRED_NAMES</b> is set, then no name should be provided in the <i>pNodeName</i> parameter. The <b>NS_EMAIL</b> namespace provider will return preferred names for publication.
The <b>AI_RETURN_PREFERRED_NAMES</b> flag is defined on the Windows SDK for Windows Vista and later. The <b>AI_RETURN_PREFERRED_NAMES</b> flag is supported on Windows Vista and later and applies only to the <b>NS_EMAIL</b> namespace.
</td>
</tr>
<tr>
<td width="40%">
<a id="AI_FQDN"></a><a id="ai_fqdn"></a><b>AI_FQDN</b>
</td>
<td width="60%">
If the <b>AI_FQDN</b> is set and a flat name (single label) is specified, <b>getaddrinfo</b> will return the fully qualified domain name that the name eventually resolved to. The fully qualified domain name is returned in the <b>ai_canonname</b> member in the associated
<a href="/windows/desktop/api/ws2def/ns-ws2def-addrinfoa">addrinfo</a> structure. This is different than <b>AI_CANONNAME</b> bit flag that returns the canonical name registered in DNS which may be different than the fully qualified domain name that the flat name resolved to. Only one of the <b>AI_FQDN</b> and <b>AI_CANONNAME</b> bits can be set. The <b>getaddrinfo</b> function will fail if both flags are present with <b>EAI_BADFLAGS</b>.
When the <b>AI_FQDN</b> bit is set, the <i>pNodeName</i> parameter cannot be <b>NULL</b>. Otherwise the
<a href="/windows/desktop/api/ws2tcpip/nf-ws2tcpip-getaddrinfoexa">GetAddrInfoEx</a> function will fail with <a href="/windows/desktop/WinSock/windows-sockets-error-codes-2">WSANO_RECOVERY</a>.
<b>Windows 7: </b>The <b>AI_FQDN</b> flag is defined on the Windows SDK for Windows 7 and later. The <b>AI_FQDN</b> flag is supported on Windows 7 and later.
</td>
</tr>
<tr>
<td width="40%">
<a id="AI_FILESERVER"></a><a id="ai_fileserver"></a><b>AI_FILESERVER</b>
</td>
<td width="60%">
If the <b>AI_FILESERVER</b> is set, this is a hint to the namespace provider that the hostname being queried is being used in file share scenario. The namespace provider may ignore this hint.
<b>Windows 7: </b>The <b>AI_FILESERVER</b> flag is defined on the Windows SDK for Windows 7 and later. The <b>AI_FILESERVER</b> flag is supported on Windows 7 and later.
</td>
</tr>
</table>
<h3><a id="Example_code_using_AI_NUMERICHOST"></a><a id="example_code_using_ai_numerichost"></a><a id="EXAMPLE_CODE_USING_AI_NUMERICHOST"></a>Example code using AI_NUMERICHOST</h3>
The following code example shows how to use the <b>getaddrinfo</b> function to convert a text string representation of an IP address to an <a href="/windows/desktop/api/ws2def/ns-ws2def-addrinfoa">addrinfo</a> structure that contains a <a href="/windows/desktop/WinSock/sockaddr-2">sockaddr</a> structure for the IP address and other information.
```cpp
#undef UNICODE
#include <winsock2.h>
#include <ws2tcpip.h>
#include <stdio.h>
// link with Ws2_32.lib
#pragma comment (lib, "Ws2_32.lib")
int __cdecl main(int argc, char **argv)
{
//-----------------------------------------
// Declare and initialize variables
WSADATA wsaData;
int iResult;
DWORD dwRetval;
int i = 1;
struct addrinfo *result = NULL;
struct addrinfo *ptr = NULL;
struct addrinfo hints;
// Validate the parameters
if (argc != 2) {
printf("usage: %s <IP Address String>\n", argv[0]);
printf(" getaddrinfo determines the IP binary network address\n");
printf(" %s 207.46.197.32\n", argv[0]); /* www.contoso.com */
return 1;
}
// Initialize Winsock
iResult = WSAStartup(MAKEWORD(2, 2), &wsaData);
if (iResult != 0) {
printf("WSAStartup failed: %d\n", iResult);
return 1;
}
//--------------------------------
// Setup the hints address info structure
// which is passed to the getaddrinfo() function
ZeroMemory( &hints, sizeof(hints) );
hints.ai_flags = AI_NUMERICHOST;
hints.ai_family = AF_UNSPEC;
// hints.ai_socktype = SOCK_STREAM;
// hints.ai_protocol = IPPROTO_TCP;
//--------------------------------
// Call getaddrinfo(). If the call succeeds,
// the result variable will hold a linked list
// of addrinfo structures containing response
// information
dwRetval = getaddrinfo(argv[1], NULL, &hints, &result);
if ( dwRetval != 0 ) {
printf("getaddrinfo failed with error: %d\n", dwRetval);
WSACleanup();
return 1;
}
printf("getaddrinfo returned success\n");
// Retrieve each address and print out the hex bytes
for(ptr=result; ptr != NULL ;ptr=ptr->ai_next) {
printf("getaddrinfo response %d\n", i++);
printf("\tFlags: 0x%x\n", ptr->ai_flags);
printf("\tFamily: ");
switch (ptr->ai_family) {
case AF_UNSPEC:
printf("Unspecified\n");
break;
case AF_INET:
printf("AF_INET (IPv4)\n");
break;
case AF_INET6:
printf("AF_INET6 (IPv6)\n");
break;
case AF_NETBIOS:
printf("AF_NETBIOS (NetBIOS)\n");
break;
default:
printf("Other %ld\n", ptr->ai_family);
break;
}
printf("\tSocket type: ");
switch (ptr->ai_socktype) {
case 0:
printf("Unspecified\n");
break;
case SOCK_STREAM:
printf("SOCK_STREAM (stream)\n");
break;
case SOCK_DGRAM:
printf("SOCK_DGRAM (datagram) \n");
break;
case SOCK_RAW:
printf("SOCK_RAW (raw) \n");
break;
case SOCK_RDM:
printf("SOCK_RDM (reliable message datagram)\n");
break;
case SOCK_SEQPACKET:
printf("SOCK_SEQPACKET (pseudo-stream packet)\n");
break;
default:
printf("Other %ld\n", ptr->ai_socktype);
break;
}
printf("\tProtocol: ");
switch (ptr->ai_protocol) {
case 0:
printf("Unspecified\n");
break;
case IPPROTO_TCP:
printf("IPPROTO_TCP (TCP)\n");
break;
case IPPROTO_UDP:
printf("IPPROTO_UDP (UDP) \n");
break;
default:
printf("Other %ld\n", ptr->ai_protocol);
break;
}
printf("\tLength of this sockaddr: %d\n", ptr->ai_addrlen);
printf("\tCanonical name: %s\n", ptr->ai_canonname);
}
freeaddrinfo(result);
WSACleanup();
return 0;
}
```
<h3><a id="Support_for_getaddrinfo_on_Windows_2000_and_older_versions_"></a><a id="support_for_getaddrinfo_on_windows_2000_and_older_versions_"></a><a id="SUPPORT_FOR_GETADDRINFO_ON_WINDOWS_2000_AND_OLDER_VERSIONS_"></a>Support for getaddrinfo on Windows 2000 and older versions
</h3>
The <b>getaddrinfo</b> function was added to the Ws2_32.dll on Windows XP and later. To execute an application that uses this function on earlier versions of Windows, then you need to include the <i>Ws2tcpip.h</i> and <i>Wspiapi.h</i> files. When the <i>Wspiapi.h</i> include file is added, the <b>getaddrinfo</b> function is defined to the <b>WspiapiGetAddrInfo</b> inline function in the <i>Wspiapi.h</i> file. At runtime, the <b>WspiapiGetAddrInfo</b> function is implemented in such a way that if the Ws2_32.dll or the Wship6.dll (the file containing <b>getaddrinfo</b> in the IPv6 Technology Preview for Windows 2000) does not include <b>getaddrinfo</b>, then a version of <b>getaddrinfo</b> is implemented inline based on code in the Wspiapi.h header file. This inline code will be used on older Windows platforms that do not natively support the <b>getaddrinfo</b> function.
The IPv6 protocol is supported on Windows 2000 when the IPv6 Technology Preview for Windows 2000 is installed. Otherwise <b>getaddrinfo</b> support on versions of Windows earlier than Windows XP is limited to handling IPv4 name resolution.
The <a href="/windows/desktop/api/ws2tcpip/nf-ws2tcpip-getaddrinfow">GetAddrInfoW</a> function is the Unicode version of <b>getaddrinfo</b>. The <b>GetAddrInfoW</b> function was added to the Ws2_32.dll in Windows XP with Service Pack 2 (SP2). The <b>GetAddrInfoW</b> function cannot be used on versions of Windows earlier than Windows XP with SP2.
<b>Windows Phone 8:</b> This function is supported for Windows Phone Store apps on Windows Phone 8 and later.
<b>Windows 8.1</b> and <b>Windows Server 2012 R2</b>: This function is supported for Windows Store apps on Windows 8.1, Windows Server 2012 R2, and later.
## -see-also
<a href="/windows/desktop/api/ws2tcpip/nf-ws2tcpip-getaddrinfoexa">GetAddrInfoEx</a>
<a href="/windows/desktop/api/ws2tcpip/nf-ws2tcpip-getaddrinfow">GetAddrInfoW</a>
<a href="/windows/desktop/api/winnls/nf-winnls-idntoascii">IdnToAscii</a>
<a href="/windows/desktop/api/winnls/nf-winnls-idntounicode">IdnToUnicode</a>
<a href="/windows/desktop/api/winsock/nf-winsock-wsagetlasterror">WSAGetLastError</a>
<a href="/windows/desktop/api/winsock2/nf-winsock2-wsasocketa">WSASocket</a>
<a href="/windows/desktop/WinSock/winsock-functions">Winsock Functions</a>
<a href="/windows/desktop/WinSock/winsock-reference">Winsock Reference</a>
<a href="/windows/desktop/api/ws2def/ns-ws2def-addrinfoa">addrinfo</a>
<a href="/windows/desktop/api/ws2def/ns-ws2def-addrinfow">addrinfoW</a>
<a href="/windows/desktop/api/ws2def/ns-ws2def-addrinfoexw">addrinfoex</a>
<a href="/windows/desktop/api/ws2def/ns-ws2def-addrinfoex2w">addrinfoex2</a>
<a href="/windows/desktop/api/winsock/nf-winsock-bind">bind</a>
<a href="/windows/desktop/api/winsock2/nf-winsock2-connect">connect</a>
<a href="/windows/desktop/api/ws2tcpip/nf-ws2tcpip-freeaddrinfo">freeaddrinfo</a>
<a href="/windows/desktop/api/ws2tcpip/nf-ws2tcpip-gai_strerrora">gai_strerror</a>
<a href="/windows/desktop/api/winsock2/nf-winsock2-send">send</a>
<a href="/windows/desktop/api/winsock/nf-winsock-sendto">sendto</a>
<a href="/windows/desktop/api/winsock2/nf-winsock2-socket">socket</a> | 48.996951 | 1,245 | 0.704167 | eng_Latn | 0.907051 |
a9255b40022d1a2498e57a10853d15489b49432a | 39 | md | Markdown | README.md | KotlinID/android-movie | 46edd2aeb57ecaa44ecd01a0026128969af4eeab | [
"MIT"
] | null | null | null | README.md | KotlinID/android-movie | 46edd2aeb57ecaa44ecd01a0026128969af4eeab | [
"MIT"
] | null | null | null | README.md | KotlinID/android-movie | 46edd2aeb57ecaa44ecd01a0026128969af4eeab | [
"MIT"
] | null | null | null | # Android Movie
Movie Application List
| 13 | 22 | 0.820513 | kor_Hang | 0.62159 |
a9262438c7bb46c074e8a93856572bf7f7a3f43b | 19,593 | md | Markdown | docs/graphics-games/cocossharp/fruity-falls.md | Bloodorange0313/xamarin-docs.ja-jp | 6afd8071f809d733f84f2c2f881926eee1e7964c | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/graphics-games/cocossharp/fruity-falls.md | Bloodorange0313/xamarin-docs.ja-jp | 6afd8071f809d733f84f2c2f881926eee1e7964c | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/graphics-games/cocossharp/fruity-falls.md | Bloodorange0313/xamarin-docs.ja-jp | 6afd8071f809d733f84f2c2f881926eee1e7964c | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Fruity Falls game details
description: This guide reviews the Fruity Falls game, covering common CocosSharp and game development concepts such as physics, content management, game state, and game design.
ms.prod: xamarin
ms.assetid: A5664930-F9F0-4A08-965D-26EF266FED24
author: conceptdev
ms.author: crdun
ms.date: 03/27/2017
ms.openlocfilehash: 959f5eb149ad375d686b17a85eb3d3b8fbdf3659
ms.sourcegitcommit: e268fd44422d0bbc7c944a678e2cc633a0493122
ms.translationtype: MT
ms.contentlocale: ja-JP
ms.lasthandoff: 10/25/2018
ms.locfileid: "50114250"
---
# <a name="fruity-falls-game-details"></a>Fruity Falls game details
_This guide reviews the Fruity Falls game, covering common CocosSharp and game development concepts such as physics, content management, game state, and game design._
Fruity Falls is a simple physics-based game in which the player sorts red and yellow fruit into colored bins to earn points. The goal of the game is to earn as many points as possible without dropping a fruit into the wrong bin, which ends the game.

Fruity Falls expands on the concepts introduced in the [BouncingGame guide](~/graphics-games/cocossharp/bouncing-game.md) by adding:
- Content in .png format
- More advanced physics
- Game state (transitioning between scenes)
- The ability to tune game coefficients through variables contained in a single class
- Built-in visual debugging support
- Code organization using game entities
- Game design with an emphasis on fun and play value
While the [BouncingGame guide](~/graphics-games/cocossharp/bouncing-game.md) focuses on introducing core CocosSharp concepts, Fruity Falls shows how those concepts can be brought together into a finished game. Because this guide references the BouncingGame, readers should first understand the [BouncingGame guide](~/graphics-games/cocossharp/bouncing-game.md) before reading this guide.
This guide covers the design and implementation of Fruity Falls to provide insight into creating your own games. It covers the following topics:
- [The GameController class](#gamecontroller-class)
- [Game entities](#game-entities)
- [Fruit graphics](#fruit-graphics)
- [Physics](#physics)
- [Game content](#game-content)
- [GameCoefficients](#gamecoefficients)
## <a name="gamecontroller-class"></a>The GameController class
Fruity Falls contains a `GameController` class in its PCL project, which is responsible for instantiating the game and moving between scenes. This class is used by both the iOS and Android projects to reduce duplicate code.
The code contained in the `Initialize` method resembles the `Initialize` method in an unmodified CocosSharp template, but it contains a number of changes.
By default, `GameView.ContentManager.SearchPaths` depends on the device's resolution. Fruity Falls uses the same content regardless of device resolution, so the `Initialize` method adds the `Images` path (no subfolders are needed) to the `SearchPaths`, as shown below:
```csharp
contentSearchPaths.Add ("Images");
```
New CocosSharp templates include a class inheriting from `CCLayer`, which implies that game visuals and logic should be added to that class. Instead, Fruity Falls uses multiple `CCLayer` instances to control draw order. These `CCLayer` instances are contained in the `GameScene` class, which inherits from `CCScene`. Fruity Falls also contains multiple scenes, the first of which is the `TitleScene`. Therefore, the `Initialize` method instantiates a `TitleScene` instance, which is passed to `RunWithScene`:
```csharp
var scene = new TitleScene (GameView);
GameView.Director.RunWithScene (scene);
```
Fruity Falls' content was created as low-resolution pixel art for aesthetic reasons. Accordingly, `GameView.DesignResolution` is set so that the game is only 384 units wide and 512 units tall:
```csharp
// We use a lower-resolution display to get a pixellated appearance
int width = 384;
int height = 512;
GameView.DesignResolution = new CCSizeI (width, height);
```
Finally, the `GameController` class provides a static method that can be called by any `CCScene` to transition to a different `CCScene`. This method is used to move between the `TitleScene` and the `GameScene`.
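A minimal sketch of such a helper might look like the following. The method name `GoToScene` and its body are assumptions for illustration – they are not the actual Fruity Falls source:
```csharp
// Hypothetical sketch of the GameController's scene-transition helper.
// The method name and implementation are assumptions, not the actual source.
public static void GoToScene (CCScene scene)
{
    // ReplaceScene swaps the currently-running scene for the new one,
    // so a scene such as TitleScene can hand control to the GameScene.
    GameView.Director.ReplaceScene (scene);
}
```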
## <a name="game-entities"></a>ゲームのエンティティ
Fruity Falls ゲーム オブジェクトのほとんどのエンティティのパターンを利用します。 このパターンの詳細な説明が記載されて、 [CocosSharp のエンティティのガイド](~/graphics-games/cocossharp/entities.md)します。
[エンティティ] フォルダーにエンティティを実装するゲーム オブジェクトが見つかりません、 **FruityFalls.Common**プロジェクト。

エンティティから継承するオブジェクトは、 `CCNode`、ビジュアル、競合、およびフレームごとの動作があります。
`Fruit`オブジェクトは、一般的な CocosSharp のエンティティを表します: ビジュアル オブジェクト、競合、およびフレームごとのロジックが含まれています。 コンス トラクターは、フルーツを初期化します。
```csharp
public Fruit ()
{
CreateFruitGraphic ();
if (GameCoefficients.ShowCollisionAreas)
{
CreateDebugGraphic ();
}
CreateCollision ();
CreateExtraPointsLabel ();
Acceleration.Y = GameCoefficients.FruitGravity;
}
```
### <a name="fruit-graphics"></a>果物のグラフィック
`CreateFruitGraphic`メソッドを作成、`CCSprite`インスタンスし、それを追加、`Fruit`します。 `IsAntialiased`ゲーム ピクセルの外観を提供するプロパティが false に設定します。 この値がすべて false に設定されて`CCSprite`と`CCLabel`ゲーム全体にわたってインスタンス。
```csharp
private void CreateFruitGraphic()
{
graphic = new CCSprite ("cherry.png");
graphic.IsAntialiased = false;
this.AddChild (graphic);
}
```
Every time the player touches a `Fruit` instance with the `Paddle`, that fruit's point value increases by one. This awards extra points per fruit while providing an additional challenge for experienced players. A fruit's point value is displayed using its `extraPointsLabel` instance.
The `CreateExtraPointsLabel` method creates a `CCLabel` instance and adds it to the `Fruit`:
```csharp
private void CreateExtraPointsLabel()
{
extraPointsLabel = new CCLabel("", "Arial", 12, CCLabelFormat.SystemFont);
extraPointsLabel.IsAntialiased = false;
extraPointsLabel.Color = CCColor3B.Black;
this.AddChild(extraPointsLabel);
}
```
Fruity Falls includes two different types of fruit – cherries and lemons. `CreateFruitGraphic` assigns the default fruit visual, which can be changed through the `FruitColor` property, which calls `UpdateGraphics`:
```csharp
private void UpdateGraphics()
{
if (GameCoefficients.ShowCollisionAreas)
{
debugGrahic.Clear ();
const float borderWidth = 4;
debugGrahic.DrawSolidCircle (
CCPoint.Zero,
GameCoefficients.FruitRadius,
CCColor4B.Black);
debugGrahic.DrawSolidCircle (
CCPoint.Zero,
GameCoefficients.FruitRadius - borderWidth,
fruitColor.ToCCColor ());
}
if (this.FruitColor == FruitColor.Yellow)
{
graphic.Texture = CCTextureCache.SharedTextureCache.AddImage ("lemon.png");
extraPointsLabel.Color = CCColor3B.Black;
extraPointsLabel.PositionY = 0;
}
else
{
graphic.Texture = CCTextureCache.SharedTextureCache.AddImage ("cherry.png");
extraPointsLabel.Color = CCColor3B.White;
extraPointsLabel.PositionY = -8;
}
}
```
The first if statement in `UpdateGraphics` updates the debug graphic used to visualize collision areas. These visuals can be kept on during development to debug the physics, then disabled before the final release of the game. The second part changes the `graphic.Texture` property by calling `CCTextureCache.SharedTextureCache.AddImage`. This method provides access to textures by file name. More information on this method can be found in the [texture caching guide](~/graphics-games/cocossharp/texture-cache.md).
The `extraPointsLabel` color is adjusted to maintain contrast with the fruit image, and its `PositionY` value is adjusted to center the `CCLabel` on the fruit's `CCSprite`:


### <a name="collision"></a>衝突
Fruity Falls Geometry フォルダー内のオブジェクトを使用してカスタムの競合ソリューションを実装します。

いずれかを使用して Fruity 滝で衝突が実装される、`Circle`または`Polygon`エンティティの子として追加されるこれらの 2 種類のいずれかでは、通常のオブジェクト。 たとえば、`Fruit`オブジェクトには、`Circle`と呼ばれる`Collision`でそれを初期化するその`CreateCollision`メソッド。
```csharp
private void CreateCollision()
{
Collision = new Circle ();
Collision.Radius = GameCoefficients.FruitRadius;
this.AddChild (Collision);
}
```
Note that the `Circle` class inherits from the `CCNode` class, so it can be added as a child of a game entity:
```csharp
public class Circle : CCNode
{
...
}
```
`Polygon` creation is similar to `Circle` creation, as shown in the `Paddle` class:
```csharp
private void CreateCollision()
{
Polygon = Polygon.CreateRectangle(width, height);
this.AddChild (Polygon);
}
```
Collision logic is discussed [later in this guide](#collision).
## <a name="physics"></a>Physics
Fruity Falls physics can be broken up into two categories: movement and collision.
### <a name="movement-using-velocity-and-acceleration"></a>Movement using velocity and acceleration
Fruity Falls uses `Velocity` and `Acceleration` values to control the behavior of its entities, just like the [BouncingGame](~/graphics-games/cocossharp/bouncing-game.md). Entities implement movement logic in a method named `Activity`, which is called once per frame. For example, consider the movement implementation in the `Fruit` class's `Activity` method:
```csharp
public void Activity(float frameTimeInSeconds)
{
timeUntilExtraPointsCanBeAdded -= frameTimeInSeconds;
// linear approximation:
this.Velocity += Acceleration * frameTimeInSeconds;
// This is a linear approximation to drag. It's used to
// keep the object from falling too fast, and eventually
// to slow its horizontal movement. This makes the game easier
this.Velocity -= Velocity * GameCoefficients.FruitDrag * frameTimeInSeconds;
this.Position += Velocity * frameTimeInSeconds;
}
```
The `Fruit` object is unique in its implementation of drag – a value that slows the fruit in proportion to how fast it is moving. This implementation of drag produces a *terminal velocity*, which is the maximum speed at which a fruit instance can fall. Drag also slows the fruit's horizontal movement, which makes the game a little easier to play.
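With the linear drag approximation shown above, the fall speed settles where the drag term cancels the gravity term, which is approximately `FruitGravity / FruitDrag`. The following standalone sketch demonstrates this with hypothetical coefficient values (the actual `GameCoefficients` values may differ):
```csharp
// Standalone sketch of the linear drag integration from Activity above.
// The gravity and drag values here are hypothetical, for illustration only.
float gravity = -200;  // units per second squared
float drag = 0.5f;     // per second
float velocityY = 0;
float frameTime = 1 / 60.0f;

for (int i = 0; i < 2000; i++) // simulate roughly 33 seconds of falling
{
    velocityY += gravity * frameTime;
    velocityY -= velocityY * drag * frameTime;
}

// velocityY has settled near the terminal velocity gravity / drag = -400
System.Console.WriteLine (velocityY);
```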
The `Paddle` object also applies `Velocity` in its `Activity` method, but its `Velocity` is calculated from the `desiredLocation` value:
```csharp
public void Activity(float frameTimeInSeconds)
{
// This code will cause the cursor to lag behind the touch point
// Increasing this value reduces how far the paddle lags
// behind the player’s finger.
const float velocityCoefficient = 4;
// Get the velocity from current location and touch location
Velocity = (desiredLocation - this.Position) * velocityCoefficient;
this.Position += Velocity * frameTimeInSeconds;
...
}
```
Games that use `Velocity` typically do not manipulate an object's position directly, aside from setting an entity's position after initialization. Rather than setting the `Paddle` position directly, the `Paddle.HandleInput` method assigns the `desiredLocation` value:
```csharp
public void HandleInput(CCPoint touchPoint)
{
desiredLocation = touchPoint;
}
```
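The paddle's movement is effectively a proportional controller: each frame, the velocity is set proportional to the remaining distance to the touch point, so the paddle lags behind the finger and eases in as it gets close. The following standalone sketch shows the same update rule with hypothetical values:
```csharp
// Standalone sketch of the paddle's "lag behind the finger" movement.
// The positions and time span are hypothetical values for illustration.
const float velocityCoefficient = 4;
float position = 0;
float desiredLocation = 100; // where the player is touching
float frameTime = 1 / 60.0f;

for (int i = 0; i < 120; i++) // simulate two seconds
{
    float velocity = (desiredLocation - position) * velocityCoefficient;
    position += velocity * frameTime;
}

// After two seconds the paddle has almost reached the touch point
System.Console.WriteLine (position);
```
Increasing `velocityCoefficient` makes the paddle snap to the finger more quickly; decreasing it makes the lag more pronounced.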
### <a name="collision"></a>衝突
Fruity Falls など、果物と collidable 他のオブジェクト間の半現実的な衝突の実装、`Paddle`と`GameScene.Splitter`します。 競合をデバッグするには、Fruity が衝突領域の可視性を変更して、`GameCoefficients.ShowDebugInfo`で、`GameCoefficients.cs`ファイル。
```csharp
public static class GameCoefficients
{
...
public const bool ShowCollisionAreas = true;
...
}
```
Setting this value to `true` results in collision areas being drawn:

Collision logic begins in the `GameScene.Activity` method:
```csharp
private void Activity(float frameTimeInSeconds)
{
if (hasGameEnded == false)
{
paddle.Activity(frameTimeInSeconds);
foreach (var fruit in fruitList)
{
fruit.Activity(frameTimeInSeconds);
}
spawner.Activity(frameTimeInSeconds);
DebugActivity();
PerformCollision();
}
}
```
`PerformCollision` handles the collision of all `Fruit` instances against other objects:
```csharp
private void PerformCollision()
{
// reverse for loop since fruit may be destroyed:
for(int i = fruitList.Count - 1; i > -1; i--)
{
var fruit = fruitList[i];
FruitVsPaddle(fruit);
FruitPolygonCollision(fruit, splitter.Polygon, CCPoint.Zero);
FruitVsBorders(fruit);
FruitVsBins(fruit);
}
}
```
#### <a name="fruitvsborders"></a>FruitVsBorders
`FruitVsBorders` 競合では、別のクラスに含まれるロジックに依存するのではなく、競合の独自のロジックを実行します。 成果物と、画面の境界線の間の競合が solid 完璧ではありません – 果物慎重パドル移動して、画面の端からプッシュする可能性があるために、この違いが存在します。 果物、パドルとヒット時 画面からバウンドさせますが、過去の端と画面の外を移動するが、プレーヤーが緩やかに変化成果物をプッシュする場合。
```csharp
private void FruitVsBorders(Fruit fruit)
{
if(fruit.PositionX - fruit.Radius < 0 && fruit.Velocity.X < 0)
{
fruit.Velocity.X *= -1 * GameCoefficients.FruitCollisionElasticity;
}
if(fruit.PositionX + fruit.Radius > gameplayLayer.ContentSize.Width && fruit.Velocity.X > 0)
{
fruit.Velocity.X *= -1 * GameCoefficients.FruitCollisionElasticity;
}
if(fruit.PositionY + fruit.Radius > gameplayLayer.ContentSize.Height && fruit.Velocity.Y > 0)
{
fruit.Velocity.Y *= -1 * GameCoefficients.FruitCollisionElasticity;
}
}
```
#### <a name="fruitvsbins"></a>FruitVsBins
`FruitVsBins`メソッドは 2 つの区間のいずれかに任意の果物が低下する場合にチェックします。場合は、し (この果物/箱は、一致を色) 場合、ポイントが授与はプレーヤーまたは (色が一致しない) 場合、ゲームは終了します。
```csharp
private void FruitVsBins(Fruit fruit)
{
foreach(var bin in fruitBins)
{
if(bin.Polygon.CollideAgainst(fruit.Collision))
{
if(bin.FruitColor == fruit.FruitColor)
{
// award points:
score += 1 + fruit.ExtraPointValue;
scoreText.Score = score;
Destroy(fruit);
}
else
{
EndGame();
}
break;
}
}
}
```
#### <a name="fruitvspaddle-and-fruitpolygoncollision"></a>FruitVsPaddle と FruitPolygonCollision
パドルと分割 (領域の 2 つの箱を分離すること) と成果物と成果物に依存して両方の衝突、`FruitPolygonCollision`メソッド。 このメソッドでは、3 つの部分があります。
1. オブジェクトが衝突するかどうかをテストします。
1. これらが重複しないように、オブジェクト (だけ Fruity には該当の果物) を移動します。
1. 次の例では、上記で作成ポイント バウンドをシミュレートするには、オブジェクト (だけ Fruity には該当の果物) の速度を調整します。
```csharp
private static bool FruitPolygonCollision(Fruit fruit, Polygon polygon, CCPoint polygonVelocity)
{
// Test whether the fruit collides
bool didCollide = polygon.CollideAgainst(fruit.Collision);
if (didCollide)
{
var circle = fruit.Collision;
// Get the separation vector to reposition the fruit so it doesn't overlap the polygon
var separation = CollisionResponse.GetSeparationVector(circle, polygon);
fruit.Position += separation;
// Adjust the fruit's Velocity to make it bounce:
var normal = separation;
normal.Normalize();
fruit.Velocity = CollisionResponse.ApplyBounce(
fruit.Velocity,
polygonVelocity,
normal,
GameCoefficients.FruitCollisionElasticity);
}
return didCollide;
}
```
It's worth noting that Fruity Falls' collision response is one-sided – only the fruit's velocity and position are adjusted. Other games may have collisions that impact both objects involved, such as characters pushing a boulder or two cars crashing into each other.
The math behind the collision logic contained in the `Polygon` and `CollisionResponse` classes is beyond the scope of this guide, but as written, these methods can be used for many types of games. The `Polygon` and `CollisionResponse` classes also support convex polygons that are not rectangles, so this code can be used in a wide variety of games.
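Although the `CollisionResponse` implementation is not shown in this guide, a typical bounce response reflects the velocity about the collision normal and scales it by the elasticity. The following standalone sketch shows that standard formula – it illustrates the general technique, and is not the actual `CollisionResponse.ApplyBounce` code:
```csharp
// Illustrative sketch of a standard bounce about a surface normal:
// v' = v - (1 + elasticity)(v · n)n. This is not the actual
// CollisionResponse.ApplyBounce implementation.
float velocityX = 3, velocityY = -4; // fruit velocity before the bounce
float normalX = 0, normalY = 1;      // unit-length collision normal
float elasticity = 0.5f;

// Project the velocity onto the normal
float dot = velocityX * normalX + velocityY * normalY;

// Only respond if the object is moving into the surface
if (dot < 0)
{
    velocityX -= (1 + elasticity) * dot * normalX;
    velocityY -= (1 + elasticity) * dot * normalY;
}

// The normal component reverses and shrinks by the elasticity: -4 becomes 2
System.Console.WriteLine ($"{velocityX}, {velocityY}");
```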
## <a name="game-content"></a>コンテンツのゲーム
Fruity Falls' art immediately distinguishes the game from BouncingGame. Although the two games are similar in design, players will instantly see the difference in how they look. Players often decide whether to try a game based on its visuals, so it is very important for developers to invest resources in making a game visually appealing.
Fruity Falls' art was created with the following goals in mind:
- A consistent theme
- An emphasis on a single character – the Xamarin monkey
- Bright colors to create a relaxed, fun feel
- Contrast between the background and foreground, so gameplay objects are easy to track visually
- The ability to create simple visual effects without a large amount of animation resources
### <a name="content-location"></a>Content location
Fruity Falls contains all of its content in the Images folder of the Android project.

These same files are linked in the iOS project to avoid duplication, so changes to a file affect both projects.

Notice that the content is not contained in the **ld** or **hd** folders, which are part of the default CocosSharp templates. The **ld** and **hd** folders are intended for games that provide two sets of content – one for low-resolution devices such as phones, and one for high-resolution devices such as tablets. Fruity Falls' art was intentionally created with a pixelated look, so it does not need to supply content for different screen sizes. Therefore, the **ld** and **hd** folders have been removed from the project entirely.
### <a name="gamescene-layering"></a>GameScene layering
As mentioned earlier in this guide, the `GameScene` is responsible for instantiating and positioning all game objects, and for the logic between objects (such as collisions). All objects are added to one of four `CCLayer` instances:
```csharp
CCLayer backgroundLayer;
CCLayer gameplayLayer;
CCLayer foregroundLayer;
CCLayer hudLayer;
```
These four layers are created in the `CreateLayers` method. They are added to the `GameScene` in back-to-front order:
```csharp
private void CreateLayers()
{
backgroundLayer = new CCLayer();
this.AddLayer(backgroundLayer);
gameplayLayer = new CCLayer();
this.AddLayer(gameplayLayer);
foregroundLayer = new CCLayer();
this.AddLayer(foregroundLayer);
hudLayer = new CCLayer();
this.AddLayer(hudLayer);
}
```
Once the layers have been created, all visual objects are added to a layer using the `AddChild` method, as shown in the `CreateBackground` method:
```csharp
private void CreateBackground()
{
var background = new CCSprite("background.png");
background.AnchorPoint = new CCPoint(0, 0);
background.IsAntialiased = false;
backgroundLayer.AddChild(background);
}
```
### <a name="vine-entity"></a>Vine entity
The `Vine` entity is unique in that its use of content is purely visual – it has no impact on gameplay. It is made up of 20 `CCSprite` instances, a number chosen through trial and error to verify that the vine always reaches the top of the screen.
```csharp
public Vine ()
{
const int numberOfVinesToAdd = 20;
for (int i = 0; i < numberOfVinesToAdd; i++)
{
var graphic = new CCSprite ("vine.png");
graphic.AnchorPoint = new CCPoint(.5f, 0);
graphic.PositionY = i * graphic.ContentSize.Height;
this.AddChild (graphic);
}
}
```
The first `CCSprite` is positioned so that its bottom matches the bottom of the vine. This is accomplished by setting its `AnchorPoint` to `new CCPoint(.5f, 0)`. The `AnchorPoint` property uses *normalized coordinates*, which range between 0 and 1 regardless of the size of the `CCSprite`:

Subsequent vine sprites are positioned using the `graphic.ContentSize.Height` value, which returns the height of the image in pixels. The following diagram shows how the vine graphics are tiled upward:

Since the bottom of the vine sits at the `Vine` entity's origin, positioning the vine afterward is as simple as shown in the `Paddle.CreateVines` method:
```csharp
private void CreateVines()
{
// Increasing this value moves the vines closer to the
// center of the paddle.
const int vineDistanceFromEdge = 4;
leftVine = new Vine ();
var leftEdge = -width / 2.0f;
leftVine.PositionX = leftEdge + vineDistanceFromEdge;
this.AddChild (leftVine);
rightVine = new Vine ();
var rightEdge = width / 2.0f;
rightVine.PositionX = rightEdge - vineDistanceFromEdge;
this.AddChild (rightVine);
}
```
The vine instances on the `Paddle` entity also have rotation behavior worth noting. The entire `Paddle` entity rotates according to the player's input (this is covered in more detail below), and the `Vine` instances are affected by this rotation as well. However, rotating the `Vine` instances along with the `Paddle` produces an undesirable effect, as shown in the following animation:

To counteract the `Paddle` rotation, the `leftVine` and `rightVine` instances are rotated in the direction opposite the paddle, as shown in the `Paddle.Activity` method:
```csharp
public void Activity(float frameTimeInSeconds)
{
...
// This value adds a slight amount of rotation
// to the vine. Having a small amount of tilt looks nice.
float vineAngle = this.Velocity.X / 100.0f;
leftVine.Rotation = -this.Rotation + vineAngle;
rightVine.Rotation = -this.Rotation + vineAngle;
}
```
Notice that a small amount of rotation is added to the vines through the `vineAngle` coefficient. This value can be changed to adjust how much the vines rotate.
## <a name="gamecoefficients"></a>GameCoefficients
Since all great games are the product of iteration, Fruity Falls includes a class called `GameCoefficients` that controls how the game plays. This class contains expressively named variables, used throughout the game, that control physics, layout, spawning, and scoring.
For example, the physical motion of the fruit is controlled by the following variables:
```csharp
public static class GameCoefficients
{
...
// The strength of the gravity. Making this a
// smaller (bigger negative) number will make the
// fruit fall faster. A larger (smaller negative) number
// will make the fruit more floaty.
public const float FruitGravity = -60;
// The impact of "air drag" on the fruit, which gives
// the fruit terminal velocity (max falling speed) and slows
// the fruit's horizontal movement (makes the game easier)
public const float FruitDrag = .1f;
// Controls fruit collision bouncyness. A value of 1
// means no momentum is lost. A value of 0 means all
// momentum is lost. Values greater than 1 create a spring-like effect
public const float FruitCollisionElasticity = .5f;
...
}
```
As the names suggest, these variables can be used to adjust how fast the fruit falls, how its horizontal movement slows over time, and how it bounces off the paddle.
Game coefficients stored in code or in data files (such as XML) can be especially important for teams that include game designers who are not programmers.
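As a purely hypothetical sketch (the sample itself keeps these values as constants in code), a data-driven variant could load the same coefficients from an XML file so that designers can tune them without recompiling:

```csharp
// Hypothetical example – the element names and file shape below are
// assumptions for illustration, not part of the Fruity Falls sample.
using System.Xml.Linq;

public static class CoefficientLoader
{
    public static float FruitGravity { get; private set; } = -60;
    public static float FruitDrag { get; private set; } = .1f;

    public static void LoadFrom(string xmlPath)
    {
        // Expected file shape:
        // <Coefficients>
        //   <FruitGravity>-60</FruitGravity>
        //   <FruitDrag>0.1</FruitDrag>
        // </Coefficients>
        var root = XDocument.Load(xmlPath).Root;

        // Fall back to the in-code defaults if an element is missing:
        FruitGravity = (float?)root.Element("FruitGravity") ?? FruitGravity;
        FruitDrag = (float?)root.Element("FruitDrag") ?? FruitDrag;
    }
}
```

Note that a `const` field cannot be assigned at runtime, so a loader like this would also require changing the `const` fields in `GameCoefficients` to static properties.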
The `GameCoefficients` class also contains values for enabling debug information, such as the rendering of collision areas:
```csharp
public static class GameCoefficients
{
...
// This controls whether debug information is displayed on screen.
public const bool ShowDebugInfo = false;
public const bool ShowCollisionAreas = false;
...
}
```
## <a name="conclusion"></a>Conclusion
This guide has taken a look at the Fruity Falls game. It covered concepts such as content, physics, and game state management.
## <a name="related-links"></a>Related links
- [CocosSharp API documentation](https://developer.xamarin.com/api/namespace/CocosSharp/)
- [Completed project (sample)](https://developer.xamarin.com/samples/mobile/FruityFalls/)
---
season: '01'
episode: '03'
title: 'Who Is the Better Chef?'
description: 'At the first weigh-in, Kenny hid a solder halo in his hair.'
winner: 'Kenny'
humiliation: 'Spenny gave a spongebath to an old man.'
airtime: '08-26-2003'
youtubeID: 'https://www.youtube.com/watch?v=W4gy3tpgA9Y'
---
---
title: "Create a custom probe – Azure Application Gateway (Azure portal) | Microsoft Docs"
description: "Learn how to create a custom probe for an application gateway by using the portal."
services: application-gateway
documentationcenter: na
author: georgewallace
manager: timlt
editor:
tags: azure-resource-manager
ms.assetid: 33fd5564-43a7-4c54-a9ec-b1235f661f97
ms.service: application-gateway
ms.devlang: na
ms.topic: article
ms.tgt_pltfrm: na
ms.workload: infrastructure-services
ms.date: 04/26/2017
ms.author: gwallace
ms.translationtype: HT
ms.sourcegitcommit: 818f7756189ed4ceefdac9114a0b89ef9ee8fb7a
ms.openlocfilehash: 65e9bba4ce9ac41ae2a9a8c3fa7f661165fc1403
ms.contentlocale: de-de
ms.lasthandoff: 07/14/2017
---
# <a name="create-a-custom-probe-for-application-gateway-by-using-the-portal"></a>Create a custom probe for Application Gateway by using the portal
> [!div class="op_single_selector"]
> * [Azure portal](application-gateway-create-probe-portal.md)
> * [Azure Resource Manager PowerShell](application-gateway-create-probe-ps.md)
> * [Azure classic PowerShell](application-gateway-create-probe-classic-ps.md)
In this article, you add a custom probe to an existing application gateway through the Azure portal. Custom probes are useful for applications that have a specific health-check page, or for applications that do not return a successful response for the default web application.
## <a name="before-you-begin"></a>Before you begin
If you do not already have an application gateway, visit [Create an application gateway](application-gateway-create-gateway-portal.md) to create one to work with.
## <a name="createprobe"></a>Create the probe
Probes are configured in a two-step process through the portal. The first step is to create the probe. In the second step, you add the probe to the back-end HTTP settings of the application gateway.
1. Sign in to the [Azure portal](https://portal.azure.com). If you don't already have an Azure account, you can sign up for a [free one-month trial](https://azure.microsoft.com/free).
1. In the Azure portal, in the Favorites pane, click **All resources**. On the **All resources** blade, click the application gateway. If the selected subscription already contains several resources, you can enter partners.contoso.net in the **Filter by name...** box to access the application gateway easily.
1. Click **Probes**, and then click the **Add** button to add a probe.
   ![Add probe blade with information provided][1]
1. On the **Add health probe** blade, provide the required information for the probe, and then click **OK**.
|**Setting** | **Value** | **Details**|
|---|---|---|
|**Name**|customProbe|This value is a friendly name for the probe that is accessible in the portal.|
|**Protocol**|HTTP or HTTPS | The protocol that the health probe uses.|
|**Host**|e.g. contoso.com|This value is the host name used for the probe. It is only relevant when multiple sites are configured on the application gateway; otherwise, use 127.0.0.1. This value is not the same as the virtual machine host name.|
|**Path**|/ or another path|The remainder of the full URL for the custom probe. A valid path starts with '/'. For the default path of http://contoso.com, just use '/'. |
|**Interval (secs)**|30|How often the probe runs to check health. Setting a value below 30 seconds is not recommended.|
|**Timeout (secs)**|30|How long the probe waits before timing out. The timeout interval needs to be long enough for an HTTP call to be made and to ensure that the back-end health page is available.|
|**Unhealthy threshold**|3|The number of consecutive failed attempts after which the probe is marked unhealthy. A threshold of 0 means that the back end is marked unhealthy immediately after a failed health check.|
> [!IMPORTANT]
> The host name is not the same as the server name. This value is the name of the virtual host running on the application server. The probe is sent to http://(host name):(port from the HTTP settings)/urlPath.
## <a name="add-probe-to-the-gateway"></a>Add the probe to the gateway
Now that the probe has been created, it can be added to the gateway. Probe settings are set on the back-end HTTP settings of the application gateway.
1. Click **HTTP settings** on the application gateway, and click the current back-end HTTP settings listed in the window to open the configuration blade.
   ![HTTP settings window][2]
1. On the **appGatewayBackEndHttpSettings** settings blade, select the **Use custom probe** check box, and in the **Custom probe** drop-down list choose the probe you created in the [Create the probe](#createprobe) section.
When you are finished, click **Save** to apply the settings.
The default probe checks only default access to the web application. Now that a custom probe has been created, the application gateway uses the custom path to monitor the health of the back-end servers. The application gateway checks the path specified in the probe based on the defined criteria. If the call to host:port/path does not return an HTTP status response of 200–399, the server is taken out of rotation after the unhealthy threshold is reached. The unhealthy instance continues to be probed to determine when it becomes healthy again. Once the instance is added back to the healthy server pool, traffic flows to it again, and it continues to be probed at the user-specified intervals as usual.
## <a name="next-steps"></a>Next steps
To learn how to configure SSL offloading with Azure Application Gateway, see [Configure SSL offload](application-gateway-ssl-portal.md).
[1]: ./media/application-gateway-create-probe-portal/figure1.png
[2]: ./media/application-gateway-create-probe-portal/figure2.png
---
title: Custom date validation
page_title: Custom date validation
description: Example that shows how to create custom date validation
---
# How to create custom date validation
Example that shows how to create custom date validation
#### Example:
```html
<div id="example">
<div id="to-do">
<input id="datetimepicker" name="datetimepicker" style="width:200px;" required />
<span class="k-invalid-msg" data-for="datetimepicker"></span>
</div>
<script>
$(document).ready(function () {
// create DateTimePicker from input HTML element
$("#datetimepicker").kendoDateTimePicker({
value:new Date(),
parseFormats: ["MM/dd/yyyy"],
change: function(e) {
}
});
var validator = $("#example").kendoValidator({
rules: {
datepicker: function(input) {
if (input.is("[data-role=datetimepicker]")) {
return input.data("kendoDateTimePicker").value();
} else {
return true;
}
}
},
messages: {
datepicker: "Please enter valid date!"
}
}).data("kendoValidator");
});
</script>
<style scoped>
#to-do {
height: 52px;
width: 221px;
margin: 30px auto;
padding: 91px 0 0 188px;
background: url('../content/web/datepicker/todo.png') transparent no-repeat 0 0;
}
</style>
</div>
```
---
permalink: /ontimize-web-ngx-dynamicform/
title: "Dynamicform"
excerpt: ""
---
# Ontimize Web Dynamicform
Ontimize Web Dynamicform is a web dynamic form implementation.
* [Github repository](#github)
* [Installation](#installation)
* [Usage](#usage)
## Github
Ontimize Web Dynamic Form module is stored in [github](https://github.com/OntimizeWeb/ontimize-web-ngx-dynamicform){:target="_blank"} where you can also see/add todos, bugs or feature requests in the [issues](https://github.com/OntimizeWeb/ontimize-web-ngx-dynamicform/issues){:target="_blank"} section.
## Installation
```bash
npm install ontimize-web-ngx-dynamicform --save
```
## Usage
### Configure angular-cli.json dependencies
You must add the module styles definition in your '*.angular-cli.json*' file styles array:
```bash
...
"styles": [
...
"../node_modules/ontimize-web-ngx-dynamicform/styles.scss",
....
],
...
```
### Import in an application module
Include the dynamic form module into your app in the module where you want to use it.
```bash
...
import { DynamicFormModule } from 'ontimize-web-ngx-dynamicform';
...
@NgModule({
imports: [
DynamicFormModule,
/* other imports */
],
declarations: ...
providers: ...
})
export class ExampleModule { }
```
# Fennix Kernel
This is the core component of the operating system.
+++
title = "Socioeconomic Gradients in Child Development: Evidence from a Chilean Longitudinal Study 2010–2017"
date = "2021-01-01"
authors = ["Alejandra Abufhele", "Dante Contreras", "Esteban Puentes", "Amanda Telias", "Natalia Valdebenito"]
publication_types = ["2"]
publication = "Advances in Life Course Research 100451. https://doi.org/10.1016/j.alcr.2021.100451"
publication_short = "Advances in Life Course Research 100451. https://doi.org/10.1016/j.alcr.2021.100451"
abstract = "Empirical evidence shows that lack of resources during infancy and the process of accumulating disadvantages throughout childhood have important consequences for cognitive and socio-emotional development. This paper examines socioeconomic gradients across language and socio-emotional measures. Using longitudinal data from 7-year, three-wave panel data, we study the patterns of socioeconomic status and child development in Chile and estimate how much of the wealth gap can be explained by different mediators like maternal educational and skills, child attendance of preschool and school, possession of books, or domestic violence indicators. We show that there are strong associations between household wealth and child development, and that, as the child grows, the gap between the most extreme quintiles of the distribution, both in cognitive and socio-emotional skills, persists but decreases in magnitude. Taking advantage of the longitudinal nature of the data, we calculate a permanent skill for each child and each skill dimension in this 7-year period. The analysis for the permanent component shows that wealth gaps are important to determine language, but not socio-emotional skills, and that the gap is larger for girls than for boys in the early childhood period. While mediators account for some of the associations, there is still a large socioeconomic gap that persists in receptive language among children. The most important factors that mediate the wealth gaps are inherited from maternal characteristics. By understanding the dynamism of social and cognitive vulnerability experienced during childhood and employing longitudinal data and methods, this study contributes to and extends the existing literature on socioeconomic gaps and child development in the Latin American context."
abstract_short = ""
url_source = "https://www.sciencedirect.com/science/article/pii/S1040260821000551"
tags = ["Early childhood intervention","Social security","Socioeconomic factors"]
url_code = ""
image_preview = ""
selected = false
projects = []
url_pdf = ""
url_preprint = ""
url_dataset = ""
url_project = ""
url_slides = ""
url_video = ""
url_poster = ""
math = true
highlight = true
[header]
image = ""
caption = ""
+++
It will blow your mind.

## Building
You need the following libraries
- [SDL2](https://www.libsdl.org)
- [SDL_Image](https://www.libsdl.org/projects/SDL_image/)
- [TMX C Loader](https://github.com/baylej/tmx) by baylej
If you have the libraries installed, just cd to the root dir and
```
make
```
Shouldn't be too hard.
Windows not tested, works fine on 64-bit Linux Mint.
-----
(c) 2017 Jani Nykänen
Do not steal.
---
layout: default
title: OpenBMI dataset
parent: Preprocessing
nav_order: 4
---
# min2net.preprocessing.OpenBMI
[<img src="https://min2net.github.io/assets/images/github.png" width="30" height="30"> View source on GitHub](https://github.com/IoBT-VISTEC/MIN2Net/tree/main/min2net/preprocessing/OpenBMI){: .btn .fs-5 .mb-4 .mb-md-0 }
{: .fs-6 .fw-300 }
{: .no_toc }
## Table of contents
{: .no_toc .text-delta }
1. TOC
{:toc}
---
## Time domain
[<img src="https://min2net.github.io/assets/images/github.png" width="30" height="30"> View source on GitHub](https://github.com/IoBT-VISTEC/MIN2Net/blob/main/min2net/preprocessing/OpenBMI/time_domain.py){: .btn .fs-5 .mb-4 .mb-md-0 }
### Subject-dependent setting
Preprocess raw time-series EEG in subject-dependent setting using butter bandpass filter and resampling. Split data into train, validation and test sets using stratified k-fold cross-validation.
```py
time_domain.subject_dependent(k_folds,
pick_smp_freq,
bands,
order,
save_path,
num_class,
sel_chs)
```
**Arguments:**
| Arguments | Description |
|:---|:----|
|k_folds | `int` number of k-fold cross-validation. |
| pick_smp_freq | `int` pick sample frequency (downsampling EEG to `pick_smp_freq`) |
| bands | `list` list of low cut and high cut frequency bands e.g. `[8, 30]` |
| order | `int` number of order of butter bandpass filter |
| save_path | `str` path to save processed EEG |
| num_class | `int` number of classes. Default 2|
| sel_chs | `list` or `None`. List of EEG channels. Default `None` |
**Example**
```py
import min2net.preprocessing as prep
prep.OpenBMI.time_domain.subject_dependent_setting(k_folds=5,
pick_smp_freq=100,
bands=[8, 30],
order=5,
save_path='datasets')
```
### Subject-independent setting
Preprocess raw time-series EEG in subject-independent setting using butter bandpass filter and resampling. Split data into train, validation and test sets using stratified k-fold cross-validation.
```py
time_domain.subject_independent(k_folds,
pick_smp_freq,
bands,
order,
save_path,
num_class,
sel_chs)
```
**Arguments:**
| Arguments | Description |
|:---|:----|
|k_folds | `int` number of k-fold cross-validation. |
| pick_smp_freq | `int` pick sample frequency (downsampling EEG to `pick_smp_freq`) |
| bands | `list` list of low cut and high cut frequency bands e.g. `[8, 30]` |
| order | `int` number of order of butter bandpass filter |
| save_path | `str` path to save processed EEG |
| num_class | `int` number of classes. Default 2|
| sel_chs | `list` or `None`. List of EEG channels. Default `None` |
**Example**
```py
import min2net.preprocessing as prep
prep.OpenBMI.time_domain.subject_independent_setting(k_folds=5,
pick_smp_freq=100,
bands=[8, 30],
order=5,
save_path='datasets')
```
---
## Filter Bank Common Spatial Pattern (FBCSP)
[<img src="https://min2net.github.io/assets/images/github.png" width="30" height="30"> View source on GitHub](https://github.com/IoBT-VISTEC/MIN2Net/blob/main/min2net/preprocessing/OpenBMI/fbcsp.py){: .btn .fs-5 .mb-4 .mb-md-0 }
### Subject-dependent setting
Preprocess raw time-series EEG in subject-dependent setting using Filter Bank Common Spatial Pattern (FBCSP). Split data into train, validation and test sets using stratified k-fold cross-validation.
```py
fbcsp.subject_dependent(k_folds,
pick_smp_freq,
n_components,
bands,
n_features,
order,
save_path,
num_class,
sel_chs)
```
**Arguments:**
| Arguments | Description |
|:---|:----|
|k_folds | `int` number of k-fold cross-validation. |
| pick_smp_freq | `int` pick sample frequency (downsampling EEG to `pick_smp_freq`) |
| n_components | `int` number of components of CSP|
| bands | `list` list of low cut and high cut frequency bands of filter bank e.g. `[[4, 8], [8, 12], ...]` |
| n_features | `int` number of features for mutual_info_classif |
| order | `int` number of order of butter bandpass filter |
| save_path | `str` path to save processed EEG |
| num_class | `int` number of classes. Default 2|
| sel_chs | `list` or `None`. List of EEG channels. Default `None` |
**Example**
```py
import min2net.preprocessing as prep
bands = [[4, 8], [8, 12], [12, 16],
[16, 20], [20, 24], [24, 28],
[28, 32], [32, 36], [36, 40]]
prep.OpenBMI.fbcsp.subject_dependent_setting(k_folds=5,
pick_smp_freq=100,
n_components=4,
bands=bands,
n_features=8,
order=5,
save_path='datasets')
```
### Subject-independent setting
Preprocess raw time-series EEG in subject-independent setting using Filter Bank Common Spatial Pattern (FBCSP). Split data into train, validation and test sets using stratified k-fold cross-validation.
```py
fbcsp.subject_independent(k_folds,
pick_smp_freq,
n_components,
bands,
n_features,
order,
save_path,
num_class,
sel_chs)
```
**Arguments:**
| Arguments | Description |
|:---|:----|
|k_folds | `int` number of k-fold cross-validation. |
| pick_smp_freq | `int` pick sample frequency (downsampling EEG to `pick_smp_freq`) |
| n_components | `int` number of components of CSP|
| bands | `list` list of low cut and high cut frequency bands of filter bank e.g. `[[4, 8], [8, 12], ...]` |
| n_features | `int` number of features for mutual_info_classif |
| order | `int` number of order of butter bandpass filter |
| save_path | `str` path to save processed EEG |
| num_class | `int` number of classes. Default 2|
| sel_chs | `list` or `None`. List of EEG channels. Default `None` |
**Example**
```py
import min2net.preprocessing as prep
bands = [[4, 8], [8, 12], [12, 16],
[16, 20], [20, 24], [24, 28],
[28, 32], [32, 36], [36, 40]]
prep.OpenBMI.fbcsp.subject_independent_setting(k_folds=5,
pick_smp_freq=100,
n_components=4,
bands=bands,
n_features=8,
order=5,
save_path='datasets')
```
---
## Spectral Spatial mapping
[<img src="https://min2net.github.io/assets/images/github.png" width="30" height="30"> View source on GitHub](https://github.com/IoBT-VISTEC/MIN2Net/blob/main/min2net/preprocessing/OpenBMI/spectral_spatial.py){: .btn .fs-5 .mb-4 .mb-md-0 }
### Subject-dependent setting
Preprocess raw time-series EEG in subject-dependent setting using Spectral Spatial mapping. Split data into train, validation and test sets using stratified k-fold cross-validation.
```py
spectral_spatial.subject_dependent(k_folds,
pick_smp_freq,
n_components,
bands,
n_features,
order,
save_path,
num_class,
sel_chs)
```
**Arguments:**
| Arguments | Description |
|:---|:----|
|k_folds | `int` number of k-fold cross-validation. |
| pick_smp_freq | `int` pick sample frequency (downsampling EEG to `pick_smp_freq`) |
| n_components | `int` number of components of CSP|
| bands | `list` list of low cut and high cut frequency bands of filter bank e.g. `[[4, 8], [8, 12], ...]` |
| n_pick_bands | `int` number of filter bands|
| n_features | `int` number of features for mutual_info_classif |
| order | `int` number of order of butter bandpass filter |
| save_path | `str` path to save processed EEG |
| num_class | `int` number of classes. Default 2|
| sel_chs | `list` or `None`. List of EEG channels. Default `None` |
**Example**
```py
import min2net.preprocessing as prep
bands = [[7.5,14],[11,13],[10,14],[9,12],[19,22],
[16,22],[26,34],[17.5,20.5],[7,30],[5,14],
[11,31],[12,18],[7,9],[15,17],[25,30],
[20,25],[5,10],[10,25],[15,30],[10,12],
[23,27],[28,32],[12,33],[11,22],[5,8],
[7.5,17.5],[23,26],[5,20],[5,25],[10,20]]
prep.OpenBMI.spectral_spatial.subject_dependent_setting(k_folds=5,
pick_smp_freq=100,
n_components=10,
bands=bands,
n_pick_bands=20,
order=5,
save_path='datasets')
```
### Subject-independent setting
Preprocess raw time-series EEG in subject-independent setting using Spectral Spatial mapping. Split data into train, validation and test sets using stratified k-fold cross-validation.
```py
spectral_spatial.subject_independent(k_folds,
pick_smp_freq,
n_components,
bands,
n_features,
order,
save_path,
num_class,
sel_chs)
```
**Arguments:**
| Arguments | Description |
|:---|:----|
|k_folds | `int` number of k-fold cross-validation. |
| pick_smp_freq | `int` pick sample frequency (downsampling EEG to `pick_smp_freq`) |
| n_components | `int` number of components of CSP|
| bands | `list` list of low cut and high cut frequency bands of filter bank e.g. `[[4, 8], [8, 12], ...]` |
| n_pick_bands | `int` number of filter bands|
| n_features | `int` number of features for mutual_info_classif |
| order | `int` number of order of butter bandpass filter |
| save_path | `str` path to save processed EEG |
| num_class | `int` number of classes. Default 2|
| sel_chs | `list` or `None`. List of EEG channels. Default `None` |
**Example**
```py
import min2net.preprocessing as prep
bands = [[7.5,14],[11,13],[10,14],[9,12],[19,22],
[16,22],[26,34],[17.5,20.5],[7,30],[5,14],
[11,31],[12,18],[7,9],[15,17],[25,30],
[20,25],[5,10],[10,25],[15,30],[10,12],
[23,27],[28,32],[12,33],[11,22],[5,8],
[7.5,17.5],[23,26],[5,20],[5,25],[10,20]]
prep.OpenBMI.spectral_spatial.subject_independent_setting(k_folds=5,
pick_smp_freq=100,
n_components=10,
bands=bands,
n_pick_bands=20,
order=5,
save_path='datasets')
```
# Shetland Sheepdog
Weight: 14 kg
Height: 33 - 41 cm
Bred for: Sheep herding
Life span: 12 - 14 years
Temperament: Affectionate, Lively, Responsive, Alert, Loyal, Reserved, Playful, Gentle, Intelligent, Active, Trainable, Strong
Origin: not found

[source](https://api.thedogapi.com/v1/breeds/221)
This project was bootstrapped with [Create React App](https://github.com/facebookincubator/create-react-app).
## Application
This application is a simple chat application. The user first sees a login screen with a choosable username and channel. There is a twist: any given username can only be used once, and usernames can never be recovered. My prediction is that in a system like this, users would eventually use (paid) third-party services to keep their username alive,
or start incrementing their username after each lost connection (guest1, guest2, guest3).
The chat itself has a few example commands.
* /join to join a channel
* /part to leave a channel
* /ignore to ignore messages from certain users
* /unignore to stop ignoring messages from certain users
* /say to say something starting with "/"
Otherwise just type anything to say it.
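The slash commands above could be dispatched with a small parser like the following sketch. The command names follow the list above, but the parser shape is illustrative; the actual implementation in the `backend` folder may differ.

```javascript
// Illustrative sketch of slash-command parsing for the chat input.
// Not the app's actual implementation.
function parseCommand(input) {
  if (!input.startsWith('/')) {
    // Plain text: send as a regular chat message.
    return { command: 'message', args: [input] };
  }
  const [name, ...args] = input.slice(1).split(' ');
  switch (name) {
    case 'join':     // /join #channel
    case 'part':     // /part #channel
    case 'ignore':   // /ignore nick
    case 'unignore': // /unignore nick
      return { command: name, args };
    case 'say':      // /say /text -> send the rest verbatim
      return { command: 'message', args: [args.join(' ')] };
    default:
      // Unknown commands fall through as plain messages.
      return { command: 'message', args: [input] };
  }
}

module.exports = { parseCommand };
```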
## Technologies
This project consists of three required parts.
1. MongoDB (https://www.mongodb.com/) database. Install it and run it on the default port on the local machine.
2. The Node.js backend found in folder `backend`. Install it with `npm install`. Run it with `npm start` in its folder.
3. The React frontend found in folder `frontend`. Install it with `npm install`. Run it with `npm start` in its folder.
---
id: manual-gesture
title: Manual gesture
sidebar_label: Manual gesture
---
import BaseEventData from './base-gesture-event-data.md';
import BaseEventConfig from './base-gesture-config.md';
import BaseEventCallbacks from './base-gesture-callbacks.md';
import BaseContinousEventCallbacks from './base-continous-gesture-callbacks.md';
A plain gesture that has no specific activation criteria nor event data set. Its state has to be controlled manually using a [state manager](./state-manager.md). It will not fail when all the pointers are lifted from the screen.
## Config
<BaseEventConfig />
## Callbacks
<BaseEventCallbacks />
<BaseContinousEventCallbacks />
## Event data
<BaseEventData />
---
title: PrePopulate HTML5 Forms using data attribute.
seo-title: PrePopulate HTML5 Forms using data attribute.
description: Populate HTML5 forms by fetching data from the backend source.
seo-description: Populate HTML5 forms by fetching data from the backend source.
feature: integrations
topics: mobile-forms
audience: developer
doc-type: article
activity: implement
version: 6.3,6.4,6.5.
uuid: 889d2cd5-fcf2-4854-928b-0c2c0db9dbc2
discoiquuid: 3aa645c9-941e-4b27-a538-cca13574b21c
---
# PrePopulate HTML5 Forms using data attribute {#prepopulate-html-forms-using-data-attribute}
Please visit the [AEM Forms samples](https://forms.enablementadobe.com/content/samples/samples.html?query=0) page for a link to a live demo of this capability.
XDP Templates rendered in HTML format using AEM Forms are called HTML5 or Mobile Forms. A common use case is to pre-populate these forms when they are being rendered.
There are 2 ways to merge data with the xdp template when it is being rendered as HTML.
**dataRef**: You can use the dataRef parameter in the URL. This parameter specifies the absolute path of the data file that is merged with the template. This parameter can be a URL to a rest service returning the data in XML format.
**data**: This parameter specifies the UTF-8 encoded data bytes that are merged with the template. If this parameter is specified, the HTML5 form ignores dataRef parameter. As a best practice, we recommend using the data approach.
The recommended approach is to set the data attribute in the request with the data that you want to pre-populate the form with.
slingRequest.setAttribute("data", content);
In this example, we are setting the data attribute with the content. The content represents the data that you want to pre-populate the form with. Typically you would fetch the "content" by making a REST call to an internal service.
To achieve this use case you need to create a custom profile. The details on creating custom profile are clearly documented in [AEM Forms documentation here](https://helpx.adobe.com/aem-forms/6/html5-forms/custom-profile.html).
Once you create your custom profile, you will then create a JSP file which fetches the data by making calls to your backend system. Once the data is fetched, you will use `slingRequest.setAttribute("data", content);` to pre-populate the form.
When the XDP is being rendered, you can also pass in some parameters to the xdp and based on the value of the parameter you can fetch the data from the backend system.
[For example this url has name parameter](http://localhost:4502/content/dam/formsanddocuments/PrepopulateMobileForm.xdp/jcr:content?name=john)
The JSP that you write will have access to the name parameter through `request.getParameter("name")`. You can then pass the value of this parameter to your backend process to fetch the required data.
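A rough sketch of what such a data-fetching JSP could contain is shown below. The `fetchDataFor` helper is a hypothetical placeholder for your own backend/REST call, not an AEM Forms API.

```jsp
<%-- Illustrative sketch of a data-fetching JSP for the custom profile. --%>
<%
    // Read the parameter passed on the render URL, e.g. ?name=john
    String name = request.getParameter("name");

    // Placeholder: call your internal service here and build the
    // prepopulation XML for this name. "fetchDataFor" is hypothetical.
    String content = fetchDataFor(name);

    // Hand the data to the HTML5 forms renderer.
    slingRequest.setAttribute("data", content);
%>
```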
To get this capability working on your system, please follow the steps mentioned below:
* [Download and Import the assets into AEM using package manager](assets/prepopulatemobileform.zip)
The package will install the following
* CustomProfile
* Sample XDP
* Sample POST endpoint that will return data to populate your form
>[!NOTE] If you want to prepopulate your form by calling a Workbench process, you may want to include the callWorkbenchProcess.jsp in your /apps/AEMFormsDemoListings/customprofiles/PrepopulateForm/html.jsp instead of the setdata.jsp
* [Point your favorite browser to this url](http://localhost:4502/content/dam/formsanddocuments/PrepopulateMobileForm.xdp/jcr:content?name=Adobe%20Systems). Form should get pre-populated with the value of the name parameter
# Creating a table

To create a table:

1. On the main page of the [!KEYREF datalens-full-name] service, click **Create chart**.
1. In the **Dataset** section, select the dataset to visualize. If you don't have a dataset, [create one](../dataset/create.md).
1. Select the **Table** chart type.
1. Drag a dimension or measure from the dataset to the **Columns** section. The field is displayed as a column.
---
title: "sri"
layout: post
date: 2017-02-02 00:00
image: /assets/images/sriram.jpg
blog: true
---
Sriramudu is a good boy,
the one Thotamma likes.
---
# required metadata
title: Troubleshooting Cloud App Security policies
description: This article describes the process for troubleshooting policy creation in Cloud App Security.
keywords:
author: shsagir
ms.author: shsagir
manager: shsagir
ms.date: 12/10/2018
ms.topic: conceptual
ms.collection: M365-security-compliance
ms.prod:
ms.service: cloud-app-security
ms.technology:
# optional metadata
#ROBOTS:
#audience:
#ms.devlang:
ms.reviewer: reutam
ms.suite: ems
#ms.tgt_pltfrm:
ms.custom: seodec18
---
# Troubleshooting Microsoft Cloud App Security policies
*Applies to: Microsoft Cloud App Security*
This article describes the process for troubleshooting policy creation in Cloud App Security.
## Troubleshooting
The following table lists the errors you might see for policies, with a description and resolution for each.
|Error|Description|Resolution|
|----|----|----|
| **The policy <*name*> was automatically disabled due to a configuration error**|If you get this error in Microsoft Cloud App Security, it means that you need to fix the configuration of the indicated policy. When you create a Microsoft Cloud App Security policy, you often make use of other objects created within Cloud App Security or the Security and Compliance Center such as IP tags or custom sensitive types. If the IP tag or custom sensitive type you used in the policy is deleted, the policy will automatically be disabled, and you'll receive this error. This message might also indicate a more general configuration error such as a filter that is too complex. |To restore the policy, edit the policy and fix every configuration error mentioned. This error usually means you need to remove any deleted objects from the policy filters and save the policy.|
## Next steps
> [!div class="nextstepaction"]
> [Control cloud apps with policies](control-cloud-apps-with-policies.md)
[!INCLUDE [Open support ticket](includes/support.md)]
## `joomla:php5.6`
```console
$ docker pull joomla@sha256:d53b14e1d629913d7b0f0204e86824f0273be604f6503dfeb74fa9bda5ad7a7b
```
- Manifest MIME: `application/vnd.docker.distribution.manifest.list.v2+json`
- Platforms:
- linux; amd64
- linux; arm variant v5
- linux; arm variant v7
- linux; arm64 variant v8
- linux; 386
- linux; ppc64le
### `joomla:php5.6` - linux; amd64
```console
$ docker pull joomla@sha256:f0ce35fd6eb508d3fbdc019b28e7725ff0561136284ec7b28cfb6e656dc9f8be
```
- Docker Version: 17.06.2-ce
- Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json`
- Total Size: **141.9 MB (141910983 bytes)**
(compressed transfer size, not on-disk size)
- Image ID: `sha256:c87dd6d0a28d0c33df5f3d9c9f649743bd0535613c36f10256975f72ec697062`
- Entrypoint: `["\/entrypoint.sh"]`
- Default Command: `["apache2-foreground"]`
```dockerfile
# Tue, 17 Jul 2018 00:28:04 GMT
ADD file:919939fa022472751b717443eea9f1d7ab5c0723f1f3a6b776d3b83d22bde818 in /
# Tue, 17 Jul 2018 00:28:04 GMT
CMD ["bash"]
# Tue, 17 Jul 2018 04:49:37 GMT
RUN set -eux; { echo 'Package: php*'; echo 'Pin: release *'; echo 'Pin-Priority: -1'; } > /etc/apt/preferences.d/no-debian-php
# Tue, 17 Jul 2018 04:49:37 GMT
ENV PHPIZE_DEPS=autoconf dpkg-dev file g++ gcc libc-dev make pkg-config re2c
# Tue, 17 Jul 2018 04:50:16 GMT
RUN apt-get update && apt-get install -y $PHPIZE_DEPS ca-certificates curl xz-utils --no-install-recommends && rm -r /var/lib/apt/lists/*
# Tue, 17 Jul 2018 04:50:16 GMT
ENV PHP_INI_DIR=/usr/local/etc/php
# Tue, 17 Jul 2018 04:50:17 GMT
RUN mkdir -p $PHP_INI_DIR/conf.d
# Tue, 17 Jul 2018 04:55:09 GMT
RUN apt-get update && apt-get install -y --no-install-recommends apache2 && rm -rf /var/lib/apt/lists/*
# Tue, 17 Jul 2018 04:55:10 GMT
ENV APACHE_CONFDIR=/etc/apache2
# Tue, 17 Jul 2018 04:55:10 GMT
ENV APACHE_ENVVARS=/etc/apache2/envvars
# Tue, 17 Jul 2018 04:55:11 GMT
RUN set -ex && sed -ri 's/^export ([^=]+)=(.*)$/: ${\1:=\2}\nexport \1/' "$APACHE_ENVVARS" && . "$APACHE_ENVVARS" && for dir in "$APACHE_LOCK_DIR" "$APACHE_RUN_DIR" "$APACHE_LOG_DIR" /var/www/html ; do rm -rvf "$dir" && mkdir -p "$dir" && chown -R "$APACHE_RUN_USER:$APACHE_RUN_GROUP" "$dir"; done
# Tue, 17 Jul 2018 04:55:11 GMT
RUN a2dismod mpm_event && a2enmod mpm_prefork
# Tue, 17 Jul 2018 04:55:12 GMT
RUN set -ex && . "$APACHE_ENVVARS" && ln -sfT /dev/stderr "$APACHE_LOG_DIR/error.log" && ln -sfT /dev/stdout "$APACHE_LOG_DIR/access.log" && ln -sfT /dev/stdout "$APACHE_LOG_DIR/other_vhosts_access.log"
# Tue, 17 Jul 2018 04:55:25 GMT
RUN { echo '<FilesMatch \.php$>'; echo '\tSetHandler application/x-httpd-php'; echo '</FilesMatch>'; echo; echo 'DirectoryIndex disabled'; echo 'DirectoryIndex index.php index.html'; echo; echo '<Directory /var/www/>'; echo '\tOptions -Indexes'; echo '\tAllowOverride All'; echo '</Directory>'; } | tee "$APACHE_CONFDIR/conf-available/docker-php.conf" && a2enconf docker-php
# Tue, 17 Jul 2018 04:55:25 GMT
ENV PHP_EXTRA_BUILD_DEPS=apache2-dev
# Tue, 17 Jul 2018 04:55:26 GMT
ENV PHP_EXTRA_CONFIGURE_ARGS=--with-apxs2 --disable-cgi
# Tue, 17 Jul 2018 04:55:26 GMT
ENV PHP_CFLAGS=-fstack-protector-strong -fpic -fpie -O2
# Tue, 17 Jul 2018 04:55:26 GMT
ENV PHP_CPPFLAGS=-fstack-protector-strong -fpic -fpie -O2
# Tue, 17 Jul 2018 04:55:26 GMT
ENV PHP_LDFLAGS=-Wl,-O1 -Wl,--hash-style=both -pie
# Tue, 17 Jul 2018 06:26:33 GMT
ENV GPG_KEYS=0BD78B5F97500D450838F95DFE857D9A90D90EC1 6E4F6AB321FDC07F2C332E3AC2BF0BC433CFC8B3
# Sat, 21 Jul 2018 10:08:35 GMT
ENV PHP_VERSION=5.6.37
# Sat, 21 Jul 2018 10:08:35 GMT
ENV PHP_URL=https://secure.php.net/get/php-5.6.37.tar.xz/from/this/mirror PHP_ASC_URL=https://secure.php.net/get/php-5.6.37.tar.xz.asc/from/this/mirror
# Sat, 21 Jul 2018 10:08:35 GMT
ENV PHP_SHA256=5000d82610f9134aaedef28854ec3591f68dedf26a17b8935727dac2843bd256 PHP_MD5=
# Sat, 21 Jul 2018 10:08:49 GMT
RUN set -xe; fetchDeps=' wget '; if ! command -v gpg > /dev/null; then fetchDeps="$fetchDeps dirmngr gnupg "; fi; apt-get update; apt-get install -y --no-install-recommends $fetchDeps; rm -rf /var/lib/apt/lists/*; mkdir -p /usr/src; cd /usr/src; wget -O php.tar.xz "$PHP_URL"; if [ -n "$PHP_SHA256" ]; then echo "$PHP_SHA256 *php.tar.xz" | sha256sum -c -; fi; if [ -n "$PHP_MD5" ]; then echo "$PHP_MD5 *php.tar.xz" | md5sum -c -; fi; if [ -n "$PHP_ASC_URL" ]; then wget -O php.tar.xz.asc "$PHP_ASC_URL"; export GNUPGHOME="$(mktemp -d)"; for key in $GPG_KEYS; do gpg --keyserver ha.pool.sks-keyservers.net --recv-keys "$key"; done; gpg --batch --verify php.tar.xz.asc php.tar.xz; command -v gpgconf > /dev/null && gpgconf --kill all; rm -rf "$GNUPGHOME"; fi; apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false $fetchDeps
# Sat, 21 Jul 2018 10:08:50 GMT
COPY file:207c686e3fed4f71f8a7b245d8dcae9c9048d276a326d82b553c12a90af0c0ca in /usr/local/bin/
# Sat, 21 Jul 2018 10:11:22 GMT
RUN set -eux; savedAptMark="$(apt-mark showmanual)"; apt-get update; apt-get install -y --no-install-recommends libcurl4-openssl-dev libedit-dev libsqlite3-dev libssl1.0-dev libxml2-dev zlib1g-dev ${PHP_EXTRA_BUILD_DEPS:-} ; rm -rf /var/lib/apt/lists/*; export CFLAGS="$PHP_CFLAGS" CPPFLAGS="$PHP_CPPFLAGS" LDFLAGS="$PHP_LDFLAGS" ; docker-php-source extract; cd /usr/src/php; gnuArch="$(dpkg-architecture --query DEB_BUILD_GNU_TYPE)"; debMultiarch="$(dpkg-architecture --query DEB_BUILD_MULTIARCH)"; if [ ! -d /usr/include/curl ]; then ln -sT "/usr/include/$debMultiarch/curl" /usr/local/include/curl; fi; ./configure --build="$gnuArch" --with-config-file-path="$PHP_INI_DIR" --with-config-file-scan-dir="$PHP_INI_DIR/conf.d" --enable-option-checking=fatal --with-mhash --enable-ftp --enable-mbstring --enable-mysqlnd --with-curl --with-libedit --with-openssl --with-zlib $(test "$gnuArch" = 's390x-linux-gnu' && echo '--without-pcre-jit') --with-libdir="lib/$debMultiarch" ${PHP_EXTRA_CONFIGURE_ARGS:-} ; make -j "$(nproc)"; make install; find /usr/local/bin /usr/local/sbin -type f -executable -exec strip --strip-all '{}' + || true; make clean; cd /; docker-php-source delete; apt-mark auto '.*' > /dev/null; [ -z "$savedAptMark" ] || apt-mark manual $savedAptMark; find /usr/local -type f -executable -exec ldd '{}' ';' | awk '/=>/ { print $(NF-1) }' | sort -u | xargs -r dpkg-query --search | cut -d: -f1 | sort -u | xargs -r apt-mark manual ; apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false; php --version; pecl update-channels; rm -rf /tmp/pear ~/.pearrc
# Sat, 21 Jul 2018 10:11:37 GMT
COPY multi:c925dfb355ea16ba0238c8b6ca78d3cd7fe815932bf707b25bbf051070430157 in /usr/local/bin/
# Sat, 21 Jul 2018 10:11:37 GMT
ENTRYPOINT ["docker-php-entrypoint"]
# Sat, 21 Jul 2018 10:11:38 GMT
COPY file:24613ecbb1ce6a09f683b0753da9c26a1af07547326e8a02f6eec80ad6f2774a in /usr/local/bin/
# Sat, 21 Jul 2018 10:11:38 GMT
WORKDIR /var/www/html
# Sat, 21 Jul 2018 10:11:38 GMT
EXPOSE 80/tcp
# Sat, 21 Jul 2018 10:11:39 GMT
CMD ["apache2-foreground"]
# Sat, 21 Jul 2018 14:13:40 GMT
LABEL maintainer=Michael Babker <michael.babker@joomla.org> (@mbabker)
# Sat, 21 Jul 2018 14:13:40 GMT
ENV JOOMLA_INSTALLATION_DISABLE_LOCALHOST_CHECK=1
# Sat, 21 Jul 2018 14:13:41 GMT
RUN a2enmod rewrite
# Sat, 21 Jul 2018 14:16:50 GMT
RUN set -ex; savedAptMark="$(apt-mark showmanual)"; apt-get update; apt-get install -y --no-install-recommends libbz2-dev libjpeg-dev libldap2-dev libmcrypt-dev libmemcached-dev libpng-dev libpq-dev ; docker-php-ext-configure gd --with-png-dir=/usr --with-jpeg-dir=/usr; debMultiarch="$(dpkg-architecture --query DEB_BUILD_MULTIARCH)"; docker-php-ext-configure ldap --with-libdir="lib/$debMultiarch"; docker-php-ext-install bz2 gd ldap mcrypt mysqli pdo_mysql pdo_pgsql pgsql zip ; pecl install APCu-4.0.11; pecl install memcached-2.2.0; pecl install redis-3.1.6; docker-php-ext-enable apcu memcached redis ; apt-mark auto '.*' > /dev/null; apt-mark manual $savedAptMark; ldd "$(php -r 'echo ini_get("extension_dir");')"/*.so | awk '/=>/ { print $3 }' | sort -u | xargs -r dpkg-query -S | cut -d: -f1 | sort -u | xargs -rt apt-mark manual; apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false; rm -rf /var/lib/apt/lists/*
# Sat, 21 Jul 2018 14:16:50 GMT
VOLUME [/var/www/html]
# Thu, 02 Aug 2018 22:28:23 GMT
ENV JOOMLA_VERSION=3.8.11
# Thu, 02 Aug 2018 22:28:23 GMT
ENV JOOMLA_SHA1=d27fb06f13ec4fe74a41124e354ed639f2093100
# Thu, 02 Aug 2018 22:28:31 GMT
RUN curl -o joomla.tar.bz2 -SL https://github.com/joomla/joomla-cms/releases/download/${JOOMLA_VERSION}/Joomla_${JOOMLA_VERSION}-Stable-Full_Package.tar.bz2 && echo "$JOOMLA_SHA1 *joomla.tar.bz2" | sha1sum -c - && mkdir /usr/src/joomla && tar -xf joomla.tar.bz2 -C /usr/src/joomla && rm joomla.tar.bz2 && chown -R www-data:www-data /usr/src/joomla
# Fri, 10 Aug 2018 21:36:58 GMT
COPY file:25b57bf11549456c8a7b3fadac31b0211225c2cd85b3a380a644dcec5f8a605c in /entrypoint.sh
# Fri, 10 Aug 2018 21:36:58 GMT
COPY file:7328ebe063e26f7b7716dfd8778bb7d46b90702ea38b23b9147ba2fd837ac2c1 in /makedb.php
# Fri, 10 Aug 2018 21:36:59 GMT
ENTRYPOINT ["/entrypoint.sh"]
# Fri, 10 Aug 2018 21:36:59 GMT
CMD ["apache2-foreground"]
```
- Layers:
- `sha256:be8881be8156e4068e611fe956aba2b9593ebd953be14fb7feea6d0659aa3abe`
Last Modified: Tue, 17 Jul 2018 00:44:17 GMT
Size: 22.5 MB (22485906 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:69a25f7e493029f541fc3c7ac66fdffdd5f8c4b9b33346031523d053177bb365`
Last Modified: Tue, 17 Jul 2018 06:59:33 GMT
Size: 227.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:65632e89c5f4ef102bcd13b6e86baf954e0b902f688a46961d5ff0a36dddfebe`
Last Modified: Tue, 17 Jul 2018 07:00:01 GMT
Size: 67.4 MB (67428909 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:cd75fa32da8fd946b82c0447feac1f3c24330594492e3be74a516b18437d5306`
Last Modified: Tue, 17 Jul 2018 06:59:32 GMT
Size: 183.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:15bc7736db11de5eddd0f13bb1c28ebe5612a4fcf398c7c1077f446abbdfb935`
Last Modified: Tue, 17 Jul 2018 07:07:01 GMT
Size: 17.1 MB (17127402 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:b2c40cef4807e3464b2859ebb5e4ac179cfbc253a212ce725f3a5d27388f79fe`
Last Modified: Tue, 17 Jul 2018 07:06:54 GMT
Size: 1.2 KB (1241 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:f3507e55e5eba49288cb3c8ff469e5a772b31fe8d0b5d2dae06faff4a4d34318`
Last Modified: Tue, 17 Jul 2018 07:06:54 GMT
Size: 428.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:e6006cdfa16b487a3a92269f59ecf33b936311fb9934fd4a5b7775b46933fdfe`
Last Modified: Tue, 17 Jul 2018 07:06:53 GMT
Size: 230.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:a3ed406e3c880fbafac0ae7ab1a889a46f9ef17e86e3efd898158a3241a0518b`
Last Modified: Tue, 17 Jul 2018 07:06:52 GMT
Size: 487.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:2d263b414e847c9ff92b03ffa1f9fa58c2a668fba6c8f1a45f903459019a0249`
Last Modified: Sat, 21 Jul 2018 11:57:24 GMT
Size: 12.8 MB (12817065 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:37eb650d88b5a62f60edcbc3521fba398b0c1008741dd3d9047499086571f523`
Last Modified: Sat, 21 Jul 2018 11:57:23 GMT
Size: 498.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:738c73be6f9cbddb0e9c1c58693c09c153135aebe73cf6e9f43ebb7f18848d1e`
Last Modified: Sat, 21 Jul 2018 11:57:26 GMT
Size: 9.7 MB (9682166 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:d8348230e0057c69d6464bea7cc9341b11d16c522ab5f7478bace85be984d804`
Last Modified: Sat, 21 Jul 2018 11:57:23 GMT
Size: 2.2 KB (2192 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:b5c23346c0dd7afb5cff2f158455e81d613225f615503b17d500b6bb78eab448`
Last Modified: Sat, 21 Jul 2018 11:57:23 GMT
Size: 903.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:4156a0555e8736b36ea20869d40c68791d238401e60279f6a1b2e404574072fa`
Last Modified: Sat, 21 Jul 2018 14:58:26 GMT
Size: 312.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:642e14db36cb52601c8ea3537577585f954b4f2fc3d64c5fb09c4cd11d1a3cd0`
Last Modified: Sat, 21 Jul 2018 14:58:27 GMT
Size: 2.9 MB (2938375 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:6a3cc79d293c488f2301eee6e9d58cadd8f11cd5f5de8c5308253baf54694ea7`
Last Modified: Thu, 02 Aug 2018 22:37:42 GMT
Size: 9.4 MB (9422674 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:560f6eef323c75478f8f8321768c2d371aba7c98f421333ced4e69b52ec6c2cf`
Last Modified: Fri, 10 Aug 2018 21:41:04 GMT
Size: 1.2 KB (1171 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:5b021b2bc80efeb5be9e5993030821ce11902934c610d68ee9bdf37340b83a35`
Last Modified: Fri, 10 Aug 2018 21:41:05 GMT
Size: 614.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
### `joomla:php5.6` - linux; arm variant v5
```console
$ docker pull joomla@sha256:1103a0a38382d6c71a599199052021181bf95481c3150ac44b6f814d849cfc64
```
- Docker Version: 17.06.2-ce
- Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json`
- Total Size: **129.6 MB (129607857 bytes)**
(compressed transfer size, not on-disk size)
- Image ID: `sha256:267633235a0d0b4efc594e8294d71ab80b7afe9308974c495f317c15a294fdfd`
- Entrypoint: `["\/entrypoint.sh"]`
- Default Command: `["apache2-foreground"]`
```dockerfile
# Tue, 17 Jul 2018 08:56:27 GMT
ADD file:60830ba735048c6cbecbc75b83364ad442e1e5ee691ef74dad4eb07f720f8919 in /
# Tue, 17 Jul 2018 08:56:29 GMT
CMD ["bash"]
# Tue, 17 Jul 2018 12:11:37 GMT
RUN set -eux; { echo 'Package: php*'; echo 'Pin: release *'; echo 'Pin-Priority: -1'; } > /etc/apt/preferences.d/no-debian-php
# Tue, 17 Jul 2018 12:11:37 GMT
ENV PHPIZE_DEPS=autoconf dpkg-dev file g++ gcc libc-dev make pkg-config re2c
# Tue, 17 Jul 2018 12:12:07 GMT
RUN apt-get update && apt-get install -y $PHPIZE_DEPS ca-certificates curl xz-utils --no-install-recommends && rm -r /var/lib/apt/lists/*
# Tue, 17 Jul 2018 12:12:08 GMT
ENV PHP_INI_DIR=/usr/local/etc/php
# Tue, 17 Jul 2018 12:12:09 GMT
RUN mkdir -p $PHP_INI_DIR/conf.d
# Tue, 17 Jul 2018 12:16:32 GMT
RUN apt-get update && apt-get install -y --no-install-recommends apache2 && rm -rf /var/lib/apt/lists/*
# Tue, 17 Jul 2018 12:16:32 GMT
ENV APACHE_CONFDIR=/etc/apache2
# Tue, 17 Jul 2018 12:16:33 GMT
ENV APACHE_ENVVARS=/etc/apache2/envvars
# Tue, 17 Jul 2018 12:16:33 GMT
RUN set -ex && sed -ri 's/^export ([^=]+)=(.*)$/: ${\1:=\2}\nexport \1/' "$APACHE_ENVVARS" && . "$APACHE_ENVVARS" && for dir in "$APACHE_LOCK_DIR" "$APACHE_RUN_DIR" "$APACHE_LOG_DIR" /var/www/html ; do rm -rvf "$dir" && mkdir -p "$dir" && chown -R "$APACHE_RUN_USER:$APACHE_RUN_GROUP" "$dir"; done
# Tue, 17 Jul 2018 12:16:34 GMT
RUN a2dismod mpm_event && a2enmod mpm_prefork
# Tue, 17 Jul 2018 12:16:35 GMT
RUN set -ex && . "$APACHE_ENVVARS" && ln -sfT /dev/stderr "$APACHE_LOG_DIR/error.log" && ln -sfT /dev/stdout "$APACHE_LOG_DIR/access.log" && ln -sfT /dev/stdout "$APACHE_LOG_DIR/other_vhosts_access.log"
# Tue, 17 Jul 2018 12:16:36 GMT
RUN { echo '<FilesMatch \.php$>'; echo '\tSetHandler application/x-httpd-php'; echo '</FilesMatch>'; echo; echo 'DirectoryIndex disabled'; echo 'DirectoryIndex index.php index.html'; echo; echo '<Directory /var/www/>'; echo '\tOptions -Indexes'; echo '\tAllowOverride All'; echo '</Directory>'; } | tee "$APACHE_CONFDIR/conf-available/docker-php.conf" && a2enconf docker-php
# Tue, 17 Jul 2018 12:16:36 GMT
ENV PHP_EXTRA_BUILD_DEPS=apache2-dev
# Tue, 17 Jul 2018 12:16:37 GMT
ENV PHP_EXTRA_CONFIGURE_ARGS=--with-apxs2 --disable-cgi
# Tue, 17 Jul 2018 12:16:37 GMT
ENV PHP_CFLAGS=-fstack-protector-strong -fpic -fpie -O2
# Tue, 17 Jul 2018 12:16:37 GMT
ENV PHP_CPPFLAGS=-fstack-protector-strong -fpic -fpie -O2
# Tue, 17 Jul 2018 12:16:37 GMT
ENV PHP_LDFLAGS=-Wl,-O1 -Wl,--hash-style=both -pie
# Tue, 17 Jul 2018 13:42:28 GMT
ENV GPG_KEYS=0BD78B5F97500D450838F95DFE857D9A90D90EC1 6E4F6AB321FDC07F2C332E3AC2BF0BC433CFC8B3
# Sat, 21 Jul 2018 10:14:23 GMT
ENV PHP_VERSION=5.6.37
# Sat, 21 Jul 2018 10:14:23 GMT
ENV PHP_URL=https://secure.php.net/get/php-5.6.37.tar.xz/from/this/mirror PHP_ASC_URL=https://secure.php.net/get/php-5.6.37.tar.xz.asc/from/this/mirror
# Sat, 21 Jul 2018 10:14:23 GMT
ENV PHP_SHA256=5000d82610f9134aaedef28854ec3591f68dedf26a17b8935727dac2843bd256 PHP_MD5=
# Sat, 21 Jul 2018 10:14:38 GMT
RUN set -xe; fetchDeps=' wget '; if ! command -v gpg > /dev/null; then fetchDeps="$fetchDeps dirmngr gnupg "; fi; apt-get update; apt-get install -y --no-install-recommends $fetchDeps; rm -rf /var/lib/apt/lists/*; mkdir -p /usr/src; cd /usr/src; wget -O php.tar.xz "$PHP_URL"; if [ -n "$PHP_SHA256" ]; then echo "$PHP_SHA256 *php.tar.xz" | sha256sum -c -; fi; if [ -n "$PHP_MD5" ]; then echo "$PHP_MD5 *php.tar.xz" | md5sum -c -; fi; if [ -n "$PHP_ASC_URL" ]; then wget -O php.tar.xz.asc "$PHP_ASC_URL"; export GNUPGHOME="$(mktemp -d)"; for key in $GPG_KEYS; do gpg --keyserver ha.pool.sks-keyservers.net --recv-keys "$key"; done; gpg --batch --verify php.tar.xz.asc php.tar.xz; command -v gpgconf > /dev/null && gpgconf --kill all; rm -rf "$GNUPGHOME"; fi; apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false $fetchDeps
# Sat, 21 Jul 2018 10:14:38 GMT
COPY file:207c686e3fed4f71f8a7b245d8dcae9c9048d276a326d82b553c12a90af0c0ca in /usr/local/bin/
# Sat, 21 Jul 2018 10:16:59 GMT
RUN set -eux; savedAptMark="$(apt-mark showmanual)"; apt-get update; apt-get install -y --no-install-recommends libcurl4-openssl-dev libedit-dev libsqlite3-dev libssl1.0-dev libxml2-dev zlib1g-dev ${PHP_EXTRA_BUILD_DEPS:-} ; rm -rf /var/lib/apt/lists/*; export CFLAGS="$PHP_CFLAGS" CPPFLAGS="$PHP_CPPFLAGS" LDFLAGS="$PHP_LDFLAGS" ; docker-php-source extract; cd /usr/src/php; gnuArch="$(dpkg-architecture --query DEB_BUILD_GNU_TYPE)"; debMultiarch="$(dpkg-architecture --query DEB_BUILD_MULTIARCH)"; if [ ! -d /usr/include/curl ]; then ln -sT "/usr/include/$debMultiarch/curl" /usr/local/include/curl; fi; ./configure --build="$gnuArch" --with-config-file-path="$PHP_INI_DIR" --with-config-file-scan-dir="$PHP_INI_DIR/conf.d" --enable-option-checking=fatal --with-mhash --enable-ftp --enable-mbstring --enable-mysqlnd --with-curl --with-libedit --with-openssl --with-zlib $(test "$gnuArch" = 's390x-linux-gnu' && echo '--without-pcre-jit') --with-libdir="lib/$debMultiarch" ${PHP_EXTRA_CONFIGURE_ARGS:-} ; make -j "$(nproc)"; make install; find /usr/local/bin /usr/local/sbin -type f -executable -exec strip --strip-all '{}' + || true; make clean; cd /; docker-php-source delete; apt-mark auto '.*' > /dev/null; [ -z "$savedAptMark" ] || apt-mark manual $savedAptMark; find /usr/local -type f -executable -exec ldd '{}' ';' | awk '/=>/ { print $(NF-1) }' | sort -u | xargs -r dpkg-query --search | cut -d: -f1 | sort -u | xargs -r apt-mark manual ; apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false; php --version; pecl update-channels; rm -rf /tmp/pear ~/.pearrc
# Sat, 21 Jul 2018 10:17:00 GMT
COPY multi:c925dfb355ea16ba0238c8b6ca78d3cd7fe815932bf707b25bbf051070430157 in /usr/local/bin/
# Sat, 21 Jul 2018 10:17:01 GMT
ENTRYPOINT ["docker-php-entrypoint"]
# Sat, 21 Jul 2018 10:17:01 GMT
COPY file:24613ecbb1ce6a09f683b0753da9c26a1af07547326e8a02f6eec80ad6f2774a in /usr/local/bin/
# Sat, 21 Jul 2018 10:17:01 GMT
WORKDIR /var/www/html
# Sat, 21 Jul 2018 10:17:02 GMT
EXPOSE 80/tcp
# Sat, 21 Jul 2018 10:17:02 GMT
CMD ["apache2-foreground"]
# Sat, 21 Jul 2018 12:16:35 GMT
LABEL maintainer=Michael Babker <michael.babker@joomla.org> (@mbabker)
# Sat, 21 Jul 2018 12:16:35 GMT
ENV JOOMLA_INSTALLATION_DISABLE_LOCALHOST_CHECK=1
# Sat, 21 Jul 2018 12:16:36 GMT
RUN a2enmod rewrite
# Sat, 21 Jul 2018 12:21:32 GMT
RUN set -ex; savedAptMark="$(apt-mark showmanual)"; apt-get update; apt-get install -y --no-install-recommends libbz2-dev libjpeg-dev libldap2-dev libmcrypt-dev libmemcached-dev libpng-dev libpq-dev ; docker-php-ext-configure gd --with-png-dir=/usr --with-jpeg-dir=/usr; debMultiarch="$(dpkg-architecture --query DEB_BUILD_MULTIARCH)"; docker-php-ext-configure ldap --with-libdir="lib/$debMultiarch"; docker-php-ext-install bz2 gd ldap mcrypt mysqli pdo_mysql pdo_pgsql pgsql zip ; pecl install APCu-4.0.11; pecl install memcached-2.2.0; pecl install redis-3.1.6; docker-php-ext-enable apcu memcached redis ; apt-mark auto '.*' > /dev/null; apt-mark manual $savedAptMark; ldd "$(php -r 'echo ini_get("extension_dir");')"/*.so | awk '/=>/ { print $3 }' | sort -u | xargs -r dpkg-query -S | cut -d: -f1 | sort -u | xargs -rt apt-mark manual; apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false; rm -rf /var/lib/apt/lists/*
# Sat, 21 Jul 2018 12:21:32 GMT
VOLUME [/var/www/html]
# Fri, 03 Aug 2018 08:51:36 GMT
ENV JOOMLA_VERSION=3.8.11
# Fri, 03 Aug 2018 08:51:36 GMT
ENV JOOMLA_SHA1=d27fb06f13ec4fe74a41124e354ed639f2093100
# Fri, 03 Aug 2018 08:51:43 GMT
RUN curl -o joomla.tar.bz2 -SL https://github.com/joomla/joomla-cms/releases/download/${JOOMLA_VERSION}/Joomla_${JOOMLA_VERSION}-Stable-Full_Package.tar.bz2 && echo "$JOOMLA_SHA1 *joomla.tar.bz2" | sha1sum -c - && mkdir /usr/src/joomla && tar -xf joomla.tar.bz2 -C /usr/src/joomla && rm joomla.tar.bz2 && chown -R www-data:www-data /usr/src/joomla
# Sat, 11 Aug 2018 08:48:27 GMT
COPY file:25b57bf11549456c8a7b3fadac31b0211225c2cd85b3a380a644dcec5f8a605c in /entrypoint.sh
# Sat, 11 Aug 2018 08:48:27 GMT
COPY file:7328ebe063e26f7b7716dfd8778bb7d46b90702ea38b23b9147ba2fd837ac2c1 in /makedb.php
# Sat, 11 Aug 2018 08:48:28 GMT
ENTRYPOINT ["/entrypoint.sh"]
# Sat, 11 Aug 2018 08:48:28 GMT
CMD ["apache2-foreground"]
```
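The Joomla download step in the history above gates the build on a checksum check (`echo "$JOOMLA_SHA1 *joomla.tar.bz2" | sha1sum -c -`). A minimal, self-contained sketch of that verification pattern, using an illustrative file rather than the real tarball:

```shell
# Stand-in for joomla.tar.bz2 (illustrative only)
printf 'example payload\n' > artifact.bin

# Compute the file's SHA-1, then verify it the way the Dockerfile does:
# feeding "<hash> *<file>" to `sha1sum -c -` exits non-zero on any mismatch,
# which aborts the RUN step (and thus the image build)
sha1=$(sha1sum artifact.bin | awk '{print $1}')
echo "$sha1 *artifact.bin" | sha1sum -c -
```

The `*` before the filename selects binary mode; the same shape is used for the PHP tarball with `sha256sum -c -` earlier in the history.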
- Layers:
- `sha256:235e2c34c6b727f2b00aae7eed907f84338b4002c487e0caaa123a50334c0810`
Last Modified: Tue, 17 Jul 2018 09:09:00 GMT
Size: 21.2 MB (21162647 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:cd628726115d1b85f5fa4da7cce482866282d8d413ca606ea4ceb6d5c78e4f4b`
Last Modified: Tue, 17 Jul 2018 14:10:16 GMT
Size: 225.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:e6d762076c20a075be84e128e3058c548c89794ff387f3398c1ffff670865359`
Last Modified: Tue, 17 Jul 2018 14:10:35 GMT
Size: 57.4 MB (57447364 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:285bc4dc2b6d84e706bd65098634e75f474fe61ef97464d1ef11e6fd51f5cda7`
Last Modified: Tue, 17 Jul 2018 14:10:16 GMT
Size: 212.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:2fb9c15539d5a377f98f2a0f58ebb319c1c35ddc2e3f0441768a009d7ee5884c`
Last Modified: Tue, 17 Jul 2018 14:13:11 GMT
Size: 16.7 MB (16650925 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:ffd19cc66fa8ebbc75a62e14647f88bbb78438490bbe70feb4963881f16257e6`
Last Modified: Tue, 17 Jul 2018 14:13:07 GMT
Size: 1.3 KB (1273 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:e786b3fdcaa4442d8c2ac91b6c5e811c9901b655f35f08e397c4b976dd3d5e8f`
Last Modified: Tue, 17 Jul 2018 14:13:06 GMT
Size: 468.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:d8c514c1971732f6df9808b1078568ad98ca39d96142504057d10e976f1da1e7`
Last Modified: Tue, 17 Jul 2018 14:13:05 GMT
Size: 231.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:14444c0ed248efe86528a5979e4e06bd62680bbdc60c71aa44601f631a747e98`
Last Modified: Tue, 17 Jul 2018 14:13:05 GMT
Size: 512.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:acbc09766299f9f5542b751bd20e6b3d63c0b72bbfef2775258071e4ae5521b1`
Last Modified: Sat, 21 Jul 2018 11:07:19 GMT
Size: 12.8 MB (12814938 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:b6c07440d5e0960ef43b9aee8c273a794f53c9bfaf92cf7002ab805570fd9dd0`
Last Modified: Sat, 21 Jul 2018 11:07:18 GMT
Size: 500.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:ca42994a4ec83ee712316a0c0d7540a02ef5e1a02e1de7aa933d997a5ef61d5f`
Last Modified: Sat, 21 Jul 2018 11:07:21 GMT
Size: 9.3 MB (9273030 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:70838bed379b435f73785065e2c4bd16f76ee8ce0a69af84a56a8cf86558be21`
Last Modified: Sat, 21 Jul 2018 11:07:18 GMT
Size: 2.2 KB (2193 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:353fa6a62ac6574dd72b73a8af6e806f90dad9c0e5dc32126a0264e3492b3f44`
Last Modified: Sat, 21 Jul 2018 11:07:18 GMT
Size: 903.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:161ace3cbb3e78136de7b061c5c2d02e6dcc3f14840ff345e825d9bf8a8730fd`
Last Modified: Sat, 21 Jul 2018 12:51:31 GMT
Size: 318.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:0daf619ae9be58f99546f20d608759585cafc9cef4f410c4da9d1bfdf00d44b4`
Last Modified: Sat, 21 Jul 2018 12:51:32 GMT
Size: 2.8 MB (2827558 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:83396d4dd8a7028a527370448ea7dfc4db14a2247272a03037e3e50b67569b17`
Last Modified: Fri, 03 Aug 2018 08:53:58 GMT
Size: 9.4 MB (9422779 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:38d7abff565c7929c913b4cea6da53c445e82ee61398d56bb44df05a9431b1c6`
Last Modified: Sat, 11 Aug 2018 08:49:38 GMT
Size: 1.2 KB (1169 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:75d13badf3e8e42638cca61f0cb1125208243f69c5b9b710a0cc4e51477dfeb3`
Last Modified: Sat, 11 Aug 2018 08:49:37 GMT
Size: 612.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
### `joomla:php5.6` - linux; arm variant v7
```console
$ docker pull joomla@sha256:2b2f8e7dcaaab07c28eb06217201547d9b7ca0650f675e72188a1aac5b1ca709
```
- Docker Version: 17.06.2-ce
- Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json`
- Total Size: **122.8 MB (122769730 bytes)**
(compressed transfer size, not on-disk size)
- Image ID: `sha256:04c2b6f468ea02daf0ef20e0b91005f4ba952c16eb614d1013157097e4680a08`
- Entrypoint: `["\/entrypoint.sh"]`
- Default Command: `["apache2-foreground"]`
```dockerfile
# Tue, 17 Jul 2018 12:06:02 GMT
ADD file:00cfe29a37b88b6eacba9ac7c46483798b55e0aaaa9a4a3cbbd097606fd23268 in /
# Tue, 17 Jul 2018 12:06:03 GMT
CMD ["bash"]
# Tue, 17 Jul 2018 15:15:13 GMT
RUN set -eux; { echo 'Package: php*'; echo 'Pin: release *'; echo 'Pin-Priority: -1'; } > /etc/apt/preferences.d/no-debian-php
# Tue, 17 Jul 2018 15:15:13 GMT
ENV PHPIZE_DEPS=autoconf dpkg-dev file g++ gcc libc-dev make pkg-config re2c
# Tue, 17 Jul 2018 15:15:41 GMT
RUN apt-get update && apt-get install -y $PHPIZE_DEPS ca-certificates curl xz-utils --no-install-recommends && rm -r /var/lib/apt/lists/*
# Tue, 17 Jul 2018 15:15:42 GMT
ENV PHP_INI_DIR=/usr/local/etc/php
# Tue, 17 Jul 2018 15:15:43 GMT
RUN mkdir -p $PHP_INI_DIR/conf.d
# Tue, 17 Jul 2018 15:20:08 GMT
RUN apt-get update && apt-get install -y --no-install-recommends apache2 && rm -rf /var/lib/apt/lists/*
# Tue, 17 Jul 2018 15:20:08 GMT
ENV APACHE_CONFDIR=/etc/apache2
# Tue, 17 Jul 2018 15:20:09 GMT
ENV APACHE_ENVVARS=/etc/apache2/envvars
# Tue, 17 Jul 2018 15:20:11 GMT
RUN set -ex && sed -ri 's/^export ([^=]+)=(.*)$/: ${\1:=\2}\nexport \1/' "$APACHE_ENVVARS" && . "$APACHE_ENVVARS" && for dir in "$APACHE_LOCK_DIR" "$APACHE_RUN_DIR" "$APACHE_LOG_DIR" /var/www/html ; do rm -rvf "$dir" && mkdir -p "$dir" && chown -R "$APACHE_RUN_USER:$APACHE_RUN_GROUP" "$dir"; done
# Tue, 17 Jul 2018 15:20:13 GMT
RUN a2dismod mpm_event && a2enmod mpm_prefork
# Tue, 17 Jul 2018 15:20:15 GMT
RUN set -ex && . "$APACHE_ENVVARS" && ln -sfT /dev/stderr "$APACHE_LOG_DIR/error.log" && ln -sfT /dev/stdout "$APACHE_LOG_DIR/access.log" && ln -sfT /dev/stdout "$APACHE_LOG_DIR/other_vhosts_access.log"
# Tue, 17 Jul 2018 15:20:18 GMT
RUN { echo '<FilesMatch \.php$>'; echo '\tSetHandler application/x-httpd-php'; echo '</FilesMatch>'; echo; echo 'DirectoryIndex disabled'; echo 'DirectoryIndex index.php index.html'; echo; echo '<Directory /var/www/>'; echo '\tOptions -Indexes'; echo '\tAllowOverride All'; echo '</Directory>'; } | tee "$APACHE_CONFDIR/conf-available/docker-php.conf" && a2enconf docker-php
# Tue, 17 Jul 2018 15:20:18 GMT
ENV PHP_EXTRA_BUILD_DEPS=apache2-dev
# Tue, 17 Jul 2018 15:20:19 GMT
ENV PHP_EXTRA_CONFIGURE_ARGS=--with-apxs2 --disable-cgi
# Tue, 17 Jul 2018 15:20:19 GMT
ENV PHP_CFLAGS=-fstack-protector-strong -fpic -fpie -O2
# Tue, 17 Jul 2018 15:20:19 GMT
ENV PHP_CPPFLAGS=-fstack-protector-strong -fpic -fpie -O2
# Tue, 17 Jul 2018 15:20:20 GMT
ENV PHP_LDFLAGS=-Wl,-O1 -Wl,--hash-style=both -pie
# Tue, 17 Jul 2018 16:44:41 GMT
ENV GPG_KEYS=0BD78B5F97500D450838F95DFE857D9A90D90EC1 6E4F6AB321FDC07F2C332E3AC2BF0BC433CFC8B3
# Sat, 21 Jul 2018 13:36:33 GMT
ENV PHP_VERSION=5.6.37
# Sat, 21 Jul 2018 13:36:34 GMT
ENV PHP_URL=https://secure.php.net/get/php-5.6.37.tar.xz/from/this/mirror PHP_ASC_URL=https://secure.php.net/get/php-5.6.37.tar.xz.asc/from/this/mirror
# Sat, 21 Jul 2018 13:36:35 GMT
ENV PHP_SHA256=5000d82610f9134aaedef28854ec3591f68dedf26a17b8935727dac2843bd256 PHP_MD5=
# Sat, 21 Jul 2018 13:37:05 GMT
RUN set -xe; fetchDeps=' wget '; if ! command -v gpg > /dev/null; then fetchDeps="$fetchDeps dirmngr gnupg "; fi; apt-get update; apt-get install -y --no-install-recommends $fetchDeps; rm -rf /var/lib/apt/lists/*; mkdir -p /usr/src; cd /usr/src; wget -O php.tar.xz "$PHP_URL"; if [ -n "$PHP_SHA256" ]; then echo "$PHP_SHA256 *php.tar.xz" | sha256sum -c -; fi; if [ -n "$PHP_MD5" ]; then echo "$PHP_MD5 *php.tar.xz" | md5sum -c -; fi; if [ -n "$PHP_ASC_URL" ]; then wget -O php.tar.xz.asc "$PHP_ASC_URL"; export GNUPGHOME="$(mktemp -d)"; for key in $GPG_KEYS; do gpg --keyserver ha.pool.sks-keyservers.net --recv-keys "$key"; done; gpg --batch --verify php.tar.xz.asc php.tar.xz; command -v gpgconf > /dev/null && gpgconf --kill all; rm -rf "$GNUPGHOME"; fi; apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false $fetchDeps
# Sat, 21 Jul 2018 13:37:11 GMT
COPY file:207c686e3fed4f71f8a7b245d8dcae9c9048d276a326d82b553c12a90af0c0ca in /usr/local/bin/
# Sat, 21 Jul 2018 13:40:29 GMT
RUN set -eux; savedAptMark="$(apt-mark showmanual)"; apt-get update; apt-get install -y --no-install-recommends libcurl4-openssl-dev libedit-dev libsqlite3-dev libssl1.0-dev libxml2-dev zlib1g-dev ${PHP_EXTRA_BUILD_DEPS:-} ; rm -rf /var/lib/apt/lists/*; export CFLAGS="$PHP_CFLAGS" CPPFLAGS="$PHP_CPPFLAGS" LDFLAGS="$PHP_LDFLAGS" ; docker-php-source extract; cd /usr/src/php; gnuArch="$(dpkg-architecture --query DEB_BUILD_GNU_TYPE)"; debMultiarch="$(dpkg-architecture --query DEB_BUILD_MULTIARCH)"; if [ ! -d /usr/include/curl ]; then ln -sT "/usr/include/$debMultiarch/curl" /usr/local/include/curl; fi; ./configure --build="$gnuArch" --with-config-file-path="$PHP_INI_DIR" --with-config-file-scan-dir="$PHP_INI_DIR/conf.d" --enable-option-checking=fatal --with-mhash --enable-ftp --enable-mbstring --enable-mysqlnd --with-curl --with-libedit --with-openssl --with-zlib $(test "$gnuArch" = 's390x-linux-gnu' && echo '--without-pcre-jit') --with-libdir="lib/$debMultiarch" ${PHP_EXTRA_CONFIGURE_ARGS:-} ; make -j "$(nproc)"; make install; find /usr/local/bin /usr/local/sbin -type f -executable -exec strip --strip-all '{}' + || true; make clean; cd /; docker-php-source delete; apt-mark auto '.*' > /dev/null; [ -z "$savedAptMark" ] || apt-mark manual $savedAptMark; find /usr/local -type f -executable -exec ldd '{}' ';' | awk '/=>/ { print $(NF-1) }' | sort -u | xargs -r dpkg-query --search | cut -d: -f1 | sort -u | xargs -r apt-mark manual ; apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false; php --version; pecl update-channels; rm -rf /tmp/pear ~/.pearrc
# Sat, 21 Jul 2018 13:40:38 GMT
COPY multi:c925dfb355ea16ba0238c8b6ca78d3cd7fe815932bf707b25bbf051070430157 in /usr/local/bin/
# Sat, 21 Jul 2018 13:40:38 GMT
ENTRYPOINT ["docker-php-entrypoint"]
# Sat, 21 Jul 2018 13:40:39 GMT
COPY file:24613ecbb1ce6a09f683b0753da9c26a1af07547326e8a02f6eec80ad6f2774a in /usr/local/bin/
# Sat, 21 Jul 2018 13:40:40 GMT
WORKDIR /var/www/html
# Sat, 21 Jul 2018 13:40:40 GMT
EXPOSE 80/tcp
# Sat, 21 Jul 2018 13:40:41 GMT
CMD ["apache2-foreground"]
# Sat, 21 Jul 2018 15:05:41 GMT
LABEL maintainer=Michael Babker <michael.babker@joomla.org> (@mbabker)
# Sat, 21 Jul 2018 15:05:41 GMT
ENV JOOMLA_INSTALLATION_DISABLE_LOCALHOST_CHECK=1
# Sat, 21 Jul 2018 15:05:43 GMT
RUN a2enmod rewrite
# Sat, 21 Jul 2018 15:08:52 GMT
RUN set -ex; savedAptMark="$(apt-mark showmanual)"; apt-get update; apt-get install -y --no-install-recommends libbz2-dev libjpeg-dev libldap2-dev libmcrypt-dev libmemcached-dev libpng-dev libpq-dev ; docker-php-ext-configure gd --with-png-dir=/usr --with-jpeg-dir=/usr; debMultiarch="$(dpkg-architecture --query DEB_BUILD_MULTIARCH)"; docker-php-ext-configure ldap --with-libdir="lib/$debMultiarch"; docker-php-ext-install bz2 gd ldap mcrypt mysqli pdo_mysql pdo_pgsql pgsql zip ; pecl install APCu-4.0.11; pecl install memcached-2.2.0; pecl install redis-3.1.6; docker-php-ext-enable apcu memcached redis ; apt-mark auto '.*' > /dev/null; apt-mark manual $savedAptMark; ldd "$(php -r 'echo ini_get("extension_dir");')"/*.so | awk '/=>/ { print $3 }' | sort -u | xargs -r dpkg-query -S | cut -d: -f1 | sort -u | xargs -rt apt-mark manual; apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false; rm -rf /var/lib/apt/lists/*
# Sat, 21 Jul 2018 15:09:02 GMT
VOLUME [/var/www/html]
# Fri, 03 Aug 2018 12:00:57 GMT
ENV JOOMLA_VERSION=3.8.11
# Fri, 03 Aug 2018 12:00:58 GMT
ENV JOOMLA_SHA1=d27fb06f13ec4fe74a41124e354ed639f2093100
# Fri, 03 Aug 2018 12:01:05 GMT
RUN curl -o joomla.tar.bz2 -SL https://github.com/joomla/joomla-cms/releases/download/${JOOMLA_VERSION}/Joomla_${JOOMLA_VERSION}-Stable-Full_Package.tar.bz2 && echo "$JOOMLA_SHA1 *joomla.tar.bz2" | sha1sum -c - && mkdir /usr/src/joomla && tar -xf joomla.tar.bz2 -C /usr/src/joomla && rm joomla.tar.bz2 && chown -R www-data:www-data /usr/src/joomla
# Sat, 11 Aug 2018 12:04:21 GMT
COPY file:25b57bf11549456c8a7b3fadac31b0211225c2cd85b3a380a644dcec5f8a605c in /entrypoint.sh
# Sat, 11 Aug 2018 12:04:22 GMT
COPY file:7328ebe063e26f7b7716dfd8778bb7d46b90702ea38b23b9147ba2fd837ac2c1 in /makedb.php
# Sat, 11 Aug 2018 12:04:23 GMT
ENTRYPOINT ["/entrypoint.sh"]
# Sat, 11 Aug 2018 12:04:23 GMT
CMD ["apache2-foreground"]
```
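The `sed -ri 's/^export ([^=]+)=(.*)$/: ${\1:=\2}\nexport \1/' "$APACHE_ENVVARS"` step in the history above rewrites Apache's envvars so each hard-coded `export VAR=value` becomes a default that the container environment can override. A small stand-alone illustration with a hypothetical one-line sample file:

```shell
# Miniature envvars file with one hard-coded export (hypothetical sample)
printf 'export APACHE_RUN_USER=www-data\n' > envvars.sample

# Rewrite "export VAR=value" into:
#   : ${VAR:=value}   -> assigns the default only if VAR is unset
#   export VAR        -> then exports whatever VAR now holds
sed -ri 's/^export ([^=]+)=(.*)$/: ${\1:=\2}\nexport \1/' envvars.sample

# With APACHE_RUN_USER preset, sourcing the file no longer clobbers it
APACHE_RUN_USER=custom
. ./envvars.sample
echo "$APACHE_RUN_USER"
```

This is why `docker run -e APACHE_RUN_USER=...` can override values that the stock Debian envvars file would otherwise force.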
- Layers:
- `sha256:e07de503944f9c1ea958f38d01af058a6e01c94d6df8bf8af06ed73fcf57793e`
Last Modified: Tue, 17 Jul 2018 12:18:34 GMT
Size: 19.3 MB (19270183 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:67d2a0c1d131c19625fceb8248dc96a847893d5f98f01da360c6b39e27bc3ca0`
Last Modified: Tue, 17 Jul 2018 17:14:09 GMT
Size: 228.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:271d58948a13adbffe09af81abcb95bcfc955a7c954392052af0f5e92bfdfa31`
Last Modified: Tue, 17 Jul 2018 17:14:26 GMT
Size: 53.6 MB (53562473 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:34f3a80a747ed1051b60ee82a50901d727f7b6bf1adef7ee158199fefc295d04`
Last Modified: Tue, 17 Jul 2018 17:14:08 GMT
Size: 212.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:1c221881787fa76132ab07db82c818dee8cef992561e89d583f67261102744ff`
Last Modified: Tue, 17 Jul 2018 17:17:51 GMT
Size: 16.2 MB (16162679 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:2df8f21ec4a6711b1155f4f6029563c4576c4a4dde14b2aef40ec68c8fcd8a86`
Last Modified: Tue, 17 Jul 2018 17:17:47 GMT
Size: 1.3 KB (1278 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:c93a133ff30152cb97261b3996742c9a229159ff53646d11a6462fe08f0e4cfb`
Last Modified: Tue, 17 Jul 2018 17:17:46 GMT
Size: 467.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:b16498caf1d0af169cd46ef7f8c5c70bc4cc48f1f7d97bb769c137452fb1f652`
Last Modified: Tue, 17 Jul 2018 17:17:46 GMT
Size: 230.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:805fc306963caf25cb37f4fb1705eb38e801b44dba304afcfaa12caf5ab2bc3d`
Last Modified: Tue, 17 Jul 2018 17:17:45 GMT
Size: 509.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:bf071fd39e0173872953a7cfced38cebe193d427ad8dd735282b6ad2d278f048`
Last Modified: Sat, 21 Jul 2018 14:32:33 GMT
Size: 12.8 MB (12814888 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:be1241f53993658674839f224a0944e92cb478f1168ce726b6af7214063ee320`
Last Modified: Sat, 21 Jul 2018 14:32:31 GMT
Size: 500.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:83022c8f3bb352d52f81029b09cce0f6d461739dc1c4c1721752ab00a0763384`
Last Modified: Sat, 21 Jul 2018 14:32:34 GMT
Size: 8.8 MB (8795351 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:c29505358a31d0bd55582d22c7d1f120bade653408a4df595b3f8318762d68be`
Last Modified: Sat, 21 Jul 2018 14:32:32 GMT
Size: 2.2 KB (2188 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:73ad21fcae38580afbdb872cb709ae6aadc3ceba9e9cd1691b2b9b332a00c1fd`
Last Modified: Sat, 21 Jul 2018 14:32:31 GMT
Size: 901.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:cd324e7c9c53895850e4165193c57a1d745c66e8cdfcbfe57775f579d1923cd8`
Last Modified: Sat, 21 Jul 2018 15:38:47 GMT
Size: 315.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:e6f042641dc3a26627767efe3df4648713a6c3df2830b1e3739596902302826c`
Last Modified: Sat, 21 Jul 2018 15:38:46 GMT
Size: 2.7 MB (2732773 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:9dd7f2e8ffa81eeb2103a1edc7cb6da3faac19e99f38f523561f730632e2b8ae`
Last Modified: Fri, 03 Aug 2018 12:03:37 GMT
Size: 9.4 MB (9422768 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:b9a30b493d95d11969eab9cd0f2b21bf13a295e232b05b5c5e298dd4f900c213`
Last Modified: Sat, 11 Aug 2018 12:05:56 GMT
Size: 1.2 KB (1173 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:462c6231eaa585511b1d21f9aca7833807d9948192842da418d8bd91b30f70ab`
Last Modified: Sat, 11 Aug 2018 12:05:55 GMT
Size: 614.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
### `joomla:php5.6` - linux; arm64 variant v8
```console
$ docker pull joomla@sha256:b70aeef49298d340a4a7e456e4351d8238f21b906fe512b778e86956c3d74f54
```
- Docker Version: 17.06.2-ce
- Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json`
- Total Size: **128.8 MB (128819287 bytes)**
(compressed transfer size, not on-disk size)
- Image ID: `sha256:ed77a163c36ce9964e01e22bff1c28b0f61fdc68ace3355a387e36ccbff67bb5`
- Entrypoint: `["\/entrypoint.sh"]`
- Default Command: `["apache2-foreground"]`
```dockerfile
# Tue, 17 Jul 2018 08:48:06 GMT
ADD file:b6ea996ffd5aa4dade8cb1d721c2716614c03110d98683aca206c7ab52fcb9e5 in /
# Tue, 17 Jul 2018 08:48:07 GMT
CMD ["bash"]
# Tue, 17 Jul 2018 16:03:10 GMT
RUN set -eux; { echo 'Package: php*'; echo 'Pin: release *'; echo 'Pin-Priority: -1'; } > /etc/apt/preferences.d/no-debian-php
# Tue, 17 Jul 2018 16:03:11 GMT
ENV PHPIZE_DEPS=autoconf dpkg-dev file g++ gcc libc-dev make pkg-config re2c
# Tue, 17 Jul 2018 16:04:26 GMT
RUN apt-get update && apt-get install -y $PHPIZE_DEPS ca-certificates curl xz-utils --no-install-recommends && rm -r /var/lib/apt/lists/*
# Tue, 17 Jul 2018 16:04:28 GMT
ENV PHP_INI_DIR=/usr/local/etc/php
# Tue, 17 Jul 2018 16:04:30 GMT
RUN mkdir -p $PHP_INI_DIR/conf.d
# Tue, 17 Jul 2018 16:19:04 GMT
RUN apt-get update && apt-get install -y --no-install-recommends apache2 && rm -rf /var/lib/apt/lists/*
# Tue, 17 Jul 2018 16:19:05 GMT
ENV APACHE_CONFDIR=/etc/apache2
# Tue, 17 Jul 2018 16:19:06 GMT
ENV APACHE_ENVVARS=/etc/apache2/envvars
# Tue, 17 Jul 2018 16:19:09 GMT
RUN set -ex && sed -ri 's/^export ([^=]+)=(.*)$/: ${\1:=\2}\nexport \1/' "$APACHE_ENVVARS" && . "$APACHE_ENVVARS" && for dir in "$APACHE_LOCK_DIR" "$APACHE_RUN_DIR" "$APACHE_LOG_DIR" /var/www/html ; do rm -rvf "$dir" && mkdir -p "$dir" && chown -R "$APACHE_RUN_USER:$APACHE_RUN_GROUP" "$dir"; done
# Tue, 17 Jul 2018 16:19:12 GMT
RUN a2dismod mpm_event && a2enmod mpm_prefork
# Tue, 17 Jul 2018 16:19:14 GMT
RUN set -ex && . "$APACHE_ENVVARS" && ln -sfT /dev/stderr "$APACHE_LOG_DIR/error.log" && ln -sfT /dev/stdout "$APACHE_LOG_DIR/access.log" && ln -sfT /dev/stdout "$APACHE_LOG_DIR/other_vhosts_access.log"
# Tue, 17 Jul 2018 16:19:16 GMT
RUN { echo '<FilesMatch \.php$>'; echo '\tSetHandler application/x-httpd-php'; echo '</FilesMatch>'; echo; echo 'DirectoryIndex disabled'; echo 'DirectoryIndex index.php index.html'; echo; echo '<Directory /var/www/>'; echo '\tOptions -Indexes'; echo '\tAllowOverride All'; echo '</Directory>'; } | tee "$APACHE_CONFDIR/conf-available/docker-php.conf" && a2enconf docker-php
# Tue, 17 Jul 2018 16:19:16 GMT
ENV PHP_EXTRA_BUILD_DEPS=apache2-dev
# Tue, 17 Jul 2018 16:19:17 GMT
ENV PHP_EXTRA_CONFIGURE_ARGS=--with-apxs2 --disable-cgi
# Tue, 17 Jul 2018 16:19:18 GMT
ENV PHP_CFLAGS=-fstack-protector-strong -fpic -fpie -O2
# Tue, 17 Jul 2018 16:19:19 GMT
ENV PHP_CPPFLAGS=-fstack-protector-strong -fpic -fpie -O2
# Tue, 17 Jul 2018 16:19:20 GMT
ENV PHP_LDFLAGS=-Wl,-O1 -Wl,--hash-style=both -pie
# Tue, 17 Jul 2018 18:26:08 GMT
ENV GPG_KEYS=0BD78B5F97500D450838F95DFE857D9A90D90EC1 6E4F6AB321FDC07F2C332E3AC2BF0BC433CFC8B3
# Sat, 21 Jul 2018 14:38:06 GMT
ENV PHP_VERSION=5.6.37
# Sat, 21 Jul 2018 14:38:07 GMT
ENV PHP_URL=https://secure.php.net/get/php-5.6.37.tar.xz/from/this/mirror PHP_ASC_URL=https://secure.php.net/get/php-5.6.37.tar.xz.asc/from/this/mirror
# Sat, 21 Jul 2018 14:38:08 GMT
ENV PHP_SHA256=5000d82610f9134aaedef28854ec3591f68dedf26a17b8935727dac2843bd256 PHP_MD5=
# Sat, 21 Jul 2018 14:39:00 GMT
RUN set -xe; fetchDeps=' wget '; if ! command -v gpg > /dev/null; then fetchDeps="$fetchDeps dirmngr gnupg "; fi; apt-get update; apt-get install -y --no-install-recommends $fetchDeps; rm -rf /var/lib/apt/lists/*; mkdir -p /usr/src; cd /usr/src; wget -O php.tar.xz "$PHP_URL"; if [ -n "$PHP_SHA256" ]; then echo "$PHP_SHA256 *php.tar.xz" | sha256sum -c -; fi; if [ -n "$PHP_MD5" ]; then echo "$PHP_MD5 *php.tar.xz" | md5sum -c -; fi; if [ -n "$PHP_ASC_URL" ]; then wget -O php.tar.xz.asc "$PHP_ASC_URL"; export GNUPGHOME="$(mktemp -d)"; for key in $GPG_KEYS; do gpg --keyserver ha.pool.sks-keyservers.net --recv-keys "$key"; done; gpg --batch --verify php.tar.xz.asc php.tar.xz; command -v gpgconf > /dev/null && gpgconf --kill all; rm -rf "$GNUPGHOME"; fi; apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false $fetchDeps
# Sat, 21 Jul 2018 14:39:01 GMT
COPY file:207c686e3fed4f71f8a7b245d8dcae9c9048d276a326d82b553c12a90af0c0ca in /usr/local/bin/
# Sat, 21 Jul 2018 14:45:28 GMT
RUN set -eux; savedAptMark="$(apt-mark showmanual)"; apt-get update; apt-get install -y --no-install-recommends libcurl4-openssl-dev libedit-dev libsqlite3-dev libssl1.0-dev libxml2-dev zlib1g-dev ${PHP_EXTRA_BUILD_DEPS:-} ; rm -rf /var/lib/apt/lists/*; export CFLAGS="$PHP_CFLAGS" CPPFLAGS="$PHP_CPPFLAGS" LDFLAGS="$PHP_LDFLAGS" ; docker-php-source extract; cd /usr/src/php; gnuArch="$(dpkg-architecture --query DEB_BUILD_GNU_TYPE)"; debMultiarch="$(dpkg-architecture --query DEB_BUILD_MULTIARCH)"; if [ ! -d /usr/include/curl ]; then ln -sT "/usr/include/$debMultiarch/curl" /usr/local/include/curl; fi; ./configure --build="$gnuArch" --with-config-file-path="$PHP_INI_DIR" --with-config-file-scan-dir="$PHP_INI_DIR/conf.d" --enable-option-checking=fatal --with-mhash --enable-ftp --enable-mbstring --enable-mysqlnd --with-curl --with-libedit --with-openssl --with-zlib $(test "$gnuArch" = 's390x-linux-gnu' && echo '--without-pcre-jit') --with-libdir="lib/$debMultiarch" ${PHP_EXTRA_CONFIGURE_ARGS:-} ; make -j "$(nproc)"; make install; find /usr/local/bin /usr/local/sbin -type f -executable -exec strip --strip-all '{}' + || true; make clean; cd /; docker-php-source delete; apt-mark auto '.*' > /dev/null; [ -z "$savedAptMark" ] || apt-mark manual $savedAptMark; find /usr/local -type f -executable -exec ldd '{}' ';' | awk '/=>/ { print $(NF-1) }' | sort -u | xargs -r dpkg-query --search | cut -d: -f1 | sort -u | xargs -r apt-mark manual ; apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false; php --version; pecl update-channels; rm -rf /tmp/pear ~/.pearrc
# Sat, 21 Jul 2018 14:45:29 GMT
COPY multi:c925dfb355ea16ba0238c8b6ca78d3cd7fe815932bf707b25bbf051070430157 in /usr/local/bin/
# Sat, 21 Jul 2018 14:45:30 GMT
ENTRYPOINT ["docker-php-entrypoint"]
# Sat, 21 Jul 2018 14:45:31 GMT
COPY file:24613ecbb1ce6a09f683b0753da9c26a1af07547326e8a02f6eec80ad6f2774a in /usr/local/bin/
# Sat, 21 Jul 2018 14:45:32 GMT
WORKDIR /var/www/html
# Sat, 21 Jul 2018 14:45:33 GMT
EXPOSE 80/tcp
# Sat, 21 Jul 2018 14:45:34 GMT
CMD ["apache2-foreground"]
# Sat, 21 Jul 2018 18:13:42 GMT
LABEL maintainer=Michael Babker <michael.babker@joomla.org> (@mbabker)
# Sat, 21 Jul 2018 18:13:43 GMT
ENV JOOMLA_INSTALLATION_DISABLE_LOCALHOST_CHECK=1
# Sat, 21 Jul 2018 18:13:44 GMT
RUN a2enmod rewrite
# Sat, 21 Jul 2018 18:18:21 GMT
RUN set -ex; savedAptMark="$(apt-mark showmanual)"; apt-get update; apt-get install -y --no-install-recommends libbz2-dev libjpeg-dev libldap2-dev libmcrypt-dev libmemcached-dev libpng-dev libpq-dev ; docker-php-ext-configure gd --with-png-dir=/usr --with-jpeg-dir=/usr; debMultiarch="$(dpkg-architecture --query DEB_BUILD_MULTIARCH)"; docker-php-ext-configure ldap --with-libdir="lib/$debMultiarch"; docker-php-ext-install bz2 gd ldap mcrypt mysqli pdo_mysql pdo_pgsql pgsql zip ; pecl install APCu-4.0.11; pecl install memcached-2.2.0; pecl install redis-3.1.6; docker-php-ext-enable apcu memcached redis ; apt-mark auto '.*' > /dev/null; apt-mark manual $savedAptMark; ldd "$(php -r 'echo ini_get("extension_dir");')"/*.so | awk '/=>/ { print $3 }' | sort -u | xargs -r dpkg-query -S | cut -d: -f1 | sort -u | xargs -rt apt-mark manual; apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false; rm -rf /var/lib/apt/lists/*
# Sat, 21 Jul 2018 18:18:24 GMT
VOLUME [/var/www/html]
# Fri, 03 Aug 2018 08:58:49 GMT
ENV JOOMLA_VERSION=3.8.11
# Fri, 03 Aug 2018 08:58:50 GMT
ENV JOOMLA_SHA1=d27fb06f13ec4fe74a41124e354ed639f2093100
# Fri, 03 Aug 2018 08:59:05 GMT
RUN curl -o joomla.tar.bz2 -SL https://github.com/joomla/joomla-cms/releases/download/${JOOMLA_VERSION}/Joomla_${JOOMLA_VERSION}-Stable-Full_Package.tar.bz2 && echo "$JOOMLA_SHA1 *joomla.tar.bz2" | sha1sum -c - && mkdir /usr/src/joomla && tar -xf joomla.tar.bz2 -C /usr/src/joomla && rm joomla.tar.bz2 && chown -R www-data:www-data /usr/src/joomla
# Sat, 11 Aug 2018 08:52:47 GMT
COPY file:25b57bf11549456c8a7b3fadac31b0211225c2cd85b3a380a644dcec5f8a605c in /entrypoint.sh
# Sat, 11 Aug 2018 08:52:48 GMT
COPY file:7328ebe063e26f7b7716dfd8778bb7d46b90702ea38b23b9147ba2fd837ac2c1 in /makedb.php
# Sat, 11 Aug 2018 08:52:48 GMT
ENTRYPOINT ["/entrypoint.sh"]
# Sat, 11 Aug 2018 08:52:49 GMT
CMD ["apache2-foreground"]
```
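Several RUN steps above trim build dependencies but keep runtime libraries by parsing `ldd` output with `awk '/=>/ { print $(NF-1) }'` and feeding the paths to `dpkg-query`/`apt-mark manual`. A minimal sketch of just the parsing stage, run on canned `ldd`-style text instead of a real binary (the sample lines are hypothetical):

```shell
# Sample ldd-style output; in the image this comes from `ldd <extension>.so`
cat > ldd.sample <<'EOF'
	linux-vdso.so.1 (0x00007ffd3d3d3000)
	libssl.so.1.0.2 => /usr/lib/x86_64-linux-gnu/libssl.so.1.0.2 (0x00007f11)
	libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f12)
EOF

# '/=>/' keeps only lines with a resolved library path; $(NF-1) is the field
# just before the trailing load address, i.e. the library's full path
awk '/=>/ { print $(NF-1) }' ldd.sample | sort -u
```

In the real RUN steps those paths are then mapped back to owning packages (`dpkg-query --search ... | cut -d: -f1`) and marked manual so `apt-get purge --auto-remove` cannot strip them.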
- Layers:
- `sha256:74a932489409d8d15db14c8a4a811fb46c7386bb06ea678ff27084d5657eeaaf`
Last Modified: Tue, 17 Jul 2018 08:57:35 GMT
Size: 20.3 MB (20331647 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:873bd0f2b0da24bb481d8491665955d3176f41a8c3262572051e6fbfc2075c14`
Last Modified: Tue, 17 Jul 2018 18:53:34 GMT
Size: 228.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:89e8001003dbe417b7882e436548f10e9fc4bfbfc45d9c22416657a3ab0ce20e`
Last Modified: Tue, 17 Jul 2018 18:53:57 GMT
Size: 57.6 MB (57595585 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:e276d47e823964f5d049fdc2f13115ee09bb8044a6355809621253a1c52d2adb`
Last Modified: Tue, 17 Jul 2018 18:53:34 GMT
Size: 184.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:2a1a190657e4474f8e52fed5ea4fc0a3252219e1c93c140782014917cbcb4d13`
Last Modified: Tue, 17 Jul 2018 18:57:55 GMT
Size: 16.7 MB (16709705 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:459567a3ab2fa017d0e8fb3192fb91ae0250877292699041921d538ebf421f18`
Last Modified: Tue, 17 Jul 2018 18:57:45 GMT
Size: 1.3 KB (1251 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:5a39800acd3e1366edf89216dc2c8b72c8aaf5cbada14cba024b533caf0d77c0`
Last Modified: Tue, 17 Jul 2018 18:57:44 GMT
Size: 437.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:c6bb43a3302b186c43a1b99a9281cd132a04b94c56b07207d4d2710c3d83339f`
Last Modified: Tue, 17 Jul 2018 18:57:43 GMT
Size: 231.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:be3a541b3e01e7e20a1dd4652487490845b524c5d486c63e812ee8b7307141b6`
Last Modified: Tue, 17 Jul 2018 18:57:44 GMT
Size: 490.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:2d72283e24b615f7dff890266016a4337238c5cafa5d75a0371e57b9331996d8`
Last Modified: Sat, 21 Jul 2018 16:29:48 GMT
Size: 12.8 MB (12815300 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:de10be73cd1f1c8785fc7ec365f10287201cbc297f2fa7b54a84bf38ecf7e6a7`
Last Modified: Sat, 21 Jul 2018 16:29:46 GMT
Size: 500.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:7faf755455316ac919d5e3a386bfd78678421241285acd270585fea0cd82168d`
Last Modified: Sat, 21 Jul 2018 16:29:50 GMT
Size: 9.1 MB (9099120 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:5f1d0d9f9682fc838b789a40aff4efad5491e1d17b49981ff7888ebc2abe24ad`
Last Modified: Sat, 21 Jul 2018 16:29:46 GMT
Size: 2.2 KB (2191 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:ea35552dd7b31c4ced2bfa9923eb2323d1cedbddc37a3b73b90c10dfe3f256f5`
Last Modified: Sat, 21 Jul 2018 16:29:46 GMT
Size: 902.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:7f0e6856a17f452d1f3aef10374573cc47dbaa70b8434f0dbf46fa72167c104e`
Last Modified: Sat, 21 Jul 2018 19:13:36 GMT
Size: 312.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:2aa85f6c2923cad4cdda970898d7b526b6efbc245580ef598a0f4c3757c83949`
Last Modified: Sat, 21 Jul 2018 19:13:37 GMT
Size: 2.8 MB (2836583 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:81f639c76bb0e7f9d2838f34241703e7c13b5e909e8146fc05166aaf13206b31`
Last Modified: Fri, 03 Aug 2018 09:08:14 GMT
Size: 9.4 MB (9422835 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:79efdcd245f0d9dfbd78c71a443c439878207ff3d63765e6fbaac7fefb8ddb55`
Last Modified: Sat, 11 Aug 2018 08:58:14 GMT
Size: 1.2 KB (1171 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:3578fa3a7a058d88d456b50ae5d36b0d288d84205562e1e4667688e8064ecc91`
Last Modified: Sat, 11 Aug 2018 08:58:14 GMT
Size: 615.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
### `joomla:php5.6` - linux; 386
```console
$ docker pull joomla@sha256:0a7dc81ff90062a30078b6934e6f5aab06c087de3ccd90f36b2ef722f94b9291
```
- Docker Version: 17.06.2-ce
- Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json`
- Total Size: **147.4 MB (147426831 bytes)**
(compressed transfer size, not on-disk size)
- Image ID: `sha256:8f4d6c55fcdd734682b1e1273c9536f2db450aa25dbc6529ab12ff4f12def682`
- Entrypoint: `["\/entrypoint.sh"]`
- Default Command: `["apache2-foreground"]`
```dockerfile
# Tue, 17 Jul 2018 10:50:00 GMT
ADD file:14cbcb91de201f648f46b04170dcae29163968a641f94d6ad7c3d77fc707a890 in /
# Tue, 17 Jul 2018 10:50:03 GMT
CMD ["bash"]
# Tue, 17 Jul 2018 15:47:26 GMT
RUN set -eux; { echo 'Package: php*'; echo 'Pin: release *'; echo 'Pin-Priority: -1'; } > /etc/apt/preferences.d/no-debian-php
# Tue, 17 Jul 2018 15:47:26 GMT
ENV PHPIZE_DEPS=autoconf dpkg-dev file g++ gcc libc-dev make pkg-config re2c
# Tue, 17 Jul 2018 15:48:13 GMT
RUN apt-get update && apt-get install -y $PHPIZE_DEPS ca-certificates curl xz-utils --no-install-recommends && rm -r /var/lib/apt/lists/*
# Tue, 17 Jul 2018 15:48:16 GMT
ENV PHP_INI_DIR=/usr/local/etc/php
# Tue, 17 Jul 2018 15:48:17 GMT
RUN mkdir -p $PHP_INI_DIR/conf.d
# Tue, 17 Jul 2018 15:55:25 GMT
RUN apt-get update && apt-get install -y --no-install-recommends apache2 && rm -rf /var/lib/apt/lists/*
# Tue, 17 Jul 2018 15:55:26 GMT
ENV APACHE_CONFDIR=/etc/apache2
# Tue, 17 Jul 2018 15:55:26 GMT
ENV APACHE_ENVVARS=/etc/apache2/envvars
# Tue, 17 Jul 2018 15:55:27 GMT
RUN set -ex && sed -ri 's/^export ([^=]+)=(.*)$/: ${\1:=\2}\nexport \1/' "$APACHE_ENVVARS" && . "$APACHE_ENVVARS" && for dir in "$APACHE_LOCK_DIR" "$APACHE_RUN_DIR" "$APACHE_LOG_DIR" /var/www/html ; do rm -rvf "$dir" && mkdir -p "$dir" && chown -R "$APACHE_RUN_USER:$APACHE_RUN_GROUP" "$dir"; done
# Tue, 17 Jul 2018 15:55:28 GMT
RUN a2dismod mpm_event && a2enmod mpm_prefork
# Tue, 17 Jul 2018 15:55:29 GMT
RUN set -ex && . "$APACHE_ENVVARS" && ln -sfT /dev/stderr "$APACHE_LOG_DIR/error.log" && ln -sfT /dev/stdout "$APACHE_LOG_DIR/access.log" && ln -sfT /dev/stdout "$APACHE_LOG_DIR/other_vhosts_access.log"
# Tue, 17 Jul 2018 15:55:30 GMT
RUN { echo '<FilesMatch \.php$>'; echo '\tSetHandler application/x-httpd-php'; echo '</FilesMatch>'; echo; echo 'DirectoryIndex disabled'; echo 'DirectoryIndex index.php index.html'; echo; echo '<Directory /var/www/>'; echo '\tOptions -Indexes'; echo '\tAllowOverride All'; echo '</Directory>'; } | tee "$APACHE_CONFDIR/conf-available/docker-php.conf" && a2enconf docker-php
# Tue, 17 Jul 2018 15:55:30 GMT
ENV PHP_EXTRA_BUILD_DEPS=apache2-dev
# Tue, 17 Jul 2018 15:55:30 GMT
ENV PHP_EXTRA_CONFIGURE_ARGS=--with-apxs2 --disable-cgi
# Tue, 17 Jul 2018 15:55:31 GMT
ENV PHP_CFLAGS=-fstack-protector-strong -fpic -fpie -O2
# Tue, 17 Jul 2018 15:55:31 GMT
ENV PHP_CPPFLAGS=-fstack-protector-strong -fpic -fpie -O2
# Tue, 17 Jul 2018 15:55:31 GMT
ENV PHP_LDFLAGS=-Wl,-O1 -Wl,--hash-style=both -pie
# Tue, 17 Jul 2018 17:44:36 GMT
ENV GPG_KEYS=0BD78B5F97500D450838F95DFE857D9A90D90EC1 6E4F6AB321FDC07F2C332E3AC2BF0BC433CFC8B3
# Sat, 21 Jul 2018 14:19:07 GMT
ENV PHP_VERSION=5.6.37
# Sat, 21 Jul 2018 14:19:08 GMT
ENV PHP_URL=https://secure.php.net/get/php-5.6.37.tar.xz/from/this/mirror PHP_ASC_URL=https://secure.php.net/get/php-5.6.37.tar.xz.asc/from/this/mirror
# Sat, 21 Jul 2018 14:19:08 GMT
ENV PHP_SHA256=5000d82610f9134aaedef28854ec3591f68dedf26a17b8935727dac2843bd256 PHP_MD5=
# Sat, 21 Jul 2018 14:19:25 GMT
RUN set -xe; fetchDeps=' wget '; if ! command -v gpg > /dev/null; then fetchDeps="$fetchDeps dirmngr gnupg "; fi; apt-get update; apt-get install -y --no-install-recommends $fetchDeps; rm -rf /var/lib/apt/lists/*; mkdir -p /usr/src; cd /usr/src; wget -O php.tar.xz "$PHP_URL"; if [ -n "$PHP_SHA256" ]; then echo "$PHP_SHA256 *php.tar.xz" | sha256sum -c -; fi; if [ -n "$PHP_MD5" ]; then echo "$PHP_MD5 *php.tar.xz" | md5sum -c -; fi; if [ -n "$PHP_ASC_URL" ]; then wget -O php.tar.xz.asc "$PHP_ASC_URL"; export GNUPGHOME="$(mktemp -d)"; for key in $GPG_KEYS; do gpg --keyserver ha.pool.sks-keyservers.net --recv-keys "$key"; done; gpg --batch --verify php.tar.xz.asc php.tar.xz; command -v gpgconf > /dev/null && gpgconf --kill all; rm -rf "$GNUPGHOME"; fi; apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false $fetchDeps
# Sat, 21 Jul 2018 14:19:26 GMT
COPY file:207c686e3fed4f71f8a7b245d8dcae9c9048d276a326d82b553c12a90af0c0ca in /usr/local/bin/
# Sat, 21 Jul 2018 14:22:21 GMT
RUN set -eux; savedAptMark="$(apt-mark showmanual)"; apt-get update; apt-get install -y --no-install-recommends libcurl4-openssl-dev libedit-dev libsqlite3-dev libssl1.0-dev libxml2-dev zlib1g-dev ${PHP_EXTRA_BUILD_DEPS:-} ; rm -rf /var/lib/apt/lists/*; export CFLAGS="$PHP_CFLAGS" CPPFLAGS="$PHP_CPPFLAGS" LDFLAGS="$PHP_LDFLAGS" ; docker-php-source extract; cd /usr/src/php; gnuArch="$(dpkg-architecture --query DEB_BUILD_GNU_TYPE)"; debMultiarch="$(dpkg-architecture --query DEB_BUILD_MULTIARCH)"; if [ ! -d /usr/include/curl ]; then ln -sT "/usr/include/$debMultiarch/curl" /usr/local/include/curl; fi; ./configure --build="$gnuArch" --with-config-file-path="$PHP_INI_DIR" --with-config-file-scan-dir="$PHP_INI_DIR/conf.d" --enable-option-checking=fatal --with-mhash --enable-ftp --enable-mbstring --enable-mysqlnd --with-curl --with-libedit --with-openssl --with-zlib $(test "$gnuArch" = 's390x-linux-gnu' && echo '--without-pcre-jit') --with-libdir="lib/$debMultiarch" ${PHP_EXTRA_CONFIGURE_ARGS:-} ; make -j "$(nproc)"; make install; find /usr/local/bin /usr/local/sbin -type f -executable -exec strip --strip-all '{}' + || true; make clean; cd /; docker-php-source delete; apt-mark auto '.*' > /dev/null; [ -z "$savedAptMark" ] || apt-mark manual $savedAptMark; find /usr/local -type f -executable -exec ldd '{}' ';' | awk '/=>/ { print $(NF-1) }' | sort -u | xargs -r dpkg-query --search | cut -d: -f1 | sort -u | xargs -r apt-mark manual ; apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false; php --version; pecl update-channels; rm -rf /tmp/pear ~/.pearrc
# Sat, 21 Jul 2018 14:22:22 GMT
COPY multi:c925dfb355ea16ba0238c8b6ca78d3cd7fe815932bf707b25bbf051070430157 in /usr/local/bin/
# Sat, 21 Jul 2018 14:22:22 GMT
ENTRYPOINT ["docker-php-entrypoint"]
# Sat, 21 Jul 2018 14:22:22 GMT
COPY file:24613ecbb1ce6a09f683b0753da9c26a1af07547326e8a02f6eec80ad6f2774a in /usr/local/bin/
# Sat, 21 Jul 2018 14:22:23 GMT
WORKDIR /var/www/html
# Sat, 21 Jul 2018 14:22:23 GMT
EXPOSE 80/tcp
# Sat, 21 Jul 2018 14:22:24 GMT
CMD ["apache2-foreground"]
# Sat, 21 Jul 2018 16:43:03 GMT
LABEL maintainer=Michael Babker <michael.babker@joomla.org> (@mbabker)
# Sat, 21 Jul 2018 16:43:03 GMT
ENV JOOMLA_INSTALLATION_DISABLE_LOCALHOST_CHECK=1
# Sat, 21 Jul 2018 16:43:04 GMT
RUN a2enmod rewrite
# Sat, 21 Jul 2018 16:46:47 GMT
RUN set -ex; savedAptMark="$(apt-mark showmanual)"; apt-get update; apt-get install -y --no-install-recommends libbz2-dev libjpeg-dev libldap2-dev libmcrypt-dev libmemcached-dev libpng-dev libpq-dev ; docker-php-ext-configure gd --with-png-dir=/usr --with-jpeg-dir=/usr; debMultiarch="$(dpkg-architecture --query DEB_BUILD_MULTIARCH)"; docker-php-ext-configure ldap --with-libdir="lib/$debMultiarch"; docker-php-ext-install bz2 gd ldap mcrypt mysqli pdo_mysql pdo_pgsql pgsql zip ; pecl install APCu-4.0.11; pecl install memcached-2.2.0; pecl install redis-3.1.6; docker-php-ext-enable apcu memcached redis ; apt-mark auto '.*' > /dev/null; apt-mark manual $savedAptMark; ldd "$(php -r 'echo ini_get("extension_dir");')"/*.so | awk '/=>/ { print $3 }' | sort -u | xargs -r dpkg-query -S | cut -d: -f1 | sort -u | xargs -rt apt-mark manual; apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false; rm -rf /var/lib/apt/lists/*
# Sat, 21 Jul 2018 16:46:47 GMT
VOLUME [/var/www/html]
# Fri, 03 Aug 2018 10:51:07 GMT
ENV JOOMLA_VERSION=3.8.11
# Fri, 03 Aug 2018 10:51:07 GMT
ENV JOOMLA_SHA1=d27fb06f13ec4fe74a41124e354ed639f2093100
# Fri, 03 Aug 2018 10:51:17 GMT
RUN curl -o joomla.tar.bz2 -SL https://github.com/joomla/joomla-cms/releases/download/${JOOMLA_VERSION}/Joomla_${JOOMLA_VERSION}-Stable-Full_Package.tar.bz2 && echo "$JOOMLA_SHA1 *joomla.tar.bz2" | sha1sum -c - && mkdir /usr/src/joomla && tar -xf joomla.tar.bz2 -C /usr/src/joomla && rm joomla.tar.bz2 && chown -R www-data:www-data /usr/src/joomla
# Sat, 11 Aug 2018 10:48:39 GMT
COPY file:25b57bf11549456c8a7b3fadac31b0211225c2cd85b3a380a644dcec5f8a605c in /entrypoint.sh
# Sat, 11 Aug 2018 10:48:39 GMT
COPY file:7328ebe063e26f7b7716dfd8778bb7d46b90702ea38b23b9147ba2fd837ac2c1 in /makedb.php
# Sat, 11 Aug 2018 10:48:39 GMT
ENTRYPOINT ["/entrypoint.sh"]
# Sat, 11 Aug 2018 10:48:40 GMT
CMD ["apache2-foreground"]
```
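The final `RUN` step in this history verifies the downloaded Joomla package against `$JOOMLA_SHA1` before unpacking it (`echo "$JOOMLA_SHA1 *joomla.tar.bz2" | sha1sum -c -`). Below is a minimal sketch of the same integrity check in Python; the sample byte strings are illustrative stand-ins, not the real tarball:

```python
import hashlib

# SHA-1 pinned in the Dockerfile history above (ENV JOOMLA_SHA1).
JOOMLA_SHA1 = "d27fb06f13ec4fe74a41124e354ed639f2093100"

def verify_sha1(data: bytes, expected: str) -> bool:
    """In-memory equivalent of `echo "<sha1> *file" | sha1sum -c -`."""
    return hashlib.sha1(data).hexdigest() == expected.lower()

# Known SHA-1 of the empty byte string, used here only to exercise the check.
assert verify_sha1(b"", "da39a3ee5e6b4b0d3255bfef95601890afd80709")
assert not verify_sha1(b"tampered", "da39a3ee5e6b4b0d3255bfef95601890afd80709")
```

In the image build, a mismatch makes `sha1sum -c` exit non-zero, which aborts the `RUN` step and therefore the build.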
- Layers:
- `sha256:9f3675ed6653666b64ffa6c3dc93755d10c6f906a1cab9f061cdbe09c65323f4`
Last Modified: Tue, 17 Jul 2018 11:09:22 GMT
Size: 23.1 MB (23126377 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:373dbc1260a05667efc80f8be05f86f2c8e1ffd318e78ef63c8e46f89a458c23`
Last Modified: Tue, 17 Jul 2018 18:23:10 GMT
Size: 225.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:d5043b9e8c05fba62c02a7922de92abc9897582c1d42ef5b54fadd7729ed6724`
Last Modified: Tue, 17 Jul 2018 18:23:45 GMT
Size: 71.5 MB (71484260 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:7656a960ed9b5048a41ee9f4e43ba5a1be7cbf598652ea796b7e53d1f8d18c27`
Last Modified: Tue, 17 Jul 2018 18:23:09 GMT
Size: 182.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:d9d55c28699d0ce3a61c5c0f364d0799467086dd003d195eb97ca63c366b8966`
Last Modified: Tue, 17 Jul 2018 18:28:04 GMT
Size: 17.6 MB (17562616 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:f4b133748d2cd4318f3ec6ea0f1998cd8f77a218613390cf7943002bcaef79cc`
Last Modified: Tue, 17 Jul 2018 18:27:56 GMT
Size: 1.2 KB (1241 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:325bf5bf5aad0a669c355051eb4445912c81c9c9549bbdcd4b3a02f87a99f310`
Last Modified: Tue, 17 Jul 2018 18:27:54 GMT
Size: 440.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:1d60b473c0a464669f20241289f33594fc59d89e9a5ea002c20e9b6d85803afe`
Last Modified: Tue, 17 Jul 2018 18:27:53 GMT
Size: 229.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:7f6870b5466e5e98dfb297ef40e21bbd2caf49384eee79fe8e5b62b9cd14bbcf`
Last Modified: Tue, 17 Jul 2018 18:27:52 GMT
Size: 486.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:a731c363fd5db82b89d66ca56b95fc178815704ad9a80badc7610f9ec7acc180`
Last Modified: Sat, 21 Jul 2018 16:11:08 GMT
Size: 12.8 MB (12816570 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:715860909e5c110749fcbcc1d4197603e349a9d45344ebdf715003343035f204`
Last Modified: Sat, 21 Jul 2018 16:11:06 GMT
Size: 499.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:3c604cf44673a65823c17cec0bd1cebd771d03d5f03bad99c30afb5a6f72e45f`
Last Modified: Sat, 21 Jul 2018 16:11:11 GMT
Size: 10.1 MB (10099027 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:1887c3fbcb18154fd5658a3794a77da2324f9dde36df14d2aa2218a720808142`
Last Modified: Sat, 21 Jul 2018 16:11:06 GMT
Size: 2.2 KB (2192 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:0648e6df129e4e46f32bf3b96c8e6f72c37d849de7f2becb96f3ca69141c8722`
Last Modified: Sat, 21 Jul 2018 16:11:06 GMT
Size: 902.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:ffed17ef6f3cad15a6dee3c6791a7c070f94d2338ff5ac4454dea336b98f29cc`
Last Modified: Sat, 21 Jul 2018 17:33:37 GMT
Size: 315.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:522c9b5f633dff21e0fa8aa783284d428bb1ce8860ef4c52e6f864ffadbae446`
Last Modified: Sat, 21 Jul 2018 17:33:38 GMT
Size: 2.9 MB (2907039 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:5c11b6a3bcbd22cb731bc082c77f133168e809f952d01eac327adc59cc5d71ed`
Last Modified: Fri, 03 Aug 2018 10:58:25 GMT
Size: 9.4 MB (9422443 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:711b6a36576d360d65c7e83f826a0315f9c55172dcd7eb987b3ca077282a2421`
Last Modified: Sat, 11 Aug 2018 10:52:41 GMT
Size: 1.2 KB (1173 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:bce19397540dd44d2ff335a47c5bc50fc451960bed2af8344c6215fd7b7aadb2`
Last Modified: Sat, 11 Aug 2018 10:52:41 GMT
Size: 615.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
### `joomla:php5.6` - linux; ppc64le
```console
$ docker pull joomla@sha256:a9fd86d1687e7f5ddb79dcce7a9ca153b7e49cc1b47a3933ed97c2bbdc2b74e0
```
- Docker Version: 17.06.2-ce
- Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json`
- Total Size: **136.6 MB (136622689 bytes)**
(compressed transfer size, not on-disk size)
- Image ID: `sha256:c365e1f99351a51dcc0721b9b04d23079479d37095f7ffd9f14bfe216aea1194`
- Entrypoint: `["\/entrypoint.sh"]`
- Default Command: `["apache2-foreground"]`
```dockerfile
# Tue, 17 Jul 2018 08:20:29 GMT
ADD file:d8fd3ee34d99a5bb7abafecc4f8991a3de0ad779e8fd8f3ebb33a4811ecfd5a5 in /
# Tue, 17 Jul 2018 08:20:30 GMT
CMD ["bash"]
# Tue, 17 Jul 2018 15:54:19 GMT
RUN set -eux; { echo 'Package: php*'; echo 'Pin: release *'; echo 'Pin-Priority: -1'; } > /etc/apt/preferences.d/no-debian-php
# Tue, 17 Jul 2018 15:54:21 GMT
ENV PHPIZE_DEPS=autoconf dpkg-dev file g++ gcc libc-dev make pkg-config re2c
# Tue, 17 Jul 2018 15:56:04 GMT
RUN apt-get update && apt-get install -y $PHPIZE_DEPS ca-certificates curl xz-utils --no-install-recommends && rm -r /var/lib/apt/lists/*
# Tue, 17 Jul 2018 15:56:08 GMT
ENV PHP_INI_DIR=/usr/local/etc/php
# Tue, 17 Jul 2018 15:56:15 GMT
RUN mkdir -p $PHP_INI_DIR/conf.d
# Tue, 17 Jul 2018 16:04:30 GMT
RUN apt-get update && apt-get install -y --no-install-recommends apache2 && rm -rf /var/lib/apt/lists/*
# Tue, 17 Jul 2018 16:04:31 GMT
ENV APACHE_CONFDIR=/etc/apache2
# Tue, 17 Jul 2018 16:04:32 GMT
ENV APACHE_ENVVARS=/etc/apache2/envvars
# Tue, 17 Jul 2018 16:04:34 GMT
RUN set -ex && sed -ri 's/^export ([^=]+)=(.*)$/: ${\1:=\2}\nexport \1/' "$APACHE_ENVVARS" && . "$APACHE_ENVVARS" && for dir in "$APACHE_LOCK_DIR" "$APACHE_RUN_DIR" "$APACHE_LOG_DIR" /var/www/html ; do rm -rvf "$dir" && mkdir -p "$dir" && chown -R "$APACHE_RUN_USER:$APACHE_RUN_GROUP" "$dir"; done
# Tue, 17 Jul 2018 16:04:38 GMT
RUN a2dismod mpm_event && a2enmod mpm_prefork
# Tue, 17 Jul 2018 16:04:46 GMT
RUN set -ex && . "$APACHE_ENVVARS" && ln -sfT /dev/stderr "$APACHE_LOG_DIR/error.log" && ln -sfT /dev/stdout "$APACHE_LOG_DIR/access.log" && ln -sfT /dev/stdout "$APACHE_LOG_DIR/other_vhosts_access.log"
# Tue, 17 Jul 2018 16:04:50 GMT
RUN { echo '<FilesMatch \.php$>'; echo '\tSetHandler application/x-httpd-php'; echo '</FilesMatch>'; echo; echo 'DirectoryIndex disabled'; echo 'DirectoryIndex index.php index.html'; echo; echo '<Directory /var/www/>'; echo '\tOptions -Indexes'; echo '\tAllowOverride All'; echo '</Directory>'; } | tee "$APACHE_CONFDIR/conf-available/docker-php.conf" && a2enconf docker-php
# Tue, 17 Jul 2018 16:04:50 GMT
ENV PHP_EXTRA_BUILD_DEPS=apache2-dev
# Tue, 17 Jul 2018 16:04:53 GMT
ENV PHP_EXTRA_CONFIGURE_ARGS=--with-apxs2 --disable-cgi
# Tue, 17 Jul 2018 16:04:54 GMT
ENV PHP_CFLAGS=-fstack-protector-strong -fpic -fpie -O2
# Tue, 17 Jul 2018 16:04:55 GMT
ENV PHP_CPPFLAGS=-fstack-protector-strong -fpic -fpie -O2
# Tue, 17 Jul 2018 16:04:59 GMT
ENV PHP_LDFLAGS=-Wl,-O1 -Wl,--hash-style=both -pie
# Tue, 17 Jul 2018 18:05:36 GMT
ENV GPG_KEYS=0BD78B5F97500D450838F95DFE857D9A90D90EC1 6E4F6AB321FDC07F2C332E3AC2BF0BC433CFC8B3
# Sat, 21 Jul 2018 11:09:43 GMT
ENV PHP_VERSION=5.6.37
# Sat, 21 Jul 2018 11:09:44 GMT
ENV PHP_URL=https://secure.php.net/get/php-5.6.37.tar.xz/from/this/mirror PHP_ASC_URL=https://secure.php.net/get/php-5.6.37.tar.xz.asc/from/this/mirror
# Sat, 21 Jul 2018 11:09:45 GMT
ENV PHP_SHA256=5000d82610f9134aaedef28854ec3591f68dedf26a17b8935727dac2843bd256 PHP_MD5=
# Sat, 21 Jul 2018 11:10:09 GMT
RUN set -xe; fetchDeps=' wget '; if ! command -v gpg > /dev/null; then fetchDeps="$fetchDeps dirmngr gnupg "; fi; apt-get update; apt-get install -y --no-install-recommends $fetchDeps; rm -rf /var/lib/apt/lists/*; mkdir -p /usr/src; cd /usr/src; wget -O php.tar.xz "$PHP_URL"; if [ -n "$PHP_SHA256" ]; then echo "$PHP_SHA256 *php.tar.xz" | sha256sum -c -; fi; if [ -n "$PHP_MD5" ]; then echo "$PHP_MD5 *php.tar.xz" | md5sum -c -; fi; if [ -n "$PHP_ASC_URL" ]; then wget -O php.tar.xz.asc "$PHP_ASC_URL"; export GNUPGHOME="$(mktemp -d)"; for key in $GPG_KEYS; do gpg --keyserver ha.pool.sks-keyservers.net --recv-keys "$key"; done; gpg --batch --verify php.tar.xz.asc php.tar.xz; command -v gpgconf > /dev/null && gpgconf --kill all; rm -rf "$GNUPGHOME"; fi; apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false $fetchDeps
# Sat, 21 Jul 2018 11:10:12 GMT
COPY file:207c686e3fed4f71f8a7b245d8dcae9c9048d276a326d82b553c12a90af0c0ca in /usr/local/bin/
# Sat, 21 Jul 2018 11:13:06 GMT
RUN set -eux; savedAptMark="$(apt-mark showmanual)"; apt-get update; apt-get install -y --no-install-recommends libcurl4-openssl-dev libedit-dev libsqlite3-dev libssl1.0-dev libxml2-dev zlib1g-dev ${PHP_EXTRA_BUILD_DEPS:-} ; rm -rf /var/lib/apt/lists/*; export CFLAGS="$PHP_CFLAGS" CPPFLAGS="$PHP_CPPFLAGS" LDFLAGS="$PHP_LDFLAGS" ; docker-php-source extract; cd /usr/src/php; gnuArch="$(dpkg-architecture --query DEB_BUILD_GNU_TYPE)"; debMultiarch="$(dpkg-architecture --query DEB_BUILD_MULTIARCH)"; if [ ! -d /usr/include/curl ]; then ln -sT "/usr/include/$debMultiarch/curl" /usr/local/include/curl; fi; ./configure --build="$gnuArch" --with-config-file-path="$PHP_INI_DIR" --with-config-file-scan-dir="$PHP_INI_DIR/conf.d" --enable-option-checking=fatal --with-mhash --enable-ftp --enable-mbstring --enable-mysqlnd --with-curl --with-libedit --with-openssl --with-zlib $(test "$gnuArch" = 's390x-linux-gnu' && echo '--without-pcre-jit') --with-libdir="lib/$debMultiarch" ${PHP_EXTRA_CONFIGURE_ARGS:-} ; make -j "$(nproc)"; make install; find /usr/local/bin /usr/local/sbin -type f -executable -exec strip --strip-all '{}' + || true; make clean; cd /; docker-php-source delete; apt-mark auto '.*' > /dev/null; [ -z "$savedAptMark" ] || apt-mark manual $savedAptMark; find /usr/local -type f -executable -exec ldd '{}' ';' | awk '/=>/ { print $(NF-1) }' | sort -u | xargs -r dpkg-query --search | cut -d: -f1 | sort -u | xargs -r apt-mark manual ; apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false; php --version; pecl update-channels; rm -rf /tmp/pear ~/.pearrc
# Sat, 21 Jul 2018 11:13:07 GMT
COPY multi:c925dfb355ea16ba0238c8b6ca78d3cd7fe815932bf707b25bbf051070430157 in /usr/local/bin/
# Sat, 21 Jul 2018 11:13:09 GMT
ENTRYPOINT ["docker-php-entrypoint"]
# Sat, 21 Jul 2018 11:13:10 GMT
COPY file:24613ecbb1ce6a09f683b0753da9c26a1af07547326e8a02f6eec80ad6f2774a in /usr/local/bin/
# Sat, 21 Jul 2018 11:13:11 GMT
WORKDIR /var/www/html
# Sat, 21 Jul 2018 11:13:12 GMT
EXPOSE 80/tcp
# Sat, 21 Jul 2018 11:13:13 GMT
CMD ["apache2-foreground"]
# Sat, 21 Jul 2018 13:20:44 GMT
LABEL maintainer=Michael Babker <michael.babker@joomla.org> (@mbabker)
# Sat, 21 Jul 2018 13:20:45 GMT
ENV JOOMLA_INSTALLATION_DISABLE_LOCALHOST_CHECK=1
# Sat, 21 Jul 2018 13:20:49 GMT
RUN a2enmod rewrite
# Sat, 21 Jul 2018 13:24:14 GMT
RUN set -ex; savedAptMark="$(apt-mark showmanual)"; apt-get update; apt-get install -y --no-install-recommends libbz2-dev libjpeg-dev libldap2-dev libmcrypt-dev libmemcached-dev libpng-dev libpq-dev ; docker-php-ext-configure gd --with-png-dir=/usr --with-jpeg-dir=/usr; debMultiarch="$(dpkg-architecture --query DEB_BUILD_MULTIARCH)"; docker-php-ext-configure ldap --with-libdir="lib/$debMultiarch"; docker-php-ext-install bz2 gd ldap mcrypt mysqli pdo_mysql pdo_pgsql pgsql zip ; pecl install APCu-4.0.11; pecl install memcached-2.2.0; pecl install redis-3.1.6; docker-php-ext-enable apcu memcached redis ; apt-mark auto '.*' > /dev/null; apt-mark manual $savedAptMark; ldd "$(php -r 'echo ini_get("extension_dir");')"/*.so | awk '/=>/ { print $3 }' | sort -u | xargs -r dpkg-query -S | cut -d: -f1 | sort -u | xargs -rt apt-mark manual; apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false; rm -rf /var/lib/apt/lists/*
# Sat, 21 Jul 2018 13:24:15 GMT
VOLUME [/var/www/html]
# Fri, 03 Aug 2018 08:27:24 GMT
ENV JOOMLA_VERSION=3.8.11
# Fri, 03 Aug 2018 08:27:26 GMT
ENV JOOMLA_SHA1=d27fb06f13ec4fe74a41124e354ed639f2093100
# Fri, 03 Aug 2018 08:27:35 GMT
RUN curl -o joomla.tar.bz2 -SL https://github.com/joomla/joomla-cms/releases/download/${JOOMLA_VERSION}/Joomla_${JOOMLA_VERSION}-Stable-Full_Package.tar.bz2 && echo "$JOOMLA_SHA1 *joomla.tar.bz2" | sha1sum -c - && mkdir /usr/src/joomla && tar -xf joomla.tar.bz2 -C /usr/src/joomla && rm joomla.tar.bz2 && chown -R www-data:www-data /usr/src/joomla
# Sat, 11 Aug 2018 08:52:35 GMT
COPY file:25b57bf11549456c8a7b3fadac31b0211225c2cd85b3a380a644dcec5f8a605c in /entrypoint.sh
# Sat, 11 Aug 2018 08:52:44 GMT
COPY file:7328ebe063e26f7b7716dfd8778bb7d46b90702ea38b23b9147ba2fd837ac2c1 in /makedb.php
# Sat, 11 Aug 2018 08:52:48 GMT
ENTRYPOINT ["/entrypoint.sh"]
# Sat, 11 Aug 2018 08:53:00 GMT
CMD ["apache2-foreground"]
```
- Layers:
- `sha256:6dc0c10e32a730b4a6b92aaa59148a751864a834dc8ac1b0032717f378efc701`
Last Modified: Tue, 17 Jul 2018 08:26:26 GMT
Size: 22.7 MB (22740445 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:6406e7e7e0e209551d2fc5932b81081b7ee5b53d00110a25a43cd6e5cd522c5d`
Last Modified: Tue, 17 Jul 2018 18:34:46 GMT
Size: 227.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:2e6d2e43cf207ce92c12a360170e3c178401d97fa3bf53b8caaff1a77cd9319b`
Last Modified: Tue, 17 Jul 2018 18:35:15 GMT
Size: 61.8 MB (61810031 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:1d64e6ea2df855f8cef3825a61c8350f67bc9c7c2b9bf22708698e3dcae919fc`
Last Modified: Tue, 17 Jul 2018 18:34:43 GMT
Size: 212.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:70bf10867575d96c0b481a187216561a91e276db1d5c38dca841ac4ed2900489`
Last Modified: Tue, 17 Jul 2018 18:37:32 GMT
Size: 17.3 MB (17348331 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:cf9456c6196ef547186fb17a52d895f0ba57aa6be88ca128703730f4abe36ece`
Last Modified: Tue, 17 Jul 2018 18:37:22 GMT
Size: 1.3 KB (1280 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:d39ccfba093918882764f89d1e024831191044c9f56daca67a966ad8b238945e`
Last Modified: Tue, 17 Jul 2018 18:37:21 GMT
Size: 474.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:8d5ef7a7cc6b22430131a48003ab45a89ec57af6fa78ee7e9b1ac8beec618b38`
Last Modified: Tue, 17 Jul 2018 18:37:18 GMT
Size: 230.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:e8de5878d75dfa279a26be7cebb298bd4b1c2d510030c3eff4e670a44431d78d`
Last Modified: Tue, 17 Jul 2018 18:37:17 GMT
Size: 516.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:e218171344de11846bf86c73eb8680e53e14b50c2b0e2f867d396fbfb726370e`
Last Modified: Sat, 21 Jul 2018 12:08:39 GMT
Size: 12.8 MB (12815451 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:1770ef0a39e11176ced16ba49d1086bfba236f0518125912c9fb94c48d0f825c`
Last Modified: Sat, 21 Jul 2018 12:08:38 GMT
Size: 501.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:fb5f38bae3f99325f6a5a832838e302b016115aca03c5537ba9214e5bd4ba781`
Last Modified: Sat, 21 Jul 2018 12:08:42 GMT
Size: 9.5 MB (9512347 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:0ecad1725ac333453319186504dcae960834a58e1a1df871f27dac7f42897fd9`
Last Modified: Sat, 21 Jul 2018 12:08:37 GMT
Size: 2.2 KB (2193 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:c3b58346b06e2f9cc3c8b9a5b4af3e244d66208de668574d149467a9a4fd14ec`
Last Modified: Sat, 21 Jul 2018 12:08:38 GMT
Size: 904.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:caddc8f17fad19430ece05b3df720804ad49606c45fc1c57d58cc64306eec2ee`
Last Modified: Sat, 21 Jul 2018 14:12:33 GMT
Size: 318.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:aa6dcf4fa8a762c37d86097532c993d05aed5421df65620929acb5b13d71fec4`
Last Modified: Sat, 21 Jul 2018 14:12:35 GMT
Size: 3.0 MB (2964713 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:39aa5aad363db1cd91968f5f8c2bb60e69f9ffdf27868e293dbcac22809bf6c8`
Last Modified: Fri, 03 Aug 2018 08:37:41 GMT
Size: 9.4 MB (9422731 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:174b8efc842b392a10e865214da1ebcc76062b6c5b9851eebc0079df0d7205ce`
Last Modified: Sat, 11 Aug 2018 08:59:51 GMT
Size: 1.2 KB (1171 bytes)
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
- `sha256:62b11794de16d60f7c579156bd6a6a79be9351d00d670e305932b5d69ae4ead4`
Last Modified: Sat, 11 Aug 2018 08:59:51 GMT
Size: 614.0 B
MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
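The **Total Size** line in each manifest is simply the sum of that manifest's compressed layer sizes. A quick sanity check over the two layer lists above:

```python
# Compressed layer sizes in bytes, copied from the two layer lists above.
I386_LAYERS = [
    23126377, 225, 71484260, 182, 17562616, 1241, 440, 229, 486,
    12816570, 499, 10099027, 2192, 902, 315, 2907039, 9422443, 1173, 615,
]
PPC64LE_LAYERS = [
    22740445, 227, 61810031, 212, 17348331, 1280, 474, 230, 516,
    12815451, 501, 9512347, 2193, 904, 318, 2964713, 9422731, 1171, 614,
]

# Each manifest's "Total Size" (compressed transfer size) is the layer sum.
assert sum(I386_LAYERS) == 147426831      # joomla:php5.6 - linux; 386
assert sum(PPC64LE_LAYERS) == 136622689   # joomla:php5.6 - linux; ppc64le
```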
# Data Structures

This directory contains data structures and utilities used by the rest of the code. No files
in this directory should include files from any other directory.
---
title: ICorDebugProcess5::GetTypeFields Method
ms.date: 03/30/2017
api_name:
- ICorDebugProcess5.GetTypeFields
api_location:
- mscordbi.dll
api_type:
- COM
f1_keywords:
- ICorDebugProcess5::GetTypeFields
helpviewer_keywords:
- GetTypeFields method, ICorDebugProcess5 interface [.NET Framework debugging]
- ICorDebugProcess5::GetTypeFields method [.NET Framework debugging]
ms.assetid: 6a0ad3ee-dacb-47e9-abae-4536bcc4804b
topic_type:
- apiref
author: rpetrusha
ms.author: ronpet
ms.openlocfilehash: 7c2725c62105e92996bb2d8e79e8ff504904e9c7
ms.sourcegitcommit: 5b6d778ebb269ee6684fb57ad69a8c28b06235b9
ms.translationtype: MT
ms.contentlocale: de-DE
ms.lasthandoff: 04/08/2019
ms.locfileid: "59107951"
---
# <a name="icordebugprocess5gettypefields-method"></a>ICorDebugProcess5::GetTypeFields Method

Provides information about the fields that belong to a type.

## <a name="syntax"></a>Syntax

```
HRESULT GetTypeFields(
    [in] COR_TYPEID id,
    [in] ULONG32 celt,
    [out] COR_FIELD fields[],
    [out] ULONG32 *pceltNeeded
);
```

## <a name="parameters"></a>Parameters

`id`
[in] The identifier of the type whose field information is to be retrieved.

`celt`
[in] The number of [COR_FIELD](../../../../docs/framework/unmanaged-api/debugging/cor-field-structure.md) objects whose field information is to be retrieved.

`fields`
[out] An array of [COR_FIELD](../../../../docs/framework/unmanaged-api/debugging/cor-field-structure.md) objects that provide information about the fields that belong to the type.

`pceltNeeded`
[out] A pointer to the number of [COR_FIELD](../../../../docs/framework/unmanaged-api/debugging/cor-field-structure.md) objects in `fields`.

## <a name="remarks"></a>Remarks

The `celt` parameter, which specifies the number of fields whose information the method uses to populate `fields`, should correspond to the value of the `COR_TYPE_LAYOUT::numFields` field.

## <a name="requirements"></a>Requirements

**Platforms:** See [System Requirements](../../../../docs/framework/get-started/system-requirements.md).

**Header:** CorDebug.idl, CorDebug.h

**Library:** CorGuids.lib

**.NET Framework Versions:** [!INCLUDE[net_current_v45plus](../../../../includes/net-current-v45plus-md.md)]

## <a name="see-also"></a>See also

- [ICorDebugProcess5 Interface](../../../../docs/framework/unmanaged-api/debugging/icordebugprocess5-interface.md)
- [Debugging Interfaces](../../../../docs/framework/unmanaged-api/debugging/debugging-interfaces.md)
a930c8e0ed3b366efb846180f9f22b90f16af460 | 4,130 | md | Markdown | articles/sentinel/connect-microsoft-defender-advanced-threat-protection.md | sbrienen/azure-docs.nl-nl | 57573a8d40119c389ca398ef6eb1eacadb67c4c8 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/sentinel/connect-microsoft-defender-advanced-threat-protection.md | sbrienen/azure-docs.nl-nl | 57573a8d40119c389ca398ef6eb1eacadb67c4c8 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/sentinel/connect-microsoft-defender-advanced-threat-protection.md | sbrienen/azure-docs.nl-nl | 57573a8d40119c389ca398ef6eb1eacadb67c4c8 | [
"CC-BY-4.0",
"MIT"
---
title: Connect Microsoft Defender for Endpoint data to Azure Sentinel | Microsoft Docs
description: Learn how to connect Microsoft Defender for Endpoint (formerly Microsoft Defender ATP) data to Azure Sentinel.
services: sentinel
documentationcenter: na
author: yelevin
manager: rkarlin
editor: ''
ms.service: azure-sentinel
ms.subservice: azure-sentinel
ms.devlang: na
ms.topic: conceptual
ms.tgt_pltfrm: na
ms.workload: na
ms.date: 09/16/2020
ms.author: yelevin
ms.openlocfilehash: 72b2ba0ea444fb14ef9fc1bc3ea6aea3654677df
ms.sourcegitcommit: 8e7316bd4c4991de62ea485adca30065e5b86c67
ms.translationtype: MT
ms.contentlocale: nl-NL
ms.lasthandoff: 11/17/2020
ms.locfileid: "94655524"
---
# <a name="connect-alerts-from-microsoft-defender-for-endpoint-formerly-microsoft-defender-atp"></a>Verbinding maken met waarschuwingen van micro soft Defender voor eind punt (voorheen micro soft Defender ATP)
> [!IMPORTANT]
>
> - **Micro soft Defender voor eind punt** was voorheen bekend als **micro soft Defender Advanced Threat Protection** of **MDATP**.
>
> Mogelijk ziet u de oude naam die nog steeds wordt gebruikt in het product (inclusief de gegevens connector in azure Sentinel) gedurende een bepaalde periode.
>
> - Opname van micro soft Defender voor eindpunt waarschuwingen is momenteel beschikbaar als open bare preview.
> Deze functie wordt zonder service level agreement gegeven en wordt niet aanbevolen voor productie werkbelastingen.
> Zie [Supplemental Terms of Use for Microsoft Azure Previews (Aanvullende gebruiksvoorwaarden voor Microsoft Azure-previews)](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) voor meer informatie.
Met de [micro soft Defender for Endpoint](/windows/security/threat-protection/microsoft-defender-atp/microsoft-defender-advanced-threat-protection) -connector kunt u waarschuwingen van micro soft Defender naar het eind punt streamen naar Azure Sentinel. Op die manier kunt u beveiligings gebeurtenissen in uw organisatie uitgebreid analyseren en playbooks bouwen voor een effectief en direct antwoord.
> [!NOTE]
>
> Als u de nieuwe onbewerkte gegevens logboeken wilt opnemen in de [geavanceerde jacht](/windows/security/threat-protection/microsoft-defender-atp/advanced-hunting-overview)van micro soft Defender voor het eind punt, gebruikt u de nieuwe connector voor Microsoft 365 Defender (voorheen micro soft Threat Protection, [raadpleegt](./connect-microsoft-365-defender.md)u de documentatie).
## <a name="prerequisites"></a>Vereisten
- U moet een geldige licentie voor micro soft Defender voor eind punt hebben, zoals beschreven in [micro soft Defender voor endpoint-implementatie instellen](/windows/security/threat-protection/microsoft-defender-atp/licensing).
- U moet een globale beheerder of een beveiligings beheerder zijn in de Azure Sentinel-Tenant.
## <a name="connect-to-microsoft-defender-for-endpoint"></a>Verbinding maken met micro soft Defender voor eind punt
Als micro soft Defender voor het eind punt is geïmplementeerd en uw gegevens worden opgenomen, kunnen de waarschuwingen eenvoudig worden gestreamd naar Azure Sentinel.
1. In azure Sentinel selecteert u **gegevens connectors**, selecteert u **micro soft Defender voor eind punt** (kan nog steeds *micro soft Defender Advanced Threat Protection* worden genoemd) in de galerie en selecteert u de **pagina connector openen**.
1. Klik op **Verbinding maken**.
1. Als u een query wilt uitvoeren voor micro soft Defender for Endpoint-waarschuwingen in **Logboeken**, voert u **SecurityAlert** in het query venster in en voegt u een filter toe waarbij de **provider naam** **MDATP** is.
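For example, a minimal sketch of such a query (the columns projected besides **ProviderName** are standard **SecurityAlert** fields, shown here for illustration):

```kusto
SecurityAlert
| where ProviderName == "MDATP"
| project TimeGenerated, AlertName, AlertSeverity, CompromisedEntity
| take 50
```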
## <a name="next-steps"></a>Volgende stappen
In dit document hebt u geleerd hoe u micro soft Defender voor eind punt verbindt met Azure Sentinel. Zie de volgende artikelen voor meer informatie over Azure Sentinel:
- Meer informatie over het [verkrijgen van inzicht in uw gegevens en mogelijke bedreigingen](quickstart-get-visibility.md).
- Ga aan de slag met [het detecteren van bedreigingen met Azure Sentinel](./tutorial-detect-threats-built-in.md). | 67.704918 | 401 | 0.801937 | nld_Latn | 0.995652 |
> Source: _posts/2016-03-04-Grace-Loves-Lace-Bridesmaid-Olsen-Long.md in transblinkgift/transblinkgift.github.io (MIT)

---
layout: post
date: 2016-03-04
title: "Grace Loves Lace Bridesmaid Olsen Long"
category: Grace Loves Lace Bridesmaid
tags: [Grace Loves Lace Bridesmaid]
---
### Grace Loves Lace Bridesmaid Olsen Long
Just **$307.98**
<a href="https://www.readybrides.com/en/grace-loves-lace-bridesmaid/105945-grace-loves-lace-bridesmaid-olsen-long.html"><img src="//img.readybrides.com/270712/grace-loves-lace-bridesmaid-olsen-long.jpg" alt="Grace Loves Lace Bridesmaid Olsen Long" style="width:100%;" /></a>
<!-- break --><a href="https://www.readybrides.com/en/grace-loves-lace-bridesmaid/105945-grace-loves-lace-bridesmaid-olsen-long.html"><img src="//img.readybrides.com/270717/grace-loves-lace-bridesmaid-olsen-long.jpg" alt="Grace Loves Lace Bridesmaid Olsen Long" style="width:100%;" /></a>
<a href="https://www.readybrides.com/en/grace-loves-lace-bridesmaid/105945-grace-loves-lace-bridesmaid-olsen-long.html"><img src="//img.readybrides.com/270723/grace-loves-lace-bridesmaid-olsen-long.jpg" alt="Grace Loves Lace Bridesmaid Olsen Long" style="width:100%;" /></a>
<a href="https://www.readybrides.com/en/grace-loves-lace-bridesmaid/105945-grace-loves-lace-bridesmaid-olsen-long.html"><img src="//img.readybrides.com/270726/grace-loves-lace-bridesmaid-olsen-long.jpg" alt="Grace Loves Lace Bridesmaid Olsen Long" style="width:100%;" /></a>
<a href="https://www.readybrides.com/en/grace-loves-lace-bridesmaid/105945-grace-loves-lace-bridesmaid-olsen-long.html"><img src="//img.readybrides.com/270730/grace-loves-lace-bridesmaid-olsen-long.jpg" alt="Grace Loves Lace Bridesmaid Olsen Long" style="width:100%;" /></a>
<a href="https://www.readybrides.com/en/grace-loves-lace-bridesmaid/105945-grace-loves-lace-bridesmaid-olsen-long.html"><img src="//img.readybrides.com/270734/grace-loves-lace-bridesmaid-olsen-long.jpg" alt="Grace Loves Lace Bridesmaid Olsen Long" style="width:100%;" /></a>
<a href="https://www.readybrides.com/en/grace-loves-lace-bridesmaid/105945-grace-loves-lace-bridesmaid-olsen-long.html"><img src="//img.readybrides.com/270737/grace-loves-lace-bridesmaid-olsen-long.jpg" alt="Grace Loves Lace Bridesmaid Olsen Long" style="width:100%;" /></a>
<a href="https://www.readybrides.com/en/grace-loves-lace-bridesmaid/105945-grace-loves-lace-bridesmaid-olsen-long.html"><img src="//img.readybrides.com/270739/grace-loves-lace-bridesmaid-olsen-long.jpg" alt="Grace Loves Lace Bridesmaid Olsen Long" style="width:100%;" /></a>
<a href="https://www.readybrides.com/en/grace-loves-lace-bridesmaid/105945-grace-loves-lace-bridesmaid-olsen-long.html"><img src="//img.readybrides.com/270708/grace-loves-lace-bridesmaid-olsen-long.jpg" alt="Grace Loves Lace Bridesmaid Olsen Long" style="width:100%;" /></a>
Buy it: [https://www.readybrides.com/en/grace-loves-lace-bridesmaid/105945-grace-loves-lace-bridesmaid-olsen-long.html](https://www.readybrides.com/en/grace-loves-lace-bridesmaid/105945-grace-loves-lace-bridesmaid-olsen-long.html)
| 128.130435 | 288 | 0.764506 | yue_Hant | 0.21821 |
> Source: doc/message_define.md in yuqingtong1990/ggtalk_server (BSD-3-Clause)

### User message definition
```json
{
  "sendid": 11111,
  "recvid": 22222,
  "type": 0,
  "session": 1,
  "ver": 1,
  "uuid": 0,
  "time": 1542104716,
  "data": {}
}
```
**Field descriptions**
| Field | Type | Description |
| ------- | :----- | --------------------------------------------- |
| sendid | number | User ID of the sender |
| recvid | number | ID of the recipient (a user ID or a group ID, depending on `session`) |
| session | number | Conversation type: 1 = one-on-one chat, 2 = group chat, 3 = system |
| type | number | Message type; see the message type definitions below |
| ver | number | Version of the message |
| uuid | number | UUID of the message, generated by the server that receives it |
| time | number | Unix timestamp of when the message was created |
| data | object | The concrete message payload; its contents vary with `type` |
**Message types**
| Type | Meaning | Description |
| -------- | -------- | ------------------------------------------------------------ |
| 0 | Default | No meaning |
| 1 | Text message | Text content, uniformly converted to UTF-8; emoji are supported |
| 2 | Image message | Sends a link to the image, which is then fetched from the image server |
| 3 | Audio message | Sends a voice file; includes the download URL, MD5, file extension, and audio duration |
| 4 | Video message | Sends a video file; includes the download URL, MD5, file extension, and video duration |
| 5 | File message | Sends a link to the file, which is then fetched from the file server |
| 6 | Sticker message | The sticker's index; if it is not available locally, the GIF file's download URL is used |
| 7 | Location message | Contains the latitude and longitude of the location |
| 8 | Contact-card message | Contains the user's contact-card information |
**Message body definitions**
1. ***Text message***
```json
{
  "data": {
    "content": "hello world",
    "refer": [111, 111, 111, 111]
  }
}
```
`content` is the message text; `refer` lists the IDs of the users mentioned by this message.
2. ***Image message***
```json
{
  "data": {
    "url": "http://xxx.xx/",
    "md5": "e10adc3949ba59abbe56e057f20f883e",
    "sfix": ".png",
    "size": 1113232,
    "width": 640,
    "height": 360
  }
}
```
3. ***Audio message***
```json
{
  "data": {
    "url": "http://xxx.xx/",
    "md5": "e10adc3949ba59abbe56e057f20f883e",
    "sfix": ".mp3",
    "size": 1113232,
    "dura": 50
  }
}
```
`dura` is the audio duration in seconds.
4. ***Video message***
```json
{
  "data": {
    "url": "http://xxx.xx/",
    "md5": "e10adc3949ba59abbe56e057f20f883e",
    "sfix": ".mp4",
    "size": 1113232,
    "dura": 50
  }
}
```
`dura` is the video duration in seconds. A `cover` field (a URL for the video's cover image) is planned but may be absent for now.
5. ***File message***
```json
{
  "data": {
    "url": "http://xxx.xx/",
    "md5": "e10adc3949ba59abbe56e057f20f883e",
    "sfix": ".zip",
    "size": 1113232
  }
}
```
6. ***Sticker message***
```json
{
  "data": {
    "url": "http://xxx.xx/",
    "md5": "e10adc3949ba59abbe56e057f20f883e",
    "album": "sfsssws",
    "index": 1
  }
}
```
`url` is the sticker's download link; `album` is the sticker album's ID.
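To make the envelope format concrete, here is a small illustrative sketch in Python; the helper name and constants are mine, not part of the server's code:

```python
import json
import time

# Session and type constants from the tables above.
SESSION_SINGLE, SESSION_GROUP, SESSION_SYSTEM = 1, 2, 3
TYPE_TEXT = 1

def make_text_message(sendid, recvid, content, refer=None, session=SESSION_SINGLE):
    """Build a type-1 (text) message envelope matching the schema above.

    `uuid` stays 0 because the receiving server assigns the real UUID.
    """
    return {
        "sendid": sendid,
        "recvid": recvid,
        "type": TYPE_TEXT,
        "session": session,
        "ver": 1,
        "uuid": 0,
        "time": int(time.time()),
        "data": {"content": content, "refer": refer or []},
    }

msg = make_text_message(11111, 22222, "hello world", refer=[111])
wire = json.dumps(msg)  # what would actually go over the connection
print(json.loads(wire)["data"]["content"])  # hello world
```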
| 20.646154 | 86 | 0.405365 | yue_Hant | 0.271469 |
> Source: README.md in claychinasky/SmartShape2D (MIT)
SmartShape2D for Godot 4 (alpha 2) porting attempt.
Current state :
- Parsing errors, API changes mostly corrected.
- Button class pressed signal doesn't seem to working correctly. Therefor button pressed signals are commented out.
- Since Reference class being removed in Godot 4, it is replaced by either RefCounted or Resource. It's unclear to me in which case it supposed to be replaced by. Either way because Godot 4 is more strict with static type declaration, more typecasting is needed.
- Current state of version alpha 2 parser/compiler isn't giving any errors and custom class nodes isn't showing up in node explorer.
- GUT packages are removed since they won't work on Godot 4 and Godot 4 supposed to ship with its own unit testing backend.
SmartShape2D
---


SmartShape2D + Aseprite tutorial can be found here (Thanks Picster!):
[](http://www.youtube.com/watch?v=r-pd2yuNPvA)
SmartShape2D tutorial can be found here (Thanks LucyLavend!):
[](https://www.youtube.com/watch?v=45PldDNCQhw)
# About
This plugin allows you to create nicely textured 2D polygons.
Simply place a few points, then create/assign the shape material, and you should have a good-looking polygon.
The textures used are similar to what you would use when making terrain with TileMaps/TileSets.
**This plugin is under ACTIVE DEVELOPMENT! If you find any issues, by all means let us know.
Read the section below on Contributing and post an issue if one doesn't already exist**
**If you enjoy this tool and want to support its development, [I'd appreciate a coffee ](https://www.buymeacoffee.com/SirRamESQ) :)**
<a href="https://www.buymeacoffee.com/SirRamESQ">
<img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" align="left" height="48">
</a>
# Support
- Supported and Tested on Godot 3.2
- Should work with later versions of Godot 3.x
# Demo
A Sample Godot Project can be found here:
https://github.com/SirRamEsq/SmartShape2D-DemoProject
# Documentation
- [How To Install]( ./addons/rmsmartshape/documentation/Install.md )
- [Quick Start]( ./addons/rmsmartshape/documentation/Quickstart.md )
- [Migrating from 1.x]( ./addons/rmsmartshape/documentation/Migration.md )
- [Shapes]( ./addons/rmsmartshape/documentation/Shapes.md )
- [Toolbar]( ./addons/rmsmartshape/documentation/Toolbar.md )
- [Resources]( ./addons/rmsmartshape/documentation/Resources.md )
- [Normals]( ./addons/rmsmartshape/documentation/Normals.md )
- [Controls and Hotkeys]( ./addons/rmsmartshape/documentation/Controls.md )
- [FAQ]( ./addons/rmsmartshape/documentation/FAQ.md )
- [Version History]( ./addons/rmsmartshape/documentation/VersionHistory.md )
# Contributing
## Issues
If you have any suggestions or find any bugs, feel free to add an issue.
Please include the following three bits of information in each issue posted:
- Bug / Suggestion
- Godot Version
- SmartShape2D Version
Some Guidelines for Issues:
- Attaching a sample project where the issue exists is the fastest way for us to see what's going on
- Try to be as descriptive as possible
- Pictures and screenshots will also be very helpful
Issues can be added [here](https://github.com/SirRamEsq/SmartShape2D/issues)
## Development
We have a set of tests we run against the code (courtesy of [GUT](https://github.com/bitwes/Gut)).
If making a merge request, please ensure that the tests pass. If the tests have been updated appropriately to pass, please note this in the merge request.
## Discord
We have a Discord server for the plugin. https://discord.gg/mHWDPBD3vu
Here, you can:
- Ask for help
- Showcase your project
- Speak with the developers directly
| 45.093023 | 262 | 0.7705 | eng_Latn | 0.978656 |
> Source: README.md in nikolockenvitz/ethr-did (Apache-2.0)
[](https://www.npmjs.com/package/ethr-did)
[](https://www.npmjs.com/package/ethr-did)
[](https://chat.uport.me/#/login)
[](https://twitter.com/uport_me)
# Ethr-DID Library
[DID Specification](https://w3c-ccg.github.io/did-spec/) | [ERC-1056](https://github.com/ethereum/EIPs/issues/1056) | [Getting Started](/docs/guides/index.md)
[FAQ and helpdesk support](http://bit.ly/uPort_helpdesk)
This library conforms to [ERC-1056](https://github.com/ethereum/EIPs/issues/1056) and is intended to use Ethereum addresses as fully self-managed [Decentralized Identifiers](https://w3c-ccg.github.io/did-spec/#decentralized-identifiers-dids) (DIDs), it allows you to easily create and manage keys for these identities. It also lets you sign standards compliant [JSON Web Tokens (JWT)](https://jwt.io) that can be consumed using the [DID-JWT](https://github.com/uport-project/did-jwt) library.
This library can be used to create a new ethr-did identity. It allows ethr-did identities to be represented as an object that can perform actions such as updating its did-document, signing messages, and verifying messages from other dids.
Use this if you are looking for the easiest way to start using ethr-did identities, and want high-level abstractions to access its entire range of capabilities. It encapsulates all the functionality of [ethr-did-resolver](https://github.com/decentralized-identity/ethr-did-resolver) and [ethr-did-registry](https://github.com/uport-project/ethr-did-registry).
A DID is an Identifier that allows you to lookup a DID document that can be used to authenticate you and messages created by you.
Ethr-DID provides a scalable identity method for Ethereum addresses that gives any Ethereum address the ability to collect on-chain and off-chain data. Because Ethr-DID allows any Ethereum key pair to become an identity, it is more scalable and privacy-preserving than smart contract based identity methods, like our previous [Proxy Contract](https://github.com/uport-project/uport-identity/blob/develop/docs/reference/proxy.md).
This particular DID method relies on the [Ethr-Did-Registry](https://github.com/uport-project/ethr-did-registry). The Ethr-DID-Registry is a smart contract that facilitates public key resolution for off-chain (and on-chain) authentication. It also facilitates key rotation, delegate assignment and revocation to allow 3rd party signers on a key's behalf, as well as setting and revoking off-chain attribute data. These interactions and events are used in aggregate to form a DID's DID document using the [Ethr-Did-Resolver](https://github.com/uport-project/ethr-did-resolver).
An example of a DID document resolved using the Ethr-Did-Resolver:
```
{
'@context': 'https://w3id.org/did/v1',
id: 'did:ethr:0xb9c5714089478a327f09197987f16f9e5d936e8a',
publicKey: [{
id: 'did:ethr:0xb9c5714089478a327f09197987f16f9e5d936e8a#owner',
type: 'Secp256k1VerificationKey2018',
owner: 'did:ethr:0xb9c5714089478a327f09197987f16f9e5d936e8a',
ethereumAddress: '0xb9c5714089478a327f09197987f16f9e5d936e8a'}],
authentication: [{
type: 'Secp256k1SignatureAuthentication2018',
publicKey: 'did:ethr:0xb9c5714089478a327f09197987f16f9e5d936e8a#owner'}]
}
```
On-chain refers to something that is resolved with a transaction on a blockchain, while off-chain can refer to anything from temporary payment channels to IPFS.
It supports the proposed [Decentralized Identifiers](https://w3c-ccg.github.io/did-spec/) spec from the [W3C Credentials Community Group](https://w3c-ccg.github.io).
## DID Method
A "DID method" is a specific implementation of a DID scheme that is identified by a `method name`. In this case, the method name is `ethr`, and the method identifier is an Ethereum address.
To encode a DID for an Ethereum address, simply prepend `did:ethr:`
For example:
`did:ethr:0xf3beac30c498d9e26865f34fcaa57dbb935b0d74`
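As a tiny sketch of that rule in plain JavaScript (this helper is illustrative, not part of the library's API):

```javascript
// Encode an Ethereum address as an ethr DID by prepending the scheme and method.
function toEthrDid (address) {
  if (!/^0x[0-9a-fA-F]{40}$/.test(address)) {
    throw new Error('expected a 0x-prefixed 20-byte hex Ethereum address')
  }
  return 'did:ethr:' + address
}

console.log(toEthrDid('0xf3beac30c498d9e26865f34fcaa57dbb935b0d74'))
// did:ethr:0xf3beac30c498d9e26865f34fcaa57dbb935b0d74
```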
## Configuration
```js
import EthrDID from 'ethr-did'
// Assume web3 object is configured either manually or injected using metamask
const ethrDid = new EthrDID({address: '0x...', privateKey: '...', provider})
```
| key | description | required |
|-----|------------|----------|
|`address`|Ethereum address representing Identity| yes |
|`registry`| registry address (defaults to `0xdca7ef03e98e0dc2b855be647c39abe984fcf21b`) | no |
|`provider`| web3 provider | no |
|`web3`| preconfigured web3 object | no |
|`rpcUrl`| JSON-RPC endpoint url | no |
|`signer`| [Signing function](https://github.com/uport-project/did-jwt#signer-functions)| either `signer` or `privateKey` |
|`privateKey`| Hex encoded private key | yes* |
**Note**
An instance created using only an address can only be used to encapsulate an external ethr-did (one where there is no access to the private key).
This instance will not have the ability to sign anything, but it can be used for a subset of actions:
* provide its own address (`ethrDid.address`)
* provide the full DID string (`ethrDid.did`)
* lookup its owner `await ethrDid.lookupOwner()`
* verify a JWT `await ethrDid.verifyJwt(jwt)`
| 60.931818 | 576 | 0.762589 | eng_Latn | 0.936116 |
> Source: README.md in victorguillen/Hipcamp (MIT)

# Hipcamp
## Camping Features
The `Camping Features` section includes all the features available at a camping site. I added a little bit of color to make it clearer to the user what's available and what's not. I also added the functionality to not only show all the unavailable features but also to hide them.
<img src="docs/images/features.png" width="80%" height="80%">
<img src="docs/images/all-features.png" width="80%" height="80%">
## Quick start
1. Clone this repo using `git clone --depth=1 git@github.com:victorguillen/Hipcamp.git`
2. Move to the appropriate directory: `cd Hipcamp`.<br />
3. Run `npm run setup`.<br />
4. Run `npm start`.<br />
5. Go to `http://localhost:3000`.<br />
## Project Structure
The project can be found under `app/Hipcamp` and I divided the project in 3 main folder:
```
pages: HomePage that includes the <Features /> component.
shared: constants, component/lib and images.
styles: variables, mixins, etc.
```
## React Components
### IconContainer
I first started by creating this `IconContainer` component with the idea of passing it 3 main props:
```
presence: PropTypes.any,
icon: PropTypes.string,
title: PropTypes.string,
```
These 3 props are essential for showing whether the feature is available or not, the type of feature it is, and the icon that represents it.
```
class IconContainer extends React.Component {
...
render() {
...
return (
<div className={`${iconClass} ${!presence && style.blur}`} title={title}>
<div className={markerClass}>
<img alt={title} src={icon} />
</div>
{!subfeature && <p>{title}</p>}
</div>
);
}
}
export default IconContainer;
```
After this I created an `IconWrapper` that renders the `Feature` or the list of `Subfeatures`.
These components are part of the shared components library.
## Sass
### Mixins
I started noticing repeated CSS so I decided to use mixins, here's an example:
```
:local(.greenMarker) {
@include colorMarker($green);
}
:local(.redMarker) {
@include colorMarker($red);
}
...
@mixin colorMarker($color) {
display: flex;
justify-content: center;
align-items: center;
border: solid 1px $color;
background-color: $color;
border-radius: 50%;
height: 45px;
width: 45px;
}
```
With more time I would have added more mixins to create a responsive website.
```
@mixin tablet {
@media #{$tablet}{
@content;
}
}
@mixin desktop {
@media #{$desktop}{
@content;
}
}
```
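For those two mixins to compile, the breakpoint variables would need to be defined in the variables partial. A possible sketch (the exact breakpoint values here are assumptions, not taken from the project):

```scss
// Assumed media-query breakpoint variables for the tablet/desktop mixins above.
$tablet: "(min-width: 768px) and (max-width: 1023px)";
$desktop: "(min-width: 1024px)";
```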
## React Boilerplate
I used this [React Boilerplate][react] for a quick start on building the Camping Features assignment.
[react]: https://github.com/react-boilerplate/react-boilerplate
<img src="https://raw.githubusercontent.com/react-boilerplate/react-boilerplate-brand/master/assets/banner-metal-optimized.jpg" alt="react boilerplate banner" align="center" />
## License
This project is licensed under the MIT license, Copyright (c) 2017 Maximilian
Stoiber. For more information see `LICENSE.md`.
## Side Notes
It was a bit of a personal struggle to have to use `<ul>`'s and `<li>`'s; I'm used to structuring everything with `Flexbox`, it's super easy and responsive! But that's just lack of experience and personal preference lol
One of the subfeatures is a bit off; there's a bug with an `li`'s `display: inline; float: left` used to make a horizontal list. The parent CSS class is overwriting it to `display: inline-block`, and I couldn't get that fixed in time ... ugh
| 26.844961 | 278 | 0.706035 | eng_Latn | 0.982967 |
> Source: articles/search/search-manage.md in changeworld/azure-docs.nl-nl (CC-BY-4.0, MIT)

---
title: Service administration in the portal
titleSuffix: Azure Cognitive Search
description: Manage an Azure Cognitive Search service, a hosted cloud search service on Microsoft Azure, using the Azure portal.
manager: nitinme
author: HeidiSteen
ms.author: heidist
tags: azure-portal
ms.service: cognitive-search
ms.topic: conceptual
ms.date: 11/04/2019
ms.openlocfilehash: 3abbf2c8e0734d17aabadd2ae5f61cc03889964b
ms.sourcegitcommit: 7b25c9981b52c385af77feb022825c1be6ff55bf
ms.translationtype: MT
ms.contentlocale: nl-NL
ms.lasthandoff: 03/13/2020
ms.locfileid: "79282924"
---
# <a name="service-administration-for-azure-cognitive-search-in-the-azure-portal"></a>Service beheer voor Azure Cognitive Search in het Azure Portal
> [!div class="op_single_selector"]
> * [PowerShell](search-manage-powershell.md)
> * [REST-API](https://docs.microsoft.com/rest/api/searchmanagement/)
> * [.NET SDK](https://docs.microsoft.com/dotnet/api/microsoft.azure.management.search)
> * [Portal](search-manage.md)
> * [Python](https://pypi.python.org/pypi/azure-mgmt-search/0.1.0) ->
Azure Cognitive Search is a fully managed, cloud-based search service used to build rich search experiences into custom apps. This article covers the service administration tasks you can perform in the [Azure portal](https://portal.azure.com) for a search service you have already provisioned. Service administration is lightweight by design, limited to the following tasks:
> [!div class="checklist"]
> * Manage access to the *API keys* used for read or write access to your service.
> * Adjust service capacity by changing the allocation of partitions and replicas.
> * Monitor resource usage relative to the maximum limits of your service tier.
Note that *upgrade* is not listed as an administration task. Because resources are allocated when the service is provisioned, moving to a different tier requires creating a new service. For more information, see [Create an Azure Cognitive Search service](search-create-service-portal.md).
You can monitor query volume and other metrics, and use those insights to tune your service for faster response times. For more information, see [Monitor query usage and metrics](search-monitor-usage.md) and [Performance and optimization](search-performance-optimization.md).
<a id="admin-rights"></a>
## <a name="administrator-rights"></a>Beheerders rechten
Het inrichten of buiten gebruik stellen van de service zelf kan worden uitgevoerd door een beheerder van een Azure-abonnement of een mede beheerder.
Binnen een service heeft iedereen die toegang heeft tot de service-URL en een beheer-API-sleutel, lees-/schrijftoegang tot de service. Lees-/schrijftoegang biedt de mogelijkheid om Server objecten toe te voegen, te verwijderen of te wijzigen, met inbegrip van API-sleutels, indexen, Indexeer functies, gegevens bronnen, planningen en roltoewijzingen zoals geïmplementeerd via door [RBAC gedefinieerde rollen](search-security-rbac.md).
Alle gebruikers interactie met Azure Cognitive Search valt binnen een van deze modi: lees-/schrijftoegang tot de service (beheerders rechten) of alleen-lezen toegang tot de service (query rechten). Zie [Manage the API-Keys (](search-security-api-keys.md)Engelstalig) voor meer informatie.
<a id="sys-info"></a>
## <a name="logging-and-system-information"></a>Logboek registratie-en systeem gegevens
Azure Cognitive Search maakt geen logboek bestanden voor een afzonderlijke service beschikbaar via de portal-of programmatische interfaces. Op de basis-laag en hierboven bewaakt micro soft alle Azure Cognitive Search Services voor 99,9% Beschik baarheid per service level agreements (SLA). Als de service traag is of de door Voer van een aanvraag onder de drempel waarde voor de SLA ligt, worden de beschik bare logboek bestanden door ondersteunings teams gecontroleerd en wordt het probleem opgelost.
Voor algemene informatie over uw service kunt u op de volgende manieren informatie verkrijgen:
* In de portal, op het service dashboard, via meldingen, eigenschappen en status berichten.
* Gebruik [Power shell](search-manage-powershell.md) of het [Management rest API](https://docs.microsoft.com/rest/api/searchmanagement/) om [service-eigenschappen](https://docs.microsoft.com/rest/api/searchmanagement/services)of status van het gebruik van index bronnen op te halen.
<a id="sub-5"></a>
## <a name="monitor-resource-usage"></a>Resource gebruik bewaken
In het dash board is bron bewaking beperkt tot de informatie die wordt weer gegeven in het service dashboard en enkele metrische gegevens die u kunt verkrijgen door de service te doorzoeken. In het service dashboard, in de sectie gebruik, kunt u snel bepalen of partitie bron niveaus voldoende zijn voor uw toepassing. U kunt externe resources, zoals Azure-bewaking, inrichten als u vastgelegde gebeurtenissen wilt vastleggen en persistent wilt maken. Zie [Azure Cognitive Search bewaken](search-monitor-usage.md)voor meer informatie.
Met de REST API van de zoek service kunt u via programma code een aantal op documenten en indexen ontvangen:
* [Index statistieken ophalen](https://docs.microsoft.com/rest/api/searchservice/Get-Index-Statistics)
* [Documenten tellen](https://docs.microsoft.com/rest/api/searchservice/count-documents)
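As a rough sketch of what those two calls look like from Python (the service name, index name, and API version are placeholders; an admin or query key would go in the `api-key` request header):

```python
# Build the request URLs for the two REST calls above, for a hypothetical
# service named "myservice" and an index named "hotels".
API_VERSION = "2019-05-06"  # assumed; use the api-version your service supports

def stats_url(service, index, api_version=API_VERSION):
    """URL for Get Index Statistics (document count and storage size)."""
    return (f"https://{service}.search.windows.net"
            f"/indexes/{index}/stats?api-version={api_version}")

def count_url(service, index, api_version=API_VERSION):
    """URL for Count Documents."""
    return (f"https://{service}.search.windows.net"
            f"/indexes/{index}/docs/$count?api-version={api_version}")

print(stats_url("myservice", "hotels"))
# e.g. requests.get(stats_url(...), headers={"api-key": "<admin key>"})
```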
## <a name="disaster-recovery-and-service-outages"></a>Herstel na nood gevallen en service storingen
Hoewel we uw gegevens kunnen bijvoegen, biedt Azure Cognitive Search geen directe failover van de service als er een storing optreedt op het niveau van het cluster of het Data Center. Als een cluster in het Data Center uitvalt, detecteert en werkt het operations-team de service te herstellen. U krijgt tijdens het herstellen van de service een uitval tijd, maar u kunt Service tegoed aanvragen om service niet-beschik baarheid per [Service Level Agreement (Sla)](https://azure.microsoft.com/support/legal/sla/search/v1_0/)te compenseren.
Als er een continue service vereist is in het geval van fatale storingen buiten de controle van micro soft, kunt u [een extra service](search-create-service-portal.md) in een andere regio inrichten en een geo-replicatie strategie implementeren om ervoor te zorgen dat indexen volledig redundant zijn in alle services.
Klanten die [Indexeer functies](search-indexer-overview.md) gebruiken om indexen in te vullen en te vernieuwen, kunnen herstel na nood gevallen afhandelen via geo-specifieke Indexeer functies die gebruikmaken van dezelfde gegevens bron. Twee services in verschillende regio's, elk waarop een Indexeer functie wordt uitgevoerd, kunnen dezelfde gegevens bron indexeren om geo-redundantie te garanderen. Als u gegevens bronnen indexeert die ook geo-redundant zijn, moet u er rekening mee houden dat Azure Cognitive Search-Indexeer functies alleen incrementele indexering kunnen uitvoeren (het samen voegen van updates van nieuwe, gewijzigde of verwijderde documenten) van primaire replica's. Zorg er bij een failover-gebeurtenis voor dat u de Indexeer functie opnieuw naar de nieuwe primaire replica toewijst.
Als u geen Indexeer functies gebruikt, gebruikt u uw toepassings code om objecten en gegevens parallel naar verschillende zoek services te pushen. Zie [prestaties en optimalisatie in Azure Cognitive Search](search-performance-optimization.md)voor meer informatie.
## <a name="backup-and-restore"></a>Back-ups en herstellen
Omdat Azure Cognitive Search geen primaire oplossing voor gegevens opslag is, bieden we geen formeel mechanisme voor Self-Service back-up en herstel. U kunt echter de voorbeeld code **index-Backup-Restore** in dit [Azure Cognitive Search .net-voor beeld opslag plaats](https://github.com/Azure-Samples/azure-search-dotnet-samples) gebruiken om een back-up te maken van de index definitie en moment opname naar een reeks json-bestanden en vervolgens deze bestanden te gebruiken om de index te herstellen, indien nodig. Met dit hulp programma kunt u ook indexen verplaatsen tussen service lagen.
Anders is uw toepassings code die wordt gebruikt voor het maken en vullen van een index de optie voor het terugzetten van de index. Als u een index opnieuw wilt samen stellen, verwijdert u deze (ervan uitgaande dat deze bestaat), maakt u de index opnieuw in de service en laadt u deze opnieuw door gegevens op te halen uit uw primaire gegevens opslag.
<a id="scale"></a>
## <a name="scale-up-or-down"></a>Omhoog of omlaag schalen
Elke zoek service begint met mini maal één replica en één partitie. Als u zich hebt geregistreerd voor een [laag die toegewezen resources biedt](search-limits-quotas-capacity.md), klikt u op de tegel **schaal** in het service dashboard om het resource gebruik aan te passen.
Wanneer u capaciteit toevoegt via een van de resources, gebruikt de service deze automatisch. Er is geen verdere actie vereist voor uw onderdeel, maar er is een lichte vertraging voordat de impact van de nieuwe resource wordt gerealiseerd. Het kan 15 minuten of langer duren om aanvullende resources in te richten.
![][10]
### <a name="add-replicas"></a>Replica's toevoegen
Het verhogen van query's per seconde (QPS) of het bereiken van hoge Beschik baarheid geschiedt door het toevoegen van replica's. Elke replica heeft één exemplaar van een index, dus er wordt nog één replica omgezet in een meer index die beschikbaar is voor het verwerken van aanvragen voor service query's. Er zijn mini maal drie replica's vereist voor hoge Beschik baarheid (Zie [capaciteits planning](search-capacity-planning.md) voor meer informatie).
Een zoek service met meer replica's kan Load Balancing query-aanvragen over een groter aantal indexen. Op basis van een niveau van het query volume is de query doorvoer sneller wanneer er meer exemplaren van de index beschikbaar zijn voor de service van de aanvraag. Als u een query latentie ondervindt, kunt u een positieve invloed op de prestaties verwachten zodra de extra replica's online zijn.
Hoewel de query doorvoer tijdens het toevoegen van replica's wordt bereikt, is dit niet precies twee of drie maal wanneer u replica's aan uw service toevoegt. Alle zoek toepassingen zijn onderhevig aan externe factoren die de query prestaties kunnen impinge. Complexe query's en netwerk latentie zijn twee factoren die bijdragen aan variaties in de reactie tijden van query's.
### <a name="add-partitions"></a>Partities toevoegen
Most service applications have a built-in need for more replicas rather than partitions. For cases where a larger document count is required, you can add partitions if you signed up for the Standard service. The Basic tier does not provide additional partitions.
In the Standard tier, partitions are added in quantities that divide the 12-shard layout evenly (specifically 1, 2, 3, 4, 6, or 12). This is an artifact of sharding: an index is created in 12 shards, which can all be stored on one partition or divided equally across 2, 3, 4, 6, or 12 partitions (one shard per partition).
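As a quick illustration of the shard math (not part of any Azure SDK), the valid partition counts are exactly the numbers that divide the 12-shard layout evenly:

```java
import java.util.ArrayList;
import java.util.List;

public class PartitionCounts {
    // An index is created in 12 shards; a partition count is valid only
    // when the shards can be spread evenly, i.e. it divides the shard count.
    static List<Integer> validPartitionCounts(int shardCount) {
        List<Integer> counts = new ArrayList<>();
        for (int n = 1; n <= shardCount; n++) {
            if (shardCount % n == 0) {
                counts.add(n);
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        System.out.println(validPartitionCounts(12)); // [1, 2, 3, 4, 6, 12]
    }
}
```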
### <a name="remove-replicas"></a>Replica's verwijderen
After periods of high query volume, you can use the slider to reduce replicas once query loads have normalized (for example, after holiday sales conclude). There are no further steps required on your part. Lowering the replica count relinquishes virtual machines in the data center; your query and data ingestion operations now run on fewer VMs than before. The minimum requirement is one replica.
### <a name="remove-partitions"></a>Partities verwijderen
In contrast with removing replicas, which requires no extra effort on your part, you might have work to do if you are using more storage than the reduced configuration can hold. For example, if your solution uses three partitions, downsizing to one or two partitions generates an error if the new storage space is less than what is required to host your index. As you might expect, your choices are to delete indexes, or documents within an associated index, to free up space, or to keep the current configuration.
There is no detection method that tells you which index shards are stored on specific partitions. Each partition provides approximately 25 GB of storage, so you must reduce storage to a size that the number of partitions you keep can accommodate. If you want to revert to one partition, all 12 shards must fit.
To help with future planning, you might want to check storage (using [Get Index Statistics](https://docs.microsoft.com/rest/api/searchservice/Get-Index-Statistics)) to see how much you actually used.
<a id="next-steps"></a>
## <a name="next-steps"></a>Volgende stappen
Once you understand the concepts of service administration, you can use [PowerShell](search-manage-powershell.md) to automate tasks.
We also recommend reviewing the [performance and optimization article](search-performance-optimization.md).
<!--Image references-->
[10]: ./media/search-manage/Azure-Search-Manage-3-ScaleUp.png
DonateMeCrypto.me
=================
A core engine for donatemecrypto.me, created using Express.js and MongoDB
## Project Details
- Project : donatemecrypto.me
- Version : v1.0
- Author : Nazrul Hanif
- Date Created : 20210516
## Project Contributor
- Frontend Developer : [Ahmad Miqdad](https://github.com/ahmadudon6)
- Backend Developer : [Nazrul Hanif](https://github.com/lordnaz)
## Local Setup
These instructions are for first-time setup.
1. Install Node.js, which ships with npm (https://nodejs.org/en/download/)
2. Clone this repo to your local
3. Dependency Manager : run `npm install` in your cmd.
```
$ npm install
```
4. Run `npm run server` in your cmd, then go to localhost:3000 or http://127.0.0.1:3000 (defaults to port 3000; this can be changed if required)
```
$ npm run server
```
## Production
- Go [here](https://donatemecrypto.me/) for the live site.
## Support
For support & inquiries, kindly contact me at:
- Click [here](https://github.com/lordnaz) to go to developer profile.
- Or email me at nazrul.workspace@gmail.com
# casino-games
An ongoing project for practicing web-development
It consists of three card games that I have not programmed yet: Five card draw, Blackjack, and War. Plus I am toying around with developing a slots game for this app.
I have a few more CSS kinks to work out but I'll figure it out eventually.
# ubit
unitive library for micro:bit on TinyGo
<a name="0.2.2"></a>
### 0.2.2 (2015-12-24)
#### Bug Fixes
* pass PhantomJS script as the first cmd-line argument ([1c195c6b](https://github.com/karma-runner/karma-phantomjs-launcher/commit/1c195c6b))
* do not duplicate cmd-line flags on repeated PhantomJS runs ([76228f18](https://github.com/karma-runner/karma-phantomjs-launcher/commit/76228f18))
<a name="0.2.1"></a>
### 0.2.1 (2015-08-05)
#### Bug Fixes
* ensure console output from phantomjs is available in karma debug logs ([eed281b5](https://github.com/karma-runner/karma-phantomjs-launcher/commit/eed281b5))
<a name="0.2.0"></a>
## 0.2.0 (2015-05-29)
#### Bug Fixes
* **npm:** Make .npmignore more sensible to dot files ([1322a89d](https://github.com/karma-runner/karma-phantomjs-launcher/commit/1322a89d), closes [#68](https://github.com/karma-runner/karma-phantomjs-launcher/issues/68))
#### Features
* Move phantomjs to peerDeps, #37, #42, #56 ([a0f399de](https://github.com/karma-runner/karma-phantomjs-launcher/commit/a0f399de), closes [#25](https://github.com/karma-runner/karma-phantomjs-launcher/issues/25))
* Support option for phantom to exit on ResourceError ([2b90c6b9](https://github.com/karma-runner/karma-phantomjs-launcher/commit/2b90c6b9))
* debug option ([c6dfe786](https://github.com/karma-runner/karma-phantomjs-launcher/commit/c6dfe786))
# V1 Items
```java
V1ItemsApi v1ItemsApi = client.getV1ItemsApi();
```
## Class Name
`V1ItemsApi`
## Methods
* [List Categories](/doc/v1-items.md#list-categories)
* [Create Category](/doc/v1-items.md#create-category)
* [Delete Category](/doc/v1-items.md#delete-category)
* [Update Category](/doc/v1-items.md#update-category)
* [List Discounts](/doc/v1-items.md#list-discounts)
* [Create Discount](/doc/v1-items.md#create-discount)
* [Delete Discount](/doc/v1-items.md#delete-discount)
* [Update Discount](/doc/v1-items.md#update-discount)
* [List Fees](/doc/v1-items.md#list-fees)
* [Create Fee](/doc/v1-items.md#create-fee)
* [Delete Fee](/doc/v1-items.md#delete-fee)
* [Update Fee](/doc/v1-items.md#update-fee)
* [List Inventory](/doc/v1-items.md#list-inventory)
* [Adjust Inventory](/doc/v1-items.md#adjust-inventory)
* [List Items](/doc/v1-items.md#list-items)
* [Create Item](/doc/v1-items.md#create-item)
* [Delete Item](/doc/v1-items.md#delete-item)
* [Retrieve Item](/doc/v1-items.md#retrieve-item)
* [Update Item](/doc/v1-items.md#update-item)
* [Remove Fee](/doc/v1-items.md#remove-fee)
* [Apply Fee](/doc/v1-items.md#apply-fee)
* [Remove Modifier List](/doc/v1-items.md#remove-modifier-list)
* [Apply Modifier List](/doc/v1-items.md#apply-modifier-list)
* [Create Variation](/doc/v1-items.md#create-variation)
* [Delete Variation](/doc/v1-items.md#delete-variation)
* [Update Variation](/doc/v1-items.md#update-variation)
* [List Modifier Lists](/doc/v1-items.md#list-modifier-lists)
* [Create Modifier List](/doc/v1-items.md#create-modifier-list)
* [Delete Modifier List](/doc/v1-items.md#delete-modifier-list)
* [Retrieve Modifier List](/doc/v1-items.md#retrieve-modifier-list)
* [Update Modifier List](/doc/v1-items.md#update-modifier-list)
* [Create Modifier Option](/doc/v1-items.md#create-modifier-option)
* [Delete Modifier Option](/doc/v1-items.md#delete-modifier-option)
* [Update Modifier Option](/doc/v1-items.md#update-modifier-option)
* [List Pages](/doc/v1-items.md#list-pages)
* [Create Page](/doc/v1-items.md#create-page)
* [Delete Page](/doc/v1-items.md#delete-page)
* [Update Page](/doc/v1-items.md#update-page)
* [Delete Page Cell](/doc/v1-items.md#delete-page-cell)
* [Update Page Cell](/doc/v1-items.md#update-page-cell)
## List Categories
Lists all the item categories for a given location.
```java
CompletableFuture<List<V1Category>> listCategoriesAsync(
final String locationId)
```
### Parameters
| Parameter | Type | Tags | Description |
| --- | --- | --- | --- |
| `locationId` | `String` | Template, Required | The ID of the location to list categories for. |
### Response Type
[`List<V1Category>`](/doc/models/v1-category.md)
### Example Usage
```java
String locationId = "location_id4";
v1ItemsApi.listCategoriesAsync(locationId).thenAccept(result -> {
// TODO success callback handler
}).exceptionally(exception -> {
// TODO failure callback handler
return null;
});
```
## Create Category
Creates an item category.
```java
CompletableFuture<V1Category> createCategoryAsync(
final String locationId,
final V1Category body)
```
### Parameters
| Parameter | Type | Tags | Description |
| --- | --- | --- | --- |
| `locationId` | `String` | Template, Required | The ID of the location to create an item for. |
| `body` | [`V1Category`](/doc/models/v1-category.md) | Body, Required | An object containing the fields to POST for the request.<br><br>See the corresponding object definition for field details. |
### Response Type
[`V1Category`](/doc/models/v1-category.md)
### Example Usage
```java
String locationId = "location_id4";
V1Category body = new V1Category.Builder()
.id("id6")
.name("name6")
.v2Id("v2_id6")
.build();
v1ItemsApi.createCategoryAsync(locationId, body).thenAccept(result -> {
// TODO success callback handler
}).exceptionally(exception -> {
// TODO failure callback handler
return null;
});
```
## Delete Category
Deletes an existing item category.
__DeleteCategory__ returns nothing on success but Connect SDKs
map the empty response to an empty `V1DeleteCategoryRequest` object
as documented below.
```java
CompletableFuture<V1Category> deleteCategoryAsync(
final String locationId,
final String categoryId)
```
### Parameters
| Parameter | Type | Tags | Description |
| --- | --- | --- | --- |
| `locationId` | `String` | Template, Required | The ID of the item's associated location. |
| `categoryId` | `String` | Template, Required | The ID of the category to delete. |
### Response Type
[`V1Category`](/doc/models/v1-category.md)
### Example Usage
```java
String locationId = "location_id4";
String categoryId = "category_id8";
v1ItemsApi.deleteCategoryAsync(locationId, categoryId).thenAccept(result -> {
// TODO success callback handler
}).exceptionally(exception -> {
// TODO failure callback handler
return null;
});
```
## Update Category
Modifies the details of an existing item category.
```java
CompletableFuture<V1Category> updateCategoryAsync(
final String locationId,
final String categoryId,
final V1Category body)
```
### Parameters
| Parameter | Type | Tags | Description |
| --- | --- | --- | --- |
| `locationId` | `String` | Template, Required | The ID of the category's associated location. |
| `categoryId` | `String` | Template, Required | The ID of the category to edit. |
| `body` | [`V1Category`](/doc/models/v1-category.md) | Body, Required | An object containing the fields to POST for the request.<br><br>See the corresponding object definition for field details. |
### Response Type
[`V1Category`](/doc/models/v1-category.md)
### Example Usage
```java
String locationId = "location_id4";
String categoryId = "category_id8";
V1Category body = new V1Category.Builder()
.id("id6")
.name("name6")
.v2Id("v2_id6")
.build();
v1ItemsApi.updateCategoryAsync(locationId, categoryId, body).thenAccept(result -> {
// TODO success callback handler
}).exceptionally(exception -> {
// TODO failure callback handler
return null;
});
```
## List Discounts
Lists all the discounts for a given location.
```java
CompletableFuture<List<V1Discount>> listDiscountsAsync(
final String locationId)
```
### Parameters
| Parameter | Type | Tags | Description |
| --- | --- | --- | --- |
| `locationId` | `String` | Template, Required | The ID of the location to list discounts for. |
### Response Type
[`List<V1Discount>`](/doc/models/v1-discount.md)
### Example Usage
```java
String locationId = "location_id4";
v1ItemsApi.listDiscountsAsync(locationId).thenAccept(result -> {
// TODO success callback handler
}).exceptionally(exception -> {
// TODO failure callback handler
return null;
});
```
## Create Discount
Creates a discount.
```java
CompletableFuture<V1Discount> createDiscountAsync(
final String locationId,
final V1Discount body)
```
### Parameters
| Parameter | Type | Tags | Description |
| --- | --- | --- | --- |
| `locationId` | `String` | Template, Required | The ID of the location to create an item for. |
| `body` | [`V1Discount`](/doc/models/v1-discount.md) | Body, Required | An object containing the fields to POST for the request.<br><br>See the corresponding object definition for field details. |
### Response Type
[`V1Discount`](/doc/models/v1-discount.md)
### Example Usage
```java
String locationId = "location_id4";
V1Money bodyAmountMoney = new V1Money.Builder()
.amount(194)
.currencyCode("KWD")
.build();
V1Discount body = new V1Discount.Builder()
.id("id6")
.name("name6")
.rate("rate4")
.amountMoney(bodyAmountMoney)
.discountType("VARIABLE_AMOUNT")
.build();
v1ItemsApi.createDiscountAsync(locationId, body).thenAccept(result -> {
// TODO success callback handler
}).exceptionally(exception -> {
// TODO failure callback handler
return null;
});
```
## Delete Discount
Deletes an existing discount.
__DeleteDiscount__ returns nothing on success but Connect SDKs
map the empty response to an empty `V1DeleteDiscountRequest` object
as documented below.
```java
CompletableFuture<V1Discount> deleteDiscountAsync(
final String locationId,
final String discountId)
```
### Parameters
| Parameter | Type | Tags | Description |
| --- | --- | --- | --- |
| `locationId` | `String` | Template, Required | The ID of the item's associated location. |
| `discountId` | `String` | Template, Required | The ID of the discount to delete. |
### Response Type
[`V1Discount`](/doc/models/v1-discount.md)
### Example Usage
```java
String locationId = "location_id4";
String discountId = "discount_id8";
v1ItemsApi.deleteDiscountAsync(locationId, discountId).thenAccept(result -> {
// TODO success callback handler
}).exceptionally(exception -> {
// TODO failure callback handler
return null;
});
```
## Update Discount
Modifies the details of an existing discount.
```java
CompletableFuture<V1Discount> updateDiscountAsync(
final String locationId,
final String discountId,
final V1Discount body)
```
### Parameters
| Parameter | Type | Tags | Description |
| --- | --- | --- | --- |
| `locationId` | `String` | Template, Required | The ID of the category's associated location. |
| `discountId` | `String` | Template, Required | The ID of the discount to edit. |
| `body` | [`V1Discount`](/doc/models/v1-discount.md) | Body, Required | An object containing the fields to POST for the request.<br><br>See the corresponding object definition for field details. |
### Response Type
[`V1Discount`](/doc/models/v1-discount.md)
### Example Usage
```java
String locationId = "location_id4";
String discountId = "discount_id8";
V1Money bodyAmountMoney = new V1Money.Builder()
.amount(194)
.currencyCode("KWD")
.build();
V1Discount body = new V1Discount.Builder()
.id("id6")
.name("name6")
.rate("rate4")
.amountMoney(bodyAmountMoney)
.discountType("VARIABLE_AMOUNT")
.build();
v1ItemsApi.updateDiscountAsync(locationId, discountId, body).thenAccept(result -> {
// TODO success callback handler
}).exceptionally(exception -> {
// TODO failure callback handler
return null;
});
```
## List Fees
Lists all the fees (taxes) for a given location.
```java
CompletableFuture<List<V1Fee>> listFeesAsync(
final String locationId)
```
### Parameters
| Parameter | Type | Tags | Description |
| --- | --- | --- | --- |
| `locationId` | `String` | Template, Required | The ID of the location to list fees for. |
### Response Type
[`List<V1Fee>`](/doc/models/v1-fee.md)
### Example Usage
```java
String locationId = "location_id4";
v1ItemsApi.listFeesAsync(locationId).thenAccept(result -> {
// TODO success callback handler
}).exceptionally(exception -> {
// TODO failure callback handler
return null;
});
```
## Create Fee
Creates a fee (tax).
```java
CompletableFuture<V1Fee> createFeeAsync(
final String locationId,
final V1Fee body)
```
### Parameters
| Parameter | Type | Tags | Description |
| --- | --- | --- | --- |
| `locationId` | `String` | Template, Required | The ID of the location to create a fee for. |
| `body` | [`V1Fee`](/doc/models/v1-fee.md) | Body, Required | An object containing the fields to POST for the request.<br><br>See the corresponding object definition for field details. |
### Response Type
[`V1Fee`](/doc/models/v1-fee.md)
### Example Usage
```java
String locationId = "location_id4";
V1Fee body = new V1Fee.Builder()
.id("id6")
.name("name6")
.rate("rate4")
.calculationPhase("FEE_SUBTOTAL_PHASE")
.adjustmentType("TAX")
.build();
v1ItemsApi.createFeeAsync(locationId, body).thenAccept(result -> {
// TODO success callback handler
}).exceptionally(exception -> {
// TODO failure callback handler
return null;
});
```
## Delete Fee
Deletes an existing fee (tax).
__DeleteFee__ returns nothing on success but Connect SDKs
map the empty response to an empty `V1DeleteFeeRequest` object
as documented below.
```java
CompletableFuture<V1Fee> deleteFeeAsync(
final String locationId,
final String feeId)
```
### Parameters
| Parameter | Type | Tags | Description |
| --- | --- | --- | --- |
| `locationId` | `String` | Template, Required | The ID of the fee's associated location. |
| `feeId` | `String` | Template, Required | The ID of the fee to delete. |
### Response Type
[`V1Fee`](/doc/models/v1-fee.md)
### Example Usage
```java
String locationId = "location_id4";
String feeId = "fee_id8";
v1ItemsApi.deleteFeeAsync(locationId, feeId).thenAccept(result -> {
// TODO success callback handler
}).exceptionally(exception -> {
// TODO failure callback handler
return null;
});
```
## Update Fee
Modifies the details of an existing fee (tax).
```java
CompletableFuture<V1Fee> updateFeeAsync(
final String locationId,
final String feeId,
final V1Fee body)
```
### Parameters
| Parameter | Type | Tags | Description |
| --- | --- | --- | --- |
| `locationId` | `String` | Template, Required | The ID of the fee's associated location. |
| `feeId` | `String` | Template, Required | The ID of the fee to edit. |
| `body` | [`V1Fee`](/doc/models/v1-fee.md) | Body, Required | An object containing the fields to POST for the request.<br><br>See the corresponding object definition for field details. |
### Response Type
[`V1Fee`](/doc/models/v1-fee.md)
### Example Usage
```java
String locationId = "location_id4";
String feeId = "fee_id8";
V1Fee body = new V1Fee.Builder()
.id("id6")
.name("name6")
.rate("rate4")
.calculationPhase("FEE_SUBTOTAL_PHASE")
.adjustmentType("TAX")
.build();
v1ItemsApi.updateFeeAsync(locationId, feeId, body).thenAccept(result -> {
// TODO success callback handler
}).exceptionally(exception -> {
// TODO failure callback handler
return null;
});
```
## List Inventory
Provides inventory information for all inventory-enabled item
variations.
```java
CompletableFuture<List<V1InventoryEntry>> listInventoryAsync(
final String locationId,
final Integer limit,
final String batchToken)
```
### Parameters
| Parameter | Type | Tags | Description |
| --- | --- | --- | --- |
| `locationId` | `String` | Template, Required | The ID of the item's associated location. |
| `limit` | `Integer` | Query, Optional | The maximum number of inventory entries to return in a single response. This value cannot exceed 1000. |
| `batchToken` | `String` | Query, Optional | A pagination cursor to retrieve the next set of results for your<br>original query to the endpoint. |
### Response Type
[`List<V1InventoryEntry>`](/doc/models/v1-inventory-entry.md)
### Example Usage
```java
String locationId = "location_id4";
Integer limit = 172;
String batchToken = "batch_token2";
v1ItemsApi.listInventoryAsync(locationId, limit, batchToken).thenAccept(result -> {
// TODO success callback handler
}).exceptionally(exception -> {
// TODO failure callback handler
return null;
});
```
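Because `limit` cannot exceed 1000, a caller might clamp the requested page size before issuing the request. A minimal sketch — the helper class below is illustrative, not part of the SDK:

```java
public class InventoryPaging {
    // The List Inventory endpoint rejects limits above 1000.
    static final int MAX_INVENTORY_LIMIT = 1000;

    // Clamp a requested page size into the accepted range [1, 1000].
    static int clampLimit(int requested) {
        return Math.max(1, Math.min(requested, MAX_INVENTORY_LIMIT));
    }

    public static void main(String[] args) {
        System.out.println(clampLimit(5000)); // prints 1000
    }
}
```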
## Adjust Inventory
Adjusts the current available inventory of an item variation.
```java
CompletableFuture<V1InventoryEntry> adjustInventoryAsync(
final String locationId,
final String variationId,
final V1AdjustInventoryRequest body)
```
### Parameters
| Parameter | Type | Tags | Description |
| --- | --- | --- | --- |
| `locationId` | `String` | Template, Required | The ID of the item's associated location. |
| `variationId` | `String` | Template, Required | The ID of the variation to adjust inventory information for. |
| `body` | [`V1AdjustInventoryRequest`](/doc/models/v1-adjust-inventory-request.md) | Body, Required | An object containing the fields to POST for the request.<br><br>See the corresponding object definition for field details. |
### Response Type
[`V1InventoryEntry`](/doc/models/v1-inventory-entry.md)
### Example Usage
```java
String locationId = "location_id4";
String variationId = "variation_id2";
V1AdjustInventoryRequest body = new V1AdjustInventoryRequest.Builder()
.quantityDelta(87.82)
.adjustmentType("SALE")
.memo("memo0")
.build();
v1ItemsApi.adjustInventoryAsync(locationId, variationId, body).thenAccept(result -> {
// TODO success callback handler
}).exceptionally(exception -> {
// TODO failure callback handler
return null;
});
```
## List Items
Provides summary information of all items for a given location.
```java
CompletableFuture<List<V1Item>> listItemsAsync(
final String locationId,
final String batchToken)
```
### Parameters
| Parameter | Type | Tags | Description |
| --- | --- | --- | --- |
| `locationId` | `String` | Template, Required | The ID of the location to list items for. |
| `batchToken` | `String` | Query, Optional | A pagination cursor to retrieve the next set of results for your<br>original query to the endpoint. |
### Response Type
[`List<V1Item>`](/doc/models/v1-item.md)
### Example Usage
```java
String locationId = "location_id4";
String batchToken = "batch_token2";
v1ItemsApi.listItemsAsync(locationId, batchToken).thenAccept(result -> {
// TODO success callback handler
}).exceptionally(exception -> {
// TODO failure callback handler
return null;
});
```
## Create Item
Creates an item and at least one variation for it.
Item-related entities include fields you can use to associate them with
entities in a non-Square system.
When you create an item-related entity, you can optionally specify `id`.
This value must be unique among all IDs ever specified for the account,
including those specified by other applications. You can never reuse an
entity ID. If you do not specify an ID, Square generates one for the entity.
Item variations have a `user_data` string that lets you associate arbitrary
metadata with the variation. The string cannot exceed 255 characters.
```java
CompletableFuture<V1Item> createItemAsync(
final String locationId,
final V1Item body)
```
### Parameters
| Parameter | Type | Tags | Description |
| --- | --- | --- | --- |
| `locationId` | `String` | Template, Required | The ID of the location to create an item for. |
| `body` | [`V1Item`](/doc/models/v1-item.md) | Body, Required | An object containing the fields to POST for the request.<br><br>See the corresponding object definition for field details. |
### Response Type
[`V1Item`](/doc/models/v1-item.md)
### Example Usage
```java
String locationId = "location_id4";
V1Item body = new V1Item.Builder()
.id("id6")
.name("name6")
.description("description4")
.type("GIFT_CARD")
.color("593c00")
.build();
v1ItemsApi.createItemAsync(locationId, body).thenAccept(result -> {
// TODO success callback handler
}).exceptionally(exception -> {
// TODO failure callback handler
return null;
});
```
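The 255-character cap on a variation's `user_data` string can be validated client-side before creating or updating items. A minimal sketch — the helper class below is illustrative, not part of the SDK:

```java
public class UserData {
    // Item variation metadata strings cannot exceed 255 characters.
    static final int MAX_USER_DATA_LENGTH = 255;

    // Returns true when the metadata string is absent or fits the limit.
    static boolean fits(String userData) {
        return userData == null || userData.length() <= MAX_USER_DATA_LENGTH;
    }

    public static void main(String[] args) {
        System.out.println(fits("sku-1234")); // prints true
    }
}
```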
## Delete Item
Deletes an existing item and all item variations associated with it.
__DeleteItem__ returns nothing on success but Connect SDKs
map the empty response to an empty `V1DeleteItemRequest` object
as documented below.
```java
CompletableFuture<V1Item> deleteItemAsync(
final String locationId,
final String itemId)
```
### Parameters
| Parameter | Type | Tags | Description |
| --- | --- | --- | --- |
| `locationId` | `String` | Template, Required | The ID of the item's associated location. |
| `itemId` | `String` | Template, Required | The ID of the item to modify. |
### Response Type
[`V1Item`](/doc/models/v1-item.md)
### Example Usage
```java
String locationId = "location_id4";
String itemId = "item_id0";
v1ItemsApi.deleteItemAsync(locationId, itemId).thenAccept(result -> {
// TODO success callback handler
}).exceptionally(exception -> {
// TODO failure callback handler
return null;
});
```
## Retrieve Item
Provides the details for a single item, including associated modifier
lists and fees.
```java
CompletableFuture<V1Item> retrieveItemAsync(
final String locationId,
final String itemId)
```
### Parameters
| Parameter | Type | Tags | Description |
| --- | --- | --- | --- |
| `locationId` | `String` | Template, Required | The ID of the item's associated location. |
| `itemId` | `String` | Template, Required | The item's ID. |
### Response Type
[`V1Item`](/doc/models/v1-item.md)
### Example Usage
```java
String locationId = "location_id4";
String itemId = "item_id0";
v1ItemsApi.retrieveItemAsync(locationId, itemId).thenAccept(result -> {
// TODO success callback handler
}).exceptionally(exception -> {
// TODO failure callback handler
return null;
});
```
## Update Item
Modifies the core details of an existing item.
```java
CompletableFuture<V1Item> updateItemAsync(
final String locationId,
final String itemId,
final V1Item body)
```
### Parameters
| Parameter | Type | Tags | Description |
| --- | --- | --- | --- |
| `locationId` | `String` | Template, Required | The ID of the item's associated location. |
| `itemId` | `String` | Template, Required | The ID of the item to modify. |
| `body` | [`V1Item`](/doc/models/v1-item.md) | Body, Required | An object containing the fields to POST for the request.<br><br>See the corresponding object definition for field details. |
### Response Type
[`V1Item`](/doc/models/v1-item.md)
### Example Usage
```java
String locationId = "location_id4";
String itemId = "item_id0";
V1Item body = new V1Item.Builder()
.id("id6")
.name("name6")
.description("description4")
.type("GIFT_CARD")
.color("593c00")
.build();
v1ItemsApi.updateItemAsync(locationId, itemId, body).thenAccept(result -> {
// TODO success callback handler
}).exceptionally(exception -> {
// TODO failure callback handler
return null;
});
```
## Remove Fee
Removes a fee association from an item so the fee is no longer
automatically applied to the item in Square Point of Sale.
```java
CompletableFuture<V1Item> removeFeeAsync(
final String locationId,
final String itemId,
final String feeId)
```
### Parameters
| Parameter | Type | Tags | Description |
| --- | --- | --- | --- |
| `locationId` | `String` | Template, Required | The ID of the fee's associated location. |
| `itemId` | `String` | Template, Required | The ID of the item to remove the fee from. |
| `feeId` | `String` | Template, Required | The ID of the fee to remove. |
### Response Type
[`V1Item`](/doc/models/v1-item.md)
### Example Usage
```java
String locationId = "location_id4";
String itemId = "item_id0";
String feeId = "fee_id8";
v1ItemsApi.removeFeeAsync(locationId, itemId, feeId).thenAccept(result -> {
// TODO success callback handler
}).exceptionally(exception -> {
// TODO failure callback handler
return null;
});
```
## Apply Fee
Associates a fee with an item so the fee is automatically applied to
the item in Square Point of Sale.
```java
CompletableFuture<V1Item> applyFeeAsync(
final String locationId,
final String itemId,
final String feeId)
```
### Parameters
| Parameter | Type | Tags | Description |
| --- | --- | --- | --- |
| `locationId` | `String` | Template, Required | The ID of the fee's associated location. |
| `itemId` | `String` | Template, Required | The ID of the item to add the fee to. |
| `feeId` | `String` | Template, Required | The ID of the fee to apply. |
### Response Type
[`V1Item`](/doc/models/v1-item.md)
### Example Usage
```java
String locationId = "location_id4";
String itemId = "item_id0";
String feeId = "fee_id8";
v1ItemsApi.applyFeeAsync(locationId, itemId, feeId).thenAccept(result -> {
// TODO success callback handler
}).exceptionally(exception -> {
// TODO failure callback handler
return null;
});
```
## Remove Modifier List
Removes a modifier list association from an item so the modifier
options from the list can no longer be applied to the item.
```java
CompletableFuture<V1Item> removeModifierListAsync(
final String locationId,
final String modifierListId,
final String itemId)
```
### Parameters
| Parameter | Type | Tags | Description |
| --- | --- | --- | --- |
| `locationId` | `String` | Template, Required | The ID of the item's associated location. |
| `modifierListId` | `String` | Template, Required | The ID of the modifier list to remove. |
| `itemId` | `String` | Template, Required | The ID of the item to remove the modifier list from. |
### Response Type
[`V1Item`](/doc/models/v1-item.md)
### Example Usage
```java
String locationId = "location_id4";
String modifierListId = "modifier_list_id6";
String itemId = "item_id0";
v1ItemsApi.removeModifierListAsync(locationId, modifierListId, itemId).thenAccept(result -> {
// TODO success callback handler
}).exceptionally(exception -> {
// TODO failure callback handler
return null;
});
```
## Apply Modifier List
Associates a modifier list with an item so the associated modifier
options can be applied to the item.
```java
CompletableFuture<V1Item> applyModifierListAsync(
final String locationId,
final String modifierListId,
final String itemId)
```
### Parameters
| Parameter | Type | Tags | Description |
| --- | --- | --- | --- |
| `locationId` | `String` | Template, Required | The ID of the item's associated location. |
| `modifierListId` | `String` | Template, Required | The ID of the modifier list to apply. |
| `itemId` | `String` | Template, Required | The ID of the item to add the modifier list to. |
### Response Type
[`V1Item`](/doc/models/v1-item.md)
### Example Usage
```java
String locationId = "location_id4";
String modifierListId = "modifier_list_id6";
String itemId = "item_id0";
v1ItemsApi.applyModifierListAsync(locationId, modifierListId, itemId).thenAccept(result -> {
// TODO success callback handler
}).exceptionally(exception -> {
// TODO failure callback handler
return null;
});
```
## Create Variation
Creates an item variation for an existing item.
```java
CompletableFuture<V1Variation> createVariationAsync(
final String locationId,
final String itemId,
final V1Variation body)
```
### Parameters
| Parameter | Type | Tags | Description |
| --- | --- | --- | --- |
| `locationId` | `String` | Template, Required | The ID of the item's associated location. |
| `itemId` | `String` | Template, Required | The item's ID. |
| `body` | [`V1Variation`](/doc/models/v1-variation.md) | Body, Required | An object containing the fields to POST for the request.<br><br>See the corresponding object definition for field details. |
### Response Type
[`V1Variation`](/doc/models/v1-variation.md)
### Example Usage
```java
String locationId = "location_id4";
String itemId = "item_id0";
V1Variation body = new V1Variation.Builder()
.id("id6")
.name("name6")
.itemId("item_id4")
.ordinal(88)
.pricingType("FIXED_PRICING")
.build();
v1ItemsApi.createVariationAsync(locationId, itemId, body).thenAccept(result -> {
// TODO success callback handler
}).exceptionally(exception -> {
// TODO failure callback handler
return null;
});
```
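Calls that depend on each other — for example, creating a variation and then immediately updating it — can be chained without blocking by using `thenCompose`. The sketch below shows only the composition shape with plain futures; `createAsync` and `updateAsync` are hypothetical stand-ins for the corresponding SDK methods.

```java
import java.util.concurrent.CompletableFuture;

public class ChainingSketch {
    // Hypothetical stand-ins for createVariationAsync / updateVariationAsync.
    static CompletableFuture<String> createAsync(String name) {
        return CompletableFuture.supplyAsync(() -> name + ":created");
    }

    static CompletableFuture<String> updateAsync(String id) {
        return CompletableFuture.supplyAsync(() -> id + ":updated");
    }

    // thenCompose flattens the nested future, so the result is a
    // CompletableFuture<String> rather than a
    // CompletableFuture<CompletableFuture<String>>.
    public static CompletableFuture<String> createThenUpdate(String name) {
        return createAsync(name).thenCompose(ChainingSketch::updateAsync);
    }

    public static void main(String[] args) {
        System.out.println(createThenUpdate("name6").join());
    }
}
```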
## Delete Variation
Deletes an existing item variation from an item.
__DeleteVariation__ returns nothing on success but Connect SDKs
map the empty response to an empty `V1DeleteVariationRequest` object
as documented below.
```java
CompletableFuture<V1Variation> deleteVariationAsync(
final String locationId,
final String itemId,
final String variationId)
```
### Parameters
| Parameter | Type | Tags | Description |
| --- | --- | --- | --- |
| `locationId` | `String` | Template, Required | The ID of the item's associated location. |
| `itemId` | `String` | Template, Required | The ID of the item that the variation belongs to. |
| `variationId` | `String` | Template, Required | The ID of the variation to delete. |
### Response Type
[`V1Variation`](/doc/models/v1-variation.md)
### Example Usage
```java
String locationId = "location_id4";
String itemId = "item_id0";
String variationId = "variation_id2";
v1ItemsApi.deleteVariationAsync(locationId, itemId, variationId).thenAccept(result -> {
// TODO success callback handler
}).exceptionally(exception -> {
// TODO failure callback handler
return null;
});
```
## Update Variation
Modifies the details of an existing item variation.
```java
CompletableFuture<V1Variation> updateVariationAsync(
final String locationId,
final String itemId,
final String variationId,
final V1Variation body)
```
### Parameters
| Parameter | Type | Tags | Description |
| --- | --- | --- | --- |
| `locationId` | `String` | Template, Required | The ID of the item's associated location. |
| `itemId` | `String` | Template, Required | The ID of the item to modify. |
| `variationId` | `String` | Template, Required | The ID of the variation to modify. |
| `body` | [`V1Variation`](/doc/models/v1-variation.md) | Body, Required | An object containing the fields to POST for the request.<br><br>See the corresponding object definition for field details. |
### Response Type
[`V1Variation`](/doc/models/v1-variation.md)
### Example Usage
```java
String locationId = "location_id4";
String itemId = "item_id0";
String variationId = "variation_id2";
V1Variation body = new V1Variation.Builder()
.id("id6")
.name("name6")
.itemId("item_id4")
.ordinal(88)
.pricingType("FIXED_PRICING")
.build();
v1ItemsApi.updateVariationAsync(locationId, itemId, variationId, body).thenAccept(result -> {
// TODO success callback handler
}).exceptionally(exception -> {
// TODO failure callback handler
return null;
});
```
## List Modifier Lists
Lists all the modifier lists for a given location.
```java
CompletableFuture<List<V1ModifierList>> listModifierListsAsync(
final String locationId)
```
### Parameters
| Parameter | Type | Tags | Description |
| --- | --- | --- | --- |
| `locationId` | `String` | Template, Required | The ID of the location to list modifier lists for. |
### Response Type
[`List<V1ModifierList>`](/doc/models/v1-modifier-list.md)
### Example Usage
```java
String locationId = "location_id4";
v1ItemsApi.listModifierListsAsync(locationId).thenAccept(result -> {
// TODO success callback handler
}).exceptionally(exception -> {
// TODO failure callback handler
return null;
});
```
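List endpoints such as this one resolve to a `List` of model objects, so the usual collection operations apply inside the callback. The sketch below shows the shape with a list of strings standing in for `List<V1ModifierList>`; the getter calls you would make on the real model objects are omitted because they depend on the generated model classes.

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collectors;

public class ListHandlingSketch {
    // Stand-in for listModifierListsAsync(locationId): resolves to a list.
    static CompletableFuture<List<String>> listAsync() {
        return CompletableFuture.supplyAsync(() -> List.of("a", "b", "c"));
    }

    // Transform the resolved list without blocking, then join for the result.
    public static int countBlocking() {
        return listAsync().thenApply(List::size).join();
    }

    public static void main(String[] args) {
        listAsync()
            .thenAccept(result -> System.out.println(
                result.stream().collect(Collectors.joining(","))))
            .join();
    }
}
```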
## Create Modifier List
Creates an item modifier list and at least one modifier option for it.
```java
CompletableFuture<V1ModifierList> createModifierListAsync(
final String locationId,
final V1ModifierList body)
```
### Parameters
| Parameter | Type | Tags | Description |
| --- | --- | --- | --- |
| `locationId` | `String` | Template, Required | The ID of the location to create a modifier list for. |
| `body` | [`V1ModifierList`](/doc/models/v1-modifier-list.md) | Body, Required | An object containing the fields to POST for the request.<br><br>See the corresponding object definition for field details. |
### Response Type
[`V1ModifierList`](/doc/models/v1-modifier-list.md)
### Example Usage
```java
String locationId = "location_id4";
List<V1ModifierOption> bodyModifierOptions = new LinkedList<>();
V1Money bodyModifierOptions0PriceMoney = new V1Money.Builder()
.amount(104)
.currencyCode("UAH")
.build();
V1ModifierOption bodyModifierOptions0 = new V1ModifierOption.Builder()
.id("id0")
.name("name0")
.priceMoney(bodyModifierOptions0PriceMoney)
.onByDefault(false)
.ordinal(178)
.build();
bodyModifierOptions.add(bodyModifierOptions0);
V1Money bodyModifierOptions1PriceMoney = new V1Money.Builder()
.amount(103)
.currencyCode("TZS")
.build();
V1ModifierOption bodyModifierOptions1 = new V1ModifierOption.Builder()
.id("id1")
.name("name1")
.priceMoney(bodyModifierOptions1PriceMoney)
.onByDefault(true)
.ordinal(179)
.build();
bodyModifierOptions.add(bodyModifierOptions1);
V1ModifierList body = new V1ModifierList.Builder()
.id("id6")
.name("name6")
.selectionType("SINGLE")
.modifierOptions(bodyModifierOptions)
.v2Id("v2_id6")
.build();
v1ItemsApi.createModifierListAsync(locationId, body).thenAccept(result -> {
// TODO success callback handler
}).exceptionally(exception -> {
// TODO failure callback handler
return null;
});
```
## Delete Modifier List
Deletes an existing item modifier list and all modifier options
associated with it.
__DeleteModifierList__ returns nothing on success but Connect SDKs
map the empty response to an empty `V1DeleteModifierListRequest` object
as documented below.
```java
CompletableFuture<V1ModifierList> deleteModifierListAsync(
final String locationId,
final String modifierListId)
```
### Parameters
| Parameter | Type | Tags | Description |
| --- | --- | --- | --- |
| `locationId` | `String` | Template, Required | The ID of the item's associated location. |
| `modifierListId` | `String` | Template, Required | The ID of the modifier list to delete. |
### Response Type
[`V1ModifierList`](/doc/models/v1-modifier-list.md)
### Example Usage
```java
String locationId = "location_id4";
String modifierListId = "modifier_list_id6";
v1ItemsApi.deleteModifierListAsync(locationId, modifierListId).thenAccept(result -> {
// TODO success callback handler
}).exceptionally(exception -> {
// TODO failure callback handler
return null;
});
```
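The `exceptionally` callback in these examples can do more than log: it can recover by supplying a fallback value, which lets downstream code continue normally. The sketch below shows the recovery shape with plain futures; the failing call is contrived for illustration and does not correspond to a real SDK method.

```java
import java.util.concurrent.CompletableFuture;

public class RecoverySketch {
    // A future that fails, standing in for a failed SDK call.
    static CompletableFuture<String> failingCall() {
        CompletableFuture<String> future = new CompletableFuture<>();
        future.completeExceptionally(new RuntimeException("network error"));
        return future;
    }

    // exceptionally maps the failure to a fallback value, so join()
    // returns normally instead of throwing.
    public static String callWithFallback() {
        return failingCall()
            .exceptionally(exception -> "fallback")
            .join();
    }

    public static void main(String[] args) {
        System.out.println(callWithFallback()); // prints fallback
    }
}
```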
## Retrieve Modifier List
Provides the details for a single modifier list.
```java
CompletableFuture<V1ModifierList> retrieveModifierListAsync(
final String locationId,
final String modifierListId)
```
### Parameters
| Parameter | Type | Tags | Description |
| --- | --- | --- | --- |
| `locationId` | `String` | Template, Required | The ID of the item's associated location. |
| `modifierListId` | `String` | Template, Required | The modifier list's ID. |
### Response Type
[`V1ModifierList`](/doc/models/v1-modifier-list.md)
### Example Usage
```java
String locationId = "location_id4";
String modifierListId = "modifier_list_id6";
v1ItemsApi.retrieveModifierListAsync(locationId, modifierListId).thenAccept(result -> {
// TODO success callback handler
}).exceptionally(exception -> {
// TODO failure callback handler
return null;
});
```
## Update Modifier List
Modifies the details of an existing item modifier list.
```java
CompletableFuture<V1ModifierList> updateModifierListAsync(
final String locationId,
final String modifierListId,
final V1UpdateModifierListRequest body)
```
### Parameters
| Parameter | Type | Tags | Description |
| --- | --- | --- | --- |
| `locationId` | `String` | Template, Required | The ID of the item's associated location. |
| `modifierListId` | `String` | Template, Required | The ID of the modifier list to edit. |
| `body` | [`V1UpdateModifierListRequest`](/doc/models/v1-update-modifier-list-request.md) | Body, Required | An object containing the fields to POST for the request.<br><br>See the corresponding object definition for field details. |
### Response Type
[`V1ModifierList`](/doc/models/v1-modifier-list.md)
### Example Usage
```java
String locationId = "location_id4";
String modifierListId = "modifier_list_id6";
V1UpdateModifierListRequest body = new V1UpdateModifierListRequest.Builder()
.name("name6")
.selectionType("SINGLE")
.build();
v1ItemsApi.updateModifierListAsync(locationId, modifierListId, body).thenAccept(result -> {
// TODO success callback handler
}).exceptionally(exception -> {
// TODO failure callback handler
return null;
});
```
## Create Modifier Option
Creates an item modifier option and adds it to a modifier list.
```java
CompletableFuture<V1ModifierOption> createModifierOptionAsync(
final String locationId,
final String modifierListId,
final V1ModifierOption body)
```
### Parameters
| Parameter | Type | Tags | Description |
| --- | --- | --- | --- |
| `locationId` | `String` | Template, Required | The ID of the item's associated location. |
| `modifierListId` | `String` | Template, Required | The ID of the modifier list to add the option to. |
| `body` | [`V1ModifierOption`](/doc/models/v1-modifier-option.md) | Body, Required | An object containing the fields to POST for the request.<br><br>See the corresponding object definition for field details. |
### Response Type
[`V1ModifierOption`](/doc/models/v1-modifier-option.md)
### Example Usage
```java
String locationId = "location_id4";
String modifierListId = "modifier_list_id6";
V1Money bodyPriceMoney = new V1Money.Builder()
.amount(194)
.currencyCode("XBA")
.build();
V1ModifierOption body = new V1ModifierOption.Builder()
.id("id6")
.name("name6")
.priceMoney(bodyPriceMoney)
.onByDefault(false)
.ordinal(88)
.build();
v1ItemsApi.createModifierOptionAsync(locationId, modifierListId, body).thenAccept(result -> {
// TODO success callback handler
}).exceptionally(exception -> {
// TODO failure callback handler
return null;
});
```
## Delete Modifier Option
Deletes an existing item modifier option from a modifier list.
__DeleteModifierOption__ returns nothing on success but Connect
SDKs map the empty response to an empty `V1DeleteModifierOptionRequest`
object.
```java
CompletableFuture<V1ModifierOption> deleteModifierOptionAsync(
final String locationId,
final String modifierListId,
final String modifierOptionId)
```
### Parameters
| Parameter | Type | Tags | Description |
| --- | --- | --- | --- |
| `locationId` | `String` | Template, Required | The ID of the item's associated location. |
| `modifierListId` | `String` | Template, Required | The ID of the modifier list that the option belongs to. |
| `modifierOptionId` | `String` | Template, Required | The ID of the modifier option to delete. |
### Response Type
[`V1ModifierOption`](/doc/models/v1-modifier-option.md)
### Example Usage
```java
String locationId = "location_id4";
String modifierListId = "modifier_list_id6";
String modifierOptionId = "modifier_option_id6";
v1ItemsApi.deleteModifierOptionAsync(locationId, modifierListId, modifierOptionId).thenAccept(result -> {
// TODO success callback handler
}).exceptionally(exception -> {
// TODO failure callback handler
return null;
});
```
## Update Modifier Option
Modifies the details of an existing item modifier option.
```java
CompletableFuture<V1ModifierOption> updateModifierOptionAsync(
final String locationId,
final String modifierListId,
final String modifierOptionId,
final V1ModifierOption body)
```
### Parameters
| Parameter | Type | Tags | Description |
| --- | --- | --- | --- |
| `locationId` | `String` | Template, Required | The ID of the item's associated location. |
| `modifierListId` | `String` | Template, Required | The ID of the modifier list to edit. |
| `modifierOptionId` | `String` | Template, Required | The ID of the modifier option to edit. |
| `body` | [`V1ModifierOption`](/doc/models/v1-modifier-option.md) | Body, Required | An object containing the fields to POST for the request.<br><br>See the corresponding object definition for field details. |
### Response Type
[`V1ModifierOption`](/doc/models/v1-modifier-option.md)
### Example Usage
```java
String locationId = "location_id4";
String modifierListId = "modifier_list_id6";
String modifierOptionId = "modifier_option_id6";
V1Money bodyPriceMoney = new V1Money.Builder()
.amount(194)
.currencyCode("XBA")
.build();
V1ModifierOption body = new V1ModifierOption.Builder()
.id("id6")
.name("name6")
.priceMoney(bodyPriceMoney)
.onByDefault(false)
.ordinal(88)
.build();
v1ItemsApi.updateModifierOptionAsync(locationId, modifierListId, modifierOptionId, body).thenAccept(result -> {
// TODO success callback handler
}).exceptionally(exception -> {
// TODO failure callback handler
return null;
});
```
## List Pages
Lists all Favorites pages (in Square Point of Sale) for a given
location.
```java
CompletableFuture<List<V1Page>> listPagesAsync(
final String locationId)
```
### Parameters
| Parameter | Type | Tags | Description |
| --- | --- | --- | --- |
| `locationId` | `String` | Template, Required | The ID of the location to list Favorites pages for. |
### Response Type
[`List<V1Page>`](/doc/models/v1-page.md)
### Example Usage
```java
String locationId = "location_id4";
v1ItemsApi.listPagesAsync(locationId).thenAccept(result -> {
// TODO success callback handler
}).exceptionally(exception -> {
// TODO failure callback handler
return null;
});
```
## Create Page
Creates a Favorites page in Square Point of Sale.
```java
CompletableFuture<V1Page> createPageAsync(
final String locationId,
final V1Page body)
```
### Parameters
| Parameter | Type | Tags | Description |
| --- | --- | --- | --- |
| `locationId` | `String` | Template, Required | The ID of the location to create a Favorites page for. |
| `body` | [`V1Page`](/doc/models/v1-page.md) | Body, Required | An object containing the fields to POST for the request.<br><br>See the corresponding object definition for field details. |
### Response Type
[`V1Page`](/doc/models/v1-page.md)
### Example Usage
```java
String locationId = "location_id4";
List<V1PageCell> bodyCells = new LinkedList<>();
V1PageCell bodyCells0 = new V1PageCell.Builder()
.pageId("page_id8")
.row(2)
.column(80)
.objectType("ITEM")
.objectId("object_id6")
.build();
bodyCells.add(bodyCells0);
V1Page body = new V1Page.Builder()
.id("id6")
.name("name6")
.pageIndex(224)
.cells(bodyCells)
.build();
v1ItemsApi.createPageAsync(locationId, body).thenAccept(result -> {
// TODO success callback handler
}).exceptionally(exception -> {
// TODO failure callback handler
return null;
});
```
## Delete Page
Deletes an existing Favorites page and all of its cells.
__DeletePage__ returns nothing on success but Connect SDKs
map the empty response to an empty `V1DeletePageRequest` object.
```java
CompletableFuture<V1Page> deletePageAsync(
final String locationId,
final String pageId)
```
### Parameters
| Parameter | Type | Tags | Description |
| --- | --- | --- | --- |
| `locationId` | `String` | Template, Required | The ID of the Favorites page's associated location. |
| `pageId` | `String` | Template, Required | The ID of the page to delete. |
### Response Type
[`V1Page`](/doc/models/v1-page.md)
### Example Usage
```java
String locationId = "location_id4";
String pageId = "page_id0";
v1ItemsApi.deletePageAsync(locationId, pageId).thenAccept(result -> {
// TODO success callback handler
}).exceptionally(exception -> {
// TODO failure callback handler
return null;
});
```
## Update Page
Modifies the details of a Favorites page in Square Point of Sale.
```java
CompletableFuture<V1Page> updatePageAsync(
final String locationId,
final String pageId,
final V1Page body)
```
### Parameters
| Parameter | Type | Tags | Description |
| --- | --- | --- | --- |
| `locationId` | `String` | Template, Required | The ID of the Favorites page's associated location. |
| `pageId` | `String` | Template, Required | The ID of the page to modify. |
| `body` | [`V1Page`](/doc/models/v1-page.md) | Body, Required | An object containing the fields to POST for the request.<br><br>See the corresponding object definition for field details. |
### Response Type
[`V1Page`](/doc/models/v1-page.md)
### Example Usage
```java
String locationId = "location_id4";
String pageId = "page_id0";
List<V1PageCell> bodyCells = new LinkedList<>();
V1PageCell bodyCells0 = new V1PageCell.Builder()
.pageId("page_id8")
.row(2)
.column(80)
.objectType("ITEM")
.objectId("object_id6")
.build();
bodyCells.add(bodyCells0);
V1Page body = new V1Page.Builder()
.id("id6")
.name("name6")
.pageIndex(224)
.cells(bodyCells)
.build();
v1ItemsApi.updatePageAsync(locationId, pageId, body).thenAccept(result -> {
// TODO success callback handler
}).exceptionally(exception -> {
// TODO failure callback handler
return null;
});
```
## Delete Page Cell
Deletes a cell from a Favorites page in Square Point of Sale.
__DeletePageCell__ returns nothing on success but Connect SDKs
map the empty response to an empty `V1DeletePageCellRequest` object
as documented below.
```java
CompletableFuture<V1Page> deletePageCellAsync(
final String locationId,
final String pageId,
final String row,
final String column)
```
### Parameters
| Parameter | Type | Tags | Description |
| --- | --- | --- | --- |
| `locationId` | `String` | Template, Required | The ID of the Favorites page's associated location. |
| `pageId` | `String` | Template, Required | The ID of the page to delete a cell from. |
| `row` | `String` | Query, Optional | The row of the cell to clear. Always an integer between 0 and 4, inclusive. Row 0 is the top row. |
| `column` | `String` | Query, Optional | The column of the cell to clear. Always an integer between 0 and 4, inclusive. Column 0 is the leftmost column. |
### Response Type
[`V1Page`](/doc/models/v1-page.md)
### Example Usage
```java
String locationId = "location_id4";
String pageId = "page_id0";
String row = "0";
String column = "4";
v1ItemsApi.deletePageCellAsync(locationId, pageId, row, column).thenAccept(result -> {
// TODO success callback handler
}).exceptionally(exception -> {
// TODO failure callback handler
return null;
});
```
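Note that `row` and `column` are typed as `String` even though the table above says each must be an integer between 0 and 4, inclusive. A small helper that validates the range before formatting avoids sending an invalid cell coordinate; the helper below is illustrative, not part of the SDK.

```java
public class CellCoordinate {
    // row and column must each be an integer between 0 and 4, inclusive,
    // but the API accepts them as strings.
    public static String toCoordinateString(int value) {
        if (value < 0 || value > 4) {
            throw new IllegalArgumentException(
                "Cell coordinates must be between 0 and 4, got " + value);
        }
        return Integer.toString(value);
    }

    public static void main(String[] args) {
        String row = toCoordinateString(0);     // top row
        String column = toCoordinateString(4);  // rightmost column
        System.out.println(row + "," + column); // prints 0,4
    }
}
```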
## Update Page Cell
Modifies a cell of a Favorites page in Square Point of Sale.
```java
CompletableFuture<V1Page> updatePageCellAsync(
final String locationId,
final String pageId,
final V1PageCell body)
```
### Parameters
| Parameter | Type | Tags | Description |
| --- | --- | --- | --- |
| `locationId` | `String` | Template, Required | The ID of the Favorites page's associated location. |
| `pageId` | `String` | Template, Required | The ID of the page the cell belongs to. |
| `body` | [`V1PageCell`](/doc/models/v1-page-cell.md) | Body, Required | An object containing the fields to POST for the request.<br><br>See the corresponding object definition for field details. |
### Response Type
[`V1Page`](/doc/models/v1-page.md)
### Example Usage
```java
String locationId = "location_id4";
String pageId = "page_id0";
V1PageCell body = new V1PageCell.Builder()
.pageId("page_id6")
.row(22)
.column(60)
.objectType("ITEM")
.objectId("object_id4")
.build();
v1ItemsApi.updatePageCellAsync(locationId, pageId, body).thenAccept(result -> {
// TODO success callback handler
}).exceptionally(exception -> {
// TODO failure callback handler
return null;
});
```
| 27.611175 | 235 | 0.676966 | eng_Latn | 0.443615 |
a9396bcb642dc20c31750f1c348776857d8b0a15 | 1,724 | md | Markdown | docs/TODO.md | blacktrue/CfdiUtils | 8117f987582b13181fbb6cdfc3afc8e220dedfbb | [
"MIT"
] | null | null | null | docs/TODO.md | blacktrue/CfdiUtils | 8117f987582b13181fbb6cdfc3afc8e220dedfbb | [
"MIT"
] | null | null | null | docs/TODO.md | blacktrue/CfdiUtils | 8117f987582b13181fbb6cdfc3afc8e220dedfbb | [
"MIT"
] | null | null | null | # Lista de tareas pendientes e ideas
## Documentación del proyecto
Documentar los otros helpers de `Elements`:
- Complemento de comercio exterior
- Impuestos locales
- Pagos
Documentar los validadores:
- Revisar todos los validadores documentados en CFDI
- Pagos
## Prepare for version 3
Version 3 will deprecate some classes and methods, it may be good point of start to migrate the project
to a new namespace `PhpCfdi\CfdiUtils`
## CfdiVersion & TfdVersion
The classes `CfdiUtils\CfdiVersion` and `CfdiUtils\TimbreFiscalDigital\CfdiVersion`
share the same logic and methos. They are detected as code smell and it would be better
to have a single class to implement the logic and extend that class to provide configuration.
## Status of a Cfdi using the SAT webservice
This is already implemented in `CfdiUtils\ConsultaCfdiSat\WebService` but there are two
ideas than need a solution:
- Find a way to not depend on PHP SOAP but in something that can do async
request and configure the connection like setting a proxy, maybe depending on guzzle.
- Create a cache of the WSDL page (?)
## Validation rules for Pagos
The validation rules for "Complemento de Recepción de pagos" are included since version 2.6 but
they require more cases of use and a better understanding of the rules published by SAT.
## Validation rules for ComercioExterior
Create validation rules for "Complemento de Comercio Exterior"
## Ideas not to be implemented
### Add a pretty command line utility to validate cfdi files
This will be implemented on a different project, for testing proposes there is the file `tests/validate.php`
### Implement catalogs published by SAT
This will be implemented on a different project.
| 27.806452 | 108 | 0.785963 | eng_Latn | 0.987298 |
a939a93ec455016baee0f9f012fe1d805d7d5ddf | 195 | md | Markdown | content/papers/1998-PPIG-10th-Retowsky.md | psychology-of-programming/ppig.org | f8743920ae777c64b7c3d133ba4c730151ee4c50 | [
"MIT"
] | null | null | null | content/papers/1998-PPIG-10th-Retowsky.md | psychology-of-programming/ppig.org | f8743920ae777c64b7c3d133ba4c730151ee4c50 | [
"MIT"
] | 1 | 2019-05-25T20:03:29.000Z | 2019-05-25T20:03:29.000Z | content/papers/1998-PPIG-10th-Retowsky.md | psychology-of-programming/ppig.org | f8743920ae777c64b7c3d133ba4c730151ee4c50 | [
"MIT"
] | 1 | 2019-06-03T08:53:48.000Z | 2019-06-03T08:53:48.000Z | ---
title: "Software reuse from an external memory: the cognitve issues of support tools"
authors: [Fabrice Retowsky]
abstract: ""
publishedAt: "ppig-1998"
year: 1998
url_pdf: ""
paper_no: 2
---
| 19.5 | 85 | 0.728205 | eng_Latn | 0.957131 |
a93a5ea6e262b728933c89877929addb5633d9fd | 41 | md | Markdown | README.md | LeeBumSeok/oop | 15cda1e66b70943de16caedde7bd6fdeb662e2f7 | [
"MIT"
] | null | null | null | README.md | LeeBumSeok/oop | 15cda1e66b70943de16caedde7bd6fdeb662e2f7 | [
"MIT"
] | null | null | null | README.md | LeeBumSeok/oop | 15cda1e66b70943de16caedde7bd6fdeb662e2f7 | [
"MIT"
] | null | null | null | # oop
KMU CS Object Oriented Programming
| 13.666667 | 34 | 0.804878 | eng_Latn | 0.803995 |
a93bfc07d7d4de0684d78870c384fb26c534d7c1 | 9,910 | md | Markdown | docs/framework/wcf/feature-details/how-to-specify-channel-security-credentials.md | jhonyfrozen/docs.pt-br | c9e86b6a5de2ff8dffd54dd64d2e87aee85a5cb8 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/wcf/feature-details/how-to-specify-channel-security-credentials.md | jhonyfrozen/docs.pt-br | c9e86b6a5de2ff8dffd54dd64d2e87aee85a5cb8 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/wcf/feature-details/how-to-specify-channel-security-credentials.md | jhonyfrozen/docs.pt-br | c9e86b6a5de2ff8dffd54dd64d2e87aee85a5cb8 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: 'Como: especificar credenciais de segurança de canal'
ms.date: 03/30/2017
ms.assetid: f8e03f47-9c4f-4dd5-8f85-429e6d876119
ms.openlocfilehash: 0bfbb71ade3822b9f504c2f89a41145ce0d435f6
ms.sourcegitcommit: 9b552addadfb57fab0b9e7852ed4f1f1b8a42f8e
ms.translationtype: MT
ms.contentlocale: pt-BR
ms.lasthandoff: 04/23/2019
ms.locfileid: "62038864"
---
# <a name="how-to-specify-channel-security-credentials"></a>Como: especificar credenciais de segurança de canal
O Moniker de serviço do Windows Communication Foundation (WCF) permite que aplicativos de COM chamar serviços WCF. A maioria dos serviços WCF requer que o cliente especificar credenciais para autenticação e autorização. Ao chamar um serviço WCF de um cliente WCF, você pode especificar essas credenciais no código gerenciado ou em um arquivo de configuração de aplicativo. Ao chamar um serviço WCF em um aplicativo COM, você pode usar o <xref:System.ServiceModel.ComIntegration.IChannelCredentials> interface para especificar as credenciais. Este tópico será ilustram várias maneiras para especificar as credenciais usando o <xref:System.ServiceModel.ComIntegration.IChannelCredentials> interface.
> [!NOTE]
> <xref:System.ServiceModel.ComIntegration.IChannelCredentials> é uma interface baseada em IDispatch e você não obterá a funcionalidade do IntelliSense no ambiente do Visual Studio.
Este artigo usará o serviço WCF definido na [exemplo de segurança de mensagem](../../../../docs/framework/wcf/samples/message-security-sample.md).
### <a name="to-specify-a-client-certificate"></a>Para especificar um certificado de cliente
1. Execute o arquivo Setup. bat no diretório de segurança de mensagem para criar e instalar os certificados de teste necessário.
2. Abra o projeto de segurança de mensagem.
3. Adicione `[ServiceBehavior(Namespace="http://Microsoft.ServiceModel.Samples")]` para o `ICalculator` definição de interface.
4. Adicionar `bindingNamespace="http://Microsoft.ServiceModel.Samples"` à marca de ponto de extremidade em App. config para o serviço.
5. Criar o exemplo de segurança de mensagem e execute Service.exe. Usar o Internet Explorer e navegue até o URI do serviço (http://localhost:8000/ServiceModelSamples/Service) para garantir que o serviço está funcionando.
6. Abra o Visual Basic 6.0 e crie um novo arquivo .exe padrão. Adicione um botão ao formulário e clique duas vezes no botão para adicionar o seguinte código ao manipulador de cliques:
```
monString = "service:mexAddress=http://localhost:8000/ServiceModelSamples/Service?wsdl"
monString = monString + ", address=http://localhost:8000/ServiceModelSamples/Service"
monString = monString + ", contract=ICalculator, contractNamespace=http://Microsoft.ServiceModel.Samples"
monString = monString + ", binding=BasicHttpBinding_ICalculator, bindingNamespace=http://Microsoft.ServiceModel.Samples"
Set monikerProxy = GetObject(monString)
'Set the Service Certificate.
monikerProxy.ChannelCredentials.SetServiceCertificateAuthentication "CurrentUser", "NoCheck", "PeerOrChainTrust"
monikerProxy.ChannelCredentials.SetDefaultServiceCertificateFromStore "CurrentUser", "TrustedPeople", "FindBySubjectName", "localhost"
'Set the Client Certificate.
monikerProxy.ChannelCredentials.SetClientCertificateFromStoreByName "CN=client.com", "CurrentUser", "My"
MsgBox monikerProxy.Add(3, 4)
```
7. Execute o aplicativo do Visual Basic e verificar os resultados.
O aplicativo Visual Basic exibe uma caixa de mensagem com o resultado de chamar o método Add (3, 4). <xref:System.ServiceModel.ComIntegration.IChannelCredentials.SetClientCertificateFromFile%28System.String%2CSystem.String%2CSystem.String%29> ou <xref:System.ServiceModel.ComIntegration.IChannelCredentials.SetClientCertificateFromStoreByName%28System.String%2CSystem.String%2CSystem.String%29> também pode ser usado no lugar de <xref:System.ServiceModel.ComIntegration.IChannelCredentials.SetClientCertificateFromStore%28System.String%2CSystem.String%2CSystem.String%2CSystem.Object%29> para definir o certificado do cliente:
```
monikerProxy.ChannelCredentials.SetClientCertificateFromFile "C:\MyClientCert.pfx", "password", "DefaultKeySet"
```
> [!NOTE]
> Para essa chamada funcione, o certificado do cliente precisa ser confiável no computador que cliente está em execução no.
> [!NOTE]
> Se o moniker está mal formado ou se o serviço está indisponível, a chamada para `GetObject` retornará um erro informando que "Sintaxe inválida". Se você receber esse erro, verifique se você estiver usando o identificador de origem está correto e o serviço está disponível.
### <a name="to-specify-user-name-and-password"></a>Para especificar o nome de usuário e senha
1. Modifique o arquivo App. config do serviço para usar o `wsHttpBinding`. Isso é necessário para a validação de nome e a senha do usuário:
2. Defina o `clientCredentialType` ao nome de usuário:
3. Abra o Visual Basic 6.0 e crie um novo arquivo .exe padrão. Adicione um botão ao formulário e clique duas vezes no botão para adicionar o seguinte código ao manipulador de cliques:
```
monString = "service:mexAddress=http://localhost:8000/ServiceModelSamples/Service?wsdl"
monString = monString + ", address=http://localhost:8000/ServiceModelSamples/Service"
monString = monString + ", contract=ICalculator, contractNamespace=http://Microsoft.ServiceModel.Samples"
monString = monString + ", binding=WSHttpBinding_ICalculator, bindingNamespace=http://Microsoft.ServiceModel.Samples"
Set monikerProxy = GetObject(monString)
monikerProxy.ChannelCredentials.SetServiceCertificateAuthentication "CurrentUser", "NoCheck", "PeerOrChainTrust"
monikerProxy.ChannelCredentials.SetUserNameCredential "username", "password"
MsgBox monikerProxy.Add(3, 4)
```
4. Execute o aplicativo do Visual Basic e verificar os resultados. O aplicativo Visual Basic exibe uma caixa de mensagem com o resultado de chamar o método Add (3, 4).
> [!NOTE]
> A associação especificada neste exemplo, o moniker de serviço foi alterada para WSHttpBinding_ICalculator. Observe também que você deve fornecer um nome de usuário válido e uma senha na chamada para <xref:System.ServiceModel.ComIntegration.IChannelCredentials.SetUserNameCredential%28System.String%2CSystem.String%29>.
### <a name="to-specify-windows-credentials"></a>Para especificar as credenciais do Windows
1. Definir `clientCredentialType` para Windows no arquivo App. config do serviço:
2. Abra o Visual Basic 6.0 e crie um novo arquivo .exe padrão. Adicione um botão ao formulário e clique duas vezes no botão para adicionar o seguinte código ao manipulador de cliques:
```
monString = "service:mexAddress=http://localhost:8000/ServiceModelSamples/Service?wsdl"
monString = monString + ", address=http://localhost:8000/ServiceModelSamples/Service"
monString = monString + ", contract=ICalculator, contractNamespace=http://Microsoft.ServiceModel.Samples"
monString = monString + ", binding=WSHttpBinding_ICalculator, bindingNamespace=http://Microsoft.ServiceModel.Samples"
monString = monString + ", upnidentity=domain\userID"
Set monikerProxy = GetObject(monString)
monikerProxy.ChannelCredentials.SetWindowsCredential "domain", "userID", "password", 1, True
MsgBox monikerProxy.Add(3, 4)
```
3. Execute o aplicativo do Visual Basic e verificar os resultados. O aplicativo Visual Basic exibe uma caixa de mensagem com o resultado de chamar o método Add (3, 4).
> [!NOTE]
> Você deve substituir "domínio", "userID" e "senha" pelos valores válidos.
### <a name="to-specify-an-issue-token"></a>To specify an issued token
1. Issued tokens are used only by applications that use federated security. For more information about federated security, see [Federation and Issued Tokens](../../../../docs/framework/wcf/feature-details/federation-and-issued-tokens.md) and [Federation Sample](../../../../docs/framework/wcf/samples/federation-sample.md).
The following Visual Basic code example illustrates how to call the <xref:System.ServiceModel.ComIntegration.IChannelCredentials.SetIssuedToken%28System.String%2CSystem.String%2CSystem.String%29> method:
```
monString = "service:mexAddress=http://localhost:8000/ServiceModelSamples/Service?wsdl"
monString = monString + ", address=http://localhost:8000/SomeService/Service"
monString = monString + ", contract=ICalculator, contractNamespace=http://SomeService.Samples"
monString = monString + ", binding=WSHttpBinding_ISomeContract, bindingNamespace=http://SomeService.Samples"
Set monikerProxy = GetObject(monString)
monikerProxy.ChannelCredentials.SetIssuedToken "http://somemachine/sts", "bindingType", "binding"
```
For more information about the parameters for this method, see <xref:System.ServiceModel.ComIntegration.IChannelCredentials.SetIssuedToken%28System.String%2CSystem.String%2CSystem.String%29>.
## <a name="see-also"></a>See also
- [Federation](../../../../docs/framework/wcf/feature-details/federation.md)
- [How to: Configure Credentials on a Federation Service](../../../../docs/framework/wcf/feature-details/how-to-configure-credentials-on-a-federation-service.md)
- [How to: Create a Federated Client](../../../../docs/framework/wcf/feature-details/how-to-create-a-federated-client.md)
- [Message Security](../../../../docs/framework/wcf/feature-details/message-security-in-wcf.md)
- [Bindings and Security](../../../../docs/framework/wcf/feature-details/bindings-and-security.md)
---
layout: default
title: Part II - Perimeter security best practices and mitigations
parent: . June 2021, Deploying Secure Unified Communications/Voice and Video over IP Systems
grand_parent: NSA
nav_order: 30
---
<style>
.dont-break-out {
/* These are technically the same, but use both */
overflow-wrap: break-word;
word-wrap: break-word;
-ms-word-break: break-all;
/* This is the dangerous one in WebKit, as it breaks things wherever */
word-break: break-all;
/* Instead use this non-standard one: */
word-break: break-word;
}
</style>
<div class="dont-break-out" markdown="1">
1. TOC
{:toc}
## Part II: Perimeter security best practices and mitigations
While the first part of this report addressed UC/VVoIP security on the network, Part II addresses security at the perimeter.
The perimeter is where all communications external to the organization’s UC/VVoIP system enter or leave the call-processing network. Session border controllers are essential and enforce call signaling protocol standards for traffic entering and exiting the local UC/VVoIP network. By enforcing call signaling protocol standards, a layer of protection is provided to the servers residing on the internal network that process UC/VVoIP communication packets. In addition, SBCs support secure connectivity from local UC/VVoIP servers to remote service providers and other external UC/VVoIP systems. Implementation of perimeter security should be done after implementing best practices for the network, as described in Part I.
The perimeter, in this case, refers to the external method of communication for the UC/VVoIP call-processing system only. This includes PSTN gateways, SBCs, and virtual private networks (VPNs). All devices that form the perimeter should be securely managed from a dedicated management network.
### *PSTN gateway protections*
PSTN gateways connect a UC/VVoIP call-processing system to the PSTN. The threat presented by gateway devices is that malicious users from outside of the network could connect directly to the gateway device and make unauthorized calls. Unauthorized calls could lead to toll fraud. Another problem with some PSTN gateways is that they can directly pass call-signaling messages to internal enterprise session controllers (ESCs). This could allow a direct compromise of the UC/VVoIP servers.
#### Mitigations
The best way to prevent unauthorized calls is to require authorized users or peer gateways to authenticate before the gateway will complete a call. Some gateways query a separate server to check if a call is authorized. In this case, a secure channel must be used between the gateway and the authorization server.
Place PSTN gateways on their own VLAN and in a DMZ off of a session border controller interface. Use packet filtering to allow signaling messages from authorized servers only. Prevent UC/VVoIP endpoints from sending signaling messages directly to the gateway. Instead, use the UC/VVoIP server as an intermediary.
Gateways must validate and terminate all PSTN signaling at the gateway. The gateway should convert PSTN signaling messages to UC/VVoIP signaling messages. This reduces the likelihood of a successful compromise of the UC/VVoIP servers through the gateway. A malicious actor could still directly compromise the gateway and use it as a platform for lateral movement to other UC/VVoIP devices, but this requires additional steps.
Regularly apply security updates to the software on gateways located at the perimeter.

_Figure 2: Perimeter security device placement following NSA guidelines_
### *Protections for public IP networks functioning as voice carriers*
A benefit of UC/VVoIP is the ability to use public IP networks to carry voice traffic between physically separate offices or between organizations. However, this use of UC/VVoIP requires special security considerations because the organization has little control over its voice/video traffic once it enters other networks.
Once on the public network, an organization’s UC/VVoIP call-processing traffic will traverse computers and networks owned by any number of third parties who could intercept and modify packets without the caller’s or organization’s knowledge. An organization’s internal network policy may allow call-processing traffic to be sent in the clear; however, the accessibility of voice/video traffic on a public network necessitates the use of encryption and authentication to establish a secure channel between the calling and answering parties.
#### Mitigations
Using UC/VVoIP over a public network to establish calls between different organizations requires specific security steps. For confidentiality reasons, UC/VVoIP should use encrypted trunks when communicating over public IP networks. An organization should not trust the traffic originating from another organization. Decrypt and inspect any UC/VVoIP traffic before it is allowed into the internal network. Additionally, an organization should hide its internal network topology by using network address translation (NAT) and non-routable IP addresses on its internal UC/VVoIP network.
An SBC deployed at the network perimeter can provide inspection of the UC/VVoIP traffic as well as provide for NAT traversal. An SBC sits on the edge of the network and proxies the UC/VVoIP connection between the network and the service provider. The SBC rewrites signaling messages (control signals) and dynamically opens ports so media streams can traverse the SBC. SBCs are back-to-back user agents (B2BUAs). B2BUAs proxy connections between endpoints resulting in two separate connections for the communication channel. SBCs understand and inspect UC/VVoIP communication at a level that traditional network firewalls cannot. Because they are B2BUAs, SBCs maintain a separate connection between the internal network and the service provider. This property allows the SBC to inspect and manipulate (i.e., rewrite) portions of the UC/VVoIP packets traversing it. If the streams traversing the SBC are unencrypted, the SBC can rewrite the internal IP addresses buried within the UC/VVoIP packets with external IP addresses, allowing for NAT. The use of non-routable addresses prohibits a malicious actor from directly routing a packet across the Internet to a device on the internal network.
Inter-office communication can be established using encrypted VPNs. A VPN is likely already established between offices for data-only traffic. However, since it is recommended that UC/VVoIP call-processing and data networks should be kept on separate VLANs, a separate VPN must be established for call-processing traffic or the VPN must respect and maintain VLAN separation.
### *Signaling gateway protections*
A signaling gateway is a translation device that is used to pass signaling (i.e., call control) information between two different network protocols or across a public IP network. In the case of UC/VVoIP, this is between an IP-based call-processing system and an external legacy telephony network (i.e., central office SS7, T1, etc.). A compromise of a signaling gateway can lead to a disruption of voice and video services, access to the topological information of the network, identifying the subscribers, or other effects. The gateway device can be stand-alone or integrated with another signaling gateway.
#### Mitigations
Signaling gateways are public facing servers. As with all public facing servers, the signaling gateway should be placed in a demilitarized zone (DMZ). The DMZ in this case should be an interface off of the SBC. In addition to being placed in a DMZ, the signaling gateway should be placed in its own VLAN. UC/VVoIP devices should not be able to send call control signaling messages directly to the gateway, and instead should use the UC/VVoIP server as an intermediary or protocol translation device. The gateway will send the signaling messages to the ESC server, which acts as an intermediary between the two UC/VVoIP endpoint devices. Signaling gateways must validate and terminate all PSTN signaling, then convert the terminated signaling messages to UC/VVoIP call control signaling messages for communications to UC/VVoIP-based devices. This type of protocol translation enacted by the signaling gateway helps reduce the likelihood of a successful compromise of the ESC server. For all signaling protocols that can be encrypted, encryption should be utilized.
The signaling gateway should be configured to log all calls. Because the signaling gateway is located at the perimeter, it is capable of keeping records of calls entering and exiting the network. Keeping call records that include call connection time, length of call, and other data often proves useful when trying to identify origin and identification of a malicious actor.
### *Media gateway protections*
A media gateway is a translation device that converts media streams (voice, video) between different communications formats and protocols. For example, a media gateway device can convert voice media originating from a time-division multiplexing-based (TDM) system to voice media destined for a UC/VVoIP system. The media gateway device can be stand-alone or integrated with another device (i.e., signaling gateway). Also note that a signaling gateway can initiate and terminate communications on the media gateway. A successful compromise of the media gateway can lead to the eavesdropping or disruption of all voice and video calls traversing the gateway.
#### Mitigations
Place media gateways in their own media VLAN and in a DMZ off of an SBC interface. When calls are routed over public networks, encryption of media protocols is essential in the same way as with signaling protocols. Use a VPN for any inter-office communications across the public network.
### *Wide area network (WAN) link protections*
Network connections to remote offices are considered part of the internal network and thus should follow the same data and UC/VVoIP call-processing separation guidelines. In this context, remote office WAN links refer to dedicated leased lines connecting the remote and primary networks where both ends of the link are managed by the same organization. Because the WAN link connects the internal network to the outside world, if it is not protected properly, it puts the internal network at risk.
#### Mitigations
WAN protection methods include VPN protocols such as IPsec and TLS. The VPN must support the separation of UC/VVoIP call-processing networks and data networks by either supporting VLANs or creating individual VPNs for each network.
### *Cloud connectivity protections*
Some organizations are currently migrating the Internet Protocol private branch exchange (PBX) to the cloud to accrue benefits the cloud offers (increased efficiency, greater flexibility, reduced infrastructure costs, lower operational costs, and improved communications). Cloud-based communications systems can include IP PBXs, SIP servers, UC/VVoIP teleconferencing, and other applications.
With the rise of cloud computing, security remains a top concern. Just as security concerns expanded when PBXs migrated to IP PBXs and then evolved to UC/VVoIP systems, security is just as relevant now, as UC/VVoIP systems begin the migration to the cloud. Threats to the cloud include denial of service effects, access misconfigurations, and unsecured application programmable interfaces used by programmers. When migrating to a cloud-based solution, data security must be maintained. Confidentiality of the call signaling must be maintained, the media channel (voice, video, and data) must prevent eavesdropping, and all devices involved must be properly authenticated.
#### Mitigations
To help mitigate risks around migrations to the cloud, employ cryptographic protocols to encrypt communications between UC/VVoIP devices. Whether moving a call server entirely into the cloud, or just providing trunk connectivity from an external call server to the cloud, it is best to protect call server peripherals with encryption and authentication. The encryption should be configured on UC/VVoIP signaling and media devices. To protect call control signaling originating from local UC/VVoIP systems out to the cloud, use SIP over TLS or H.235 (H.323 over TLS). To protect voice/video media originating from local UC/VVoIP systems out to the cloud, use Secure Real-Time Protocol (SRTP). Secure connections to the cloud must be established by implementing trusted paths and channels that support encryption and two-way authentication such as IPsec, TLS, DTLS, HTTPS, and SSH.
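To make the "SIP over TLS" recommendation above concrete, the sketch below builds a hardened TLS client context using Python's standard `ssl` module. The hardening choices shown are illustrative only, and the SBC hostname and port in the usage comment are placeholders; real deployments should follow their organization's approved crypto policy.

```python
import ssl

def make_sip_tls_context() -> ssl.SSLContext:
    """Build a TLS client context for protecting SIP signaling in transit.

    Illustrative hardening only; apply organizational policy for approved
    TLS versions, cipher suites, and certificate validation rules.
    """
    ctx = ssl.create_default_context()            # loads system CAs, requires a server cert
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocol versions
    ctx.check_hostname = True                     # bind the certificate to the SBC's name
    return ctx

# Hypothetical usage against an SBC's TLS listener (5061 is the conventional
# SIP-over-TLS port; "sbc.example.org" is a placeholder name):
#
# import socket
# with socket.create_connection(("sbc.example.org", 5061)) as raw:
#     with make_sip_tls_context().wrap_socket(raw, server_hostname="sbc.example.org") as tls:
#         tls.sendall(b"OPTIONS sip:sbc.example.org SIP/2.0\r\n...")
```

The same principle applies to the media path: pair an authenticated, encrypted signaling channel like this with SRTP for the voice/video streams it negotiates.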
DMZ-like separation between logical external gateways and logical internal capabilities should be maintained. Access control mechanisms should be employed to restrict access to the systems hosted in the cloud. Robust logging should be enabled and those logs routinely reviewed to detect and trace any potential compromises.
### *Summary of Part II*
Perimeter security is paramount when deploying UC/VVoIP systems. Protection from external intrusions can be mitigated by employing the security features of devices located at the perimeter, as well as deploying special purpose UC/VVoIP security devices such as an SBC. Access control, data/voice separation, encryption, authentication, logging, and secure management are all considerations. By implementing these core security components in accordance with this document, the security at the perimeter will be greatly enhanced.
Once security at the network and perimeter is addressed, one can turn attention to ESCs.
***
#### Table of Contents
{: .no_toc}
<ul><li> <a href="/docs/nsa/deploying-secure-unified-communications-voice-and-video-over-ip-systems-1/">Executive summary</a></li><li> <a href="/docs/nsa/deploying-secure-unified-communications-voice-and-video-over-ip-systems-2/">Part I - Network security best practices and mitigations</a></li><li> <a href="/docs/nsa/deploying-secure-unified-communications-voice-and-video-over-ip-systems-3/">Part II - Perimeter security best practices and mitigations</a></li><li> <a href="/docs/nsa/deploying-secure-unified-communications-voice-and-video-over-ip-systems-4/">Part III - Enterprise session controller security best practices and mitigations</a></li><li> <a href="/docs/nsa/deploying-secure-unified-communications-voice-and-video-over-ip-systems-5/">Part IV - UC/VVoIP endpoint best practices and mitigations</a></li><li> <a href="/docs/nsa/deploying-secure-unified-communications-voice-and-video-over-ip-systems-6/">End of guidelines</a></li></ul>
***
</div>
# Parallel State
```yaml
- id: a
type: parallel
mode: and
actions:
- function: myfunc
input: 'jq(.x)'
- function: myfunc
input: 'jq(.y)'
```
## ParallelStateDefinition
The `parallel` state is an alternative to the `action` state when a workflow can perform multiple threads of logic simultaneously.
| Parameter | Description | Type | Required |
| --- | --- | --- | --- |
| `type` | Identifies which kind of [StateDefinition](./states.md) is being used. In this case it must be set to `parallel`. | string | yes |
| `id` | An identifier unique within the workflow to this one state. | string | yes |
| `log` | If defined, the workflow will generate a log when it commences this state. See [StateLogging](./logging.md). | [Structured JQ](../instance-data/structured-jx.md) | no |
| `metadata` | If defined, updates the instance's metadata. See [InstanceMetadata](./metadata.md). | [Structured JQ](../instance-data/structured-jx.md) | no |
| `transform` | If defined, modifies the instance's data upon completing the state logic. See [StateTransforms](../instance-data/transforms.md). | [Structured JQ](../instance-data/structured-jx.md) | no |
| `transition` | Identifies which state to transition to next, referring to the next state's unique `id`. If undefined, this state terminates the workflow. | string | no |
| `catch` | Defines behaviour for handling of catchable errors. | [[]ErrorCatchDefinition](./errors.md) | no |
| `timeout` | ISO8601 duration string to set a non-default timeout. | string | no |
| `mode` | If defined, must be either `and` or `or`. The default is `and`. This setting determines whether the state is considered successfully completed only if all threads have returned without error (`and`) or as soon as any single thread returns without error (`or`). | string | no |
| `actions` | Defines the action to perform. | [[]ActionDefinition](./actions.md) | yes |
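For contrast with the `and` example at the top of this page, the following sketch uses `or` mode so the state succeeds as soon as either thread returns, with a timeout and a catch-all error handler. The function names and transition targets (`fetch-primary`, `fetch-mirror`, `done`, `handle-failure`) are placeholders you would define elsewhere in the workflow.

```yaml
- id: race
  type: parallel
  mode: or            # succeed on the first thread that returns without error
  timeout: PT30S      # non-default timeout, ISO8601 duration
  actions:
    - function: fetch-primary
      input: 'jq(.query)'
    - function: fetch-mirror
      input: 'jq(.query)'
  transition: done
  catch:
    - error: '*'      # catch any error, including the timeout firing
      transition: handle-failure
```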