# Due to a conflict in naming, this package has been renamed as [eloquent-identity](https://github.com/sprocketbox/eloquent-identity) to avoid confusion.
# Eloquence
[](https://packagist.org/packages/sprocketbox/eloquence)
[](https://packagist.org/packages/sprocketbox/eloquence)
[](https://packagist.org/packages/sprocketbox/eloquence)
[](https://scrutinizer-ci.com/g/sprocketbox/eloquence/?branch=master)
- **Laravel**: 7
- **PHP**: 7.4+
- **License**: MIT
- **Author**: Ollie Read
- **Author Homepage**: https://sprocketbox.io
Eloquence provides a cache on top of Eloquent that prevents multiple model instances from being created for a single database row,
using the Identity Map design pattern ([P of EAA](https://martinfowler.com/eaaCatalog/identityMap.html), [Wikipedia](https://en.wikipedia.org/wiki/Identity_map_pattern)).
#### Table of Contents
- [Installing](#installing)
- [Usage](#usage)
- [Finding](#finding)
- [Hydrating](#hydrating)
- [Belongs To](#belongsto)
- [Flushing](#flushing)
- [How does it work](#how)
- [Why?](#why)
## Installing
To install this package simply run the following command.
```
composer require sprocketbox/eloquence
```
This package uses auto-discovery to register the service provider, but if you'd rather do it manually,
the service provider is:
```
Sprocketbox\Eloquence\ServiceProvider
```
There is no configuration required.
## Usage
To make use of Eloquence on your models, add the following trait.
```
Sprocketbox\Eloquence\Concerns\MapsIdentity
```
### Finding
Any calls to `find()`, `findOrFail()`, or `findOrNew()` on a model that uses this trait will skip the query
if a model has already been created using the provided id.
Calls to `findMany()` will skip any ids that have already been loaded, querying only the ones that are not present in the cache.
The query is only skipped if there are no where clauses, joins or having statements.
If you wish to force the query, you can call `refreshIdentityMap()` on the query builder instance. If you wish to skip
the query on a builder instance where `refreshIdentityMap()` has been called, you can call `useIdentityMap()`.
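Conceptually, the cache check behind `findMany()` can be sketched as follows. This is an illustrative JavaScript sketch only (the package itself is PHP), and `partitionIds` is a hypothetical helper, not part of the package's API:

```javascript
// Illustrative sketch: split the requested ids into ones already cached
// in the identity map (no query needed) and ones that still must be fetched.
function partitionIds(cache, ids) {
  const cached = [];
  const toQuery = [];
  for (const id of ids) {
    if (cache.has(id)) {
      cached.push(cache.get(id));
    } else {
      toQuery.push(id);
    }
  }
  return { cached, toQuery };
}

// Example: ids 1 and 3 are already in the identity map.
const cache = new Map([[1, { id: 1 }], [3, { id: 3 }]]);
const { cached, toQuery } = partitionIds(cache, [1, 2, 3, 4]);
console.log(cached.length); // 2
console.log(toQuery);       // [2, 4]
```

Only the ids in `toQuery` would result in a database query; the rest are served from the map.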
### Hydrating
When the query builder attempts to create a new instance of a model using this trait, and the key matches an
existing instance of that model, the existing instance is reused.
If the model uses timestamps and the returned attributes are newer, the attributes on the existing instance are
updated, while retaining any changes you had previously made.
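The merge rule can be sketched like this. This is JavaScript pseudocode of the behaviour described above, with hypothetical names (`rehydrate`, a `dirty` set); the real implementation lives in the PHP trait:

```javascript
// Illustrative: refresh a cached instance from freshly fetched attributes,
// but keep any attribute the caller has already changed locally ("dirty").
function rehydrate(existing, fresh) {
  if (fresh.updated_at > existing.attributes.updated_at) {
    for (const [key, value] of Object.entries(fresh)) {
      if (!existing.dirty.has(key)) {
        existing.attributes[key] = value;
      }
    }
  }
  return existing;
}

const user = {
  attributes: { id: 1, name: 'Local edit', updated_at: 100 },
  dirty: new Set(['name']),
};
rehydrate(user, { id: 1, name: 'Db name', updated_at: 200 });
console.log(user.attributes.name);       // 'Local edit' (dirty change kept)
console.log(user.attributes.updated_at); // 200
```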
### BelongsTo
If a belongs-to relationship (not belongs-to-many) is loaded without constraints and without `refreshIdentityMap()` being
called, the query will skip any model instances that already exist.
### Flushing
If you wish to flush the cached models, call `flushIdentities()` on an instance of `IdentityManager`, or on the `Eloquence`
facade.
## How
The `IdentityManager` stores an array containing all existing model instances and their identity.
The identities for models are stored as strings, created using the following class.
```
Sprocketbox\Eloquence\ModelIdentity
```
This contains a key, the model class name, and the connection name. The string version looks like this:
```
connection:class:key
```
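For illustration, composing and parsing that identity string might look like this. This is a JavaScript sketch; `makeIdentity` and `parseIdentity` are hypothetical helpers for illustration only, not part of the package:

```javascript
// Illustrative: build and split the "connection:class:key" identity string.
function makeIdentity(connection, modelClass, key) {
  return `${connection}:${modelClass}:${key}`;
}

function parseIdentity(identity) {
  const [connection, modelClass, key] = identity.split(':');
  return { connection, modelClass, key };
}

const id = makeIdentity('mysql', 'User', 42);
console.log(id);                    // 'mysql:User:42'
console.log(parseIdentity(id).key); // '42'
```

Because the string includes the connection name, the same primary key on two different connections yields two distinct identities.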
## Why
It's very easy to end up with multiple instances of the same model, meaning that updates made to one aren't reflected
in the others.
Eloquence was created to reduce the number of models created, help limit unnecessary queries, and allow for consistent
model interaction. It doesn't matter where in your code you're dealing with user 1; any changes made during a request
will persist across all instances.
[@redhat-cloud-services/insights-client](../README.md) > [RuleImpact](../interfaces/ruleimpact.md)
# Interface: RuleImpact
*__export__*:
*__interface__*: RuleImpact
## Hierarchy
**RuleImpact**
## Index
### Properties
* [impact](ruleimpact.md#impact)
* [name](ruleimpact.md#name)
---
## Properties
<a id="impact"></a>
### `<Optional>` impact
**● impact**: *`number`*
*Defined in [api.ts:637](https://github.com/RedHatInsights/javascript-clients/blob/master/packages/insights/api.ts#L637)*
*__type__*: {number}
*__memberof__*: RuleImpact
___
<a id="name"></a>
### `<Optional>` name
**● name**: *`string`*
*Defined in [api.ts:631](https://github.com/RedHatInsights/javascript-clients/blob/master/packages/insights/api.ts#L631)*
*__type__*: {string}
*__memberof__*: RuleImpact
___
## Box
A container that lays out its contents in one direction. Box
provides CSS flexbox capabilities for layout, as well as general
styling of things like background color, border, and animation.
[](https://storybook.grommet.io/?selectedKind=Layout-Box&full=0&stories=1&panelRight=0) [](https://codesandbox.io/s/github/grommet/grommet-sandbox?initialpath=/box&module=%2Fsrc%2FBox.js)
## Usage
```javascript
import { Box } from 'grommet';
<Box />
```
## Properties
**a11yTitle**
Custom label to be used by screen readers. When provided, an aria-label will
be added to the element.
```
string
```
**alignSelf**
How to align along the cross axis when contained in
a Box or along the column axis when contained in a Grid.
```
start
center
end
stretch
```
**gridArea**
The name of the area to place
this inside a parent Grid.
```
string
```
**margin**
The amount of margin around the component. An object can
be specified to distinguish horizontal margin, vertical margin, and
margin on a particular side.
```
none
xxsmall
xsmall
small
medium
large
xlarge
{
bottom:
xxsmall
xsmall
small
medium
large
xlarge
string,
end:
xxsmall
xsmall
small
medium
large
xlarge
string,
horizontal:
xxsmall
xsmall
small
medium
large
xlarge
string,
left:
xxsmall
xsmall
small
medium
large
xlarge
string,
right:
xxsmall
xsmall
small
medium
large
xlarge
string,
start:
xxsmall
xsmall
small
medium
large
xlarge
string,
top:
xxsmall
xsmall
small
medium
large
xlarge
string,
vertical:
xxsmall
xsmall
small
medium
large
xlarge
string
}
string
```
**align**
How to align the contents along the cross axis.
```
start
center
end
baseline
stretch
```
**alignContent**
How to align the contents when there is extra space in
the cross axis. Defaults to `stretch`.
```
start
center
end
between
around
stretch
```
**animation**
Animation effect(s) to use. 'duration' and 'delay' should
be in milliseconds. 'jiggle' and 'pulse' types are intended for
small elements, like icons.
```
fadeIn
fadeOut
jiggle
pulse
rotateLeft
rotateRight
slideUp
slideDown
slideLeft
slideRight
zoomIn
zoomOut
{
type:
fadeIn
fadeOut
jiggle
pulse
rotateLeft
rotateRight
slideUp
slideDown
slideLeft
slideRight
zoomIn
zoomOut,
delay: number,
duration: number,
size:
xsmall
small
medium
large
xlarge
}
[
fadeIn
fadeOut
jiggle
pulse
rotateLeft
rotateRight
slideUp
slideDown
slideLeft
slideRight
zoomIn
zoomOut
{
type:
fadeIn
fadeOut
jiggle
pulse
rotateLeft
rotateRight
slideUp
slideDown
slideLeft
slideRight
zoomIn
zoomOut,
delay: number,
duration: number,
size:
xsmall
small
medium
large
xlarge
}
]
```
**background**
Either a color
identifier to use for the background color (for example, 'neutral-1'), or a
'url()' for an image. 'dark' is not needed if a color is provided.
```
string
{
color:
string
{
dark: string,
light: string
},
dark:
boolean
string,
image: string,
position: string,
opacity:
string
boolean
number
weak
medium
strong,
repeat:
no-repeat
repeat
string,
size:
cover
contain
string,
light: string
}
```
**basis**
A fixed or relative size along its container's main axis.
```
xxsmall
xsmall
small
medium
large
xlarge
xxlarge
full
1/2
1/3
2/3
1/4
2/4
3/4
auto
string
```
**border**
Include a border. 'between' will place a border in the gap between
child elements. You must have a 'gap' to use 'between'.
```
boolean
top
left
bottom
right
start
end
horizontal
vertical
all
between
{
color:
string
{
dark: string,
light: string
},
side:
top
left
bottom
right
start
end
horizontal
vertical
all
between,
size:
xsmall
small
medium
large
xlarge
string,
style:
solid
dashed
dotted
double
groove
ridge
inset
outset
hidden
}
[{
color:
string
{
dark: string,
light: string
},
side:
top
left
bottom
right
start
end
horizontal
vertical
all
between,
size:
xsmall
small
medium
large
xlarge
string,
style:
solid
dashed
dotted
double
groove
ridge
inset
outset
hidden
}]
```
**direction**
The orientation to layout the child components in. Defaults to `column`.
```
row
column
row-responsive
row-reverse
column-reverse
```
**elevation**
Elevated height above the underlying context, indicated
via a drop shadow. Defaults to `none`.
```
none
xsmall
small
medium
large
xlarge
string
```
**flex**
Whether flex-grow and/or flex-shrink is true and at a desired factor.
```
grow
shrink
boolean
{
grow: number,
shrink: number
}
```
**fill**
Whether the width and/or height should fill the container.
```
horizontal
vertical
boolean
```
**focusIndicator**
When interactive via 'onClick', whether it should receive a focus
outline. Defaults to `true`.
```
boolean
```
**gap**
The amount of spacing between child elements. This
should not be used in conjunction with 'wrap' as the gap elements
will not wrap gracefully. If a child is a Fragment,
Box will not add a gap between the children of the Fragment.
```
none
xxsmall
xsmall
small
medium
large
xlarge
string
```
**height**
A fixed height.
```
xxsmall
xsmall
small
medium
large
xlarge
xxlarge
string
{
height:
xxsmall
xsmall
small
medium
large
xlarge
xxlarge
string,
min:
xxsmall
xsmall
small
medium
large
xlarge
xxlarge
string,
max:
xxsmall
xsmall
small
medium
large
xlarge
xxlarge
string
}
```
**hoverIndicator**
When 'onClick' has been specified, the hover indicator to apply
when the user is mousing over the box.
```
boolean
string
background
string
{
color:
string
{
dark: string,
light: string
},
dark:
boolean
string,
image: string,
position: string,
opacity:
string
boolean
number
weak
medium
strong,
repeat:
no-repeat
repeat
string,
size:
cover
contain
string,
light: string
}
{
background:
string
{
color:
string
{
dark: string,
light: string
},
dark:
boolean
string,
image: string,
position: string,
opacity:
string
boolean
number
weak
medium
strong,
repeat:
no-repeat
repeat
string,
size:
cover
contain
string,
light: string
},
elevation:
none
xsmall
small
medium
large
xlarge
string
}
```
**justify**
How to align the contents along the main axis. Defaults to `stretch`.
```
around
between
center
end
evenly
start
stretch
```
**onClick**
Click handler. Setting this property adds additional attributes to
the DOM for accessibility.
```
function
```
**overflow**
box overflow.
```
auto
hidden
scroll
visible
{
horizontal:
auto
hidden
scroll
visible,
vertical:
auto
hidden
scroll
visible
}
string
```
**pad**
The amount of padding around the box contents. An
object can be specified to distinguish horizontal padding, vertical
padding, and padding on a particular side of the box. Defaults to `none`.
```
none
xxsmall
xsmall
small
medium
large
xlarge
{
bottom:
xxsmall
xsmall
small
medium
large
xlarge
string,
end:
xxsmall
xsmall
small
medium
large
xlarge
string,
horizontal:
xxsmall
xsmall
small
medium
large
xlarge
string,
left:
xxsmall
xsmall
small
medium
large
xlarge
string,
right:
xxsmall
xsmall
small
medium
large
xlarge
string,
start:
xxsmall
xsmall
small
medium
large
xlarge
string,
top:
xxsmall
xsmall
small
medium
large
xlarge
string,
vertical:
xxsmall
xsmall
small
medium
large
xlarge
string
}
string
```
**responsive**
Whether margin, pad, and border
sizes should be scaled for mobile environments. Defaults to `true`.
```
boolean
```
**round**
How much to round the corners.
```
boolean
xsmall
small
medium
large
xlarge
full
string
{
corner:
top
left
bottom
right
top-left
top-right
bottom-left
bottom-right,
size:
xsmall
small
medium
large
xlarge
string
}
```
**tag**
The DOM tag to use for the element. NOTE: This is deprecated in favor
of indicating the DOM tag via the 'as' property.
```
string
function
```
**as**
The DOM tag or react component to use for the element. Defaults to `div`.
```
string
function
```
**width**
A fixed width.
```
xxsmall
xsmall
small
medium
large
xlarge
xxlarge
string
{
width:
xxsmall
xsmall
small
medium
large
xlarge
xxlarge
string,
min:
xxsmall
xsmall
small
medium
large
xlarge
xxlarge
string,
max:
xxsmall
xsmall
small
medium
large
xlarge
xxlarge
string
}
```
**wrap**
Whether children can wrap if they can't all fit.
```
boolean
reverse
```
## Intrinsic element
```
div
```
## Theme
**global.animation**
The animation configuration for the Box. Expects `object`.
Defaults to
```
{
duration: '1s',
jiggle: {
duration: '0.1s',
},
}
```
**global.borderSize**
The possible border sizes in the Box. Expects `object`.
Defaults to
```
{
xsmall: '1px',
small: '2px',
medium: '4px',
large: '12px',
  xlarge: '24px',
}
```
**global.elevation**
The possible shadows in Box elevation. Expects `object`.
Defaults to
```
{
light: {
none: 'none',
xsmall: '0px 1px 2px rgba(100, 100, 100, 0.50)',
small: '0px 2px 4px rgba(100, 100, 100, 0.50)',
medium: '0px 3px 8px rgba(100, 100, 100, 0.50)',
large: '0px 6px 12px rgba(100, 100, 100, 0.50)',
xlarge: '0px 8px 16px rgba(100, 100, 100, 0.50)',
},
dark: {
none: 'none',
xsmall: '0px 2px 2px rgba(255, 255, 255, 0.40)',
small: '0px 4px 4px rgba(255, 255, 255, 0.40)',
medium: '0px 6px 8px rgba(255, 255, 255, 0.40)',
large: '0px 8px 16px rgba(255, 255, 255, 0.40)',
xlarge: '0px 10px 24px rgba(255, 255, 255, 0.40)',
},
}
```
**global.colors.border**
The color of the border Expects `string | { dark: string, light: string }`.
Defaults to
```
{ dark: 'rgba(255, 255, 255, 0.33)', light: 'rgba(0, 0, 0, 0.33)' }
```
**global.hover.background.color**
The color of the default background when hovering Expects `string | { dark: string, light: string }`.
Defaults to
```
active
```
**global.hover.background.opacity**
The opacity of the default background when hovering Expects `string | { dark: string, light: string }`.
Defaults to
```
medium
```
**global.hover.color**
The color of the default background when hovering Expects `string | { dark: string, light: string }`.
Defaults to
```
{ dark: "white", light: "black" }
```
**global.opacity.medium**
The value used when background opacity is set to true. Expects `number`.
Defaults to
```
0.4
```
**global.size**
The possible sizes for width, height, and basis. Expects `object`.
Defaults to
```
{
xxsmall: '48px',
xsmall: '96px',
small: '192px',
medium: '384px',
large: '768px',
xlarge: '1152px',
xxlarge: '1536px',
full: '100%',
}
```
**box.extend**
Any additional style for the Box. Expects `string | (props) => {}`.
Defaults to
```
undefined
```
**box.responsiveBreakpoint**
The actual breakpoint to trigger changes in the border,
direction, gap, margin, pad, and round. Expects `string`.
Defaults to
```
small
```
**global.edgeSize**
The possible sizes for any of gap, margin, and pad. Expects `object`.
Defaults to
```
{
edgeSize: {
none: '0px',
hair: '1px',
xxsmall: '3px',
xsmall: '6px',
small: '12px',
medium: '24px',
large: '48px',
xlarge: '96px',
responsiveBreakpoint: 'small',
},
}
```
**global.breakpoints**
The possible breakpoints that could affect border, direction, gap, margin,
pad, and round. Expects `object`.
Defaults to
```
{
small: {
value: '768px',
borderSize: {
xsmall: '1px',
small: '2px',
medium: '4px',
large: '6px',
xlarge: '12px',
},
edgeSize: {
none: '0px',
hair: '1px',
xxsmall: '2px',
xsmall: '3px',
small: '6px',
medium: '12px',
large: '24px',
xlarge: '48px',
},
size: {
xxsmall: '24px',
xsmall: '48px',
small: '96px',
medium: '192px',
large: '384px',
xlarge: '768px',
full: '100%',
},
},
medium: {
value: '1536px',
},
large: {},
}
```
---
UID: NF:ndis.NdisFNetPnPEvent
title: NdisFNetPnPEvent function (ndis.h)
description: A filter driver can call the NdisFNetPnPEvent function to forward a network Plug and Play (PnP) or Power Management event to overlying drivers.
old-location: netvista\ndisfnetpnpevent.htm
tech.root: netvista
ms.assetid: 383f9dcb-68ba-4323-b25f-668169043f35
ms.date: 05/02/2018
ms.keywords: NdisFNetPnPEvent, NdisFNetPnPEvent function [Network Drivers Starting with Windows Vista], filter_ndis_functions_ref_36921970-788b-4b5e-9cf0-c54f8dcdeef2.xml, ndis/NdisFNetPnPEvent, netvista.ndisfnetpnpevent
ms.topic: function
req.header: ndis.h
req.include-header: Ndis.h
req.target-type: Desktop
req.target-min-winverclnt: Supported in NDIS 6.0 and later.
req.target-min-winversvr:
req.kmdf-ver:
req.umdf-ver:
req.ddi-compliance: Irql_Filter_Driver_Function
req.unicode-ansi:
req.idl:
req.max-support:
req.namespace:
req.assembly:
req.type-library:
req.lib: Ndis.lib
req.dll:
req.irql: PASSIVE_LEVEL
topic_type:
- APIRef
- kbSyntax
api_type:
- LibDef
api_location:
- ndis.lib
- ndis.dll
api_name:
- NdisFNetPnPEvent
product:
- Windows
targetos: Windows
req.typenames:
---
# NdisFNetPnPEvent function
## -description
A filter driver can call the
<b>NdisFNetPnPEvent</b> function to forward a network Plug and Play (PnP) or Power Management event to
overlying drivers.
## -parameters
### -param NdisFilterHandle
A handle to the context area for the filter module. The filter driver created and initialized this
context area in the
<a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/content/ndis/nc-ndis-filter_attach">FilterAttach</a> function.
### -param NetPnPEventNotification
A pointer to a
<a href="https://msdn.microsoft.com/58d3baf3-a1fa-42ae-b795-2774a148aeda">
NET_PNP_EVENT_NOTIFICATION</a> structure, which describes the network PnP event or Power Management
event being forwarded by the filter driver.
## -returns
<b>NdisFNetPnPEvent</b> can return either of the following:
<table>
<tr>
<th>Return code</th>
<th>Description</th>
</tr>
<tr>
<td width="40%">
<dl>
<dt><b>NDIS_STATUS_SUCCESS</b></dt>
</dl>
</td>
<td width="60%">
The overlying driver succeeded in processing the PnP event.
</td>
</tr>
<tr>
<td width="40%">
<dl>
<dt><b>NDIS_STATUS_FAILURE</b></dt>
</dl>
</td>
<td width="60%">
The overlying driver failed the PnP event.
</td>
</tr>
</table>
## -remarks
NDIS calls a filter driver's
<a href="https://msdn.microsoft.com/5c52b2d2-3fba-4d28-8172-7b6854386061">FilterNetPnPEvent</a> function to notify
the filter driver of network PnP and Power Management events.
Filter drivers can forward these notifications to overlying drivers. To forward a notification, call the
<b>NdisFNetPnPEvent</b> function from
<i>FilterNetPnPEvent</i>.
<div class="alert"><b>Note</b> NDIS drivers must not call
<b>NdisFNetPnPEvent</b> from within the context of the
<a href="https://msdn.microsoft.com/238bfa21-a971-4fe4-a774-6ba834efc3c5">FilterOidRequest</a> function.</div>
<div> </div>
## -see-also
<a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/content/ndis/nc-ndis-filter_attach">FilterAttach</a>
<a href="https://msdn.microsoft.com/5c52b2d2-3fba-4d28-8172-7b6854386061">FilterNetPnPEvent</a>
<a href="https://msdn.microsoft.com/238bfa21-a971-4fe4-a774-6ba834efc3c5">FilterOidRequest</a>
<a href="https://msdn.microsoft.com/library/windows/hardware/ff568752">NET_PNP_EVENT_NOTIFICATION</a>
If you'd like to contribute, please fork the repository and use a feature branch. Pull requests are warmly welcome.
Slow Motion Video Recorder for iOS
==========================
An iOS sample app for **recording 120 fps slow-motion videos** using AVFoundation. It includes a wrapper class that makes the implementation much easier. Available on the **iPhone 5s**.

## Usage of the wrapper class
This repository includes a wrapper class, "AVCaptureManager", which makes implementing a 120 fps video recorder app much easier.
### 1. Initialize
````
self.captureManager = [[AVCaptureManager alloc] initWithPreviewView:self.view];
self.captureManager.delegate = self;
````
### 2. Start recording
````
[self.captureManager startRecording];
````
### 3. Stop recording
````
[self.captureManager stopRecording];
````
## Example of the slow-motion video

<p><a href="http://vimeo.com/82064431">See the 120 fps slow-motion video on Vimeo.</a></p>
# data-structure-in-typescript
- [x] linkedlist
- [x] queue
- [x] stack
- [ ] linkedSet
- [ ] linkedHashTable
- [ ] openHashTable
- [x] binaryTree
- [ ] AVLTree
- [ ] BTree
- [ ] B+Tree
- [ ] B*Tree
- [ ] Trie
- [x] heap
- [ ] pqueue
- [x] tableGraph
- [ ] matrixGraph
You can use {% data variables.product.prodname_discussions %} to ask and answer questions, share information, make announcements, and conduct or participate in conversations about a project. For more information, see "[About discussions](/discussions/collaborating-with-your-community-using-discussions/about-discussions)."
# Create a web app to interact with objects detected using machine learning
### Use an open source object detector deep learning model within a web application to filter the objects recognized in an image
English version: https://developer.ibm.com/patterns/create-a-web-app-to-interact-with-objects-detected-using-machine-learning
Source code: https://github.com/IBM/MAX-Object-Detector-Web-App
###### Please refer to the URL above for the latest English content.
last_updated: 2019-03-28
## Summary
The [Model Asset eXchange](https://developer.ibm.com/exchanges/models/) (MAX) gives application developers without data science experience easy access to prebuilt machine learning models. Follow this code pattern to learn how to create a simple web application that visualizes the text output of a MAX model. The web app uses the [Object Detector](https://developer.ibm.com/exchanges/models/all/max-object-detector/) provided by MAX to create a simple web UI that displays bounding boxes around the objects detected in an image and lets you filter those objects based on the labels and prediction accuracy given by the model.
## Description
The Model Asset eXchange (MAX) publishes a variety of models so that developers can find and experiment with open source deep learning models. This code pattern uses one of the models available on MAX: the Object Detector is used to build a web application that recognizes objects in an image and lets you filter the detected objects by their labels and prediction accuracy. The interactive user interface provided by this web application is backed by a lightweight Python server using Express. The Python server hosts the client-side web UI and relays the web UI's API calls to the model's REST endpoint. The web UI takes an image, sends it to the model's REST endpoint via the Python server, and displays the objects detected by the model in the UI. The model's REST endpoint is set up using the Docker image provided on MAX. The web UI, which displays the detected objects with bounding boxes and labels, includes a toolbar for filtering the displayed objects by label or by a prediction accuracy threshold.
When you have completed this code pattern, you will understand how to:
* Build a Docker image of the Object Detector MAX model
* Deploy a deep learning model with a REST endpoint
* Recognize objects in an image using the MAX model's REST API
* Run a web application that uses the MAX model's REST API
## Flow

1. The user sends an image to the model API using the web UI.
1. The model API returns the detected object data, and the web UI displays those objects.
1. The user interacts with the web UI to view and filter the detected objects.
## Instructions
Ready to get started with this code pattern? For details on how to launch and use the application, see the [README](https://github.com/IBM/MAX-Object-Detector-Web-App/blob/master/README.md).
48dda4a2b7c8597397f3a64638c389e37ad5f064 | 3,316 | md | Markdown | input/keyboard-input/navigation.md | hq804116393/android-training-course-in-chinese | aef9bb31849cca72ec6341a7ce146c802317e06c | [
"Apache-2.0"
] | 10,595 | 2015-01-01T02:21:32.000Z | 2022-03-29T03:02:17.000Z | input/keyboard-input/navigation.md | laong/android-training-course-in-chinese | fc15d6fb682d583baa8ec1d62c20697138f6afe6 | [
"Apache-2.0"
] | 82 | 2015-02-01T10:41:05.000Z | 2021-09-10T05:12:24.000Z | input/keyboard-input/navigation.md | laong/android-training-course-in-chinese | fc15d6fb682d583baa8ec1d62c20697138f6afe6 | [
"Apache-2.0"
] | 2,983 | 2015-01-04T01:07:30.000Z | 2022-03-11T06:19:12.000Z | # 支持键盘导航
> 编写:[zhaochunqi](https://github.com/zhaochunqi) - 原文:<http://developer.android.com/training/keyboard-input/navigation.html>
除了软键盘输入法(如虚拟键盘)以外,Android支持将物理键盘连接到设备上。键盘不仅方便输入文本,而且提供一种方法来导航和与应用交互。尽管多数的手持设备(如手机)使用触摸作为主要的交互方式,但是随着平板和一些类似的设备正在逐步流行起来,许多用户开始喜欢外接键盘。
随着更多的Android设备提供这种体验,优化应用以支持通过键盘与应用进行交互变得越来越重要。这节课介绍了怎样为键盘导航提供更好的支持。
> **Note:** 对那些没有使用可见导航提示的应用来说,在应用中支持方向性的导航对于应用的可用性也是很重要的。在我们的应用中完全支持方向导航还可以帮助我们使用诸如 [uiautomator](http://developer.android.com/tools/help/uiautomator/index.html) 等工具进行[自动化用户界面测试](http://developer.android.com/tools/testing/testing_ui.html)。
## 测试应用
因为Android系统默认开启了大多必要的行为,所以用户可能已经可以在我们的应用中使用键盘导航了。
所有由Android framework(如Button和EditText)提供的交互部件是可获得焦点的。这意味着用户可以使用如D-pad或键盘等控制设备,并且当某个部件被选中时,部件会发光或者改变外观。
为了测试我们的应用:
1. 将应用安装到一个带有实体键盘的设备上。
如果我们没有带实体键盘的设备,连接一个蓝牙键盘或者USB键盘(尽管并不是所有的设备都支持USB连接)
我们还可以使用Android模拟器:
1. 在AVD管理器中,要么点击**New Device**,要么选择一个已存在的文档点击**Clone**。
2. 在出现的窗口中,确保**Keyboard**和**D-pad**开启。
2. 为了验证我们的应用,只是用Tab键来进行UI导航,确保每一个UI控制的焦点与预期的一致。
找到任何不在预期焦点的实例。
3. 从头开始,使用方向键(键盘上的箭头键)来控制应用的导航。
在 UI 中每一个被选中的元素上,按上、下、左、右。
找到每个不在预期焦点的实例。
如果我们找到任何使用Tab键或方向键后导航的效果不如预期的实例,那么在布局中指定焦点应该聚焦在哪里,如下面几部分所讨论的。
## 处理Tab导航
当用户使用键盘上的Tab键导航我们的应用时,系统会根据组件在布局中的显示顺序,在组件之间传递焦点。如果我们使用相对布局(relative layout),例如,在屏幕上的组件顺序与布局文件中组件的顺序不一致,那么我们可能需要手动指定焦点顺序。
举例来说,在下面的布局文件中,两个对齐右边的按钮和一个对齐第二个按钮左边的文本框。为了把焦点从第一个按钮传递到文本框,然后再传递到第二个按钮,布局文件需要使用属性 [android:nextFocusForward](http://developer.android.com/reference/android/view/View.html#attr_android:nextFocusForward),清楚地为每一个可被选中的组件定义焦点顺序:
```xml
<RelativeLayout ...>
<Button
android:id="@+id/button1"
android:layout_alignParentTop="true"
android:layout_alignParentRight="true"
android:nextFocusForward="@+id/editText1"
... />
<Button
android:id="@+id/button2"
android:layout_below="@id/button1"
android:nextFocusForward="@+id/button1"
... />
<EditText
android:id="@id/editText1"
android:layout_alignBottom="@+id/button2"
android:layout_toLeftOf="@id/button2"
android:nextFocusForward="@+id/button2"
... />
...
</RelativeLayout>
```
Now, instead of focus moving from `button1` to `button2` and then to `editText1`, it moves according to the order in which the elements appear on the screen: from `button1` to `editText1`, then to `button2`.
## Handle Directional Navigation
Users can also navigate your app using the directional keys on a keyboard (the behavior is the same as when navigating with a D-pad or trackball). The system provides a best guess as to which view should be given focus in a given direction, based on the layout of the views on screen. Sometimes, however, the system guesses wrong.
If the system does not pass focus to the appropriate view when navigating in a given direction, specify which view should receive focus with the following attributes:
* [android:nextFocusUp](http://developer.android.com/reference/android/view/View.html#attr_android:nextFocusUp)
* [android:nextFocusDown](http://developer.android.com/reference/android/view/View.html#attr_android:nextFocusDown)
* [android:nextFocusLeft](http://developer.android.com/reference/android/view/View.html#attr_android:nextFocusLeft)
* [android:nextFocusRight](http://developer.android.com/reference/android/view/View.html#attr_android:nextFocusRight)
Each attribute designates the next view to receive focus when the user navigates in that direction, specified by the view ID. For example:
```xml
<Button
android:id="@+id/button1"
android:nextFocusRight="@+id/button2"
android:nextFocusDown="@+id/editText1"
... />
<Button
android:id="@id/button2"
android:nextFocusLeft="@id/button1"
android:nextFocusDown="@id/editText1"
... />
<EditText
android:id="@id/editText1"
android:nextFocusUp="@id/button1"
... />
```
| 32.831683 | 240 | 0.76146 | yue_Hant | 0.713392 |
48de411b7ff38740cf4d7ad1c2b75712fdb06faf | 1,348 | md | Markdown | README.md | yanzhiwei147/gitbook-plugin-submodule-summary | 3d1c3040199c4b3273e4ab4cc4915ce2d42c722b | [
"MIT"
] | null | null | null | README.md | yanzhiwei147/gitbook-plugin-submodule-summary | 3d1c3040199c4b3273e4ab4cc4915ce2d42c722b | [
"MIT"
] | null | null | null | README.md | yanzhiwei147/gitbook-plugin-submodule-summary | 3d1c3040199c4b3273e4ab4cc4915ce2d42c722b | [
"MIT"
] | null | null | null | # gitbook-plugin-submodule-summary [![Build Status](https://travis-ci.org/yanzhiwei147/gitbook-plugin-submodule-summary.svg?branch=master)](https://travis-ci.org/yanzhiwei147/gitbook-plugin-submodule-summary)
A gitbook plugin that merges git submodule `SUMMARY.md` files.
1. [Installation](#installation)
2. [Usage](#usage)
3. [Example](#example)
## Installation
book.json
```json
{
"plugins": [
"submodule-summary"
]
}
```
and
```sh
gitbook install
```
## Usage
### General usage:
> Host Repo (HR): the original gitbook repo.
> Submodule Repo (SR): a gitbook repo added as a git submodule.
1. Add SR to HR as a git submodule.
2. Edit the HR `SUMMARY.md`.
3. Enjoy it!
#### Add git submodule
```bash
git submodule add git@github.com:yourname/repo.git path/to/submodule
```
#### Edit `SUMMARY.md`
Edit the HR `SUMMARY.md`, replacing the desired location with the placeholder string `<!-- SUBMODULE-SUMMARY path/to/submodule/SUMMARY.md -->`.
#### Build
Now you can build the gitbook; the placeholder string added in the previous step will be replaced in the HR `SUMMARY.md` with the content of `path/to/submodule/SUMMARY.md`.
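Conceptually, the merge is a simple string substitution. A hypothetical sketch (not the plugin's actual implementation; `readFile` is injected here so the example stays self-contained):

```javascript
// Replace each SUBMODULE-SUMMARY placeholder with the content of the
// referenced file. readFile is any function mapping a path to text.
function mergeSummary(summary, readFile) {
  return summary.replace(
    /<!--\s*SUBMODULE-SUMMARY\s+(.+?)\s*-->/g,
    (match, path) => readFile(path)
  );
}

const merged = mergeSummary(
  "# Summary\n<!-- SUBMODULE-SUMMARY sub/SUMMARY.md -->",
  (path) => "* [Intro](" + path.replace("SUMMARY.md", "intro.md") + ")"
);
console.log(merged);
```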
## Tests
npm test
## Contributing
1. Fork it!
2. Create your feature branch: `git checkout -b my-new-feature`
3. Commit your changes: `git commit -am 'Add some feature'`
4. Push to the branch: `git push origin my-new-feature`
5. Submit a pull request :D
## License
MIT | 56.166667 | 282 | 0.709199 | eng_Latn | 0.372403 |
48dea5c74e851c5276062a3cdb6a1e8120aef6dd | 70 | md | Markdown | README.md | thoughtworks-jumpstart/express-who-is-next-api | b3037d959782d67f2c42d1e4b8aeb593ceb8df61 | [
"MIT"
] | null | null | null | README.md | thoughtworks-jumpstart/express-who-is-next-api | b3037d959782d67f2c42d1e4b8aeb593ceb8df61 | [
"MIT"
] | null | null | null | README.md | thoughtworks-jumpstart/express-who-is-next-api | b3037d959782d67f2c42d1e4b8aeb593ceb8df61 | [
"MIT"
] | null | null | null | # express-who-is-next-api
Who Is Next - a simple API using Express.js
| 23.333333 | 43 | 0.742857 | eng_Latn | 0.655732 |
48df03d6a3724a9aaecc3b0c2420b459733b6a9a | 3,273 | md | Markdown | building-images/building-images-01-dockerfile/step04.md | ciberkleid/katacoda-scenarios | 8710e544c9f62806f4dbfa912f246b5960f4fad4 | [
"Apache-2.0"
] | null | null | null | building-images/building-images-01-dockerfile/step04.md | ciberkleid/katacoda-scenarios | 8710e544c9f62806f4dbfa912f246b5960f4fad4 | [
"Apache-2.0"
] | null | null | null | building-images/building-images-01-dockerfile/step04.md | ciberkleid/katacoda-scenarios | 8710e544c9f62806f4dbfa912f246b5960f4fad4 | [
"Apache-2.0"
] | null | null | null | # Tagging & Build Context
Objective:
Learn about the build context.
In this step, you will:
- Learn why it's important to control the build context
- Learn how to control the build context
## Why control the build context
As we saw earlier, the `docker build` command requires us to specify a context, and the context needs to contain all of the files that the Dockerfile needs in order to build the image.
When you execute a `docker build` command, the `docker` CLI sends the Dockerfile and the contents of the build context to the Docker daemon, and the daemon builds the image.
Scroll up in the terminal window to the last `docker build` command that you executed. Notice the first line in the log output. It should indicate that it is `Sending` data to the daemon.
You can also re-run the build and grep for the "Sending" log info:
```
docker build . -t hello-img | grep Sending
```{{execute}}
**To speed up your builds, it is advantageous to reduce the amount of data that needs to be sent to the daemon.**
Once the context is sent to the daemon, any of the files in the context can be copied to the image.
Run the following command to validate that the `COPY` statement in our Dockerfile copied all the files in the context to the image. This command uses the `--entrypoint` flag to override the ENTRYPOINT defined by the Dockerfile and simply run a shell. The `-it` option makes the session interactive in the terminal:
```
docker run --rm -it --entrypoint /bin/sh hello-img
```{{execute}}
Once inside the container, run the following commands. You can type them into the terminal or copy/paste them:
Run `pwd`{{copy}} to validate that the default directory is “workspace”, as configured by the last WORKDIR instruction in the Dockerfile. Then run `ls -l`{{copy}} to list the files in this directory and note that it contains all of the files from the build context. Some of these files, in particular the Dockerfile and README, are not needed at runtime. They add unnecessary bulk, and they create a loophole for potentially confidential or malicious files to make it into our runtime environment.
`Send Ctrl+C`{{execute interrupt T1}} to quit the `docker run` command.
## How to control the build context
A `.dockerignore` file - similar to a `.gitignore` file for git - can serve to filter unwanted files from the docker build context. Create a new file called `.dockerignore` and configure it to exclude everything by default, and then allow only the application files.
```
cat << EOF > .dockerignore
# Exclude everything by default
*
# Include what is needed to build
!hello.go
!go.mod
!go.sum
EOF
bat .dockerignore
```{{execute}}
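Pattern order matters: the last pattern that matches a file decides its fate, which is why `*` followed by the `!` re-includes works. Here is a rough Python sketch of that rule (illustrative only; Docker's real matcher follows Go's `filepath.Match` semantics):

```python
from fnmatch import fnmatch

def filter_context(files, patterns):
    """Return the files that would be sent to the daemon.

    Patterns are applied in order; the last matching pattern wins,
    and a leading '!' re-includes a previously excluded file.
    """
    kept = []
    for name in files:
        excluded = False
        for pattern in patterns:
            negated = pattern.startswith("!")
            if fnmatch(name, pattern.lstrip("!")):
                excluded = not negated
        if not excluded:
            kept.append(name)
    return kept

files = ["hello.go", "go.mod", "go.sum", "Dockerfile", "README.md"]
patterns = ["*", "!hello.go", "!go.mod", "!go.sum"]
print(filter_context(files, patterns))  # ['hello.go', 'go.mod', 'go.sum']
```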
Rebuild the image and compare the data sent to the daemon to the last build. You should see that less data is being uploaded now that the `.dockerignore` file is present.
```
docker build . -t hello-img | grep Sending
```{{execute}}
Check the files that were copied to the image this time. You can do so by starting a container and running the shell again. This time, rather than using interactive (`-it`) mode, add a command to list the files inline. You should see that only the necessary files have been copied to the image.
```
docker run --rm --entrypoint /bin/sh hello-img -c "ls -l"
```{{execute}}
| 48.132353 | 496 | 0.75802 | eng_Latn | 0.999373 |
48e0cb5a91b32a696e81e5bca19fd4954b1ab4ef | 693 | md | Markdown | data/services/agence-de-developpement-et-dencadrement-des-petites-et-moyennes-entreprises-adepme.md | senegalouvert/servicepublic | b29994d6ee41337573172bf2aa00f2a5f1f934f6 | [
"MIT"
] | 2 | 2021-02-12T11:38:34.000Z | 2022-01-30T17:31:52.000Z | data/services/agence-de-developpement-et-dencadrement-des-petites-et-moyennes-entreprises-adepme.md | senegalouvert/servicepublic | b29994d6ee41337573172bf2aa00f2a5f1f934f6 | [
"MIT"
] | 2 | 2021-02-05T13:42:22.000Z | 2021-02-23T07:34:47.000Z | data/services/agence-de-developpement-et-dencadrement-des-petites-et-moyennes-entreprises-adepme.md | senegalouvert/servicepublic | b29994d6ee41337573172bf2aa00f2a5f1f934f6 | [
"MIT"
] | 1 | 2021-03-11T16:14:00.000Z | 2021-03-11T16:14:00.000Z | # Agence de développement et d'encadrement des petites et moyennes entreprises (Agency for the Development and Supervision of Small and Medium-sized Enterprises)
Minister of Trade and Small and Medium Enterprises (SMEs) (MCPME)
----------------------------------------------------------------------------
**Address**
-----------
9, Fenêtre Mermoz
Avenue Cheikh Anta Diop - BP 45333 - Dakar
**Telephone**
-------------
33 869 70 70 /71
**Fax**
-------------
33 860 13 63
**Email address**
------------------------
[adepme@orange.sn](../../../services/adepmeorangesn.md)
**Website**
------------
[http://www.adepme.sn/](../../../services/httpwwwadepmesn.md)
Director General: Idrissa DIABIRA
Secretary General: Jean Marie MBAYE DIOUF
48e106aa1bac621aa394f6b13b57cd536d06f740 | 47 | md | Markdown | README.md | noseparte/Spring- | f42a89bfae6300306e978ac4103f7e94978c1489 | [
"Apache-2.0"
] | null | null | null | README.md | noseparte/Spring- | f42a89bfae6300306e978ac4103f7e94978c1489 | [
"Apache-2.0"
] | null | null | null | README.md | noseparte/Spring- | f42a89bfae6300306e978ac4103f7e94978c1489 | [
"Apache-2.0"
] | null | null | null | # Spring-
Spring source code in-depth analysis
| 15.666667 | 36 | 0.787234 | kor_Hang | 0.41892 |
48e1650e4a5793216aa898fe2dfcc446307fe0d7 | 4,174 | md | Markdown | README.md | normanjoyner/cluster-manager | 854f3e5ed8b9c65d19d51de1431e62123e10b95e | [
"Apache-2.0"
] | null | null | null | README.md | normanjoyner/cluster-manager | 854f3e5ed8b9c65d19d51de1431e62123e10b95e | [
"Apache-2.0"
] | null | null | null | README.md | normanjoyner/cluster-manager | 854f3e5ed8b9c65d19d51de1431e62123e10b95e | [
"Apache-2.0"
] | null | null | null | # Containership Cluster Manager
[](https://travis-ci.org/containership/cluster-manager)
[](https://goreportcard.com/report/github.com/containership/cluster-manager)
[](https://codecov.io/gh/containership/cluster-manager)
## Overview
The [Containership][containership] cluster manager is a plugin that enables deep integrations with Containership Cloud for clusters provisioned using Containership Kubernetes Engine (CKE) as well as existing clusters that have been attached to the platform.
### Features
#### Core Features
There are many core features that apply to both CKE clusters and attached clusters.
- [Plugin management][plugins]
- [SSH key synchronization][ssh-keys]
- [Container registry credential synchronization][registries]
#### CKE Features
These features are available only for clusters provisioned using CKE.
- Kubernetes (and etcd) upgrade management
## Structure
The Containership cluster manager is composed of two primary components: the `cloud-coordinator` and `cloud-agent`.
### Coordinator
The Containership coordinator runs as a Kubernetes [Deployment][deployment] with a single replica.
It performs synchronization and reconciliation of Containership cloud resources (e.g. registries, plugins).
It is also responsible for orchestrating Kubernetes cluster upgrades.
### Agent
The Containership agent runs as a Kubernetes [DaemonSet][daemonset] on all nodes, including masters.
It performs synchronization and reconciliation of host-level resources such as SSH keys.
It is also responsible for performing Kubernetes upgrades of individual nodes.
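For illustration only (these are not Containership's actual manifests, and the image names are hypothetical), the two components map onto standard Kubernetes objects roughly like this:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cloud-coordinator
spec:
  replicas: 1                     # a single replica, as described above
  selector:
    matchLabels: {app: cloud-coordinator}
  template:
    metadata:
      labels: {app: cloud-coordinator}
    spec:
      containers:
        - name: coordinator
          image: example/cloud-coordinator:latest   # hypothetical image
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: cloud-agent
spec:
  selector:
    matchLabels: {app: cloud-agent}
  template:
    metadata:
      labels: {app: cloud-agent}
    spec:
      tolerations:
        - operator: Exists        # lets the agent run on master nodes too
      containers:
        - name: agent
          image: example/cloud-agent:latest         # hypothetical image
```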
## Releases and Compatibility
Containership releases follow the [semver](https://semver.org/) format.
Similar to other projects in the Kubernetes ecosystem, a new major version of the cluster manager is released for every Kubernetes minor version.
Also in line with the Kubernetes ecosystem, Containership supports a sliding window of 3 minor versions of Kubernetes.
Please refer to the following compatibility matrix for specific compatibility information.
Please also refer to the [official changelog][cluster-management-changelog] for release notes.
### Compatibility Matrix
Note that only currently supported versions in the sliding window are listed here for conciseness.
Support for previous Kubernetes versions can be inferred by using the sliding window.
For example, `cluster-manager` 2.x supports Kubernetes versions 1.8.x - 1.10.x.
| | Kubernetes 1.12.x | Kubernetes 1.11.x | Kubernetes 1.10.x |
|---------------------|-------------------|-------------------|-------------------|
| cluster-manager 4.x | ✓ | ✓ | ✓ |
| cluster-manager 3.x | ✗ | ✓ | ✓ |
| cluster-manager 2.x | ✗ | ✗ | ✓ |
## Contributing
Thank you for your interest in this project and for your interest in contributing! Feel free to open issues for feature requests, bugs, or even just questions - we love feedback and want to hear from you.
PRs are also always welcome! However, if the feature you're considering adding is fairly large in scope, please consider opening an issue for discussion first.
[containership]: https://containership.io/
[plugins]: https://docs.containership.io/getting-started/attach-cluster/containership-plugins
[ssh-keys]: https://docs.containership.io/getting-started/account-and-organization/managing-ssh-keys
[registries]: https://docs.containership.io/getting-started/account-and-organization/managing-image-registry-credentials
[deployment]: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
[daemonset]: https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/
[cluster-management-changelog]: https://github.com/containership/plugins-changelog/blob/master/cluster_management/containership/CHANGELOG.md
| 54.921053 | 257 | 0.748682 | eng_Latn | 0.951529 |
48e1d3488f71f60424e0892561f0ad9e001be44b | 14 | md | Markdown | README.md | frozenskys/iOS-Demo-App | 1b5385b67057fbfdb59239b6f176e7221a16303d | [
"MIT"
] | null | null | null | README.md | frozenskys/iOS-Demo-App | 1b5385b67057fbfdb59239b6f176e7221a16303d | [
"MIT"
] | null | null | null | README.md | frozenskys/iOS-Demo-App | 1b5385b67057fbfdb59239b6f176e7221a16303d | [
"MIT"
] | null | null | null | # iOS-Demo-App | 14 | 14 | 0.714286 | kor_Hang | 0.708386 |
48e2f3ea1cae5043002766d9568494175419742a | 742 | md | Markdown | docs/assembler/masm/ml-fatal-error-a1009.md | stormsen/cpp-docs.zh-cn | 171fb895da583a40a875ea2d19e76c71172688ab | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/assembler/masm/ml-fatal-error-a1009.md | stormsen/cpp-docs.zh-cn | 171fb895da583a40a875ea2d19e76c71172688ab | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/assembler/masm/ml-fatal-error-a1009.md | stormsen/cpp-docs.zh-cn | 171fb895da583a40a875ea2d19e76c71172688ab | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: ML Fatal Error A1009 | Microsoft Docs
ms.custom: ''
ms.date: 08/30/2018
ms.technology:
- cpp-masm
ms.topic: error-reference
f1_keywords:
- A1009
dev_langs:
- C++
helpviewer_keywords:
- A1009
ms.assetid: f7a962a6-4280-485e-95cb-2ab8922c66c2
author: corob-msft
ms.author: corob
ms.workload:
- cplusplus
ms.openlocfilehash: 5dde8fbc68fa125afd71798707acad879691af23
ms.sourcegitcommit: a7046aac86f1c83faba1088c80698474e25fe7c3
ms.translationtype: MT
ms.contentlocale: zh-CN
ms.lasthandoff: 09/04/2018
ms.locfileid: "43687079"
---
# <a name="ml-fatal-error-a1009"></a>ML Fatal Error A1009
**line too long**
A line in the source file exceeded the limit of 512 characters.
If multiple physical lines are joined with the line-continuation character (\), the resulting logical line is still limited to 512 characters.
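For illustration (a hypothetical data definition), the line-continuation character joins physical lines into a single logical line, and the combined length still counts toward the 512-character limit:

```asm
; Two physical lines, one logical line: the trailing backslash
; continues the statement onto the next line.
bigTable BYTE 10h, 20h, 30h, 40h, \
              50h, 60h, 70h, 80h
```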
## <a name="see-also"></a>See also
[ML error messages](../../assembler/masm/ml-error-messages.md)<br/>
48e4566cf76fb5d5f46d2134da9bfa1502117984 | 249 | md | Markdown | about.md | mikemiky/mikemiky.github.io | 3b28f2523e16e41c8b4f6e144e93d96be0ed249b | [
"MIT"
] | null | null | null | about.md | mikemiky/mikemiky.github.io | 3b28f2523e16e41c8b4f6e144e93d96be0ed249b | [
"MIT"
] | null | null | null | about.md | mikemiky/mikemiky.github.io | 3b28f2523e16e41c8b4f6e144e93d96be0ed249b | [
"MIT"
] | null | null | null | ---
layout: page
title: About
permalink: /about/
---
A student learning programming.
### More Information
None Weibo
None Twitter
None Facebook
Just this Blog
### Contact me
[chrystaltyler19007@gmail.com](mailto:chrystaltyler19007@gmail.com)
| 11.318182 | 67 | 0.751004 | eng_Latn | 0.29584 |
48e5121ff132fd0faeb2c9c351abfdb69bef28a4 | 685 | md | Markdown | _posts/2017-04-13-Hilary-Morgan-Prom-Style-20419.md | marjoriermoses/marjoriermoses.github.io | cf45d4879f2cef9f76cd2ad1ede8f2a69fe5ad1b | [
"MIT"
] | null | null | null | _posts/2017-04-13-Hilary-Morgan-Prom-Style-20419.md | marjoriermoses/marjoriermoses.github.io | cf45d4879f2cef9f76cd2ad1ede8f2a69fe5ad1b | [
"MIT"
] | null | null | null | _posts/2017-04-13-Hilary-Morgan-Prom-Style-20419.md | marjoriermoses/marjoriermoses.github.io | cf45d4879f2cef9f76cd2ad1ede8f2a69fe5ad1b | [
"MIT"
] | null | null | null | ---
layout: post
date: 2017-04-13
title: "Hilary Morgan Prom Style 20419"
category: Hilary Morgan
tags: [Hilary Morgan]
---
### Hilary Morgan Prom Style 20419
Just **$269.99**
<table><tr><td>BRANDS</td><td>Hilary Morgan</td></tr></table>
<a href="https://www.readybrides.com/en/hilary-morgan/63108-hilary-morgan-prom-style-20419.html"><img src="//img.readybrides.com/145877/hilary-morgan-prom-style-20419.jpg" alt="Hilary Morgan Prom Style 20419" style="width:100%;" /></a>
<!-- break -->
Buy it: [https://www.readybrides.com/en/hilary-morgan/63108-hilary-morgan-prom-style-20419.html](https://www.readybrides.com/en/hilary-morgan/63108-hilary-morgan-prom-style-20419.html)
| 42.8125 | 235 | 0.719708 | yue_Hant | 0.700499 |
48e549575c373e82caf7901acd27daaf533ff8b5 | 10,556 | md | Markdown | README.md | skywalkerluc/functional-programming-learning-path | 07dac09c9fabfa54f8b4d80b62f43b092cb87b0d | [
"MIT"
] | 627 | 2019-01-24T14:41:52.000Z | 2021-12-31T12:31:58.000Z | README.md | gabizinha12/functional-programming-learning-path | 803e0d9416a0319d62bebeb86e1ba57cf6822680 | [
"MIT"
] | 1 | 2019-01-30T13:09:21.000Z | 2019-01-30T13:09:21.000Z | README.md | gabizinha12/functional-programming-learning-path | 803e0d9416a0319d62bebeb86e1ba57cf6822680 | [
"MIT"
] | 76 | 2019-02-07T23:08:22.000Z | 2021-12-29T01:41:05.000Z | # Resources
## Table of Contents
- [Foundation](#foundation)
- [Why Functional](#why-functional)
- [Advanced Topics](#advanced-topics)
- [Talks](#talks)
- [Books](#books)
- [Declarative Programming](#declarative-programming)
- [Blogs](#blogs)
- [Projects](#projects)
- [Podcasts](#podcasts)
- [Courses](#courses)
- [Lists](#lists)
- [Community](#community)
- [Clojure](https://github.com/LeandroTk/learning-functional/tree/master/clojure)
- [JavaScript](https://github.com/LeandroTk/learning-functional/tree/master/javascript)
- [Python](https://github.com/LeandroTk/learning-functional/tree/master/python)
- [Ruby](https://github.com/LeandroTk/learning-functional/tree/master/ruby)
- [Programming Challenges](https://github.com/LeandroTk/learning-functional/tree/master/programming_challenges)
## Foundation
- [Functional Programming Learning Path](https://purelyfunctional.tv/learning-paths/functional-programming/)
- [Introduction to Functional Programming - edX](https://www.edx.org/course/introduction-functional-programming-delftx-fp101x-0)
- [Como programar funcional?](https://www.youtube.com/watch?v=jIYfTKYXJr8)
- [Demystifying functional programming](https://medium.com/building-nubank/demystifying-functional-programming-in-a-real-company-e954a2591504)
- [Programação Funcional para iniciantes](https://medium.com/trainingcenter/programa%C3%A7%C3%A3o-funcional-para-iniciantes-9e2beddb5b43)
- [Awesome Functional Programming](https://github.com/xgrommx/awesome-functional-programming)
- [Functional Programming Jargon](https://github.com/hemanth/functional-programming-jargon)
- [Functional Programming Study Plan](https://ericdouglas.github.io/2016/12/04/functional-programming-study-plan/)
- [Your functional journey](https://purelyfunctional.tv/guide/your-functional-journey/)
- [The Benefits of Pure Functions](https://alvinalexander.com/scala/fp-book/benefits-of-pure-functions)
- [Pure Functions](https://www.sitepoint.com/functional-programming-pure-functions/)
- [Functional Koans](https://github.com/relevance/functional-koans)
- [Programação funcional, imutabilidade, e previsibilidade](https://mauricioszabo.wordpress.com/2016/03/22/programacao-funcional-imutabilidade-e-previsibilidade/)
- [A practical introduction to functional programming](https://maryrosecook.com/blog/post/a-practical-introduction-to-functional-programming)
- [An introduction to functional programming](https://codewords.recurse.com/issues/one/an-introduction-to-functional-programming)
- [Software Composition](https://medium.com/javascript-scene/composing-software-an-introduction-27b72500d6ea)
- [Functional Education](http://bitemyapp.com/posts/2014-12-31-functional-education.html)
- [What is functional?](https://dev.to/drbearhands/functional-fundamentals-what-is-functional-l66)
- [Types as propositions, programs as proofs](https://dev.to/drbearhands/functional-fundamentals-types-as-propositions-programs-as-proofs-56gh)
- [So You Want to be a Functional Programmer (Part 1)](https://medium.com/@cscalfani/so-you-want-to-be-a-functional-programmer-part-1-1f15e387e536)
- [So You Want to be a Functional Programmer (Part 2)](https://medium.com/@cscalfani/so-you-want-to-be-a-functional-programmer-part-2-7005682cec4a)
- [So You Want to be a Functional Programmer (Part 3)](https://medium.com/@cscalfani/so-you-want-to-be-a-functional-programmer-part-3-1b0fd14eb1a7)
- [So You Want to be a Functional Programmer (Part 4)](https://medium.com/@cscalfani/so-you-want-to-be-a-functional-programmer-part-4-18fbe3ea9e49)
- [So You Want to be a Functional Programmer (Part 5)](https://medium.com/@cscalfani/so-you-want-to-be-a-functional-programmer-part-5-c70adc9cf56a)
- [So You Want to be a Functional Programmer (Part 6)](https://medium.com/@cscalfani/so-you-want-to-be-a-functional-programmer-part-6-db502830403)
- [Friendly Functional Programming](https://functional.works-hub.com/learn/friendly-functional-programming-b3e93)
- [Practical Functional Programming](https://hackernoon.com/practical-functional-programming-6d7932abc58b)
- [Destroy All Ifs](http://degoes.net/articles/destroy-all-ifs)
- [THE PILLARS OF FUNCTIONAL PROGRAMMING (PART 1)](https://sigma.software/about/media/pillars-functional-programming-part-1)
- [Functional Programming Patterns: Cookbook](https://medium.com/@karthikiyengar/functional-programming-patterns-cookbook-3a0dfe2d7e0a)
## Higher Order Functions
- [Getting a handle on Reduce](https://adambard.com/blog/getting-a-handle-on-reduce/)
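The article above builds intuition for `reduce`; as a minimal illustration (Python, matching the repo's `python` folder):

```python
from functools import reduce

numbers = [1, 2, 3, 4]

# reduce folds a list into a single value by repeatedly applying
# a two-argument function to an accumulator and the next element.
total = reduce(lambda acc, n: acc + n, numbers, 0)
product = reduce(lambda acc, n: acc * n, numbers, 1)

print(total, product)  # 10 24
```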
## Immutability
- [Immutability - something worth striving for](https://dev.to/rwoodnz/immutability---something-worth-striving-for-3ppp)
- [Why shared mutable state is the root of all evil](http://henrikeichenhardt.blogspot.com/2013/06/why-shared-mutable-state-is-root-of-all.html)
- [Immutable data](https://www.sitepoint.com/functional-programming-ruby-value-objects/)
- [Thoughts on Immutability, CI/CD, FP](https://www.infoq.com/podcasts/Vitor-Olivier)
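As a small taste of what these articles discuss, here is an immutable value in Python (chosen to match the repo's `python` folder): an "update" produces a new value instead of mutating the old one.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Point:
    x: int
    y: int

p1 = Point(1, 2)
p2 = replace(p1, x=10)   # build a new value instead of mutating p1

print(p1)  # Point(x=1, y=2) -- unchanged
print(p2)  # Point(x=10, y=2)
```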
## Why Functional
- [Why functional programming matters](https://hackernoon.com/why-functional-programming-matters-c647f56a7691)
- [Why Functional Programming?](https://purelyfunctional.tv/article/why-functional-programming/)
- [Advantages of Functional Programming](https://blog.codeship.com/advantages-of-functional-programming/)
- [Pros and cons of functional programming](https://itnext.io/pros-and-cons-of-functional-programming-32cdf527e1c2)
- [What are the benefits of functional programming?](https://stackoverflow.com/a/128128/3159162)
- [Benefits of Functional Programming](https://alvinalexander.com/scala/fp-book/benefits-of-functional-programming)
- [Benefits of Functional Programming beyond map/filter/reduce](https://www.youtube.com/watch?v=oaa4XiwEq1E&ab_channel=Jfokus)
- [Benefits of Functional Programming by Example](https://medium.com/@nickmccurdy/benefits-of-functional-programming-by-example-76f1135b0b18)
- [Goodbye, Object Oriented Programming](https://medium.com/@cscalfani/goodbye-object-oriented-programming-a59cda4c0e53)
- [Switching from OOP to Functional Programming](https://medium.com/@olxc/switching-from-oop-to-functional-programming-4187698d4d3)
## Advanced Topics
- [Transducers: Efficient Data Processing Pipelines in JavaScript](https://medium.com/javascript-scene/transducers-efficient-data-processing-pipelines-in-javascript-7985330fe73d)
- [Functors, Monads and better functions](https://dev.to/drbearhands/functors-monads-and-better-functions-26f3)
- [Why Curry Helps](https://hughfdjackson.com/javascript/why-curry-helps/)
- [Curry and Function Composition](https://medium.com/javascript-scene/curry-and-function-composition-2c208d774983)
- [Functional JavaScript: Function Composition For Every Day Use](https://hackernoon.com/javascript-functional-composition-for-every-day-use-22421ef65a10)
- [A Modern Architecture for FP](http://degoes.net/articles/modern-fp)
- [Modern Functional Programming: Part 2](http://degoes.net/articles/modern-fp-part-1)
- [Functional Programming, Abstraction, and Naming Things](http://www.stephendiehl.com/posts/abstraction.html)
- [SOLID: the next step is Functional](http://blog.ploeh.dk/2014/03/10/solid-the-next-step-is-functional/)
- [Functional Design Patterns](https://www.youtube.com/watch?reload=9&v=srQt1NAHYC0&ab_channel=NDCConferences)
- [Free Monads Explained (pt 1)](https://medium.com/@olxc/free-monads-explained-pt-1-a5c45fbdac30)
- [Perpetual Currying in JavaScript](https://medium.com/@paramsingh_66174/perpetual-currying-in-javascript-5ae1c749adc5)
## Advanced Topics: Category Theory
- [From design patterns to category theory by Mark Seemann](http://blog.ploeh.dk/2017/10/04/from-design-patterns-to-category-theory/)
## Talks
- [Functional programming design patterns by Scott Wlaschin](https://www.youtube.com/watch?v=E8I19uA-wGY&ab_channel=IvanPlyusnin)
- [Why Functional Programming Matters by John Hughes](https://www.youtube.com/watch?v=XrNdvWqxBvA&ab_channel=ConfEngine)
## Books
- [How to Design Programs](https://htdp.org/2019-02-24/)
- [Category Theory for Programmers](https://bartoszmilewski.com/2014/10/28/category-theory-for-programmers-the-preface/)
- [SICP - Structure and Interpretation of Computer Programs](https://mitpress.mit.edu/sites/default/files/sicp/full-text/book/book.html)
## Declarative Programming
- [What is declarative programming?](https://stackoverflow.com/a/129639/3159162)
- [Declarative vs Imperative Programming by Ian Mundy](https://codeburst.io/declarative-vs-imperative-programming-a8a7c93d9ad2)
- [Imperative vs Declarative Programming by Tyler McGinnis](https://tylermcginnis.com/imperative-vs-declarative-programming/)
- [Imperative versus declarative code… what’s the difference?](https://medium.com/front-end-hacking/imperative-versus-declarative-code-whats-the-difference-adc7dd6c8380)
## Blogs
- [Patterns in Functional Programming](https://patternsinfp.wordpress.com/)
- [Kevin Sookocheff - Functional Programming](https://sookocheff.com/tags/functional-programming/)
## Projects
- [YouTube instant search with FP](https://jaysoo.ca/2016/01/13/functional-programming-little-ideas/)
## Podcasts
- [Programação Funcional com Erick Pintor e Bruno Tavares](http://tecnologicamentearretado.com.br/2014/12/31/programacao-funcional-com-erick-e-bruno/)
- [Programação Funcional com Juliano alves](https://www.tecnoretorica.com.br/2013/04/programacao-funcional/)
- [Programação Funcional - Inviavel Podcast](http://inviavelpodcast.com/10/)
- [CapyCast #10 Linguagens Funcionais Com Marcelo Camargo e Derek Stavis](https://player.fm/series/capycast/capycast-10-linguagens-funcionais-com-marcelo-camargo-e-derek-stavis)
- [The Imposter's Handbook: Functional Programming and Databases](https://www.orbit.fm/bookbytes/17)
- [Functional Programming Languages and the Pursuit of Laziness with Dr. Simon Peyton Jones](https://player.fm/series/microsoft-research-podcast-1910051/ep-056-rerun-functional-programming-languages-and-the-pursuit-of-laziness-with-dr-simon-peyton-jones)
## Courses
- [Functional Programming Principles in Scala](https://www.coursera.org/learn/progfun1)
- [Introduction to Functional Programming](https://www.edx.org/course/introduction-to-functional-programming)
## Lists
- [Awesome Functional Programming](https://github.com/lucasviola/awesome-functional-programming)
- [Awesome Functional Programming](https://github.com/xgrommx/awesome-functional-programming)
## Community
- Richard Bird
- Philip Wadler
- Olivier Danvy
- Andrzej Filinski
- Daniel P. Friedman
- Matthias Felleisen
- J. Michael Ashley
- R. Kent Dybvig
- Erik Hilsdale
| 66.810127 | 254 | 0.791019 | yue_Hant | 0.61367 |
48e565d53d0589815c22facad819476257ba54da | 120 | md | Markdown | .changeset/fuzzy-mails-brush.md | tjwds/vite-plugin-svelte | c086c9f7db99b87a53dcb04e8d07d8ee7846798b | [
"MIT"
] | 9 | 2021-01-28T12:44:37.000Z | 2021-03-17T06:49:10.000Z | .changeset/fuzzy-mails-brush.md | tjwds/vite-plugin-svelte | c086c9f7db99b87a53dcb04e8d07d8ee7846798b | [
"MIT"
] | null | null | null | .changeset/fuzzy-mails-brush.md | tjwds/vite-plugin-svelte | c086c9f7db99b87a53dcb04e8d07d8ee7846798b | [
"MIT"
] | 1 | 2021-02-05T05:03:38.000Z | 2021-02-05T05:03:38.000Z | ---
'@sveltejs/vite-plugin-svelte': patch
---
use createRequire to load svelte.config.cjs in esm projects (fixes #141)
| 20 | 72 | 0.725 | eng_Latn | 0.461036 |
48e64575abc905379c43300b04b83dff6cd8d26a | 50 | md | Markdown | _includes/05-emphasis.md | JoelAnto2001/markdown-portfolio | f9773944ecdbff234a297654f654d97c2dc36e23 | [
"MIT"
] | null | null | null | _includes/05-emphasis.md | JoelAnto2001/markdown-portfolio | f9773944ecdbff234a297654f654d97c2dc36e23 | [
"MIT"
] | 5 | 2021-06-01T13:21:29.000Z | 2021-06-01T15:03:07.000Z | _includes/05-emphasis.md | JoelAnto2001/markdown-portfolio | f9773944ecdbff234a297654f654d97c2dc36e23 | [
"MIT"
] | null | null | null | **here to learn new skills**
*Nice meeting you*
| 16.666667 | 30 | 0.68 | eng_Latn | 0.999 |
48e667423a05a9ab8d4b024de04821f78710a24a | 54,163 | md | Markdown | README.md | jsa4000/c4-model-plantuml | b635ba7616b90f1f92e68df25a5828f6fb422544 | [
"MIT"
] | null | null | null | README.md | jsa4000/c4-model-plantuml | b635ba7616b90f1f92e68df25a5828f6fb422544 | [
"MIT"
] | null | null | null | README.md | jsa4000/c4-model-plantuml | b635ba7616b90f1f92e68df25a5828f6fb422544 | [
"MIT"
] | 1 | 2022-03-09T14:07:50.000Z | 2022-03-09T14:07:50.000Z | The C4 model for visualising software architecture
==================================================
Context, Containers, Components, and Code
-----------------------------------------
### Uses and benefits
The C4 model is an easy to learn, developer friendly approach to software architecture diagramming. Good software architecture diagrams assist with communication inside/outside of software development/product teams, efficient onboarding of new staff, architecture reviews/evaluations, risk identification (e.g. [risk-storming](https://riskstorming.com)), threat modelling (e.g. [STRIDE/LINDDUN](https://daniel.spilsbury.io/2020/04/29/c4-threat-model.html)), etc.
[Abstractions](#Abstractions) [Core diagrams](#CoreDiagrams) [Supplementary diagrams](#SupplementaryDiagrams) [Notation](#Notation) [Diagram review checklist](review)
[FAQ](#FAQ) [Diagramming vs modelling](#Modelling) [Training](https://architectis.je) [Adopting](#Adopting) [Tooling](#Tooling) [References](#References)
Introduction
------------
Ask somebody in the building industry to visually communicate the architecture of a building and you'll be presented with site plans, floor plans, elevation views, cross-section views and detail drawings. In contrast, ask a software developer to communicate the software architecture of a software system using diagrams and you'll likely get a confused mess of boxes and lines ... inconsistent notation (colour coding, shapes, line styles, etc), ambiguous naming, unlabelled relationships, generic terminology, missing technology choices, mixed abstractions, etc.




As an industry, we do have the Unified Modeling Language (UML), ArchiMate and SysML, but asking whether these provide an effective way to communicate software architecture is often irrelevant because many teams have already thrown them out in favour of much simpler "boxes and lines" diagrams. Abandoning these modelling languages is one thing but, perhaps in the race for agility, many software development teams have lost the ability to communicate visually.
### Maps of your code
The C4 model was created as a way to help software development teams describe and communicate software architecture, both during up-front design sessions and when retrospectively documenting an existing codebase. It's a way to create maps of your code, at various levels of detail, in the same way you would use something like Google Maps to zoom in and out of an area you are interested in.
Like source code, Google Street View provides a very low-level and accurate view of a location.

Navigating an unfamiliar environment becomes easier if you zoom out though.

Zooming out further will provide additional context you might not have been aware of.

Different levels of zoom allow you to tell different stories to different audiences.

Although primarily aimed at software architects and developers, the C4 model provides a way for software development teams to efficiently and effectively communicate their software architecture, at different levels of detail, telling different stories to different types of audience, when doing up front design or retrospectively documenting an existing codebase.
Level 1: A **System Context** diagram provides a starting point, showing how the software system in scope fits into the world around it.

Level 2: A **Container** diagram zooms into the software system in scope, showing the high-level technical building blocks.

Level 3: A **Component** diagram zooms into an individual container, showing the components inside it.

Level 4: A **code** (e.g. UML class) diagram can be used to zoom into an individual component, showing how that component is implemented.

Different levels of zoom allow you to tell different stories to different audiences.
The C4 model is an "abstraction-first" approach to diagramming software architecture, based upon abstractions that reflect how software architects and developers think about and build software. The small set of abstractions and diagram types makes the C4 model easy to learn and use. Please note that you don't need to use all 4 levels of diagram; only those that add value - the System Context and Container diagrams are sufficient for many software development teams.
Abstractions
------------
In order to create these maps of your code, we first need a common set of abstractions to create a ubiquitous language that we can use to describe the static structure of a software system. The C4 model considers the static structures of a **software system** in terms of **containers**, **components** and **code**. And **people** use the software systems that we build.
_A **software system** is made up of one or more **containers** (web applications, mobile apps, desktop applications, databases, file systems, etc), each of which contains one or more **components**, which in turn are implemented by one or more **code elements** (e.g. classes, interfaces, objects, functions, etc)._
### Person
A person represents one of the human users of your software system (e.g. actors, roles, personas, etc).
### Software System
A software system is the highest level of abstraction and describes something that delivers value to its users, whether they are human or not. This includes the software system you are modelling, and the other software systems upon which your software system depends (or vice versa). In many cases, a software system is "owned by" a single software development team.
### Container (applications and data stores)
Not Docker! In the C4 model, a container represents an **application** or a **data store**. A container is something that needs to be running in order for the overall software system to work. In real terms, a container is something like:
* **Server-side web application**: A Java EE web application running on Apache Tomcat, an ASP.NET MVC application running on Microsoft IIS, a Ruby on Rails application running on WEBrick, a Node.js application, etc.
* **Client-side web application**: A JavaScript application running in a web browser using Angular, Backbone.JS, jQuery, etc.
* **Client-side desktop application**: A Windows desktop application written using WPF, an OS X desktop application written using Objective-C, a cross-platform desktop application written using JavaFX, etc.
* **Mobile app**: An Apple iOS app, an Android app, a Microsoft Windows Phone app, etc.
* **Server-side console application**: A standalone (e.g. "public static void main") application, a batch process, etc.
* **Serverless function**: A single serverless function (e.g. Amazon Lambda, Azure Function, etc).
* **Database**: A schema or database in a relational database management system, document store, graph database, etc such as MySQL, Microsoft SQL Server, Oracle Database, MongoDB, Riak, Cassandra, Neo4j, etc.
* **Blob or content store**: A blob store (e.g. Amazon S3, Microsoft Azure Blob Storage, etc) or content delivery network (e.g. Akamai, Amazon CloudFront, etc).
* **File system**: A full local file system or a portion of a larger networked file system (e.g. SAN, NAS, etc).
* **Shell script**: A single shell script written in Bash, etc.
* **etc**
A container is essentially a context or boundary inside which some code is executed or some data is stored. And each container is a separately deployable/runnable thing or runtime environment, typically (but not always) running in its own process space. Because of this, communication between containers typically takes the form of an inter-process communication.
### Component
The word "component" is a hugely overloaded term in the software development industry, but in this context a component is a grouping of related functionality encapsulated behind a well-defined interface. If you're using a language like Java or C#, the simplest way to think of a component is that it's a collection of implementation classes behind an interface. Aspects such as how those components are packaged (e.g. one component vs many components per JAR file, DLL, shared library, etc) is a separate and orthogonal concern.
An important point to note here is that all components inside a container typically execute in the same process space. **In the C4 model, components are not separately deployable units.**
Core diagrams
-------------
Visualising this hierarchy of abstractions is then done by creating a collection of **Context**, **Container**, **Component** and (optionally) **Code** (e.g. UML class) diagrams. This is where the C4 model gets its name from.
### Level 1: System Context diagram
A System Context diagram is a good starting point for diagramming and documenting a software system, allowing you to step back and see the big picture. Draw a diagram showing your system as a box in the centre, surrounded by its users and the other systems that it interacts with.
Detail isn't important here as this is your zoomed out view showing a big picture of the system landscape. The focus should be on people (actors, roles, personas, etc) and software systems rather than technologies, protocols and other low-level details. It's the sort of diagram that you could show to non-technical people.
**Scope**: A single software system.
**Primary elements**: The software system in scope.
**Supporting elements**: People (e.g. users, actors, roles, or personas) and software systems (external dependencies) that are directly connected to the software system in scope. Typically these other software systems sit outside the scope or boundary of your own software system, and you don’t have responsibility or ownership of them.
**Intended audience**: Everybody, both technical and non-technical people, inside and outside of the software development team.
**Recommended for most teams**: Yes.
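Expressed as diagrams-as-code, a System Context diagram along these lines can be sketched with the open-source C4-PlantUML library. The element names and descriptions below are illustrative, loosely following the "Big Bank plc" example shown above:

```plantuml
@startuml
' C4-PlantUML provides Person/System/Rel macros for C4 diagrams
!include https://raw.githubusercontent.com/plantuml-stdlib/C4-PlantUML/master/C4_Context.puml

Person(customer, "Personal Banking Customer", "A customer of the bank.")
System(banking_system, "Internet Banking System", "Allows customers to view account balances and make payments.")
System_Ext(mainframe, "Mainframe Banking System", "Stores all of the core banking information.")
System_Ext(email_system, "E-mail System", "The internal e-mail system.")

Rel(customer, banking_system, "Views account balances and makes payments using")
Rel(banking_system, mainframe, "Gets account information from")
Rel(banking_system, email_system, "Sends e-mail using")
Rel(email_system, customer, "Sends e-mails to")
@enduml
```

Note that every element has a short description and every relationship is labelled, in line with the notation recommendations later in this document.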
### Level 2: Container diagram
Once you understand how your system fits into the overall IT environment, a really useful next step is to zoom in to the system boundary with a Container diagram. A "container" is something like a server-side web application, single-page application, desktop application, mobile app, database schema, file system, etc. Essentially, a container is a separately runnable/deployable unit (e.g. a separate process space) that executes code or stores data.
The Container diagram shows the high-level shape of the software architecture and how responsibilities are distributed across it. It also shows the major technology choices and how the containers communicate with one another. It's a simple, high-level, technology-focussed diagram that is useful for software developers and support/operations staff alike.
**Scope**: A single software system.
**Primary elements**: Containers within the software system in scope.
**Supporting elements**: People and software systems directly connected to the containers.
**Intended audience**: Technical people inside and outside of the software development team; including software architects, developers and operations/support staff.
**Recommended for most teams**: Yes.
**Notes**: This diagram says nothing about deployment scenarios, clustering, replication, failover, etc.
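As a diagrams-as-code sketch, a Container diagram for the same illustrative system might look like this in C4-PlantUML, where the `System_Boundary` macro draws the software system in scope around its containers (all element names and technologies are hypothetical):

```plantuml
@startuml
!include https://raw.githubusercontent.com/plantuml-stdlib/C4-PlantUML/master/C4_Container.puml

Person(customer, "Personal Banking Customer", "A customer of the bank.")

System_Boundary(banking_system, "Internet Banking System") {
    Container(spa, "Single-Page Application", "JavaScript and Angular", "Provides banking functionality to customers via their web browser.")
    Container(api, "API Application", "Java and Spring MVC", "Provides banking functionality via a JSON/HTTPS API.")
    ContainerDb(database, "Database", "Oracle Database schema", "Stores user registration information, hashed credentials, access logs, etc.")
}

System_Ext(mainframe, "Mainframe Banking System", "Stores all of the core banking information.")

Rel(customer, spa, "Uses", "HTTPS")
Rel(spa, api, "Makes API calls to", "JSON/HTTPS")
Rel(api, database, "Reads from and writes to", "JDBC")
Rel(api, mainframe, "Makes API calls to", "XML/HTTPS")
@enduml
```

The relationships between containers carry an explicit technology/protocol label (HTTPS, JSON/HTTPS, JDBC) because they typically represent inter-process communication.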
### Level 3: Component diagram
Next you can zoom in and decompose each container further to identify the major structural building blocks and their interactions.
The Component diagram shows how a container is made up of a number of "components", what each of those components are, their responsibilities and the technology/implementation details.
**Scope**: A single container.
**Primary elements**: Components within the container in scope.
**Supporting elements**: Containers (within the software system in scope) plus people and software systems directly connected to the components.
**Intended audience**: Software architects and developers.
**Recommended for most teams**: No, only create component diagrams if you feel they add value, and consider automating their creation for long-lived documentation.
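Zooming in to one of the containers from the previous sketch, a Component diagram might be expressed like this with C4-PlantUML's `Container_Boundary` and `Component` macros (component names and technologies are again illustrative):

```plantuml
@startuml
!include https://raw.githubusercontent.com/plantuml-stdlib/C4-PlantUML/master/C4_Component.puml

Container_Boundary(api, "API Application") {
    Component(signin, "Sign In Controller", "Spring MVC REST Controller", "Allows users to sign in to the Internet Banking System.")
    Component(security, "Security Component", "Spring Bean", "Provides functionality related to signing in, changing passwords, etc.")
    Component(facade, "Mainframe Banking System Facade", "Spring Bean", "A facade onto the mainframe banking system.")
}

ContainerDb(database, "Database", "Oracle Database schema", "Stores user registration information, hashed credentials, etc.")
System_Ext(mainframe, "Mainframe Banking System", "Stores all of the core banking information.")

Rel(signin, security, "Uses")
Rel(security, database, "Reads from and writes to", "JDBC")
Rel(facade, mainframe, "Makes API calls to", "XML/HTTPS")
@enduml
```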
### Level 4: Code
Finally, you can zoom in to each component to show how it is implemented as code; using UML class diagrams, entity relationship diagrams or similar.
This is an optional level of detail and is often available on-demand from tooling such as IDEs. Ideally this diagram would be automatically generated using tooling (e.g. an IDE or UML modelling tool), and you should consider showing only those attributes and methods that allow you to tell the story that you want to tell. This level of detail is not recommended for anything but the most important or complex components.
**Scope**: A single component.
**Primary elements**: Code elements (e.g. classes, interfaces, objects, functions, database tables, etc) within the component in scope.
**Intended audience**: Software architects and developers.
**Recommended for most teams**: No, for long-lived documentation, most IDEs can generate this level of detail on demand.
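At this level, plain UML works well. For example, a PlantUML class diagram for a hypothetical facade component, deliberately showing only the attributes and methods needed to tell the story (all names are invented for illustration):

```plantuml
@startuml
' Only the story-relevant members are shown, per the guidance above
interface MainframeBankingSystemFacade {
    +getCustomer(customerId)
    +getAccounts(customer)
}

class MainframeBankingSystemFacadeImpl {
    -restTemplate
}

class GetCustomerRequest
class GetAccountsRequest

MainframeBankingSystemFacade <|.. MainframeBankingSystemFacadeImpl
MainframeBankingSystemFacadeImpl ..> GetCustomerRequest : creates
MainframeBankingSystemFacadeImpl ..> GetAccountsRequest : creates
@enduml
```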
Supplementary diagrams
----------------------
Once you have a good understanding of the static structure, you can supplement the C4 diagrams to show other aspects.
### System Landscape diagram
The C4 model provides a static view of a single **software system** but, in the real world, software systems never live in isolation. For this reason, and particularly if you are responsible for a collection of software systems, it's often useful to understand how all of these software systems fit together within the bounds of an enterprise. To do this, simply add another diagram that sits "on top" of the C4 diagrams, to show the system landscape from an IT perspective. Like the System Context diagram, this diagram can show the organisational boundary, internal/external users and internal/external systems.
Essentially this is a high-level map of the software systems at the enterprise level, with a C4 drill-down for each software system of interest. From a practical perspective, a system landscape diagram is really just a system context diagram without a specific focus on a particular software system.
**Scope**: An enterprise.
**Primary elements**: People and software systems related to the enterprise in scope.
**Intended audience**: Technical and non-technical people, inside and outside of the software development team.
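C4-PlantUML's `Enterprise_Boundary` macro is one way to sketch a landscape as diagrams-as-code; the organisation and systems below are illustrative:

```plantuml
@startuml
!include https://raw.githubusercontent.com/plantuml-stdlib/C4-PlantUML/master/C4_Context.puml

Person_Ext(customer, "Personal Banking Customer", "A customer of the bank.")

Enterprise_Boundary(bank, "Big Bank plc") {
    Person(staff, "Customer Service Staff", "Customer service staff within the bank.")
    System(banking_system, "Internet Banking System", "Allows customers to view account balances and make payments.")
    System(mainframe, "Mainframe Banking System", "Stores all of the core banking information.")
    System(email_system, "E-mail System", "The internal e-mail system.")
}

Rel(customer, banking_system, "Views account balances and makes payments using")
Rel(staff, mainframe, "Uses")
Rel(banking_system, mainframe, "Gets account information from")
Rel(banking_system, email_system, "Sends e-mail using")
@enduml
```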
### Dynamic diagram
A dynamic diagram can be useful when you want to show how elements in a static model collaborate at runtime to implement a user story, use case, feature, etc. This dynamic diagram is based upon a [UML communication diagram](https://en.wikipedia.org/wiki/Communication_diagram) (previously known as a "UML collaboration diagram"). It is similar to a [UML sequence diagram](https://en.wikipedia.org/wiki/Sequence_diagram) although it allows a free-form arrangement of diagram elements with numbered interactions to indicate ordering.
**Scope**: An enterprise, software system or container.
**Primary and supporting elements**: Depends on the diagram scope; enterprise (see System Landscape diagram), software system (see System Context or Container diagrams), container (see Component diagram).
**Intended audience**: Technical and non-technical people, inside and outside of the software development team.
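A simple way to sketch this as diagrams-as-code is to number the relationship labels, as in this hypothetical "sign in" flow (C4-PlantUML also ships a dedicated `C4_Dynamic.puml` include with indexing macros, if you prefer those):

```plantuml
@startuml
!include https://raw.githubusercontent.com/plantuml-stdlib/C4-PlantUML/master/C4_Container.puml

Container(spa, "Single-Page Application", "JavaScript and Angular", "Provides banking functionality to customers via their web browser.")
Container(api, "API Application", "Java and Spring MVC", "Provides banking functionality via a JSON/HTTPS API.")
ContainerDb(database, "Database", "Oracle Database schema", "Stores user registration information, hashed credentials, etc.")

' Numbered labels indicate the ordering of the interactions
Rel(spa, api, "1. Submits credentials to", "JSON/HTTPS")
Rel(api, database, "2. Validates credentials against", "JDBC")
Rel(api, spa, "3. Returns an authentication token to", "JSON/HTTPS")
@enduml
```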
### Deployment diagram
A deployment diagram allows you to illustrate how software systems and/or containers in the static model are mapped to infrastructure. This deployment diagram is based upon a [UML deployment diagram](https://en.wikipedia.org/wiki/Deployment_diagram), although simplified slightly to show the mapping between **containers** and **deployment nodes**. A deployment node is something like physical infrastructure (e.g. a physical server or device), virtualised infrastructure (e.g. IaaS, PaaS, a virtual machine), containerised infrastructure (e.g. a Docker container), an execution environment (e.g. a database server, Java EE web/application server, Microsoft IIS), etc. Deployment nodes can be nested.
You may also want to include **infrastructure nodes** such as DNS services, load balancers, firewalls, etc.
**Scope**: One or more software systems.
**Primary elements**: Deployment nodes, software system instances, and container instances.
**Supporting elements**: Infrastructure nodes used in the deployment of the software system.
**Intended audience**: Technical people inside and outside of the software development team; including software architects, developers, infrastructure architects, and operations/support staff.
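C4-PlantUML's nestable `Deployment_Node` macro maps directly onto this; the infrastructure choices below are purely illustrative:

```plantuml
@startuml
!include https://raw.githubusercontent.com/plantuml-stdlib/C4-PlantUML/master/C4_Deployment.puml

Deployment_Node(aws, "Amazon Web Services", "Deployment environment") {
    Deployment_Node(ec2, "EC2 Instance", "Ubuntu 22.04 LTS") {
        Deployment_Node(tomcat, "Apache Tomcat", "Apache Tomcat 10.x") {
            Container(api, "API Application", "Java and Spring MVC", "Provides banking functionality via a JSON/HTTPS API.")
        }
    }
    Deployment_Node(rds, "Amazon RDS", "Managed database service") {
        ContainerDb(database, "Database", "Oracle Database schema", "Stores user registration information, hashed credentials, etc.")
    }
}

Rel(api, database, "Reads from and writes to", "JDBC")
@enduml
```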
Notation
--------
The C4 model doesn't prescribe any particular notation. A simple notation that works well on whiteboards, paper, sticky notes, index cards and a variety of diagramming tools is as follows.
**Person**

![Person notation](./img/notation-person.png)

**Software System**

![Software System notation](./img/notation-software-system.png)

**Container**

![Container notation](./img/notation-container.png)

**Component**

![Component notation](./img/notation-component.png)

**Relationship**

![Relationship notation](./img/notation-relationship.png)
You can then use colour and shapes to supplement the diagram, either to add additional information or simply to make the diagram more aesthetically pleasing.
### C4 and UML
Although the example diagrams above are created using a "boxes and lines" notation, the core diagrams can be illustrated using UML with the appropriate use of packages, components and stereotypes. The resulting UML diagrams do tend to lack the same degree of descriptive text though, because adding such text isn't possible (or easy) with some UML tools.
Here are three examples of a System Context, Container and Component diagram for comparison.
System Context diagram:

![Spring PetClinic System Context diagram](./img/spring-petclinic-system-context.png)
![Spring PetClinic System Context diagram in PlantUML](./img/spring-petclinic-system-context-plantuml.png)
![Spring PetClinic System Context diagram in StarUML](./img/spring-petclinic-system-context-staruml.png)

Container diagram:

![Spring PetClinic Container diagram](./img/spring-petclinic-containers.png)
![Spring PetClinic Container diagram in PlantUML](./img/spring-petclinic-containers-plantuml.png)
![Spring PetClinic Container diagram in StarUML](./img/spring-petclinic-containers-staruml.png)

Component diagram:

![Spring PetClinic Component diagram](./img/spring-petclinic-components.png)
![Spring PetClinic Component diagram in PlantUML](./img/spring-petclinic-components-plantuml.png)
![Spring PetClinic Component diagram in StarUML](./img/spring-petclinic-components-staruml.png)
### C4 and ArchiMate
See [C4 Model, Architecture Viewpoint and Archi 4.7](https://www.archimatetool.com/blog/2020/04/18/c4-model-architecture-viewpoint-and-archi-4-7/) for details of how to create C4 model diagrams with ArchiMate.
### Diagram key/legend
Any notation used should be as self-describing as possible, but **all diagrams should have a key/legend** to make the notation explicit. This applies to diagrams created with notations such as UML, ArchiMate and SysML too, as not everybody will know the notation being used.
### Notation, notation, notation
Although the C4 model is an abstraction-first approach and notation independent, you still need to ensure that your diagram notation makes sense, and that the diagrams are comprehensible. A good way to think about this is to ask yourself whether each diagram can stand alone, and be (mostly) understood without a narrative. You can use this short [software architecture diagram review checklist](review) to help.
And here are some recommendations related to notation.
#### Diagrams
* Every diagram should have a title describing the diagram type and scope (e.g. "System Context diagram for My Software System").
* Every diagram should have a key/legend explaining the notation being used (e.g. shapes, colours, border styles, line types, arrow heads, etc).
* Acronyms and abbreviations (business/domain or technology) should be understandable by all audiences, or explained in the diagram key/legend.
#### Elements
* The type of every element should be explicitly specified (e.g. Person, Software System, Container or Component).
* Every element should have a short description, to provide an "at a glance" view of key responsibilities.
* Every container and component should have a technology explicitly specified.
#### Relationships
* Every line should represent a unidirectional relationship.
* Every line should be labelled, the label being consistent with the direction and intent of the relationship (e.g. dependency or data flow). Try to be as specific as possible with the label, ideally avoiding single words like, "Uses".
* Relationships between containers (typically these represent inter-process communication) should have a technology/protocol explicitly labelled.
Frequently asked questions
--------------------------
### What's the background behind the C4 model?
The C4 model was created by [Simon Brown](http://simonbrown.je), who started teaching people about software architecture while working as a software developer/architect in London. Part of Simon's training course was a design exercise, where groups of people were given some requirements, asked to do some design, and asked to draw some diagrams to express that design.
Although this was a design-focussed exercise, the wide variety of diagrams made it evident that the visualisation of ideas was a skill that most people sorely lacked. The C4 model is essentially a formalisation of how Simon used to visualise software architecture, which has evolved over the years.
### What's the inspiration behind the C4 model?
The C4 model was inspired by the [Unified Modeling Language](https://en.wikipedia.org/wiki/Unified_Modeling_Language) and the [4+1 model for software architecture](https://en.wikipedia.org/wiki/4%2B1_architectural_view_model). In summary, you can think of the C4 model as a simplified version of the underlying concepts, designed to (1) make it easier for software developers to describe and understand how a software system works and (2) to minimise the gap between the software architecture model/description and the source code.
The roots of the C4 model, and the various diagram types within it, can be traced back to somewhere in the region of 2006, although the "C4" name came much later, around the end of 2011. It was created during a time where teams, influenced by the agile movement, were less than enthusiastic about using UML.
### Isn't the C4 model a step backwards? Why are you reinventing UML? Why not just use UML?
Whether you see the C4 model as a step forwards or a step backwards depends upon where you are. If you're using UML (or SysML, ArchiMate, etc) and it's working for you, stick with it. Unfortunately, UML usage seems to be in decline, and many teams have reverted to using ad hoc boxes and lines diagrams once again. Given that many of those teams don't want to use UML (for various reasons), the C4 model helps introduce some structure and discipline into the way software architecture is communicated. For many teams, the C4 model is sufficient. And for others, perhaps it's a stepping stone to UML.
### How many people use the C4 model?
The honest answer is that nobody knows. Simon has personally taught the C4 model to somewhere over 10,000 people in more than 30 countries; with conference talks, videos, books and articles reaching many more than this. Other people are also teaching, speaking and writing about the C4 model too. It's definitely being used though, in organisations ranging from startups to global household names.
### Why "container"?
Terms like "process", "application", "app", "server", "deployable unit", etc all have associated implications, so the name "container" was chosen as a generic way to describe something in which components live. From one perspective, it's unfortunate that containerisation has become popular, because many software developers now associate the term "container" with Docker. From another perspective though, there is sometimes a nice parity between a container in the C4 model and an infrastructure (e.g. Docker) container.
While many teams successfully use the C4 model as is, feel free to change the terminology if needed.
### Can we change the terminology?
This terminology (context, containers, components and code) works for many organisations and many types of software. However, sometimes an organisation will have an existing terminology that people are already familiar with. Or perhaps "components" and "classes" don't easily map on to the technology being used (e.g. functional languages often use the terms "module" and "function").
Feel free to modify the terminology that you use to describe software architecture at different levels of abstraction. Just make sure that everybody explicitly understands it.
### How do you model microservices and serverless?
Broadly speaking, there are two options for diagramming microservices when using the C4 model, although it depends what you mean by "microservice".
**Approach 1: Each "microservice" is owned by a separate team**
If your software system has a dependency upon a number of microservices that are outside of your control (e.g. they are owned and/or operated by a separate team), model these microservices as external software systems, that you can't see inside of.
**Approach 2: A single team owns multiple "microservices"**
Imagine that you have an API app (e.g. Spring Boot, ASP.NET MVC, etc) that reads/writes to a relational database schema. Regardless of whether you consider the term "microservice" to refer to just the API app, or the combination of the API app and database schema ... if the microservices are a part of a software system that you are building (i.e. you own them), model every deployable thing as a container. In other words, you'd show two containers: the API app, and the database schema. Feel free to draw a box around these two containers to indicate they are related/grouped.
The same is true for serverless functions/lambdas/etc; treat them as software systems or containers based upon ownership.
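Approach 2 might be sketched as follows with C4-PlantUML (all element names are hypothetical): the team-owned "microservice" appears as two containers inside the system boundary, while a microservice owned by another team is modelled as an external software system:

```plantuml
@startuml
!include https://raw.githubusercontent.com/plantuml-stdlib/C4-PlantUML/master/C4_Container.puml

System_Boundary(shop, "E-commerce System") {
    Container(orders_api, "Orders API", "Spring Boot", "Handles order placement and tracking.")
    ContainerDb(orders_db, "Orders Database", "PostgreSQL schema", "Stores orders and order line items.")
}

' Owned by a separate team, so modelled as an external software system
System_Ext(payments, "Payments Service", "Authorises card payments; owned and operated by the payments team.")

Rel(orders_api, orders_db, "Reads from and writes to", "JDBC")
Rel(orders_api, payments, "Requests payment authorisation from", "JSON/HTTPS")
@enduml
```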
### How do you diagram large and complex software systems?
Even with a relatively small software system, it's tempting to try and include the entire story on a single diagram. For example, if you have a web application, it seems logical to create a single component diagram that shows all of the components that make up that web application. Unless your software system really is that small, you're likely to run out of room on the diagram canvas or find it difficult to discover a layout that isn't cluttered by a myriad of overlapping lines. Using a larger diagram canvas can sometimes help, but large diagrams are usually hard to interpret and comprehend because the cognitive load is too high. And if nobody understands the diagram, nobody is going to look at it.
Instead, don't be afraid to split that single complex diagram into a larger number of simpler diagrams, each with a specific focus around a business area, functional area, functional grouping, bounded context, use case, user interaction, feature set, etc. The key is to ensure that each of the separate diagrams tells a different part of the same overall story, at the same level of abstraction.
See also [Diagramming vs modelling](#Modelling) for an alternative approach.
### Will the diagrams become outdated quickly?
Due to the hierarchical nature of the C4 model, each diagram will change at a different rate.
* **System Context diagram**: In most cases, the system context diagram will change very slowly, as this describes the landscape that the software system is operating within.
* **Container diagram**: Unless you're building a software system that makes heavy use of microservices or serverless lambdas/functions/etc, the container diagram will also change relatively slowly.
* **Component diagram**: For any software system under active development, the component diagrams may change frequently as the team adds, removes or restructures the code into cohesive components. Automating the generation of this level of detail with tooling can help.
* **Code diagram**: The level 4 code (e.g. class) diagrams will potentially become outdated very quickly if the codebase is under active development. For this reason, the recommendation is to (1) not create them at all or (2) generate them on-demand using tooling such as your IDE.
### Why doesn't the C4 model cover business processes, workflows, state machines, domain models, data models, etc?
The focus of the C4 model is the static structures that make up a software system, at different levels of abstraction. If you need to describe other aspects, feel free to supplement the C4 diagrams with UML diagrams, BPML diagrams, ArchiMate diagrams, entity relationship diagrams, etc.
### The C4 model vs UML, ArchiMate and SysML?
Although existing notations such as UML, ArchiMate and SysML already exist, many software development teams don't seem to use them. Often this is because teams don't know these notations well enough, perceive them to be too complicated, think they are not compatible with agile approaches or don't have the required tooling.
If you are already successfully using one of these notations to communicate software architecture and it's working, stick with it. If not, try the C4 model. And don't be afraid to supplement the C4 diagrams with UML state diagrams, timing diagrams, etc if you need to.
### Can we combine C4 and arc42?
Yes, many teams do, and the C4 model is compatible with the [arc42 documentation template](http://arc42.org) as follows.
* Context and Scope => System Context diagram
* Building Block View (level 1) => Container diagram
* Building Block View (level 2) => Component diagram
* Building Block View (level 3) => Class diagram
### Does the C4 model imply a design process or team structure?
A common misconception is that a team's design process should follow the levels in the C4 model hierarchy, perhaps with different people on the team being responsible for different levels of diagrams. For example, a business analyst creates the system context diagram, the architect creates the container diagram, while the developers look after the remaining levels of detail.
Although you can certainly use the C4 model in this way, this is not the intended or recommended usage pattern. The C4 model is just a way to describe a software system, from different levels of abstraction, and it implies nothing about the process of delivering software.
### Using C4 to describe libraries, frameworks and SDKs?
The C4 model is really designed to model a software system, at various levels of abstraction. To document a library, framework or SDK, you might be better off using something like UML. Alternatively, you could use the C4 model to describe a usage example of your framework, library or SDK; perhaps using colour coding to signify which parts of the software system are bespoke vs those provided for you.
### Web applications; one container or two?
If you're building a server-side web application (e.g. Spring MVC, ASP.NET, Ruby on Rails, Django, etc) that is predominantly generating static HTML content, then that's a single container. If there's a significant quantity of JavaScript being delivered by the server-side web application (e.g. a single-page application built using Angular), then that's two containers. [Here's an example](./img/bigbankplc-Containers.png).
Although, at deployment time, the server-side web application includes both the server-side and client-side code, treating the client and server as two separate containers makes it explicit that these are two separate process spaces, communicating via an inter-process/remote communication mechanism (e.g. JSON/HTTPS). It also provides a basis for zooming in to each container separately to show the components inside them.
### Should the lines represent dependencies or data flow?
This is your choice. Sometimes diagrams work better showing dependency relationships (e.g. uses, reads from, etc), and sometimes data flow (e.g. customer update events) works better. Whichever you choose, make sure that the description of the line matches the direction of the arrow.
It's also worth remembering that most relationships can be expressed either way, and the more explicit you can be, the better. For example, describing a relationship as "sends customer update events to" can be more descriptive than simply "customer update events".
### Is a Java JAR, C# assembly, DLL, module, etc a container?
Typically not. A container is a runtime construct, like an application; whereas Java JAR files, C# assemblies, DLLs, modules, etc are used to organise the code within those applications.
### Is a Java JAR, C# assembly, DLL, module, package, namespace, folder etc a component?
Perhaps but, again, typically not. The C4 model is about showing the runtime units (containers) and how functionality is partitioned across them (components), rather than organisational units such as Java JAR files, C# assemblies, DLLs, modules, packages, namespaces or folder structures.
Of course, there may be a one-to-one mapping between these constructs and a component; e.g. if you're building a hexagonal architecture, you may create a single Java JAR file or C# assembly per component. On the other hand, a single component might be implemented using code from a number of JAR files, which is typically what happens when you start to consider third-party frameworks/libraries, and how they become embedded in your codebase.
### Should you include message buses, API gateways, service meshes, etc?
If you have two services, A and B, that communicate by sending a message via a message bus (irrespective of topics, queues, p2p, pub/sub, etc) or another intermediary (e.g. an API gateway or service mesh), you have a couple of options. The first option is to show service A sending a message to the intermediary, and the intermediary subsequently forwarding that message to service B. While accurate, the "hub and spoke" nature of the diagram tends to obscure the notion that there's coupling between the message producer and consumer.
The other approach is to omit the intermediary, and instead use notation (e.g. a textual description, colour coding, line style, etc) to signify that the interaction between service A and B happens via an intermediary. This approach tends to result in diagrams that tell a clearer story.
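Tooling can support both options from the same model. As a rough sketch (the tuple shape and the `via` handling here are illustrative, not any particular tool's API), a relationship could record an optional intermediary, and the renderer could either expand it into two hops or collapse it into a single annotated line:

```python
def edges(relationships, collapse_intermediaries=False):
    """Expand model relationships into diagram edges.

    Each relationship is (source, destination, description, intermediary),
    where intermediary is None for direct interactions.
    """
    result = []
    for source, destination, description, intermediary in relationships:
        if intermediary is None:
            result.append((source, destination, description))
        elif collapse_intermediaries:
            # Option 2: omit the intermediary, note it on the line instead.
            result.append((source, destination,
                           f"{description} [via {intermediary}]"))
        else:
            # Option 1: show both hops through the intermediary.
            result.append((source, intermediary, description))
            result.append((intermediary, destination, description))
    return result


relationships = [
    ("Service A", "Service B", "Sends customer update events to", "Message Bus"),
]
print(edges(relationships))                                # two hops
print(edges(relationships, collapse_intermediaries=True))  # one annotated edge
```

Because both renderings come from the same underlying relationship, switching styles is a rendering decision rather than a remodelling exercise.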
### Should data storage services be shown as software systems or containers?
A frequently asked question is whether services like Amazon S3, Amazon RDS, Azure SQL Database, content delivery networks, etc should be shown as software systems or containers. After all, these are external services that most of us don’t own or run ourselves.
If you’re building a software system that is using Amazon S3 for storing data, it’s true that you don’t run S3 yourself, but you do have ownership and responsibility for the buckets you are using. Similarly with Amazon RDS, you have (more or less) complete control over any database schemas that you create. For this reason, treat them as containers because they are an integral part of your software architecture, although they are hosted elsewhere.
### Is the C4 model universally applicable?
The C4 model was designed to help describe, document, and diagram custom-built, bespoke software systems. From this perspective, the C4 model can be used to describe a variety of software architectures (monolithic or distributed), built in a variety of programming languages, deployed on a variety of platforms (on-premises or cloud).
Solutions that are perhaps less suited to the C4 model include embedded systems/firmware, and solutions that rely on heavy customization rather than bespoke development (e.g. SAP and Salesforce). Even with these solutions, you may still find the System Context and Container diagrams useful.
Diagramming vs modelling
------------------------
As an industry, we've tended to prefer diagramming over modelling, primarily because the barrier to entry is relatively low, and it's seen as a much simpler task. When you're diagramming, you're typically creating one or more separate diagrams, often with an ad hoc notation, using tools (e.g. Microsoft Visio or a whiteboard) that don't understand anything about the semantics of your diagrams. The domain language of diagramming tools is really just boxes and lines, so you can't ask them questions such as "what dependencies does component X have?". Additionally, reusing diagram elements across diagrams is usually done by duplication (i.e. copying and pasting), thereby putting the responsibility on you to keep diagrams in sync when you rename such elements. It's worth noting here that the C4 model can be used irrespective of whether you are diagramming or modelling, but there are some interesting opportunities when you progress from diagramming to modelling.
With modelling, you're building up a non-visual model of something (e.g. the software architecture of a software system), and then creating different views (e.g. diagrams) on top of that model. This requires a little more rigour, but the result is a single definition of all elements and the relationships between them. This, in turn, allows modelling tools to understand the semantics of what you're trying to do, and provide additional intelligence on top of the model. It also allows modelling tools to provide alternative visualisations, often automatically.
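The difference is easy to see in code. In this toy sketch (not any real tool's API), the model is a single data structure and each "view" is just a subset of it, so renaming an element once updates every diagram derived from the model — no copy-and-paste synchronisation:

```python
# One model: elements plus the relationships between them.
model = {
    "elements": {"e1": "Web Application", "e2": "Database", "e3": "Personal Customer"},
    "relationships": [("e3", "e1", "Uses"), ("e1", "e2", "Reads from and writes to")],
}

def render(model, element_ids):
    """A 'view' is a subset of the model's elements (plus the relationships
    between them), so diagrams are derived from the model, not drawn."""
    names = model["elements"]
    return [f"{names[a]} --{desc}--> {names[b]}"
            for a, b, desc in model["relationships"]
            if a in element_ids and b in element_ids]

view_1 = {"e3", "e1"}          # e.g. a context-style view
view_2 = {"e1", "e2"}          # e.g. a container-style view

model["elements"]["e1"] = "Single-Page Application"  # rename once...
print(render(model, view_1))   # ...and every view picks it up
print(render(model, view_2))
```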
One of the frequently asked questions (above) is about diagramming large and complex software systems. Once you start to have more than ~20 elements (plus the relationships between them) on a diagram, the resulting diagram starts to become cluttered very quickly. For example, image 1 (below) is a component diagram for a single container.
One approach to dealing with this is to not show all of the components on a single diagram, and instead create multiple diagrams, one per "slice" through the container (image 2, below). This approach can certainly help, but it's worth asking whether the resulting diagrams are useful. Are you going to use them and, if so, what are you going to use them for? Although the System Context and Container diagrams are very useful, Component diagrams for large software systems often have less value because they are harder to keep up to date, and you might find that very few people look at them anyway, especially if they are not included in documentation or presentations.
[](./img/modelling-1.png)
Once you have more than ~20 elements on a diagram, the diagram starts to become cluttered very quickly.
[](./img/modelling-2.png)
Creating multiple diagrams, one per "slice", can help, although the resulting diagrams tend to be very simple and increase the effort needed to keep them up to date.
[](./img/modelling-3.png)
Rather than creating a diagram, you can use alternative visualisations instead. This visualisation shows the dependencies between components inside a container.
[](./img/modelling-4.png)
And this alternative visualisation shows all of the elements and relationships in the model, filtered to show a subset of the model.
Often, the diagrams themselves aren't the end-goal, with teams using the diagrams to answer other questions that they have, such as, "what dependencies does component X have?". If this is the case, building a model will allow you to answer such questions, without the additional effort of creating a diagram. In other words, once you have a model, you can visualise it in a number of different ways (images 3 and 4, above), helping to answer the real questions that you are seeking to answer. Diagrams certainly are a fantastic way to communicate software architecture, but other visualisations can sometimes help answer the real underlying questions that you might have.
Metamodel
---------
If you're interested in using the C4 model or building tooling to support it, here is some information about the basic metamodel.
### Elements and relationships
| Element type | Parent | Properties |
| --- | --- | --- |
| **Person** | None | Name\*<br>Description<br>Location (Internal or External) |
| **Software System** | None | Name\*<br>Description<br>Location (Internal or External)<br>The set of containers that make up the software system |
| **Container** | A software system | Name\*<br>Description<br>Technology<br>The set of components within the container |
| **Component** | A container | Name\*<br>Description<br>Technology<br>The set of code elements (e.g. classes, interfaces, etc) that the component is implemented by |
| **Code Element** | A component | Name\*<br>Description<br>Fully qualified type |
| **Relationship**\*\* | — | Description<br>Technology |
\* All elements in the model must have a name, and that name should be unique within the parent context (tooling may or may not choose to enforce this uniqueness).
\*\* Relationships are permitted between any elements in the model, in either direction.
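For tool builders, the metamodel above translates fairly directly into code. The sketch below is an illustrative simplification (it collapses the element types into a single class) that demonstrates the name-uniqueness rule from footnote \*:

```python
from dataclasses import dataclass, field

# Simplified sketch: person, software system, container, component and
# code element are collapsed into one Element class for brevity.
@dataclass
class Element:
    name: str
    description: str = ""
    technology: str = ""
    children: list = field(default_factory=list)

    def add_child(self, child):
        # Names must be unique within the parent context; this sketch
        # chooses to enforce that (tooling may or may not).
        if any(existing.name == child.name for existing in self.children):
            raise ValueError(f"duplicate name in parent context: {child.name!r}")
        self.children.append(child)
        return child

# Software system -> containers -> components, as in the metamodel.
system = Element("Internet Banking System")
api = system.add_child(Element("API Application", technology="Java and Spring MVC"))
api.add_child(Element("Security Component"))
api.add_child(Element("Mainframe Banking System Facade"))

try:
    api.add_child(Element("Security Component"))  # same name, same parent
except ValueError as error:
    print("rejected:", error)
```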
### Views
| View type | Scope | Permitted elements | Example |
| --- | --- | --- | --- |
| **1\. System Context** | A software system | Software systems<br>People | [](./img/bigbankplc-SystemContext.png) |
| **2\. Container** | A software system | Software systems<br>People<br>Containers within the software system in scope | [](./img/bigbankplc-Containers.png) |
| **3\. Component** | A container | Software systems<br>People<br>Other containers within the parent software system of the container in scope<br>Components within the container in scope | [](./img/bigbankplc-Components.png) |
| **4\. Code** | A component | Code elements (e.g. classes, interfaces, etc) that are used to implement the component in scope | [](./img/bigbankplc-Classes.png) |
More information
----------------
The following resources are recommended if you're looking for more information about visualising software architecture and the C4 model.
[](https://leanpub.com/b/software-architecture)
This is Simon Brown's [Software Architecture for Developers](https://leanpub.com/b/software-architecture) (Volume 2) ebook, which is available to purchase from Leanpub as an ebook in PDF, EPUB and MOBI formats. It's a short guide to visualising, documenting and exploring your software architecture.
**Visualising software architecture with the C4 model**
Agile on the Beach 2019 - Falmouth, England - July 2019
There are also some podcasts with Simon Brown, where he discusses the C4 model; including [Software Engineering Daily](https://softwareengineeringdaily.com/2017/06/20/software-architecture-with-simon-brown/) and [Software Engineering Radio](http://www.se-radio.net/2015/06/episode-228-software-architecture-sketches-with-simon-brown/).
Adopting the C4 model
---------------------
If you're considering whether the C4 model is right for your team, the best approach is to **just try it**. Set aside an hour, grab a whiteboard, and draw a System Context diagram for whatever you're working on. If you find that useful, set aside another hour to draw a Container diagram for the same software system. For many teams, those two levels of detail are sufficient. If you think there's value in drawing Component diagrams, then try that out too. Component diagrams tend to be more volatile though, so you should really start to look at automating this level of detail where possible, so that the diagrams always reflect the code.
If you have retrospectives as a part of your working practices, don't forget to discuss the diagrams. Are they useful? Do people use them? Are they kept up to date? Could we use more detail? Or are our diagrams too detailed, and falling out of date quickly? Is the tooling sufficient? Does everybody have access to the diagrams when needed?
[In-person and online training is available](https://architectis.je) to help you introduce the C4 model, or scale C4 model knowledge, within your organisation.
Tooling
-------
For design sessions, you might find a whiteboard or flip chart paper better for collaboration, and iterating quickly. For long-lived documentation, the following tools can help create software architecture diagrams based upon the C4 model.
### Text-based modelling tools
* [Structurizr](https://structurizr.com)
### Visual modelling tools
* [IcePanel](https://icepanel.io)
* [Archi](https://www.archimatetool.com/blog/2020/04/18/c4-model-architecture-viewpoint-and-archi-4-7/)
* [Sparx Enterprise Architect](http://www.sparxsystems.eu/c4/)
* [Gaphor](https://gaphor.org)
* [MooD](https://supportportal.moodinternational.com/hc/en-us/articles/360015465100-MooD-and-the-C4-model)
* [Astah](https://github.com/ChangeVision/astah-c4model-plugin)
* [Archipeg](https://www.archipeg.com)
### Text-based diagramming tools
* [C4-PlantUML](https://github.com/plantuml-stdlib/C4-PlantUML)
* [c4builder](https://adrianvlupu.github.io/C4-Builder)
* [C4Sharp](https://github.com/8T4/c4sharp)
### Visual diagramming tools
* [diagrams.net](https://www.diagrams.net)
* [OmniGraffle](https://stenciltown.omnigroup.com/stencils/c4/)
* [Visual Paradigm](https://online.visual-paradigm.com/app/diagrams/#diagram:proj=0&type=C4Model&gallery=/repository/f8ca52e5-8e0f-45ec-a8fe-d2e2933518e4.xml)
* [Microsoft Visio](https://github.com/pihalve/c4model-visio-stencil)
* [yEd](https://github.com/Ferhat67/C4-yEd)
References
----------
Here are some of the significant resources that reference the C4 model:
* [Open Agile Architecture™, a Standard of The Open Group](https://publications.opengroup.org/c208)
* [Agile Architecture Modeling using the ArchiMate® Language](https://publications.opengroup.org/g20e)
* [Fundamentals of Software Architecture](https://www.oreilly.com/library/view/fundamentals-of-software/9781492043447/) (Mark Richards, Neal Ford)
* [Design It!](https://pragprog.com/titles/mkdsa/design-it/) (Michael Keeling)
* [Software Architecture with Spring 5.0](https://www.packtpub.com/free-ebook/software-architecture-with-spring-5-0/9781788992992) (René Enríquez, Alberto Salazar)
* [Software Architecture](https://leanpub.com/software-architecture) (Cesare Pautasso)
* [Martin Fowler: Building Infrastructure Platforms - Communicate your technical vision](https://martinfowler.com/articles/building-infrastructure-platform.html#CommunicateYourTechnicalVision) (Poppy Rowse and Chris Shepherd)
* [Wikipedia](https://en.wikipedia.org/wiki/C4_model)
The Structurizr DSL (designed to support the C4 model) has also appeared on the [ThoughtWorks Tech Radar - Techniques - Diagrams as code](https://www.thoughtworks.com/radar/techniques?blipid=202010027).
[](https://creativecommons.org/licenses/by/4.0/) This website, example diagrams, explanatory text, and slides are licensed under a [Creative Commons Attribution 4.0 International License](https://creativecommons.org/licenses/by/4.0/).
 The C4 model and this website were created by Simon Brown - [@simonbrown](https://twitter.com/simonbrown) | [simonbrown.je](http://simonbrown.je) | [\[email protected\]](/cdn-cgi/l/email-protection#bfccd6d2d0d1ffdecddcd7d6cbdadccbd6cc91d5da)
#### Level 1: System Context diagram
This is an example System Context diagram for a fictional Internet Banking System. It shows the people who use it, and the other software systems that the Internet Banking System has a relationship with. Personal Customers of the bank use the Internet Banking System to view information about their bank accounts, and to make payments. The Internet Banking System itself uses the bank's existing Mainframe Banking System to do this, and uses the bank's existing E-mail System to send e-mails to customers.

A colour coding has been used to indicate which software systems exist already (the grey boxes).

#### Showing the enterprise boundary
In this slightly modified example, the dashed line represents the boundary of the bank, and is used to illustrate what's inside vs what's outside of the bank.


#### Level 2: Container diagram
This is an example Container diagram for a fictional Internet Banking System. It shows that the Internet Banking System (the dashed box) is made up of five containers: a server-side Web Application, a Single-Page Application, a Mobile App, a server-side API Application, and a Database. The Web Application is a Java/Spring MVC web application that simply serves static content (HTML, CSS and JavaScript), including the content that makes up the Single-Page Application. The Single-Page Application is an Angular application that runs in the customer's web browser, providing all of the Internet banking features. Alternatively, customers can use the cross-platform Xamarin Mobile App, to access a subset of the Internet banking functionality.
Both the Single-Page Application and Mobile App use a JSON/HTTPS API, which is provided by another Java/Spring MVC application running on the server. The API Application gets user information from the Database (a relational database schema). The API Application also communicates with the existing Mainframe Banking System, using a proprietary XML/HTTPS interface, to get information about bank accounts or make transactions. The API Application also uses the existing E-mail System if it needs to send e-mails to customers.

The dashed line represents the boundary of the Internet Banking System, showing the containers (light blue) inside it. Additionally, a cylinder shape has been used to represent the database.

#### Level 3: Component diagram
This is an example Component diagram for a fictional Internet Banking System, showing some (rather than all) of the components within the API Application. Here, there are three Spring MVC Rest Controllers providing access points for the JSON/HTTPS API, with each controller subsequently using other components to access data from the Database and Mainframe Banking System, or send e-mails.

The dashed line represents the boundary of the API Application, showing the components (light blue) inside it.

#### Level 4: Code
This is an example (and partial) UML class diagram for a fictional Internet Banking System, showing the code elements (interfaces and classes) that make up the MainframeBankingSystemFacade component. It shows that the component is made up of a number of classes, with the implementation details directly reflecting the code.

## References
In a hurry? Read the [Wikipedia page](https://en.wikipedia.org/wiki/C4_model) and the 5 minute introduction to the C4 model at InfoQ:

* [The C4 model for software architecture](https://www.infoq.com/articles/C4-architecture-model)
* [The C4 documentation model for software architecture (in Portuguese)](https://www.infoq.com/br/articles/C4-architecture-model)
* [The C4 model for software architecture (in Chinese)](http://www.infoq.com/cn/articles/C4-architecture-model)
* [The C4 model for software architecture (in Japanese)](https://www.infoq.com/jp/articles/C4-architecture-model)
48e66f8ec766b0c1690eb57ebf181ab7de297957 | 7,097 | md | Markdown | entities/General.ScheduledDocumentEvents.md | ErpNetDocs/model | 096ec2228e77a952c4fc9a3e535596ae5de0d1e2 | [
"CC-BY-4.0"
] | null | null | null | entities/General.ScheduledDocumentEvents.md | ErpNetDocs/model | 096ec2228e77a952c4fc9a3e535596ae5de0d1e2 | [
"CC-BY-4.0"
] | null | null | null | entities/General.ScheduledDocumentEvents.md | ErpNetDocs/model | 096ec2228e77a952c4fc9a3e535596ae5de0d1e2 | [
"CC-BY-4.0"
] | 2 | 2021-09-30T12:49:28.000Z | 2021-11-26T10:11:05.000Z | ---
uid: General.ScheduledDocumentEvents
---
# General.ScheduledDocumentEvents Entity
**Namespace:** [General](General.md)
Contains postponed events, which will be executed later. Usually these are a large number of recalculation events resulting from other events. For example, releasing a cost correction publishes postponed events for all affected documents. Entity: Gen_Scheduled_Document_Events
## Default Visualization
Default Display Text Format:
_{Id}: {ObjectVersion}_
Default Search Members:
__
## Aggregate
An [aggregate](https://docs.erp.net/tech/advanced/concepts/aggregates.html) is a cluster of domain objects that can be treated as a single unit.
Aggregate Tree
* [General.ScheduledDocumentEvents](General.ScheduledDocumentEvents.md)
## Attributes
| Name | Type | Description |
| ---- | ---- | --- |
| [Cancelled](General.ScheduledDocumentEvents.md#cancelled) | boolean | When true, specifies that this document event has been cancelled (either manually or in respect to another event) and will not be executed. `Required` `Default(false)` `Filter(eq)`
| [CreationTime](General.ScheduledDocumentEvents.md#creationtime) | datetime | Date and time when the ScheduledDocumentEvent was created. `Required` `Default(Now)` `ReadOnly`
| [DocumentEvent](General.ScheduledDocumentEvents.md#documentevent) | string (254) | The type of the document event that is scheduled to be processed. `Required` `ReadOnly`
| [Id](General.ScheduledDocumentEvents.md#id) | guid |
| [LastProcessStatus](General.ScheduledDocumentEvents.md#lastprocessstatus) | string (max) __nullable__ | Status/information of the last attempt to process the event. Usually shows the cause in case of failure. `ReadOnly`
| [LastProcessTime](General.ScheduledDocumentEvents.md#lastprocesstime) | datetime __nullable__ | The time of the last attempt to process the event. `ReadOnly`
| [ObjectVersion](General.ScheduledDocumentEvents.md#objectversion) | int32 |
| [Processed](General.ScheduledDocumentEvents.md#processed) | boolean | Indicates whether the event is already processed or not. `Required` `Default(false)` `Filter(eq)` `ReadOnly`
| [State](General.ScheduledDocumentEvents.md#state) | [State](General.ScheduledDocumentEvents.md#state) | The state of the document for which the event will be processed. `Required` `ReadOnly`
## References
| Name | Type | Description |
| ---- | ---- | --- |
| [Document](General.ScheduledDocumentEvents.md#document) | [Documents](General.Documents.md) | The document for which the event will be processed. `Required` `Filter(multi eq)` `ReadOnly` |
| [SourceDocument](General.ScheduledDocumentEvents.md#sourcedocument) | [Documents](General.Documents.md) | The document that has caused this event to be scheduled. `Required` `Filter(multi eq)` `ReadOnly` |
## Attribute Details
### Cancelled
When true, specifies that this document event has been cancelled (either manually or in respect to another event) and will not be executed. `Required` `Default(false)` `Filter(eq)`
_Type_: **boolean**
_Supported Filters_: **Equals**
_Supports Order By_: **False**
_Default Value_: **False**
### CreationTime
Date and time when the ScheduledDocumentEvent was created. `Required` `Default(Now)` `ReadOnly`
_Type_: **datetime**
_Supported Filters_: **NotFilterable**
_Supports Order By_: **False**
_Default Value_: **CurrentDateTime**
### DocumentEvent
The type of the document event that is scheduled to be processed. `Required` `ReadOnly`
_Type_: **string (254)**
_Supported Filters_: **NotFilterable**
_Supports Order By_: **False**
_Maximum Length_: **254**
### Id
_Type_: **guid**
_Indexed_: **True**
_Supported Filters_: **Equals, EqualsIn**
_Default Value_: **NewGuid**
### LastProcessStatus
Status/information of the last attempt to process the event. Usually shows the cause in case of failure. `ReadOnly`
_Type_: **string (max) __nullable__**
_Supported Filters_: **NotFilterable**
_Supports Order By_: **False**
_Maximum Length_: **2147483647**
### LastProcessTime
The time of the last attempt to process the event. `ReadOnly`
_Type_: **datetime __nullable__**
_Supported Filters_: **NotFilterable**
_Supports Order By_: **False**
### ObjectVersion
_Type_: **int32**
_Supported Filters_: **NotFilterable**
_Supports Order By_: ****
### Processed
Indicates whether the event is already processed or not. `Required` `Default(false)` `Filter(eq)` `ReadOnly`
_Type_: **boolean**
_Supported Filters_: **Equals**
_Supports Order By_: **False**
_Default Value_: **False**
### State
The state of the document for which the event will be processed. `Required` `ReadOnly`
_Type_: **[State](General.ScheduledDocumentEvents.md#state)**
Allowed values for the [`State`](General.ScheduledDocumentEvents.md#state) data attribute.
_Allowed Values (General.ScheduledDocumentEventsRepository.State Enum Members)_
| Value | Description |
| ---- | --- |
| New | New value. Stored as 0. <br /> _Database Value:_ 0 <br /> _Model Value:_ 0 <br /> _Domain API Value:_ 'New' |
| Corrective | Corrective value. Stored as 5. <br /> _Database Value:_ 5 <br /> _Model Value:_ 5 <br /> _Domain API Value:_ 'Corrective' |
| Planned | Planned value. Stored as 10. <br /> _Database Value:_ 10 <br /> _Model Value:_ 10 <br /> _Domain API Value:_ 'Planned' |
| FirmPlanned | FirmPlanned value. Stored as 20. <br /> _Database Value:_ 20 <br /> _Model Value:_ 20 <br /> _Domain API Value:_ 'FirmPlanned' |
| Released | Released value. Stored as 30. <br /> _Database Value:_ 30 <br /> _Model Value:_ 30 <br /> _Domain API Value:_ 'Released' |
| Completed | Completed value. Stored as 40. <br /> _Database Value:_ 40 <br /> _Model Value:_ 40 <br /> _Domain API Value:_ 'Completed' |
| Closed | Closed value. Stored as 50. <br /> _Database Value:_ 50 <br /> _Model Value:_ 50 <br /> _Domain API Value:_ 'Closed' |
_Supported Filters_: **NotFilterable**
_Supports Order By_: **False**
## Reference Details
### Document
The document for which the event will be processed. `Required` `Filter(multi eq)` `ReadOnly`
_Type_: **[Documents](General.Documents.md)**
_Supported Filters_: **Equals, EqualsIn**
### SourceDocument
The document that has caused this event to be scheduled. `Required` `Filter(multi eq)` `ReadOnly`
_Type_: **[Documents](General.Documents.md)**
_Supported Filters_: **Equals, EqualsIn**
## Business Rules
[!list limit=1000 erp.entity=General.ScheduledDocumentEvents erp.type=business-rule default-text="None"]
## Front-End Business Rules
[!list limit=1000 erp.entity=General.ScheduledDocumentEvents erp.type=front-end-business-rule default-text="None"]
## API
Domain API Query:
<https://demodb.my.erp.net/api/domain/odata/General_ScheduledDocumentEvents?$top=10>
Domain API Query Builder:
<https://demodb.my.erp.net/api/domain/querybuilder#General_ScheduledDocumentEvents?$top=10>
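As an illustrative sketch (endpoint taken from above; authentication and paging are omitted, and the filter expression is an assumption based on standard OData syntax), a client could build a query that uses the `Filter(eq)` attributes documented for this entity:

```python
from urllib.parse import urlencode

# Domain API endpoint, as given in the API section above.
BASE = "https://demodb.my.erp.net/api/domain/odata/General_ScheduledDocumentEvents"

def build_query(filter_expr=None, top=10):
    """Build an OData query URL against the entity's endpoint."""
    params = {"$top": top}
    if filter_expr:
        params["$filter"] = filter_expr
    return f"{BASE}?{urlencode(params)}"

# Processed and Cancelled are documented above as Filter(eq) attributes.
url = build_query("Processed eq false and Cancelled eq false")
print(url)
```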
# sivu
Library for supporting pagination of sequential data
---
title: "Java Service"
id: "java-service"
---
---
## Overview
Java Services allow app developers to write custom business logic or enable integration with 3rd party libraries and tools and are deployed on the Server side. For every Java Service created in WaveMaker, its REST API contract is auto-generated and is ready for integration with front-end layer (web or mobile).
Variables are required to invoke Java Service methods; each Variable can be configured to take the necessary method parameters as data inputs and return a JSON response. Variables internally perform the necessary conversion to invoke the equivalent REST API endpoint/resource for the Java Service methods.
## Creating a Java Service
When you create a Java Service in WaveMaker, a sample Java file is created. You can access this file from the _Java Services_ Resources. You can edit the file and even delete the service.
[](/learn/assets/Java_services.png)
A Java method can have input and output in the form of an individual or a set of parameters. These parameters can be of the following types:
### Input parameters
Input parameters can be of the type:
- primitive objects,
- POJO classes,
- a collection of POJO classes and primitives,
- page
- Http servlet response/request which can be passed as URL-based header params or in the form of cookies
- a pageable object with values pertaining to the page to be retrieved, the size of each page and sort field name
### Output parameters
Output parameters can be of the type:
- primitive objects,
- POJO classes,
- a collection of POJO classes and primitives,
- page
## Java Services framework
Creating a Java Service generates multiple files, of which the following are of importance:
[](/learn/assets/JS_files.png)
### Spring Controller File
MyJavaController.java, in this example - This file contains the details related to the Java Service. Public methods in the controller class are wrappers of the underlying Java Service code, acting as an endpoint which can be invoked as REST API. The methods can be hidden or exposed to the client using the directives: @HideFromClient and @ExposeToClient, respectively. Request JSON object from the invoking client is automatically converted to POJOs in the method parameter for POST/PUT/DELETE. Similarly, the response POJOs get converted to JSON object.
### API JSON File
`MyJavaService_API.json`, in this example - contains the file structure and details as used by the API Documentation. This is available only during the design time.
### Spring Bean Configuration File
`MyJavaService.spring.xml`, in this example - Bean classes for service/method lookup
---
title: Examples & Tutorials
permalink: /examples
category: Examples & Tutorials
redirect_from:
- /tutorials/
---
Below you can find examples and tutorials to help you get started with Cube.js.
<!-- prettier-ignore-start -->
[[info | ]]
| If you have any examples or tutorials that you'd like to contribute, we encourage you to create a new topic
| in the [Cube.js community forum](https://forum.cube.dev/).
<!-- prettier-ignore-end -->
## Examples
| Demo | Code | Description |
| :---------------------------------------------- | :---------------------------------------------: | :---------------------------------------------------------------------------------------- |
| [Web Analytics][link-web-analytics] | [web-analytics][code-web-analytics] | Web Analytics with AWS Athena, Snowplow, Cube.js backed by Cube Store |
| [Real-Time Dashboard][link-real-time-dashboard] | [real-time-dashboard][code-real-time-dashboard] | Real-Time Dashboard Demo using WebSockets transport |
| [React Dashboard][link-react-dashboard] | [react-dashboard][code-react-dashboard] | Dynamic dashboard with React, GraphQL, and Cube.js |
| [D3 Dashboard][link-d3-dashboard] | [d3-dashboard][code-d3-dashboard] | Dashboard with Cube.js, D3, and Material UI |
| [Stripe Dashboard][link-stripe-dashboard] | [stripe-dashboard][code-stripe-dashboard] | Stripe Demo Dashboard built with Cube.js and Recharts |
| [Event Analytics][link-event-analytics] | [event-analytics][code-event-analytics] | Mixpanel like Event Analytics App built with Cube.js and Snowplow |
| [External Rollups][link-external-rollups] | [external-rollups][code-external-rollups] | Compare performance of direct BigQuery querying vs MySQL cached version for the same data |
| Simple Dynamic Schema Creation | [async-module-simple][code-simple-asyncmodule] | A simple example of using `asyncModule` to generate schemas |
| Auth0 | [auth0][code-auth0] | Cube.js deployment configured with Auth0 JWK/JWT integration |
| Cognito | [cognito][code-cognito] | Cube.js deployment configured with AWS Cognito JWK/JWT integration |
[link-real-time-dashboard]: https://real-time-dashboard-demo.cube.dev/
[code-real-time-dashboard]:
https://github.com/cube-js/cube.js/tree/master/examples/real-time-dashboard
[link-react-dashboard]: https://react-dashboard-demo.cube.dev/
[code-react-dashboard]:
https://github.com/cube-js/cube.js/tree/master/guides/react-dashboard/demo
[link-d3-dashboard]: https://d3-dashboard-demo.cube.dev/
[code-d3-dashboard]:
https://github.com/cube-js/cube.js/tree/master/examples/d3-dashboard
[link-stripe-dashboard]:
http://cubejs-stripe-dashboard-example.s3-website-us-west-2.amazonaws.com/
[code-stripe-dashboard]:
https://github.com/cube-js/cube.js/tree/master/examples/stripe-dashboard
[link-event-analytics]: https://d1ygcqhosay4lt.cloudfront.net/
[code-event-analytics]:
https://github.com/cube-js/cube.js/tree/master/examples/event-analytics
[link-external-rollups]: https://external-rollups-demo.cube.dev/
[code-external-rollups]:
https://github.com/cube-js/cube.js/tree/master/examples/external-rollups
[link-web-analytics]: https://web-analytics-demo.cube.dev/
[code-web-analytics]:
https://github.com/cube-js/cube.js/tree/master/examples/web-analytics
[code-simple-asyncmodule]:
https://github.com/cube-js/cube.js/tree/master/examples/async-module-simple
[code-auth0]: https://github.com/cube-js/cube.js/tree/master/examples/auth0
[code-cognito]: https://github.com/cube-js/cube.js/tree/master/examples/cognito
## Tutorials
### Getting Started Tutorials
These tutorials are a good place to start learning Cube.js.
- [Cube.js, the Open Source Dashboard Framework: Ultimate Guide ](https://cube.dev/blog/cubejs-open-source-dashboard-framework-ultimate-guide) -
It is "**Cube.js 101**" style tutorial, which walks through the building a
simple dashboard with React on the frontend.
- [Building MongoDB Dashboard using Node.js ](https://cube.dev/blog/building-mongodb-dashboard-using-node.js) -
It is a great place to start if you are planning to use Cube.js with MongoDB.
It covers the MongoDB Connector for BI and how to connect it to Cube.js.
- [Node Express Analytics Dashboard with Cube.js ](https://cube.dev/blog/node-express-analytics-dashboard-with-cubejs) -
This tutorials shows how Cube.js could be embedded into an existing Express.js
application.
### Advanced
- [Pre-Aggregations Tutorial](https://cube.dev/blog/high-performance-data-analytics-with-cubejs-pre-aggregations/) -
Pre-Aggregations is one of the most powerful Cube.js features. By using it you
can significantly speed up performance of your dashboards and reports. This
tutorial is a good first step to master pre-aggregations.
- Building an Open Source Mixpanel Alternative - It's a series of tutorials on
building a production ready application with Cube.js.
- [Part 1: Collecting and Displaying Events](https://cube.dev/blog/building-an-open-source-mixpanel-alternative-1)
- [Part 2: Conversion Funnels ](https://cube.dev/blog/building-open-source-mixpanel-alternative-2/)
- [React Query Builder with Cube.js ](https://cube.dev/blog/react-query-builder-with-cubejs) -
It shows you how to build a dynamic query builder with Cube.js React Component.
| 64.847826 | 193 | 0.645994 | eng_Latn | 0.599336 |
48e88c3b0eea1deb29144df4b83b6e6a203e132b | 1,418 | md | Markdown | docs/manual-CN/2.query-language/2.functions-and-operators/order-by-function.md | nianiaJR/nebula-docs | d8a1763d0d0970548ea2a1ea2dbf991248892ae4 | [
"Apache-2.0"
] | 1 | 2020-03-06T02:30:55.000Z | 2020-03-06T02:30:55.000Z | docs/manual-CN/2.query-language/2.functions-and-operators/order-by-function.md | nianiaJR/nebula-docs | d8a1763d0d0970548ea2a1ea2dbf991248892ae4 | [
"Apache-2.0"
] | null | null | null | docs/manual-CN/2.query-language/2.functions-and-operators/order-by-function.md | nianiaJR/nebula-docs | d8a1763d0d0970548ea2a1ea2dbf991248892ae4 | [
"Apache-2.0"
] | null | null | null | # Order By 函数
类似于 SQL,`ORDER BY` 可以进行升序 (`ASC`) 或降序 (`DESC`) 排序并返回结果,并且它只能在 `PIPE` 语句 (`|`) 中使用。
```ngql
ORDER BY <expression> [ASC | DESC] [, <expression> [ASC | DESC] ...]
```
如果没有指明 ASC 或 DESC,`ORDER BY` 将默认进行升序排序。
## 示例
```ngql
nebula> FETCH PROP ON player 100,101,102,103 YIELD player.age AS age, player.name AS name | ORDER BY age, name DESC;
-- 取 4 个顶点并将他们以 age 从小到大的顺序排列,如 age 相同,则 name 按降序排列。
-- 返回如下结果:
======================================
| VertexID | age | name |
======================================
| 103 | 32 | Rudy Gay |
--------------------------------------
| 102 | 33 | LaMarcus Aldridge |
--------------------------------------
| 101 | 36 | Tony Parker |
--------------------------------------
| 100 | 42 | Tim Duncan |
--------------------------------------
```
(使用方法参见 [FETCH](../4.statement-syntax/2.data-query-and-manipulation-statements/fetch-syntax.md) 文档)
```ngql
nebula> GO FROM 100 OVER follow YIELD $$.player.age AS age, $$.player.name AS name | ORDER BY age DESC, name ASC;
-- 从顶点 100 出发查找其关注的球员,返回球员的 age 和 name,age 按降序排列,如 age 相同,则 name 按升序排列。
-- 返回如下结果:
===========================
| age | name |
===========================
| 36 | Tony Parker |
---------------------------
| 33 | LaMarcus Aldridge |
---------------------------
| 25 | Kyle Anderson |
---------------------------
```
| 29.541667 | 118 | 0.438646 | yue_Hant | 0.369678 |
48eaa1e22292d355d966d8bf8e007d915cc8c025 | 132 | md | Markdown | README.md | Kooki-eByte/Teaching-Python | 1259e53952fde87b5e2684b2460d8394e62bae66 | [
"MIT"
] | null | null | null | README.md | Kooki-eByte/Teaching-Python | 1259e53952fde87b5e2684b2460d8394e62bae66 | [
"MIT"
] | null | null | null | README.md | Kooki-eByte/Teaching-Python | 1259e53952fde87b5e2684b2460d8394e62bae66 | [
"MIT"
] | null | null | null | # Teaching-Python
Made to teach classmates for the basics and some intermediate python skills. Shows small examples in each folder.
| 44 | 113 | 0.818182 | eng_Latn | 0.999881 |
48eb7bff78ce7bd140ff3b30510e68796991f52e | 1,233 | md | Markdown | docs/mdx/calculationcurrentpass-mdx.md | antoniosql/sql-docs.es-es | 0340bd0278b0cf5de794836cd29d53b46452d189 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/mdx/calculationcurrentpass-mdx.md | antoniosql/sql-docs.es-es | 0340bd0278b0cf5de794836cd29d53b46452d189 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/mdx/calculationcurrentpass-mdx.md | antoniosql/sql-docs.es-es | 0340bd0278b0cf5de794836cd29d53b46452d189 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: CalculationCurrentPass (MDX) | Documentos de Microsoft
ms.date: 06/04/2018
ms.prod: sql
ms.technology: analysis-services
ms.custom: mdx
ms.topic: reference
ms.author: owend
ms.reviewer: owend
author: minewiskan
manager: kfile
ms.openlocfilehash: 2ef0a07cde73b74ee459e7391f8f99c25e4dc11b
ms.sourcegitcommit: 97bef3f248abce57422f15530c1685f91392b494
ms.translationtype: MT
ms.contentlocale: es-ES
ms.lasthandoff: 06/04/2018
ms.locfileid: "34739355"
---
# <a name="calculationcurrentpass-mdx"></a>CalculationCurrentPass (MDX)
Devuelve el paso de cálculo actual de un cubo para el contexto de consulta especificado.
## <a name="syntax"></a>Sintaxis
```
CalculationCurrentPass()
```
## <a name="remarks"></a>Notas
El **CalculationCurrentPass** función devuelve el índice de base cero del pase de cálculo para el contexto de consulta actual. Con resolución automática de recursividad, esta función tiene muy poco uso práctico.
## <a name="see-also"></a>Vea también
[CalculationPassValue (MDX)](../mdx/calculationpassvalue-mdx.md)
[IIf (MDX)](../mdx/iif-mdx.md)
[Referencia de funciones MDX (MDX)](../mdx/mdx-function-reference-mdx.md)
| 30.825 | 214 | 0.728305 | spa_Latn | 0.435639 |
48ed5296edfa64b9e5b5cf0bb2193a502b622cee | 2,934 | md | Markdown | docs/framework/data/adonet/ole-db-data-type-mappings.md | zabereznikova/docs.de-de | 5f18370cd709e5f6208aaf5cf371f161df422563 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/data/adonet/ole-db-data-type-mappings.md | zabereznikova/docs.de-de | 5f18370cd709e5f6208aaf5cf371f161df422563 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/data/adonet/ole-db-data-type-mappings.md | zabereznikova/docs.de-de | 5f18370cd709e5f6208aaf5cf371f161df422563 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: OLE DB-Datentypzuordnungen
ms.date: 03/30/2017
ms.assetid: 04bcb259-59d3-4fd7-894d-4f0dd0c68069
ms.openlocfilehash: 7f3b498e39feac4a6fe98e739793d20e0268b8f4
ms.sourcegitcommit: 5b475c1855b32cf78d2d1bbb4295e4c236f39464
ms.translationtype: MT
ms.contentlocale: de-DE
ms.lasthandoff: 09/24/2020
ms.locfileid: "91150699"
---
# <a name="ole-db-data-type-mappings"></a>OLE DB-Datentypzuordnungen
In der folgenden Tabelle wird der .NET Framework Typ abgeleitet für Datentypen aus der .NET Framework Datenanbieter für ADO und OLE DB ( <xref:System.Data.OleDb> ) angezeigt. Die typisierten Accessormethoden für <xref:System.Data.OleDb.OleDbDataReader> werden ebenfalls aufgelistet.
|ADO-Typ|OLE DB-Typ|.NET Framework-Typ|Typisierter .NET Framework-Accessor|
|--------------|-----------------|----------------------------------------------------------------------|--------------------------------------------------------------------------------|
|adBigInt|DBTYPE_I8|Int64|GetInt64()|
|adBinary|DBTYPE_BYTES|Byte[]|GetBytes()|
|adBoolean|DBTYPE_BOOL|Boolean|GetBoolean()|
|adBSTR|DBTYPE_BSTR|String|GetString()|
|adChapter|DBTYPE_HCHAPTER|Unterstützt durch den `DataReader`. Weitere Informationen finden [Sie unter Abrufen von Daten mit einem DataReader](retrieving-data-using-a-datareader.md).|GetValue()|
|adChar|DBTYPE_STR|String|GetString()|
|adCurrency|DBTYPE_CY|Decimal|GetDecimal()|
|adDate|DBTYPE_DATE|Datetime|GetDateTime()|
|adDBDate|DBTYPE_DBDATE|Datetime|GetDateTime()|
|adDBTime|DBTYPE_DBTIME|Datetime|GetDateTime()|
|adDBTimeStamp|DBTYPE_DBTIMESTAMP|Datetime|GetDateTime()|
|adDecimal|DBTYPE_DECIMAL|Decimal|GetDecimal()|
|adDouble|DBTYPE_R8|Double|GetDouble()|
|adError|DBTYPE_ERROR|ExternalException|GetValue()|
|adFileTime|DBTYPE_FILETIME|Datetime|GetDateTime()|
|adGUID|DBTYPE_GUID|Guid|GetGuid()|
|adIDispatch|DBTYPE_IDISPATCH *|Object|GetValue()|
|adInteger|DBTYPE_I4|Int32|GetInt32()|
|adIUnknown|DBTYPE_IUNKNOWN *|Object|GetValue()|
|adNumeric|DBTYPE_NUMERIC|Decimal|GetDecimal()|
|adPropVariant|DBTYPE_PROPVARIANT|Object|GetValue()|
|adSingle|DBTYPE_R4|Single|GetFloat()|
|adSmallInt|DBTYPE_I2|Int16|GetInt16()|
|adTinyInt|DBTYPE_I1|Byte|GetByte()|
|adUnsignedBigInt|DBTYPE_UI8|UInt64|GetValue()|
|adUnsignedInt|DBTYPE_UI4|UInt32|GetValue()|
|adUnsignedSmallInt|DBTYPE_UI2|UInt16|GetValue()|
|adUnsignedTinyInt|DBTYPE_UI1|Byte|GetByte()|
|adVariant|DBTYPE_VARIANT|Object|GetValue()|
|adWChar|DBTYPE_WSTR|String|GetString()|
|adUserDefined|DBTYPE_UDT|Nicht unterstützt||
|adVarNumeric|DBTYPE_VARNUMERIC|Nicht unterstützt||
\* Bei den OLE DB Typen `DBTYPE_IUNKNOWN` und `DBTYPE_IDISPATCH` ist der Objekt Verweis eine gemarshallte Darstellung des Zeigers.
## <a name="see-also"></a>Weitere Informationen
- [Abrufen und Ändern von Daten in ADO.NET](retrieving-and-modifying-data.md)
- [Übersicht über ADO.NET](ado-net-overview.md)
| 51.473684 | 284 | 0.73756 | yue_Hant | 0.726067 |
48edb2145595c6ff9b31d92dd532069edfce102a | 417 | md | Markdown | jessie/gitsrv/config/best-practice.md | bloodmud/sysup-old | 73f0487afb0449014ddfea26c36e7087640f3354 | [
"MIT"
] | null | null | null | jessie/gitsrv/config/best-practice.md | bloodmud/sysup-old | 73f0487afb0449014ddfea26c36e7087640f3354 | [
"MIT"
] | null | null | null | jessie/gitsrv/config/best-practice.md | bloodmud/sysup-old | 73f0487afb0449014ddfea26c36e7087640f3354 | [
"MIT"
] | null | null | null | ### Gitolite administration
* Prepare
1. Use a separate account
2. Move gitolite (private key file) to ~/.ssh/id_rsa
git clone git@devsrv:gitolite-admin
* Add repo and setup access
1. Edit conf/gitolite.conf
> add repo
2. Copy user public keys to keydir
> name.pub
3. Add public keys to repo and commit
> git add conf
> git add keydir
> git commit -m "Init repo and user access"
> git push
| 19.857143 | 53 | 0.693046 | eng_Latn | 0.682452 |
48eea538d3aa1ab49a8eafa6bf9971b7ac7dfe80 | 2,069 | md | Markdown | README.md | ZwX1616/mxnet-SSD | fd89424d711b1ec4f02c35987212d1038e69e905 | [
"MIT"
] | null | null | null | README.md | ZwX1616/mxnet-SSD | fd89424d711b1ec4f02c35987212d1038e69e905 | [
"MIT"
] | null | null | null | README.md | ZwX1616/mxnet-SSD | fd89424d711b1ec4f02c35987212d1038e69e905 | [
"MIT"
] | null | null | null | # SSD: Single Shot MultiBox Object Detector
SSD is an unified framework for object detection with a single network.
You can use the code to train/evaluate/test for object detection task.
### project overview
```
deploy
├── c++
│ ├── main.cc
│ ├── readme.md
│ ├── CMakeLists.txt
│ └── cmake
├── python
│ ├── main.py
│ └── readme.md
├── models
│ └── <model_list>
└── demo_images
└── <demo_images>
```
### init
```
data dir
/mnt/data/:dataset_name/
/mnt/data/:dataset_name/val
/mnt/data/:dataset_name/train
/mnt/data/:dataset_name/test
job dir
# /mnt/jobs/:job_id/config
# /mnt/jobs/:job_id/ckpt/
train logs and metrics
/mnt/training_logs/:job_id/
training model management
/mnt/models
### docker setup
docker run -it --rm -v /home/fulingzhi/workspace/mxnet-ssd-mirror:/app mxnet-ssd-bike:v0.2.1 bash
docker run -it --rm -v /home/fulingzhi/workspace/mxnet-ssd-mirror:/app mxnet-cu90-ssd:v0.1 bash
### docker env
docker run --network host -it --rm -v /home/fulingzhi/workspace/mxnet-ssd-pedestrian:/app -v /mnt/gf_mnt/datasets:/mnt/datasets -v /mnt/gf_mnt/jobs:/mnt/jobs mxnet-ssd:v0.1 bash
python3.6 deploy.py --network densenet-tiny --prefix /mnt/jobs/job2/ssd-1-1 --epoch 150 --num-class 1 --topk 100 --threshold 0.30
python3.6 deploy.py --network densenet-tiny --prefix /mnt/jobs/job2/ssd-1-1 --epoch 150 --num-class 1 --topk 100 --threshold 0.30
python3.6 deploy.py --network densenet-tiny --prefix /mnt/jobs/job2/ssd-1-1 --epoch 128 --num-class 4 --topk 400 --threshold 0.30
python3.6 deploy.py --network densenet-tiny --prefix ./upload/model/V1/deploy_ssd-densenet-tiny-ebike-detection --epoch 128 --num-class 2 --topk 400 --threshold 0.30
python3.6 demo.py --network densenet-tiny --cpu --class-names person --data-shape 300 --prefix /mnt/jobs/job2/ssd-1-1 --epoch 150 --class-names person --images ./data/demo/street.jpg
python3.6 demo.py --network densenet-tiny --cpu --class-names person --data-shape 300 --prefix /mnt/jobs/job2/deploy_ssd-1-1 --epoch 150 --deploy --class-names person --images ./data/demo/street.jpg
```
| 31.348485 | 198 | 0.705172 | eng_Latn | 0.297557 |
48ef58d8ca63d2fe56c6afe98b13cc8abe33d458 | 3,151 | md | Markdown | services/ObjectStorage/nl/it/os_configuring.md | Ranjanasingh9/docs | 45286f485cd18ff628e697ddc5091f396bfdc46b | [
"Apache-2.0"
] | null | null | null | services/ObjectStorage/nl/it/os_configuring.md | Ranjanasingh9/docs | 45286f485cd18ff628e697ddc5091f396bfdc46b | [
"Apache-2.0"
] | null | null | null | services/ObjectStorage/nl/it/os_configuring.md | Ranjanasingh9/docs | 45286f485cd18ff628e697ddc5091f396bfdc46b | [
"Apache-2.0"
] | 1 | 2020-09-24T04:33:34.000Z | 2020-09-24T04:33:34.000Z | ---
copyright:
years: 2014, 2017
lastupdated: "2017-02-10"
---
{:new_window: target="_blank"}
{:shortdesc: .shortdesc}
{:codeblock: .codeblock}
{:screen: .screen}
{:pre: .pre}
# Configurazione della CLI per utilizzare i comandi Swift e Cloud Foundry
Per interagire con il servizio {{site.data.keyword.objectstorageshort}} utilizzando i comandi Swift o Cloud Foundry, devi installare il software prerequisito.
{: shortdesc}
## Installazione del client Swift {: #install-swift-client}
Se non lo hai già fatto, installa il seguente software prerequisito.
* Python 2.7 o successive
* Package setuptools
* Package pip
* CLI Cloud Foundry
Per installare il client Python Swift, esegui il seguente comando:
```
pip install python-swiftclient
```
{: pre}
Per installare il client Python Keystone, esegui il seguente comando:
```
pip install python-keystoneclient
```
{: pre}
Per ulteriori informazioni sui prequisiti, consulta <a href="http://docs.openstack.org/user-guide/common/cli_install_openstack_command_line_clients.html#install-the-prerequisite-software" target="_blank">la documentazione OpenStack. <img src="../../icons/launch-glyph.svg" alt="icona link esterno"></a>
## Impostazione del client {: #setup-swift-client}
Per configurare il client Swift, devi `esportare` le tue informazioni di autenticazione. Puoi [generare le credenziali](/docs/services/ObjectStorage/os_credentials.html) tramite la CLI utilizzando i comandi Cloud Foundry o cURL o tramite la IU {{site.data.keyword.Bluemix_notm}}. Il client prende le informazioni di autenticazione dalle variabili di ambiente nella seguente tabella.
<table>
<caption> Tabella 1. Descrizioni e variabili di ambiente </caption>
<tr>
<th> Variabile di ambiente </th>
<th> Descrizione </th>
</tr>
<tr>
<td> <code>OS_USER_ID</code> </td>
<td> Il tuo ID utente {{site.data.keyword.objectstorageshort}}. </td>
</tr>
<tr>
<td> <code>OS_PASSWORD</code> </td>
<td> La password per la tua istanza {{site.data.keyword.objectstorageshort}}. </td>
</tr>
<tr>
<td> <code>OS_PROJECT_ID</code> </td>
<td> Il tuo ID progetto. </td>
</tr>
<tr>
<td> <code>OS_REGION_NAME</code> </td>
<td> La regione in cui i tuoi oggetti vengono archiviati. {{site.data.keyword.objectstorageshort}} è disponibile nelle regioni di Dallas e Londra. </td>
</tr>
<tr>
<td> <code>OS_AUTH_URL</code> </td>
<td> L'URL endpoint. Assicurati che <code>/v3</code> sia aggiunto alla fine dell'URL. </td>
</tr>
</table>
Per esportare le informazioni di autenticazione, immetti le tue credenziali ed esegui il seguente comando:
```
export OS_USER_ID=
export OS_PASSWORD=
export OS_PROJECT_ID=
export OS_AUTH_URL=
export OS_REGION_NAME=
export OS_IDENTITY_API_VERSION=
export OS_AUTH_VERSION=
swift auth
```
{: codeblock}
Esempio:
```
export OS_USER_ID=24a20b8e4e724f5fa9e7bfdc79ca7e85
export OS_PASSWORD=*******
export OS_PROJECT_ID=383ec90b22ff4ba4a78636f4e989d5b1
export OS_AUTH_URL=https://identity.open.softlayer.com/v3
export OS_REGION_NAME=dallas
export OS_IDENTITY_API_VERSION=3
export OS_AUTH_VERSION=3
swift auth
```
{: screen}
| 29.175926 | 382 | 0.737226 | ita_Latn | 0.758259 |
48ef873f7f088e1fdabd2f6253248aa0ba972c66 | 6,413 | md | Markdown | source/_posts/operations/service.md | The-only/swan-docs | 9f56a8874fcf78f2c9cb8b76aff41cbfc1ceb413 | [
"Apache-2.0"
] | 1 | 2019-03-08T02:50:55.000Z | 2019-03-08T02:50:55.000Z | source/_posts/operations/service.md | shuffleld/swan-docs | aa08ffd26e8485d709861bf07e0a2eebadaac29c | [
"Apache-2.0"
] | null | null | null | source/_posts/operations/service.md | shuffleld/swan-docs | aa08ffd26e8485d709861bf07e0a2eebadaac29c | [
"Apache-2.0"
] | null | null | null | ---
title: 智能小程序平台服务协议
header: operations
nav: book
sidebar: service
---
欢迎您使用智能小程序平台!
请您仔细阅读并透彻理解本协议内容,特别是免除或者限制责任的条款,我们将以加粗形式提醒您注意。
**本协议签订地、争议解决管辖方式,均与《“百度”移动应用用户服务协议》一致,即签订地为中华人民共和国北京市海淀区,由本协议签订地有管辖权的人民法院管辖。**
### 1、 协议的范围
1.1 本协议是您与百度公司之间关于您使用智能小程序平台服务所订立的协议。“开发者”是指注册、开发、运营、维护智能小程序的个人或组织,在本协议中亦称为“您”。“用户”是指接受智能小程序开发者提供服务的用户。
1.2 本平台服务是指百度公司根据本协议向开发者提供的包括远程接口调用、开发框架、AI能力等一系列互联网技术的服务。您在注册后可以成为智能小程序的开发者,并通过智能小程序平台创建、开发并发布智能小程序应用,并为用户提供各类服务。
### 2、 智能小程序的开通与审核
2.1 您在完成准入认证并填写相关信息后,会获得百度公司提供的密钥和开发工具。之后您可以对所注册的智能小程序进行设计开发、内部测试、编辑维护等初始化开发管理。但在您通过高级认证前,且您的智能小程序发布审核通过前,您的智能小程序将无法进行发布,即不会向其他用户进行展示,其他用户也无法搜索、添加该智能小程序。您应妥善保管您的账户名、密码及密钥,不应随意泄露给任何第三方。
2.2 为确保智能小程序平台、用户等各方的安全、稳定及良好的用户体验,百度公司将对需要发布的智能小程序进行发布审核。
“发布审核”是指由开发者发起,将其完成初始化开发的智能小程序提交至百度公司,由百度公司自行或委托第三方对该智能小程序的可操作性、稳定性、用户体验等,进行审查的过程。发布审核结果包括审核通过与审核不通过两种。审核不通过的,该智能小程序将无法发布。
2.3 **您清楚并同意,智能小程序发布审核服务是百度公司基于开发者提交包括但不限于注册信息、服务类目、使用说明等资料,在百度公司的合法权限和能力范围内所综合作出合理、谨慎的形式评估和判断服务。<u>但百度公司无法实质审查用户的实际经营、运营以及推广等行为,并不为此提供任何担保;也不应因其审核行为而被认定为是智能小程序的共同提供方,且不因该等审核而对开发者的智能小程序承担任何责任。</u>您应仔细阅读有关智能小程序发布的各项协议、规范和规则,由于客观存在不断变化的法律法规、监管要求、产品规范和用户体验需求等各类因素,您的智能小程序有可能被拒绝发布。**
**“拒绝”是指提交发布审核的智能小程序被审核不通过,或者已经审核通过的智能小程序,被下线或被拒绝提供本平台服务。**
2.4 发布审核通过后,用户可以合理的方式、频率向百度公司提交版本优化、更新等发布审核申请。
2.5 您理解并同意,发布审核将对百度公司产生人力和经济负担,若您在审核服务期内,恶意或非恶意(如技术水平、用户需求等)原因而导致过于频繁提交发布审核申请,百度公司有权对您采取要求支付发布审核服务费、暂停或终止向您提供发布审核服务等措施,由此产生的责任由您独自承担。
2.6 **开发者应当如实填写和提交帐号注册与认证资料,完成信息登记,并对资料的真实性、合法性、准确性和有效性承担责任。百度公司依据您填写和提交的帐号资料,在法律允许的范围内向其他用户展示您的注册信息。<u>如开发者提供的服务或内容需要取得相关法律法规规定的许可或进行备案的,您应当在帐号注册与认证时进行明确说明并提交相应的合法有效的许可或备案证明。</u>否则,百度公司有权拒绝或终止提供本平台服务,并依照本协议对违规帐号进行处罚。因此给百度公司或第三方造成损害的,您应当依法予以赔偿。**
上述资料如有任何变更,开发者应立即向百度公司书面提交变更后资料。变更资料未经核实前,百度公司可完全依赖变更前的资料行事,由此产生的一切风险由开发者自行承担。**如果百度公司自行发现,或第三方通知发现开发者提供的上述信息不真实或不准确的,百度公司有权终止向开发者提供平台服务,由此产生的一切法律责任由开发者自行承担。**
2.7 **您清楚理解并同意,您发布的智能小程序都应当在百度公司所公布的类目范围内。为此,您应当真实、全面、不遗漏地根据您的智能小程序所提供的服务、从事的活动及所在的行业准确选择类目,使百度公司更好地识别您所提交的智能小程序可能涉及的法律法规及监管要求,进行相应的审核。若百度公司发现您所提交的智能小程序没有对应的类目,或者所提交的资质材料不符合相关类目的资质材料要求,将不能通过发布审核。同时,基于特定行业、特定领域的法律法规或监管要求,百度公司也可能根据不断变化的现实情况要求您补充提供相关资质材料,对此您应配合提供,否则百度公司有权拒绝向您提供平台服务。**
### 3、 智能小程序的发布
3.1 审核通过后,您可以在智能小程序开放平台上发布您的智能小程序应用,并进行运营/推广、客户服务、维护等各项工作。
3.2 <u>**百度公司仅提供中立的平台服务,百度公司及智能小程序平台不参与您的智能小程序的研发、运营等任何活动。**</u>您应自行负责您智能小程序的创作、设计、信息编辑/展示、加工、修改、测试、运营/推广、客户服务及维护等,并自行承担相应的费用。
3.3 您理解并同意,百度公司一直致力于为用户提供文明健康、规范有序的网络环境,您不得利用智能小程序帐号或智能小程序平台服务制作、复制、发布、传播违反法律法规的规定以及侵犯用户或第三方合法权益的内容,您不得为其他任何非法目的而使用智能小程序的平台服务;您不得进行任何违法或不正当的活动,包括但不限于:
(1) 反对宪法所确定的基本原则的;
(2) 危害国家安全,泄露国家秘密,颠覆国家政权,破坏国家统一的;
(3) 损害国家荣誉和利益的;
(4) 煽动民族仇恨、民族歧视、破坏民族团结的;
(5) 破坏国家宗教政策,宣扬邪教和封建迷信的;
(6) 散布谣言,扰乱社会秩序,破坏社会稳定的;
(7) 散布淫秽、色情、赌博、暴力、凶杀、恐怖或者教唆犯罪的;
(8) 侮辱或者诽谤他人,侵害他人合法权利的;
(9) 违反中国法律、法规、规章、条例以及任何具有法律效力之规范的。
3.4 **如存在下列情况,百度公司有权进行警告、限制功能、下线您的某个或全部智能小程序、拒绝再向您提供任何服务等,并追究您的法律责任,由此发生的用户和开发者之间、或其他任何纠纷应由开发者自行协商解决,百度公司不承担任何责任。百度公司有权通过运营规范、规则要求、通知等形式进一步补充或说明本条所规定的内容**:
(1) 开发者违反国家相关法律法规定的规定,包括但不限于从事恐怖活动、危害网络安全、洗钱、颠覆政府、煽动民族仇恨等;
(2) 开发者违反本协议约定,或平台运营规范,或百度公司各项规则要求等;
(3) 百度公司收到司法机关或政府机关或监管机构的相关要求;
(4) 百度公司收到其他第三方通知,投诉某个开发者或其智能小程序存在违反法律规定、侵犯他人合法权益等情形;
(5) 百度公司认为开发者的行为或其智能小程序可能影响智能小程序平台的正常运营的,如诱导分享、欺诈舞弊等;
(6) 百度公司认为开发者存在违反法律规定或不当的行为,包括但不限于未经许可获取、存储、披露用户信息、可能危及网络或用户安全等情形;
(7) 百度公司认为存在其它可能损害百度公司、用户或其他第三方的合法权益的行为。
3.5 开发者仅得按本协议约定自行使用智能小程序平台上的权限和接口等,不得为任何第三方申请接口或本协议项下的其他服务。
### 4、 用户个人信息保护
4.1 您在申请本平台服务过程中,需要填写一些必要的信息,请保持这些信息的真实、准确、合法、有效并注意及时更新,以便百度公司向您提供及时有效的帮助,或更好地为您提供服务。根据相关法律法规和政策,请您填写真实的身份信息。若您填写的信息不完整或不准确,则可能无法使用本平台服务或在使用过程中受到限制。您理解并同意,为了向您提供服务,需要在法律允许的范围内向其他用户展示您的注册信息。
4.2 <u>**您在运营智能小程序的过程中,应严格遵守所适用的关于保护用户信息、个人信息和隐私权及保障网络安全运行等方面的法律法规。您的产品、服务具有收集个人信息(包括个人信息及个人敏感信息,其中“个人敏感信息”例如:反映个人财产状况、个人健康生理、个人身份、个人生物识别、网络身份标识、个人电话号码、通信记录和内容、位置状况的数据等以及未成年人个人信息)功能的,应当遵循合法、正当、必要的原则,向用户明示收集、使用信息的目的、方式和范围,并经用户明示同意。**</u>
4.3 <u>**为帮助您改善产品与服务,以更好地满足用户的需求,智能小程序提供了数据分析功能。如您选择使用数据分析功能,您需要我们处理、加工来自于用户的个人信息,包括但不限于用户在智能小程序内所访问的页面路径等。因此,您需确保:您在使用智能小程序数据分析功能之前,已经依照法律法规的要求征得了您的用户合法、充分必要的授权、同意和许可,允许我们履行数据分析等服务的目的处理数据;您已经遵守并将持续遵守适用的法律、法规和监管要求,包括但不限于制定和公布有关个人信息保护和隐私保护的相关政策等。**</u>
4.4 <u>**您保证对从智能小程序平台获取的、经过用户授权的个人信息进行加密存储并在用户授权范围内使用个人信息,不得在未经用户明示同意或超出用户同意范围的情况下使用、共享、转让和披露用户个人信息;您确保能够建立足够的数据安全能力并采取必要的管理和技术手段,按照国家标准实行个人信息安全保护措施,防止个人信息的泄露、损毁和丢失。同时,未经百度公司允许或在超出百度公司授权范围的情况下,您不得收集、处理、存储、抓取或以任何方式获取百度公司控制的用户个人信息。**</u>
4.5 <u>**如您违反上述用户个人信息保护条款或百度公司判断您收集、处理、使用、共享个人信息的行为可能损害个人信息主体权益时,您应按照百度公司的要求立即删除相关的个人信息数据,并不再收集、处理、存储、使用上述个人信息数据。**</u>
### 5、 知识产权声明
5.1 百度公司在本平台服务中提供的内容(包括但不限于网页、文字、图片、音频、视频、图表、代码等)的知识产权归百度公司所有,开发者在使用本平台服务中所产生的内容的知识产权归开发者或相关权利人所有。
5.2 除另有特别声明外,百度公司提供本平台服务时所依托软件的著作权、专利权及其他知识产权均归百度公司所有。
### 6、 法律责任
6.1 您应同时遵守《[百度用户协议](https://passport.baidu.com/static/passpc-account/html/protocal.html)》、《[百度公司移动应用软件服务协议](https://s.bdstatic.com/common/agreement/ios.html)》、《[百度公司移动应用隐私政策](https://s.bdstatic.com/common/agreement/privacy.html)》、《[智能小程序平台运营规范](https://smartprogram.baidu.com/docs/operations/specification/)》、《[Apple Developer Program许可协议](https://developer.apple.com/terms/)》、《[App Store审核指南](https://developer.apple.com/cn/app-store/review/guidelines/)》中的约定,除非该等约定与本协议存在冲突。
6.2 <u>**您清楚知悉并理解,百度公司在本协议项下仅向您提供智能小程序平台服务或相关中立的技术支持服务,并且百度公司有权基于百度公司以及服务平台有序运营、健康发展等因素挑选使用本平台服务的开发者。**</u>
6.3 <u>**您的智能小程序由您以其自身名义开发或享有合法的运营权利,任何由您的智能小程序所产生与任何第三方的纠纷、争议等,均由您独立承担全部责任,且与百度公司无关,百度公司不会、也不可能参与智能小程序的研发、运营等任何活动,百度公司也不会对您的智能小程序进行任何的修改、编辑等。**</u>
6.4 **<u>因您的智能小程序及相关服务产生的任何纠纷、责任等,以及开发者违反相关法律法规或本协议约定引发的任何后果,均由开发者独立承担责任、赔偿损失,与百度公司无关。如侵害到百度公司、百度公司的关联公司或他人权益的,开发者须自行承担全部责任和赔偿一切损失。</u>**
### 7、 其它
7.1 鉴于本平台服务将涉及百度APP的技术开发,可能会不定期邀请部分开发者参与内部测试。若您被邀请且同意参与内部测试,未经百度公司书面同意,请勿向任何第三方以任何形式披露、展示、传递涉及本次内测的相关信息、资料。否则,百度公司可能会取消您本次及其他内测资格等,由此造成百度公司损失的,您也应一并赔偿。
7.2 为了尽可能地为小程序提供分发服务,您有权利选择将自己的小程序分发到一个或多个宿主下,也有权利选择不在任何一个宿主上分发您的小程序。而宿主也有权利选择是否接受您的小程序在其平台上分发。如果您不做出选择,则默认您同意将小程序分发到全部宿主上。
宿主,是指智能小程序开源联盟成员有权运营的平台,包括但不限于APP,手机,手机操作系统,汽车,车载操作系统,智能音箱,智能电视等可运行百度智能小程序的平台。
当您选择在宿主分发时,您需要符合宿主的相关规则,并由宿主负责本协议中约定的有关“发布审核”、发布及其他运营相关的所有事宜。若由此所产生纠纷的,您放弃追究百度任意法律责任的权利。您也确认,有关运营行为导致的违法问题与百度公司无关。
当您选择停止在某个宿主或某几个宿主分发时,需要根据宿主的相关规则提交相应的材料,百度给予积极配合。
7.3 **您同意,因下列原因导致百度公司无法正常提供平台服务的,百度公司不承担任何责任**:
**(1)** **百度公司对系统及设备进行停机维护或升级;**
**(2)** **因台风、地震、洪水、雷电、恐怖袭击等不可抗力原因;**
**(3)** **开发者操作不当或通过非百度公司授权或认可的方式使用平台服务的;**
**(4)** **因病毒、木马、恶意程序攻击、网络拥堵、系统不稳定、系统或设备故障、通讯故障、电力故障、第三方服务瑕疵或政府行为等原因。**
尽管有前款约定,百度公司将采取合理行动积极促使服务恢复正常。
7.4 **您使用本平台服务即视为您已阅读并同意受本协议的约束。百度公司有权在必要时修改本协议。您可以在相关服务页面查阅最新版本的协议。本协议变更后,如果您继续使用智能小程序平台服务,即视为您已接受修改后的协议。如果您不接受修改后的协议,应当停止使用智能小程序平台服务。**
7.5 本协议的成立、生效、履行、解释及纠纷解决,适用中华人民共和国大陆地区法律(不包括冲突法)。
| 45.161972 | 475 | 0.834243 | yue_Hant | 0.723035 |
48ef8a37baa09ee1ec8bc2be9719180ab9a55e7d | 1,579 | md | Markdown | results/crinacle/usound/FitEar TG334 sample 1/README.md | liqi1982/AutoEq | 24d82474d92a53ac70b5f20a8ab4ecf71539598f | [
"MIT"
] | 1 | 2019-05-06T03:16:21.000Z | 2019-05-06T03:16:21.000Z | results/crinacle/usound/FitEar TG334 sample 1/README.md | liqi1982/AutoEq | 24d82474d92a53ac70b5f20a8ab4ecf71539598f | [
"MIT"
] | null | null | null | results/crinacle/usound/FitEar TG334 sample 1/README.md | liqi1982/AutoEq | 24d82474d92a53ac70b5f20a8ab4ecf71539598f | [
"MIT"
] | null | null | null | # FitEar TG334 sample 1
See [usage instructions](https://github.com/jaakkopasanen/AutoEq#usage) for more options and info.
### Parametric EQs
In case of using parametric equalizer, apply preamp of **-6.8dB** and build filters manually
with these parameters. The first 5 filters can be used independently.
When using independent subset of filters, apply preamp of **-6.8dB**.
| Type | Fc | Q | Gain |
|:--------|:--------|:-----|:--------|
| Peaking | 212 Hz | 0.55 | -4.3 dB |
| Peaking | 879 Hz | 3.34 | 1.7 dB |
| Peaking | 2835 Hz | 1.95 | 4.7 dB |
| Peaking | 4785 Hz | 2.39 | 6.0 dB |
| Peaking | 6050 Hz | 3.26 | -2.2 dB |
| Peaking | 30 Hz | 1 | 0.9 dB |
| Peaking | 1491 Hz | 4.15 | -1.3 dB |
| Peaking | 2040 Hz | 6.37 | 1.1 dB |
### Fixed Band EQs
In case of using fixed band (also called graphic) equalizer, apply preamp of **-6.6dB**
(if available) and set gains manually with these parameters.
| Type | Fc | Q | Gain |
|:--------|:---------|:-----|:--------|
| Peaking | 31 Hz | 1.41 | 0.8 dB |
| Peaking | 62 Hz | 1.41 | -0.2 dB |
| Peaking | 125 Hz | 1.41 | -3.1 dB |
| Peaking | 250 Hz | 1.41 | -3.8 dB |
| Peaking | 500 Hz | 1.41 | -1.3 dB |
| Peaking | 1000 Hz | 1.41 | 0.3 dB |
| Peaking | 2000 Hz | 1.41 | 1.0 dB |
| Peaking | 4000 Hz | 1.41 | 6.1 dB |
| Peaking | 8000 Hz | 1.41 | -1.0 dB |
| Peaking | 16000 Hz | 1.41 | -0.0 dB |
### Graphs
 | 41.552632 | 150 | 0.581381 | eng_Latn | 0.703324 |
48f0506ede515a6dcba48706ff399ea8cd6c5137 | 10,936 | md | Markdown | join/study.md | developma/sql-study | 79013cc414bfbcd4039fb8d998a2748f2e6d9d69 | [
"Apache-2.0"
] | null | null | null | join/study.md | developma/sql-study | 79013cc414bfbcd4039fb8d998a2748f2e6d9d69 | [
"Apache-2.0"
] | null | null | null | join/study.md | developma/sql-study | 79013cc414bfbcd4039fb8d998a2748f2e6d9d69 | [
"Apache-2.0"
] | null | null | null | # 結合
テーブルとテーブルをつなげて新しいテーブルを作り出すという基本動作はシンプル。
結合を制するものがSQLを制する。
## 結合の種類
- クロス結合(cross join)
実務で使う機会はほとんど無い。
SQLにおける結合という演算を理解するには、クロス結合を知ることが早道。
- 内部結合
- 外部結合
- 自己結合
- 等値結合/非等値結合
- 自然結合
### クロス結合
以下サンプルを用いる。
```sql
CREATE TABLE Employees
(emp_id CHAR(8),
emp_name VARCHAR(32),
dept_id CHAR(2),
CONSTRAINT pk_emp PRIMARY KEY(emp_id));
CREATE TABLE Departments
(dept_id CHAR(2),
dept_name VARCHAR(32),
CONSTRAINT pk_dep PRIMARY KEY(dept_id));
CREATE INDEX idx_dept_id ON Employees(dept_id);
INSERT INTO Employees VALUES('001', '石田', '10');
INSERT INTO Employees VALUES('002', '小笠原', '11');
INSERT INTO Employees VALUES('003', '夏目', '11');
INSERT INTO Employees VALUES('004', '米田', '12');
INSERT INTO Employees VALUES('005', '釜本', '12');
INSERT INTO Employees VALUES('006', '岩瀬', '12');
INSERT INTO Departments VALUES('10', '総務');
INSERT INTO Departments VALUES('11', '人事');
INSERT INTO Departments VALUES('12', '開発');
INSERT INTO Departments VALUES('13', '営業');
commit;
```
社員およびその所属部署を管理するテーブル。
この2つに対してクロス結合を行う。
```sql
SELECT *
FROM Employees
CROSS JOIN
Departments;
```
このSQLの実行結果は24行。
社員テーブルの行数6と、部署テーブルの行数4のかけ算になる。
数学的には直積、デカルト積と呼ばれる演算で、結合対象となる2つのテーブルのレコードから、
可能なすべての組合せ網羅する演算。
#### クロス結合は実務では使われない理由
- 実際にこういう結果を求めたいケースがない
- 非常にコストがかかる演算
#### うっかりクロス結合
古い結合構文を使ってうっかり結合条件を書き忘れたときにクロス結合になってしまう。
```sql
SELECT *
FROM Employees, Departments;
```
結合条件がないため、DBMSはしかたなくテーブル同士の全行を組み合わせようとする。
実務では、3つ以上のテーブルを結合したり、結合条件を複数書くことがある。そういう時にやってしまいがち。
防ぐためには、標準SQL準拠の結合構文を使うようコーディング規約を定めること。
`INNER JOIN`のような標準SQLの構文では、結合条件がないとDBMSは構文エラーとして実行を拒否するので、こうしたミスを未然に防げる。
とはいえ、クロス結合はほかの全ての演算の母体。
### 内部結合
内部結合(INNER JOIN)は一番よく使う結合の種類。
#### 内部結合の動作
以下サンプル(再掲)を使う。
```sql
CREATE TABLE Employees
(emp_id CHAR(8),
emp_name VARCHAR(32),
dept_id CHAR(2),
CONSTRAINT pk_emp PRIMARY KEY(emp_id));
CREATE TABLE Departments
(dept_id CHAR(2),
dept_name VARCHAR(32),
CONSTRAINT pk_dep PRIMARY KEY(dept_id));
CREATE INDEX idx_dept_id ON Employees(dept_id);
INSERT INTO Employees VALUES('001', '石田', '10');
INSERT INTO Employees VALUES('002', '小笠原', '11');
INSERT INTO Employees VALUES('003', '夏目', '11');
INSERT INTO Employees VALUES('004', '米田', '12');
INSERT INTO Employees VALUES('005', '釜本', '12');
INSERT INTO Employees VALUES('006', '岩瀬', '12');
INSERT INTO Departments VALUES('10', '総務');
INSERT INTO Departments VALUES('11', '人事');
INSERT INTO Departments VALUES('12', '開発');
INSERT INTO Departments VALUES('13', '営業');
commit;
```
これらテーブルを内部結合してみる。
```sql
SELECT
E.emp_id,
E.emp_name,
E.dept_id,
D.dept_name
FROM Employees E INNER JOIN Departments D
ON E.dept_id = D.dept_id;
```
この結果と、クロス結合を見比べてみると、内部結合の結果は、そのすべてがクロス結合の一部、つまり部分集合になっている。
内部結合という語の由来は上記から。内部とは「直積の部分集合」の意味。
#### 内部結合と同値の相関サブクエリ
内部結合は、機能的に相関サブクエリを使って代替可能なことが多くある。
前述のINNER JOINを相関サブクエリに書きかえてみる。
```sql
SELECT
E.emp_id,
E.emp_name,
E.dept_id,
(SELECT
D.dept_name
FROM Departments D
WHERE E.dept_id = D.dept_id) AS dept_name
FROM Employees E;
```
基本的に結合で記述できる限りは結合を選択すると良い。
相関サブクエリをスカラサブクエリ(戻り値が単一の値になるクエリ)と使うと、
結果行数の数だけサブクエリを実行することになるのでコストがかかる。
### 外部結合
外部結合(outer join)は、内部結合の次によく使われる。
「内部」と「外部」は排他的な関係。
「外部」とは「直積の部分集合」にならない、という意味。
ただし、常に部分集合にならない、というわけではない。データの状態によってそういうこともある。
#### 外部結合の動作
外部結合は次の3つがある。
- 左外部結合
- 右外部結合
- 完全外部結合
このうち、左外部結合と右外部結合は実質的には同じ機能。
マスタとなるテーブルを左に書くなら左外部結合、右に書くなら右外部結合、という話。
```sql
-- 左のテーブルがマスタ
SELECT
E.emp_id,
E.emp_name,
E.dept_id,
D.dept_name
FROM Departments D LEFT OUTER JOIN Employees E
ON E.dept_id = D.dept_id;
-- 右のテーブルがマスタ
SELECT
E.emp_id,
E.emp_name,
E.dept_id,
D.dept_name
FROM Employees E RIGHT OUTER JOIN Departments D
ON E.dept_id = D.dept_id;
```
実行結果の最終行を見ると、外部結合の結果には、マスタ側のテーブルだけに存在するキーがあった場合、
そのキーを削除せず、結果に保存するよう動作する。
そのため、キーの値をすべて網羅するレイアウトのレポートを作る場合に多用される。
### 外部結合と内部結合の違い
実行結果のうち、違うのは最終行のみ。
これは、内部結合、クロス結合の結果のどの行とも一致しない。いわば「外部」にはみだしている。
外部結合の結果がクロス結合の結果の部分集合にならないのは、外部結合がマスタ側のテーブルの情報を保存するよう動作し、
その結果NULLを生成するため。
一方、クロス結合や内部結合は、NULLを生成することはない。
### 自己結合
自己結合(self join)は、文字どおり自分自身と結合する演算で、同じテーブルあるいは同じビューを使って結合を行うもの。
クロス結合/内部結合/外部結合とはちょっと毛色が違う。
自己結合とは、生成される結果を基準とした分類ではなく、演算の対象に何を使うかという点が問題になってくる。
そのため、自己結合は、「自己結合 + クロス結合」、「自己結合 + 外部結合」といったふうに他の結合との組み合わせが可能。
#### 自己結合の動作
```sql
CREATE TABLE Digits
(digit INTEGER PRIMARY KEY);
INSERT INTO Digits VALUES(0);
INSERT INTO Digits VALUES(1);
INSERT INTO Digits VALUES(2);
INSERT INTO Digits VALUES(3);
INSERT INTO Digits VALUES(4);
INSERT INTO Digits VALUES(5);
INSERT INTO Digits VALUES(6);
INSERT INTO Digits VALUES(7);
INSERT INTO Digits VALUES(8);
INSERT INTO Digits VALUES(9);
commit;
```
自己結合 + クロス結合
```sql
SELECT
D1.digit + (D2.digit * 10) AS seq
FROM Digits D1 CROSS JOIN Digits D2 ORDER BY seq;
```
結合対象は、Digits(D1)とDigits(D2)なので、10 * 10 = 100行になる。
#### 自己結合の考え方
自己結合を行う場合、一般的に同じテーブルに別名をつけて、それらをあたかも別テーブルであるかのように扱う。
### 結合のアルゴリズムとパフォーマンス
SQLで結合演算を行う場合、内部で選択されるアルゴリズムを基準にして考えてみる。
オプティマイザが選択可能な結合アルゴリズムは大きく次の3つがある。
1. Nested Loops
2. Hash
3. Sort Merge
最も頻繁に見るのがNested Loop、次に重要なのはHash。
#### Nested Loops
Nested Loopsは名前のとおり入れ子のループを使うアルゴリズム。
結合は常に1度につき2つのテーブルしか結合しないため、実質的には二重ループと同じ意味。
Nested Loopsには、次のような特徴がある。
- Table_A, Table_Bの結合対象の行数をR(A), R(B)とすると、アクセスされる行数はR(A) * R(B)となる。
Nested Loopsの実行時間はこの行数に比例する。
- 1つのステップで処理する行数が少ないため、HashやSort Mergeと比べてあまりメモリを消費しない
- どのDBMSでも必ずサポートしている
Nested Loopsは一見単純に見えるが、結合のパフォーマンスの鍵を握っていると言っても過言ではない重要なアルゴリズム。
特にA, Bどちらのテーブルを駆動表にするかが大きな要因となる。
どちらが駆動表でも、R(A) * R(B), R(B) * R(A)で変わらないように思えるが、実際には駆動表が小さいほどNested Loopsの性能は良くなる。
##### 駆動表の重要性
Nested Loopsの性能を改善するキーワードとして、駆動表に小さなテーブルを選ぶ、というのがあるが、実際にはある前提条件が無いと意味がない。
`内部表の結合キーの列にインデックスが存在すること`
内部表の結合キーの列にインデックスが存在する場合、そのインデックスをたどることによって、DBMSは駆動表の1行に対して内部表を馬鹿正直にループする必要がなくなる。
理想的なケースでは、駆動表のレコード1行に対して内部表レコードが1行に対応していれば、インデックスを辿ることで1行を特定できるため、
完全に内部表のループを省略出来る。
アクセス行数は、R(A) * 2となる。
しかし、内部表の結合キーのインデックスが使われないと、駆動表を小さくするメリットが少なくなる。
内部表のループを完全にスキップ出来るのは、結合キーが内部表に対して一意な場合だけ。
等値結合であれば、内部表のアクセス対象行を必ず1行に限定出来るので、二重ループの内側のループを完全に省略出来る。
一方、結合キーが内部表に対して一意でない場合、インデックスで内部表にアクセスする場合でも、
複数行がヒットする可能性がある。
この場合は、そのヒットした複数行に対してループする必要がある。
「駆動表」を小さく、という格言は、「内部表を大きく」というふうに解釈するのがよいかもしれない。
内部表が大きいほどインデックスによるループのスキップ効果が大きくなるから。
「駆動表の小さなNested Loops」 + 「内部表の結合キーにインデックス」、という組み合わせは、SQLチューニングにおける基本中の基本。
結合が遅い場合の半分は上記組み合わせによって改善出来る。
物理ERモデルとインデックスの設計を行う時、どのテーブルを内部表にして、どの結合キーにインデックスを作成しておくべきか、
設計の段階から考えておく必要がある。
##### Pitfalls of Nested Loops
When accessing the inner table by the join key hits many rows, you may not get the response time you expect.
No matter how well the index is followed, it is pointless if the absolute number of loop iterations is large.
Suppose there is a shops table and a table of the orders each shop receives.
One shop corresponds to many orders, so assume the shops table is the more suitable driving table.
The orders table is then accessed with the shop ID as the join key.
If, say, a single shop ID hits millions or tens of millions of records, the number of inner-table loop iterations ends up large after all.
Order counts vary with a shop's region and size, so a small shop has few orders and is processed quickly,
while a large shop has so many orders and hits so much data that the query may never seem to return.
SQL performance depends on the amount of data being processed.
There are two ways to deal with this problem.
One is to deliberately make the large table the driving table.
Access to the shops table, now the inner table, goes through its primary key (shop ID), so exactly one row is always accessed.
This suppresses the performance variance across shops and prevents extreme degradation.
It is an effective approach as long as the access cost of the huge orders table stays within a realistic range (for example, because the search conditions can use an index).
The other approach is Hash.
### Hash
Hashing is widely used in the world of algorithms.
A function whose output is as unique and as uniformly distributed as possible for each input is called a hash function.
A hash join first scans the smaller table and applies a hash function to its join key, converting it into hash values.
This set of hash values is called the hash table.
It then scans the other (larger) table and performs the join by checking whether each join key exists among those hash values.
The hash table is built from the smaller table because it is kept in the DBMS's working memory,
so the smaller the table, the more efficient this is.
Hash has the following characteristics:
- It consumes a lot of memory,
because it builds a hash table from one of the joined tables.
- A spill to TEMP may occur:
if the hash table does not fit in memory, storage ends up being used.
- Because the output hash values do not preserve the ordering of the input values, it can only be used for equality joins.
Hash is effective in the following cases:
- When no suitable driving table exists for Nested Loops
- When a small table can be chosen as the driving table but the inner table hits many rows
- When the inner table has no index and no index can be added
In general, Hash is the remedy when Nested Loops does not run efficiently.
However, considering its memory consumption and the latency it causes in OLTP,
its use needs to be limited to nightly batches and low-throughput systems.
### Sort Merge
When Nested Loops is inefficient, the other candidate alongside Hash is the Sort Merge algorithm.
Sort Merge sorts each of the tables to be joined on its join key, and whenever matching join keys are found, includes them in the result set.
#### Characteristics of Sort Merge
- It consumes more memory than Nested Loops
- It can also be used for joins with inequality operators, but not for joins with negation conditions
- If a table is already sorted, the sort can be skipped
- Because the tables are sorted, the join can finish once one of the tables has been fully scanned
#### When Sort Merge is effective
With Sort Merge, the time taken by the join itself is not bad even when many rows are involved.
However, sorting the tables may consume a lot of time and resources.
It is worth considering in the (fairly exceptional) cases where the table sorts can be skipped,
but Nested Loops and Hash remain the preferred choices.
### Unintended cross joins
A cross join is almost never used in practice, but one can appear unintentionally.
This is called a "triangular join".
```sql
SELECT
  A.col_a,
  B.col_b,
  C.col_c
FROM Table_A A
  INNER JOIN Table_B B
    ON A.col_a = B.col_b
  INNER JOIN Table_C C
    ON A.col_a = C.col_c;
```
This SQL joins three tables — Table_A, Table_B, and Table_C —
but join conditions exist only between Table_A–Table_B and Table_A–Table_C.
Table_B <-> Table_A <-> Table_C
When this SQL is executed, 'MERGE JOIN CARTESIAN' appears in the execution plan. That is the cross join.
There are cases where tables with no join condition between them are combined with a cross join.
Here the plan joins Table_B and Table_C first, and then joins that result with Table_A.
Since Table_B and Table_C have no join condition, a cross join is the only way to combine them.
A typical pattern is joining a large table, such as transaction details, with small master tables, such as customers or a calendar.
A cross join between small tables is not much of a problem, but with relatively large tables you need to be careful.
It is not just a matter of table size: it can also happen when the records hit by the search conditions vary.
#### Avoiding unintended cross joins
Add a join condition (one that does not change the result) to the pair of tables that lacks one.
Here, add a join condition between Table_B and Table_C.
```sql
SELECT
  A.col_a,
  B.col_b,
  C.col_c
FROM Table_A A
  INNER JOIN Table_B B
    ON A.col_a = B.col_b
  INNER JOIN Table_C C
    ON A.col_a = C.col_c
   AND C.col_c = B.col_b;
```
With this, `MERGE JOIN CARTESIAN` no longer appears.
### When a join feels slow
The optimizer selects a join algorithm by weighing the advantages and disadvantages of each one.
Sometimes, however, the chosen join algorithm is not optimal and the query is slow.
Summarizing the optimal join algorithm by the number of rows to be joined:
- small table · small table
Any algorithm; the performance difference is negligible.
- small table · large table
Nested Loops with the small table as the driving table; create an index on the large table's join key.
However, if the inner table hits many rows, consider flipping the driving table or using Hash.
- large table · large table
Start with Hash. If the join keys are already sorted, Sort Merge.
#### Can the execution plan be controlled in the first place?
Any DBMS will at least use Nested Loops for you if you create an index on the inner table's join key.
What about control beyond that?
##### Execution plan control per DBMS
- Oracle
Hints can control the join algorithm (USE_NL, USE_HASH, USE_MERGE).
The driving table can also be controlled (LEADING).
- SQL Server
Hints can control the join algorithm.
- DB2
None.
- PostgreSQL
pg_hint_plan enables hint-based control of the join algorithm.
Server parameters also allow database-wide control.
- MySQL
Nested Loops is the only join algorithm available.
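For example, on Oracle the hints listed above could be combined as follows (a sketch only; the aliases are illustrative, and the hint syntax should be checked against your Oracle version's documentation):
```sql
SELECT /*+ LEADING(A) USE_NL(B) */
  A.col_a,
  B.col_b
FROM Table_A A
  INNER JOIN Table_B B
    ON A.col_a = B.col_b;
```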
##### The risks of user-controlled execution plans
An execution plan that is optimal at one point in time may not be optimal at another.
Cost-based, dynamic control of execution plans was introduced precisely for that reason.
After fully weighing such risks, you need a tuning effort with a high execution cost: choose an execution plan that will still be appropriate
at the end of the system's expected lifecycle, and run performance tests against data that simulates the data characteristics of that point in time.
##### Execution plans drift
A typical optimizer failure is an execution plan that drifts in a bad direction over long-term operation.
It happens when statistics change, for example as data volume grows, and the optimizer switches the execution plan once a certain threshold is crossed.
It is hard to predict in advance and can trigger sudden slowdowns.
The operation most likely to cause execution-plan changes is the join.
To keep the risk of SQL performance fluctuation down, it is important to use as few joins as possible.
Even SQL that produces the same result can often be handled with an alternative instead of a join.
Window functions and correlated subqueries can also do the job.
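As a hypothetical sketch (the schema is illustrative): a query that would otherwise self-join a table to attach a per-group maximum can be written join-free with a window function.
```sql
-- Join-free alternative: the window function attaches the per-shop
-- maximum amount to every order row without a self-join.
SELECT shop_id,
       order_id,
       amount,
       MAX(amount) OVER (PARTITION BY shop_id) AS max_amount_in_shop
FROM Orders;
```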
# Font-Awesome-For-Ionic-Tabs
Use Font Awesome icons with Ionic Framework tabs. Includes icons from Font Awesome 4.5.0 (thanks to [@ollycross](https://github.com/ollycross))
## Installation
Make sure you have Font Awesome installed correctly and working with your Ionic app. You'll need to make sure the fonts are located locally in your app and that your @font-face src is correctly pointing to the files.
Then include the font-awesome-ionic-tabs.css file in your app. If it's working correctly, you should be able to use the classes just like the ionicons.
```html
<ion-tab title="Photos" icon-on="ion-tab-fa-anchor" icon-off="ion-tab-fa-anchor">
<ion-nav-view name="tab-one"></ion-nav-view>
</ion-tab>
```
## Adding Font Awesome Icons
* Find the unicode characters here: http://fortawesome.github.io/Font-Awesome/cheatsheet/
* See if they are already included in this list
* Name your style identical to the font awesome style, but prefix with "ion-tab-"
* Add the class with the unicode content to the list
*Example...*
```css
.ion-tab-fa-anchor:before {
content: "\f13d"; }
```
*Now you can use your new class...*
```html
<ion-tab title="Photos" icon-on="ion-tab-fa-anchor:before" icon-off="ion-tab-fa-anchor:before">
<ion-nav-view name="tab-one"></ion-nav-view>
</ion-tab>
```
# Spectral CLI
[Once installed](../getting-started/2-installation.md), Spectral can be run via the command-line:
```bash
spectral lint petstore.yaml
```
You can lint multiple files at the same time by passing on multiple arguments:
```bash
spectral lint petstore.yaml https://example.com/petstore/openapi-v2.json https://example.com/todos/openapi-v3.json
```
Alternatively you can use [glob syntax](https://github.com/mrmlnc/fast-glob#basic-syntax) to match multiple files at once:
```bash
spectral lint ./reference/**/*.oas*.{json,yml,yaml}
```
Other options include:
```text
--version Show version number [boolean]
--help Show help [boolean]
--encoding, -e text encoding to use [string] [default: "utf8"]
--format, -f formatter to use for outputting results
[string] [choices: "json", "stylish", "junit", "html", "text", "teamcity"] [default: "stylish"]
--output, -o output to a file instead of stdout [string]
--resolver path to custom json-ref-resolver instance [string]
--ruleset, -r path/URL to a ruleset file [string]
--fail-severity, -F results of this level or above will trigger a failure exit code
[string] [choices: "error", "warn", "info", "hint"] [default: "error"]
--display-only-failures, -D only output results equal to or greater than --fail-severity
[boolean] [default: false]
--ignore-unknown-format do not warn about unmatched formats [boolean] [default: false]
--fail-on-unmatched-globs fail on unmatched glob patterns [boolean] [default: false]
--verbose, -v increase verbosity [boolean]
--quiet, -q no logging - output only [boolean]
```
The Spectral CLI supports loading documents as YAML or JSON, and validation of OpenAPI v2/v3 documents via our built-in ruleset.
## Using a Ruleset File
If you don't specify a ruleset file with the `--ruleset` parameter, the Spectral CLI will look for a ruleset file called `.spectral.yml`, `.spectral.yaml`, or `.spectral.json` in the current working directory. If no ruleset is specified and no default ruleset file is found, the built-in rulesets will be used.
Here you can build a [custom ruleset](../getting-started/3-rulesets.md), or extend and modify our core rulesets:
- [OpenAPI ruleset](../reference/openapi-rules.md)
- [AsyncAPI ruleset](../reference/asyncapi-rules.md)
## Error Results
Spectral has a few different error severities: `error`, `warn`, `info` and `hint`, and they are in "order" from highest to lowest. By default, all results will be shown regardless of severity, but since v5.0, only the presence of errors will cause a failure status code of 1. Seeing results and getting a failure code for it are now two different things.
The default behavior can be modified with the `--fail-severity=` option. Setting fail severity to `--fail-severity=info` would return a failure status code of 1 for any info results or higher. Using `--fail-severity=warn` will cause a failure status code for errors or warnings.
Changing the fail severity will not affect output. To change what results Spectral CLI prints to the screen, add the `--display-only-failures` switch (or just `-D` for short). This will strip out any results which are below the specified fail severity.
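For example, the two options can be combined like this (the file name is illustrative):
```bash
# Exit with status 1 on warnings or worse, and print only those results.
spectral lint petstore.yaml --fail-severity=warn --display-only-failures
```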
## Proxying
To have requests made from Spectral be proxied through a server, you'd need to specify PROXY environment variable:
`PROXY=<<PROXY_SERVER_ADDRESS>> spectral lint spec.yaml`
## Custom \$ref Resolving
If you want to customize \$ref resolving, you can leverage `--resolver` flag and pass a path to the JS file exporting a custom instance of json-ref-resolver Resolver.
### Example
Assuming the filename is called `my-resolver.js` and the content looks as follows, the path should look more or less like `--resolver=./my-resolver.js`.
```js
const { Resolver } = require("@stoplight/json-ref-resolver");
module.exports = new Resolver({
resolvers: {
// pass any resolver for protocol you need
},
});
```
You can learn more about `$ref` resolving in the [JS section](./3-javascript.md#using-custom-resolver).
# xtp-backtrader-api
---
title: -netcf
ms.date: 03/13/2018
f1_keywords:
- /netcf
- -netcf
helpviewer_keywords:
- -netcf compiler option [Visual Basic]
- netcf compiler option [Visual Basic]
- /netcf compiler option [Visual Basic]
ms.assetid: db7cfa59-c315-401c-a59b-0daf355343d6
ms.openlocfilehash: d7c3bcba8e62d62904ed778a48d0e8ae6738ce00
ms.sourcegitcommit: 0be8a279af6d8a43e03141e349d3efd5d35f8767
ms.translationtype: HT
ms.contentlocale: es-ES
ms.lasthandoff: 04/18/2019
ms.locfileid: "59480773"
---
# <a name="-netcf"></a>-netcf
Sets the compiler to target the [!INCLUDE[Compact](~/includes/compact-md.md)].
## <a name="syntax"></a>Syntax
```
-netcf
```
## <a name="remarks"></a>Remarks
The `-netcf` option causes the Visual Basic compiler to target the [!INCLUDE[Compact](~/includes/compact-md.md)] instead of the full [!INCLUDE[dnprdnshort](~/includes/dnprdnshort-md.md)]. Language functionality that is present only in the full [!INCLUDE[dnprdnshort](~/includes/dnprdnshort-md.md)] is disabled.
The `-netcf` option is intended to be used with [-sdkpath](../../../visual-basic/reference/command-line-compiler/sdkpath.md). The language features disabled by `-netcf` are the same language features not present in the files targeted by `-sdkpath`.
> [!NOTE]
> The `-netcf` option is not available within the Visual Studio development environment; it is available only when compiling from the command line. The `-netcf` option is set when a Visual Basic device project is loaded.
The `-netcf` option changes the following language features:
- The [End \<keyword> Statement](../../../visual-basic/language-reference/statements/end-keyword-statement.md) keyword, which terminates program execution, is disabled. The following program compiles and runs without `-netcf` but fails at compile time with `-netcf`.
[!code-vb[VbVbalrCompiler#34](~/samples/snippets/visualbasic/VS_Snippets_VBCSharp/VbVbalrCompiler/VB/netcf.vb#34)]
- Late binding, in all its forms, is disabled. Compile-time errors are generated when recognized late-binding scenarios occur. The following program compiles and runs without `-netcf` but fails at compile time with `-netcf`.
[!code-vb[VbVbalrCompiler#35](~/samples/snippets/visualbasic/VS_Snippets_VBCSharp/VbVbalrCompiler/VB/OptionStrictOff.vb#35)]
- The [Auto](../../../visual-basic/language-reference/modifiers/auto.md), [Ansi](../../../visual-basic/language-reference/modifiers/ansi.md), and [Unicode](../../../visual-basic/language-reference/modifiers/unicode.md) modifiers are disabled. The syntax of the [Declare Statement](../../../visual-basic/language-reference/statements/declare-statement.md) is also modified to `Declare Sub|Function name Lib "library" [Alias "alias"] [([arglist])]`. The following code shows the effect of `-netcf` on a compilation.
[!code-vb[VbVbalrCompiler#36](~/samples/snippets/visualbasic/VS_Snippets_VBCSharp/VbVbalrCompiler/VB/OptionStrictOff.vb#36)]
- Visual Basic 6.0 keywords that were removed from Visual Basic generate a different error when `-netcf` is used. This affects the error messages for the following keywords:
- `Open`
- `Close`
- `Put`
- `Print`
- `Write`
- `Input`
- `Lock`
- `Unlock`
- `Seek`
- `Width`
- `Name`
- `FreeFile`
- `EOF`
- `Loc`
- `LOF`
- `Line`
## <a name="example"></a>Example
The following code compiles `Myfile.vb` with the [!INCLUDE[Compact](~/includes/compact-md.md)], using the versions of mscorlib.dll and Microsoft.VisualBasic.dll found in the default installation directory of the [!INCLUDE[Compact](~/includes/compact-md.md)] on drive C. Typically, you would use the most recent version of the [!INCLUDE[Compact](~/includes/compact-md.md)].
```console
vbc -netcf -sdkpath:"c:\Program Files\Microsoft Visual Studio .NET 2003\CompactFrameworkSDK\v1.0.5000\Windows CE " myfile.vb
```
## <a name="see-also"></a>See also
- [Visual Basic Command-Line Compiler](../../../visual-basic/reference/command-line-compiler/index.md)
- [Sample Compilation Command Lines](../../../visual-basic/reference/command-line-compiler/sample-compilation-command-lines.md)
- [-sdkpath](../../../visual-basic/reference/command-line-compiler/sdkpath.md)
Owl serialization with `bin_prot`
====
`owl-serialization` provides functions for serializing Owl dense
matrices and ndarrays using https://github.com/janestreet/bin_prot,
among other things.
## Loading
See `owl-serialisation` installation instructions. Or to load directly
into utop, load bin_prot.cma. (NEED MORE.)
## Basic usage
For an Owl dense matrix or ndarray `x`:
```OCaml
(* Serialize x and write it into file x.bin. If x.bin exists, it will
be truncated and overwritten. *)
Owl_bin_prot.serialize_to_file x "x.bin"
(* Read the data back in from the file: *)
let x' = Owl_bin_prot.unserialize_from_file "x.bin"
(* Check that the old and new versions are equal: *)
x = x'
```
For more fine-grained options, see generated docs or the file
`src/owl_bin_prot/serialisation.mli`.
To see how to serialize data structures in which Owl matrices or
ndarrays are embedded, see
[bin_prot_embedded.md](./bin_prot_embedded.md) in the `doc`
directory.
---
title: Compiler Error C2242
ms.date: 11/04/2016
f1_keywords:
- C2242
helpviewer_keywords:
- C2242
ms.assetid: e1b687ed-4460-4c26-9f7e-c43e65c6dd65
ms.openlocfilehash: 85578caff9d482131035cf60997a259f91ec4bfb
ms.sourcegitcommit: 1f009ab0f2cc4a177f2d1353d5a38f164612bdb1
ms.translationtype: MT
ms.contentlocale: ja-JP
ms.lasthandoff: 07/27/2020
ms.locfileid: "87208786"
---
# <a name="compiler-error-c2242"></a>Compiler Error C2242
A typedef name cannot follow an enum, struct, or union.
The **`typedef`** name appears at the end of a qualified name.
# STACK
A stack is an ADT (abstract data type) that acts like a list of objects, but with a difference:
a stack is _LIFO_ (Last In, First Out), which means that when we want to get an element from the stack, we get the last element that was put in.
A stack is based on a few core methods (functions):
## push(element)
Adds "element" to the top of the stack.
For example: if we have `1, 3, 5` in the stack and we call push(9),
`9` is added at the last index of the stack -> `1, 3, 5, 9`
## peek() or top()
Returns the element at the top of the stack.
For example: if we have `1, 3, 5` in the stack and we call peek(),
`5` will be returned (without removing it from the stack).
## pop()
Removes the last element (i.e. the top of the stack) from the stack.
For example: if we have `1, 3, 5, 9` in the stack and we call pop(),
the function will return `9` and the stack will change to `1, 3, 5`.
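The sequence above can be sketched in Java (a minimal example, not part of this repository) using `java.util.ArrayDeque`, the idiomatic stack in the standard library:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class StackDemo {
    // Runs the push/peek/pop sequence described above and reports the results.
    public static String demo() {
        Deque<Integer> stack = new ArrayDeque<>();
        // push: 1, 3, 5, 9 -> 9 ends up on top (LIFO)
        stack.push(1);
        stack.push(3);
        stack.push(5);
        stack.push(9);
        int top = stack.peek();   // returns 9, stack unchanged
        int popped = stack.pop(); // returns 9, stack becomes 1, 3, 5 (5 on top)
        return "peek=" + top + " pop=" + popped + " newTop=" + stack.peek();
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```

Running `main` prints `peek=9 pop=9 newTop=5`.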
# Noise Detection Bot
This bot will notify the user if they are too noisy and suggest available meeting rooms.
In order to do so, the user needs to run on their machine one of the following apps, which will capture the noise level and notify the bot:
* https://github.com/ThessalonikiNet-MeetUp/NoiseDetectionClient
* https://github.com/ThessalonikiNet-MeetUp/NoiseDetectionConsoleApp
# sup
sup
Mine data from Saatchi into the local database:
```
./probes/saatchi/main | ts-node --project ./data-mine/tsconfig.json ./data-mine/writer.ts
```
---
title: Why I became a teacher
date: "2021-02-01"
author: Giorgio Rubbo
images:
giorgio_rubbo:
path: "media/images/content/blog/giorgio-rubbo.jpg"
alt: "A photograph of biology teacher, Giorgio Rubbo"
description: |-
Giorgio Rubbo talks about what inspired him to become a teacher, his
application process, concerns he had, and why you should become a teacher if
you're thinking about it.
keywords:
- teacher training advisers
- becoming a teacher
- applications
- teacher training
tags:
- becoming a teacher
- teacher training advisers
---
$giorgio_rubbo$
In addition to supporting aspiring teachers to achieve a place on a teacher training course, Teacher Training Advisers run a number of [groups hosted on Facebook](https://www.facebook.com/groups/1357146377672255), to allow those who are thinking about training, those who are doing their training, and others who are now qualified teachers to support one another.
Following the successful completion of his training year, Giorgio Rubbo invited aspiring teachers to ask him questions about a day in the life of a beginner teacher. Here are some of his answers.
## What inspired you to teach?
My inspiration to teach was my love of biology and science in general. I am passionate about my subject and am constantly fascinated by the world around us. I want to share my passion and enthuse as many people as I can with it. This love for science (and for talking about science) led naturally to teaching.
## How did you find the application process?
I found the application process long! It isn’t a difficult process but there are a lot of forms and hoops to jump through. My best advice would be to [get a Teacher Training Adviser](/tta-service), they are fantastically knowledgeable and helpful at all stages of the application. Another tip would be to make a list of all the things you need to do (there will be lots — UCAS forms, personal statements, finding GCSE and A Level Certificates, sorting referees out). Whilst I found the paperwork stages stressful, I found the interview process quite enjoyable! My best advice for a successful application is to be reflective in your personal statement and interview, think about why you want to teach, about the sort of classroom you envisage yourself in (and why) and talk about it!
## What were your concerns prior to beginning the course?
My concerns generally centred around behaviour management. Dealing with poor behaviour is one of the most daunting things you will do as a teacher, but once you are used to it, and you have your own routines and techniques in place, it is fine. My concerns were resolved fairly quickly, through both university sessions teaching us behavioural management techniques, and seeing and using them in the classroom.
## What was the highlight of school placements?
Teaching!
## What disappointed me about the course?
Due to COVID-19, placements ended significantly earlier than usual, and thus I missed out on a lot of time in the classroom. It was very disappointing after getting to know classes and students and genuinely enjoying teaching for placements to end so abruptly.
## What has been the funniest moment on the course?
We had a university session on teaching science through debate. We were all assigned a position on a town council (chairman, eco-campaigner, local university, etc.) and asked to prepare and debate the pros and cons of nuclear power as a group. We all got into character, some of our cohort even got dressed up! It was a genuinely worthwhile learning experience and added a tool that I will use in the classroom to my repertoire, whilst also being genuinely funny!
## What would your answer be if someone asked you ‘Should I teach?’
It isn’t easy, but it’s the most fantastic and rewarding job. If you are willing to put in a lot of hard work, it is genuinely fantastic.
# story
[](https://pub.dev/packages/story)
<a href="https://opensource.org/licenses/MIT"><img src="https://img.shields.io/badge/license-MIT-purple.svg" alt="License: MIT"></a>
Instagram-stories-like UI with rich animations and customizability.

## Usage
`StoryPageView` needs at least three arguments: `itemBuilder`, `pageLength`, and `storyLength`.
```dart
/// Minimal example to explain the usage.
return Scaffold(
  body: StoryPageView(
    itemBuilder: (context, pageIndex, storyIndex) {
      return Center(
        child: Text("Index of PageView: $pageIndex Index of story on each page: $storyIndex"),
      );
    },
    storyLength: (pageIndex) {
      return 3;
    },
    pageLength: 4,
  ),
);
```
- `itemBuilder` is necessary to build the content of story. It is called with index of pageView and index of the story on the page.
- `storyLength` decides the length of story for each page. The example above always returns 3, but it should depend on the argument `pageIndex`
- `pageLength` is just the length of `StoryPageView`
The example above just shows 12 stories by 4 pages, which is not practical.
This one is the proper usage, extracted from [example](https://pub.dev/packages/story/example).
```dart
return Scaffold(
  body: StoryPageView(
    itemBuilder: (context, pageIndex, storyIndex) {
      final user = sampleUsers[pageIndex];
      final story = user.stories[storyIndex];
      return Stack(
        children: [
          Positioned.fill(child: Container(color: Colors.black)),
          Center(child: Image.network(story.imageUrl)),
          Padding(
            padding: const EdgeInsets.only(top: 44, left: 8),
            child: Row(
              children: [
                Container(
                  height: 32,
                  width: 32,
                  decoration: BoxDecoration(
                    image: DecorationImage(
                      image: NetworkImage(user.imageUrl),
                      fit: BoxFit.cover,
                    ),
                    shape: BoxShape.circle,
                  ),
                ),
                const SizedBox(
                  width: 8,
                ),
                Text(
                  user.userName,
                  style: TextStyle(
                    fontSize: 17,
                    color: Colors.white,
                    fontWeight: FontWeight.bold,
                  ),
                ),
              ],
            ),
          )
        ],
      );
    },
    pageLength: sampleUsers.length,
    storyLength: (int pageIndex) {
      return sampleUsers[pageIndex].stories.length;
    },
    onPageLimitReached: () {
      /// Navigator.pop(context)
    },
  ),
);
```
- It is recommended to use a data model with two layers. In this case, a `UserModel` which has the list of `StoryModel`:
```dart
/// Example Data Model
class UserModel {
  UserModel(this.stories, this.userName, this.imageUrl);
  final List<StoryModel> stories;
  final String userName;
  final String imageUrl;
}

class StoryModel {
  StoryModel(this.imageUrl);
  final String imageUrl;
}
```
- `onPageLimitReached` is called when the very last story is finished.
## Tips
This package is still under development. If you have any requests or questions, please ask on [GitHub](https://github.com/santa112358/story/issues).
git 2b8ca08f38d9ede8ea0a77b9458583a1805f27a0
---
# Upgrade Guide
- [Upgrading From 7.x To 8.0](#upgrade-8.0)
<a name="high-impact-changes"></a>
## High Impact Changes
<!-- <div class="content-list" markdown="1"> -->
- [Model Factories](#model-factories)
- [The Queue `retryAfter` Method](#queue-retry-after-method)
- [The Queue `timeoutAt` Property](#queue-timeout-at-property)
- [The Queue `allOnQueue` And `allOnConnection` Methods](#queue-allOnQueue-allOnConnection)
- [Pagination Defaults](#pagination-defaults)
- [Seeder And Factory Namespaces](#seeder-factory-namespaces)
<!-- </div> -->
<a name="medium-impact-changes"></a>
## Medium Impact Changes
<!-- <div class="content-list" markdown="1"> -->
- [PHP 7.3.0 Required](#php-7.3.0-required)
- [Failed Jobs Table Batch Support](#failed-jobs-table-batch-support)
- [Maintenance Mode Updates](#maintenance-mode-updates)
- [The `php artisan down --message` Option](#artisan-down-message)
- [The `assertExactJson` Method](#assert-exact-json-method)
<!-- </div> -->
<a name="upgrade-8.0"></a>
## Upgrading From 7.x To 8.0
<a name="estimated-upgrade-time-15-minutes"></a>
#### Estimated Upgrade Time: 15 Minutes
> {note} We attempt to document every possible breaking change. Since some of these breaking changes are in obscure parts of the framework, only a portion of them may actually affect your application.
<a name="php-7.3.0-required"></a>
### PHP 7.3.0 Required
**Likelihood Of Impact: Medium**
The new minimum PHP version is now 7.3.0.
<a name="updating-dependencies"></a>
### Updating Dependencies
Update the following dependencies in your `composer.json` file:
<!-- <div class="content-list" markdown="1"> -->
- `guzzlehttp/guzzle` to `^7.0.1`
- `facade/ignition` to `^2.3.6`
- `laravel/framework` to `^8.0`
- `laravel/ui` to `^3.0`
- `nunomaduro/collision` to `^5.0`
- `phpunit/phpunit` to `^9.0`
<!-- </div> -->
The following packages have new major releases to support Laravel 8. If applicable, you should read their individual upgrade guides before upgrading:
<!-- <div class="content-list" markdown="1"> -->
- [Horizon v5.0](https://github.com/laravel/horizon/blob/master/UPGRADE)
- [Passport v10.0](https://github.com/laravel/passport/blob/master/UPGRADE)
- [Socialite v5.0](https://github.com/laravel/socialite/blob/master/UPGRADE)
- [Telescope v4.0](https://github.com/laravel/telescope/blob/master/UPGRADE)
<!-- </div> -->
In addition, the Laravel installer has been updated to support `composer create-project` and Laravel Jetstream. Any installer older than 4.0 will cease to work after October 2020. You should upgrade your global installer to `^4.0` as soon as possible.
Finally, examine any other third-party packages consumed by your application and verify you are using the proper version for Laravel 8 support.
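As a rough sketch, the updated constraints might appear in `composer.json` as follows — the split between `require` and `require-dev` shown here follows the default Laravel skeleton and may be laid out differently in your application:

```json
{
    "require": {
        "php": "^7.3",
        "guzzlehttp/guzzle": "^7.0.1",
        "laravel/framework": "^8.0",
        "laravel/ui": "^3.0"
    },
    "require-dev": {
        "facade/ignition": "^2.3.6",
        "nunomaduro/collision": "^5.0",
        "phpunit/phpunit": "^9.0"
    }
}
```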
<a name="collections"></a>
### Collections
<a name="the-isset-method"></a>
#### The `isset` Method
**Likelihood Of Impact: Low**
To be consistent with typical PHP behavior, the `offsetExists` method of `Illuminate\Support\Collection` has been updated to use `isset` instead of `array_key_exists`. This may present a change of behavior when dealing with collection items that have a value of `null`:
    $collection = collect([null]);
    // Laravel 7.x - true
    isset($collection[0]);
    // Laravel 8.x - false
    isset($collection[0]);
<a name="database"></a>
### Database
<a name="seeder-factory-namespaces"></a>
#### Seeder & Factory Namespaces
**Likelihood Of Impact: High**
Seeders and factories are now namespaced. To accommodate these changes, add the `Database\Seeders` namespace to your seeder classes. In addition, the previous `database/seeds` directory should be renamed to `database/seeders`:
    <?php
    namespace Database\Seeders;
    use App\Models\User;
    use Illuminate\Database\Seeder;
    class DatabaseSeeder extends Seeder
    {
        /**
         * Seed the application's database.
         *
         * @return void
         */
        public function run()
        {
            ...
        }
    }
If you choose to use the `laravel/legacy-factories` package, no changes to your factory classes are required. However, if you are upgrading your factories, you should add the `Database\Factories` namespace to those classes.
Next, in your `composer.json` file, remove the `classmap` block from the `autoload` section and add the new namespaced class directory mappings:
    "autoload": {
        "psr-4": {
            "App\\": "app/",
            "Database\\Factories\\": "database/factories/",
            "Database\\Seeders\\": "database/seeders/"
        }
    },
<a name="eloquent"></a>
### Eloquent
<a name="model-factories"></a>
#### Model Factories
**Likelihood Of Impact: High**
Laravel's [model factories](/docs/{{version}}/database-testing#defining-model-factories) feature has been totally rewritten to support classes and is not compatible with Laravel 7.x style factories. However, to ease the upgrade process, a new `laravel/legacy-factories` package has been created to continue using your existing factories with Laravel 8.x. You may install this package via Composer:
    composer require laravel/legacy-factories
<a name="the-castable-interface"></a>
#### The `Castable` Interface
**Likelihood Of Impact: Low**
The `castUsing` method of the `Castable` interface has been updated to accept an array of arguments. If you are implementing this interface, you should update your implementation accordingly:
    public static function castUsing(array $arguments);
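A minimal sketch of an updated implementation — `Coordinates` and `CoordinatesCast` are hypothetical names used only for illustration, and `CoordinatesCast` is assumed to be a caster class implementing `CastsAttributes`:

```php
use Illuminate\Contracts\Database\Eloquent\Castable;

class Coordinates implements Castable
{
    // Laravel 7.x declared castUsing() without parameters; in 8.x the
    // cast arguments (e.g. from a 'coordinates:precise' cast definition)
    // arrive here as an array.
    public static function castUsing(array $arguments)
    {
        return CoordinatesCast::class;
    }
}
```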
<a name="increment-decrement-events"></a>
#### Increment / Decrement Events
**Likelihood Of Impact: Low**
"Update" and "save" related model events will now be dispatched when executing the `increment` or `decrement` methods on Eloquent model instances.
<a name="events"></a>
### Events
<a name="the-event-service-provider-class"></a>
#### The `EventServiceProvider` Class
**Likelihood Of Impact: Low**
If your `App\Providers\EventServiceProvider` class contains a `register` method, you should ensure that you call `parent::register` at the beginning of this method. Otherwise, your application's events will not be registered.
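If your provider overrides `register`, the fix is a single call at the top of the method; a minimal sketch:

```php
namespace App\Providers;

use Illuminate\Foundation\Support\Providers\EventServiceProvider as ServiceProvider;

class EventServiceProvider extends ServiceProvider
{
    public function register()
    {
        parent::register(); // without this, event registrations are skipped

        // your own bindings / registrations below ...
    }
}
```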
<a name="the-dispatcher-contract"></a>
#### The `Dispatcher` Contract
**Likelihood Of Impact: Low**
The `listen` method of the `Illuminate\Contracts\Events\Dispatcher` contract has been updated to make the `$listener` property optional. This change was made to support automatic detection of handled event types via reflection. If you are implementing this interface, you should update your implementation accordingly:
    public function listen($events, $listener = null);
<a name="framework"></a>
### Framework
<a name="maintenance-mode-updates"></a>
#### Maintenance Mode Updates
**Likelihood Of Impact: Optional**
[Maintenance mode](/docs/{{version}}/configuration#maintenance-mode) was improved in Laravel 8.x. Pre-rendering the maintenance mode template is now supported, which eliminates the chance of end users encountering errors during maintenance mode. However, to support this, the following lines must be added to your `public/index.php` file. These lines should be placed directly under the existing `LARAVEL_START` constant definition:
    define('LARAVEL_START', microtime(true));
    if (file_exists(__DIR__.'/../storage/framework/maintenance.php')) {
        require __DIR__.'/../storage/framework/maintenance.php';
    }
<a name="artisan-down-message"></a>
#### The `php artisan down --message` Option
**Likelihood Of Impact: Medium**
The `--message` option of the `php artisan down` command has been removed. As an alternative, consider [pre-rendering your maintenance mode views](/docs/{{version}}/configuration#maintenance-mode) with the message of your choice.
<a name="php-artisan-serve-no-reload-option"></a>
#### The `php artisan serve --no-reload` Option
**Likelihood Of Impact: Low**
A `--no-reload` option has been added to the `php artisan serve` command. This instructs the built-in server not to reload when environment file changes are detected. This option is primarily helpful when running Laravel Dusk tests in a CI (continuous integration) environment.
<a name="manager-app-property"></a>
#### The Manager `$app` Property
**Likelihood Of Impact: Low**
The previously deprecated `$app` property of the `Illuminate\Support\Manager` class has been removed. If you were relying on this property, you should use the `$container` property instead.
<a name="the-elixir-helper"></a>
#### The `elixir` Helper
**Likelihood Of Impact: Low**
The previously deprecated `elixir` helper has been removed. Applications still using this build method are encouraged to upgrade to [Laravel Mix](https://github.com/JeffreyWay/laravel-mix).
<a name="mail"></a>
### Mail
<a name="the-sendnow-method"></a>
#### The `sendNow` Method
**Likelihood Of Impact: Low**
The previously deprecated `sendNow` method has been removed. Instead, please use the `send` method.
<a name="pagination"></a>
### Pagination
<a name="pagination-defaults"></a>
#### Pagination Defaults
**Likelihood Of Impact: High**
The paginator now uses the [Tailwind CSS framework](https://tailwindcss.com) for its default styling. In order to keep using Bootstrap, you should add the following method call to the `boot` method of your application's `AppServiceProvider`:
    use Illuminate\Pagination\Paginator;
    Paginator::useBootstrap();
<a name="queue"></a>
### Queue
<a name="queue-retry-after-method"></a>
#### The `retryAfter` Method
**Likelihood Of Impact: High**
For consistency with other features of Laravel, the `retryAfter` method and `retryAfter` property of queued jobs, mailers, notifications, and listeners have been renamed to `backoff`. You should update the name of this method / property in the relevant classes in your application.
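For example, a queued job moving from 7.x to 8.x only needs the rename — `ProcessPodcast` here is an illustrative class name, not part of the upgrade guide:

```php
class ProcessPodcast implements ShouldQueue
{
    // Laravel 7.x:
    // public $retryAfter = 3;

    // Laravel 8.x — same behavior, renamed:
    public $backoff = 3;
}
```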
<a name="queue-timeout-at-property"></a>
#### The `timeoutAt` Property
**Likelihood Of Impact: High**
The `timeoutAt` property of queued jobs, notifications, and listeners has been renamed to `retryUntil`. You should update the name of this property in the relevant classes in your application.
<a name="queue-allOnQueue-allOnConnection"></a>
#### The `allOnQueue()` / `allOnConnection()` Methods
**Likelihood Of Impact: High**
For consistency with other dispatching methods, the `allOnQueue()` and `allOnConnection()` methods used with job chaining have been removed. You may use the `onQueue()` and `onConnection()` methods instead. These methods should be called before calling the `dispatch` method:
    ProcessPodcast::withChain([
        new OptimizePodcast,
        new ReleasePodcast
    ])->onConnection('redis')->onQueue('podcasts')->dispatch();
Note that this change only affects code using the `withChain` method. The `allOnQueue()` and `allOnConnection()` methods are still available when using the global `dispatch()` helper.
<a name="failed-jobs-table-batch-support"></a>
#### Failed Jobs Table Batch Support
**Likelihood Of Impact: Optional**
If you plan to use the [job batching](/docs/{{version}}/queues#job-batching) features of Laravel 8.x, your `failed_jobs` database table will need to be updated. First, a new `uuid` column should be added to the table:
    use Illuminate\Database\Schema\Blueprint;
    use Illuminate\Support\Facades\Schema;
    Schema::table('failed_jobs', function (Blueprint $table) {
        $table->string('uuid')->after('id')->nullable()->unique();
    });
Next, the `failed.driver` configuration option within your `config/queue.php` configuration file should be updated to `database-uuids`.
In addition, you may wish to generate UUIDs for your existing failed jobs:
    DB::table('failed_jobs')->whereNull('uuid')->cursor()->each(function ($job) {
        DB::table('failed_jobs')
            ->where('id', $job->id)
            ->update(['uuid' => (string) Illuminate\Support\Str::uuid()]);
    });
<a name="routing"></a>
### Routing
<a name="automatic-controller-namespace-prefixing"></a>
#### Automatic Controller Namespace Prefixing
**Likelihood Of Impact: Optional**
In previous releases of Laravel, the `RouteServiceProvider` class contained a `$namespace` property with a value of `App\Http\Controllers`. The value of this property was used to automatically prefix controller route declarations and controller route URL generation, such as when calling the `action` helper.
In Laravel 8, this property is set to `null` by default. This allows your controller route declarations to use the standard PHP callable syntax, which provides better support for jumping to the controller class in many IDEs:
    use App\Http\Controllers\UserController;
    // Using PHP callable syntax...
    Route::get('/users', [UserController::class, 'index']);
    // Using string syntax...
    Route::get('/users', 'App\Http\Controllers\UserController@index');
In most cases this will not impact applications being upgraded, because your `RouteServiceProvider` will still contain the `$namespace` property with its previous value. However, if you upgrade your application by creating a brand new Laravel project, you may encounter this as a breaking change.
If you would like to keep using the original auto-prefixed controller routing, you can simply set the value of the `$namespace` property within your `RouteServiceProvider` and update the route registrations within the `boot` method to use the `$namespace` property:
    class RouteServiceProvider extends ServiceProvider
    {
        /**
         * The path to the "home" route for your application.
         *
         * This is used by Laravel authentication to redirect users after login.
         *
         * @var string
         */
        public const HOME = '/home';
        /**
         * If specified, this namespace is automatically applied to your controller routes.
         *
         * In addition, it is set as the URL generator's root namespace.
         *
         * @var string
         */
        protected $namespace = 'App\Http\Controllers';
        /**
         * Define your route model bindings, pattern filters, etc.
         *
         * @return void
         */
        public function boot()
        {
            $this->configureRateLimiting();
            $this->routes(function () {
                Route::middleware('web')
                    ->namespace($this->namespace)
                    ->group(base_path('routes/web.php'));
                Route::prefix('api')
                    ->middleware('api')
                    ->namespace($this->namespace)
                    ->group(base_path('routes/api.php'));
            });
        }
        /**
         * Configure the rate limiters for the application.
         *
         * @return void
         */
        protected function configureRateLimiting()
        {
            RateLimiter::for('api', function (Request $request) {
                return Limit::perMinute(60)->by(optional($request->user())->id ?: $request->ip());
            });
        }
    }
<a name="scheduling"></a>
### Scheduling
<a name="the-cron-expression-library"></a>
#### The `cron-expression` Library
**Likelihood Of Impact: Low**
Laravel's dependency on `dragonmantank/cron-expression` has been updated from `2.x` to `3.x`. This should not cause any breaking change to your application unless you are interacting with the `cron-expression` library directly. If you are interacting with this library directly, please review its [change log](https://github.com/dragonmantank/cron-expression/blob/master/CHANGELOG).
<a name="session"></a>
### Session
<a name="the-session-contract"></a>
#### The `Session` Contract
**Likelihood Of Impact: Low**
The `Illuminate\Contracts\Session\Session` contract has received a new `pull` method. If you are implementing this contract yourself, you should update your implementation accordingly:
    /**
     * Get the value of a given key and then forget it.
     *
     * @param  string  $key
     * @param  mixed  $default
     * @return mixed
     */
    public function pull($key, $default = null);
<a name="testing"></a>
### Testing
<a name="decode-response-json-method"></a>
#### The `decodeResponseJson` Method
**Likelihood Of Impact: Low**
The `decodeResponseJson` method that belongs to the `Illuminate\Testing\TestResponse` class no longer accepts any arguments. Please consider using the `json` method instead.
<a name="assert-exact-json-method"></a>
#### The `assertExactJson` Method
**Likelihood Of Impact: Medium**
The `assertExactJson` method now requires numeric keys of compared arrays to match and to be in the same order. If you would like to compare JSON against an array without requiring numerically keyed arrays to have the same order, you may use the `assertSimilarJson` method instead.
<a name="validation"></a>
### Validation
<a name="database-rule-connections"></a>
#### Database Rule Connections
**Likelihood Of Impact: Low**
The `unique` and `exists` rules will now respect the specified connection name of Eloquent models when performing queries. This connection name is accessed via the model's `getConnectionName` method.
<a name="miscellaneous"></a>
### Miscellaneous
We also encourage you to view the changes in the [`laravel/laravel`](https://github.com/laravel/laravel) GitHub repository. While many of these changes are not required, you may wish to keep these files in sync with your application. Some of these changes will be covered in this upgrade guide, but others, such as changes to configuration files or comments, will not be. You can easily view the changes with the [GitHub comparison tool](https://github.com/laravel/laravel/compare/7.x...8.x) and choose which updates are important to you.
| 41.834842 | 573 | 0.730247 | rus_Cyrl | 0.905982 |
48f3bfcab6e2ef845f99ebb6786f9f70d286c4ed | 6,143 | md | Markdown | README.md | Weegle99/node-red-contrib-yeelight-compat-hue | 382de958be58a2552c860a8391fb7b4a8b63dcc4 | [
"MIT"
] | 9 | 2017-12-11T23:19:56.000Z | 2020-11-08T21:14:13.000Z | README.md | Weegle99/node-red-contrib-yeelight-compat-hue | 382de958be58a2552c860a8391fb7b4a8b63dcc4 | [
"MIT"
] | 24 | 2017-12-15T20:15:00.000Z | 2022-02-25T12:16:44.000Z | README.md | Weegle99/node-red-contrib-yeelight-compat-hue | 382de958be58a2552c860a8391fb7b4a8b63dcc4 | [
"MIT"
] | 15 | 2017-12-11T09:04:31.000Z | 2022-01-17T11:40:39.000Z | [](https://www.npmjs.com/package/node-red-contrib-yeelight-compat-hue)
[](https://travis-ci.org/mattmattmatt/node-red-contrib-yeelight-compat-hue)
[](https://github.com/mattmattmatt/node-red-contrib-yeelight-compat-hue)
[](https://www.npmjs.com/package/node-red-contrib-yeelight-compat-hue)
[](https://flows.nodered.org/node/node-red-contrib-yeelight-compat-hue)
[](https://github.com/semantic-release/semantic-release)
# node-red-contrib-yeelight-compat-hue
[Node-RED](http://nodered.org) nodes to control [MiHome/Xiaomi Yeelights](https://www.yeelight.com/) from your smart home, somewhat [API compatible with node-red-contrib-node-hue nodes](https://github.com/jdomeij/node-red-contrib-node-hue#input-node).
## Install
Run the following command in your Node-RED user directory – typically `~/.node-red`:
```bash
npm i node-red-contrib-yeelight-compat-hue -S
```
## Usage
Provides two palette nodes – one to send control commands to a Yeelight, and one to receive messages when a Yeelight's state changes.

### Output node
Sets the state of the selected Yeelight device.
`msg.payload` must be an object containing the new state's properties of the selected Yeelight device.
| Property | Details |
| :---| :---|
| `on` | Sets the `on` state where the value is `true` or `false`|
| `bri` | Sets the brightness value from `0` to `255` |
| `ct` | Sets the color temperature value in Kelvin from `1700` to `6500` |
| `hue` | Sets the hue value from `0` to `65535` |
| `sat` | Sets the saturation value from `0` to `255` |
| `hex` | Sets the RGB value from `#000000` to `#FFFFFF` |
| `duration` | Sets a transition time in milliseconds, e.g. `4500` means 4.5 seconds. Defaults to `500` |
#### Example payloads
```JSON
{
"on": true,
"bri": 255,
"hue": 913,
"sat": 255,
"duration": 5000
}
```
```JSON
{
"on": true,
"bri": 120,
"hex": "#AA00CC",
}
```
```JSON
{
"ct": 2200,
}
```
The node supports sending [color temperature values](http://www.erco.com/service/rgbw/), [hex values](http://htmlcolorcodes.com/) and [HSV values](https://alloyui.com/examples/color-picker/hsv).
The brightness value will always have to be provided separately and will not be derived from e.g. a hex value's brightness component.
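Because brightness is never derived from the hex component, any helper that builds an output-node payload has to carry `bri` separately. A small sketch — the function name is an assumption for illustration, not part of this package:

```javascript
// Build a payload for the Yeelight output node from a hex color and an
// explicit brightness value (0-255). The hex string does not influence
// brightness, so `bri` must always be passed separately.
function buildColorPayload(hex, bri, duration) {
    if (!/^#[0-9A-Fa-f]{6}$/.test(hex)) {
        throw new Error('expected a color like "#AA00CC"');
    }
    const payload = { on: true, bri: bri, hex: hex };
    if (duration !== undefined) {
        payload.duration = duration; // transition time in milliseconds
    }
    return payload;
}
```

In a Node-RED function node, the returned object would be assigned to `msg.payload` before wiring into the output node.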
##### References
This node's input payload structure is based on [node-red-contrib-node-hue](https://github.com/jdomeij/node-red-contrib-node-hue#input-node), which is based on [Node Hue API](https://github.com/peter-murray/node-hue-api#lightstate-options).
### Input node
Returns the current state of the selected Yeelight device.
`msg.payload` is an object containing the current state of the selected Yeelight device.
The node will listen to changes of the connected Yeelight and send a message whenever a change is detected. The `payload` property of the message will be set to the new state of the Yeelight.
Additionally, a fresh state can be requested from the connected Yeelight by sending a message to the node. The `payload` property of the message will be overwritten with the state of the Yeelight. All other properties of the `msg` are preserved.
#### Example payload
```JSON
{
"state": {
"on": false,
"bri": 255,
"colormode": "rgb",
"hex": "#ff1500",
"hue": 913,
"sat": 255
},
"name": "Closet",
"raw": {
"name": "Closet",
"power": "off",
"bright": "100",
"rgb": "16717056",
"ct": "4244",
"hue": "0",
"sat": "100",
"color_mode": "1",
"delayoff": "0",
"flowing": "0",
"flow_params": "0,1,10000,1,16711680,69,33000,1,16711882,34,39000,1,16744704,17,34000,1,16711680,61",
"music_on": "0"
}
}
```
The `raw` property of `msg.payload` contains the raw state information retrieved from the Yeelight for advanced usage.
Note that value scales are not compatible with _node-red-contrib-node-hue_, and that `hue` value and `rgb` value will not match since only the correct color per `color_mode` is returned by the lamp.
### Configuration node
Configures a Yeelight connection to one light in the local network.
#### Options
| Property | Details |
| :--- | :--- |
| Hostname | An IP address or other hostname that points to a Yeelight on the network |
| Port number | Port number the Yeelight is accessible over, default being `55443` |
#### Details
**Developer mode/LAN control** has to be activated (usually from within the Yeelight app) to allow local control of the Yeelight through Node-RED. You can <a href="https://www.yeelight.com/en_US/developer" target="_blank">learn more about Yeelight developer options</a> here.
## Support
If something is not working as expected, if you think there is a feature missing, or if you think this node could be improved in other ways, [please create an issue](https://github.com/mattmattmatt/node-red-contrib-yeelight-compat-hue/issues) on GitHub.
### Links
- Find [node-red-contrib-yeelight-compat-hue in the Node-RED flow library](https://flows.nodered.org/node/node-red-contrib-yeelight-compat-hue)
- Find [node-red-contrib-yeelight-compat-hue on npm](https://www.npmjs.com/package/node-red-contrib-yeelight-compat-hue)
- Find [node-red-contrib-yeelight-compat-hue on GitHub](https://github.com/mattmattmatt/node-red-contrib-yeelight-compat-hue)
### Hello bear

| 42.958042 | 275 | 0.704216 | eng_Latn | 0.85009 |
48f44c8f5e19383b349ddf2356d89cdbc8025757 | 2,855 | md | Markdown | README.md | ZachVZ/real_estate_valuation | cfc42486738b03a8ecca00bbc84c5b0ed08f8040 | [
"MIT"
] | null | null | null | README.md | ZachVZ/real_estate_valuation | cfc42486738b03a8ecca00bbc84c5b0ed08f8040 | [
"MIT"
] | null | null | null | README.md | ZachVZ/real_estate_valuation | cfc42486738b03a8ecca00bbc84c5b0ed08f8040 | [
"MIT"
] | null | null | null | # real_estate_valuation
This notebook collects and analyzes San Francisco housing data (price per square foot and gross rent by neighborhood) from 2010 to 2016.
Below are the four steps that will be performed in this notebook:
1. Calculate and Plot the Average Sale Prices per Square Foot and the Avg. Gross Rent
2. Compare the Average Sale Prices by Neighborhood
3. Build an Interactive Neighborhood Map
4. Identify neighborhoods with best potential ROI
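Step 1 boils down to a group-by mean; a plain-Python sketch of that computation (field names are illustrative assumptions — the notebook itself works with pandas DataFrames):

```python
from collections import defaultdict
from statistics import mean

def average_by_group(rows, group_key, value_key):
    """Group a list of dict rows by group_key and average value_key."""
    groups = defaultdict(list)
    for row in rows:
        groups[row[group_key]].append(row[value_key])
    return {key: mean(values) for key, values in groups.items()}
```

For example, `average_by_group(sales, "neighborhood", "sale_price_sqr_foot")` would yield the per-neighborhood average sale price per square foot.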
## Technologies Required
* This project leverages python version 3.8.5
* python dotenv Library
* Alpaca SDK
* Request Library
* JSON Library
* Project will be accomplished in JupyterLab
## Install python dotenv Library
* On the terminal (Git Bash) under the conda dev environment, type the code below:
pip install python-dotenv
## Install Alpaca SDK
* On the terminal (Git Bash) under the conda dev environment, type the code below:
pip install alpaca-trade-api
## Install Request Library
* On the terminal (Git Bash) under the conda dev environment, type the code below:
conda install -c anaconda requests
## Install JSON Libary
* On the terminal (Git Bash) under the conda dev environment, type the code below:
conda install -c jmcmurray json
## Installation Guide and Running Jupyter Notebook
Installing Jupyter notebook
* On the terminal (Git Bash) under the conda dev environment, type the code below:
pip install jupyterlab
* To open the Jupyter notebook
Open a new Git Bash and type the below command into your conda dev environment:
jupyter lab
* then hit the ENTER key to run
## Required Imports
* pandas - data manipulation and analysis
* Path from pathlib - import CSV files
* %matplotlib inline - assist in building plots and visuals
* hvPlot - enables interactive plotting tools such as line and bar graphs
* os - portable way of using operating system dependent functionality
* PyVizlot - Visualization package that provides a single platform for accessing multiple visualization libraries
* json - convert the python dictionary from the API into a JSON string that can be written into a file
* from dotenv import load_dotenv - reads key-value pairs from a .env file and can set them as environment variables
* MAPBOX API - used to retrieve the map data for the visualizations. The user needs to register for a personal account and save their own API keys
* plotly.express - data visualization for figure generation
## Install Guide
* import os
* import pandas as pd
* import plotly.express as px
* import hvplot.pandas
* import matplotlib.pyplot as plt
* from pathlib import Path
* from dotenv import load_dotenv
## Usage
To use the JupyterLab notebook, clone the repo, run Git Bash, open the notebook san_francisco_housing.ipynb, and create a new .env file that holds your MAPBOX API key
### Contributors
Zach Zwiener
### Contact
Email - zachzwiener3@gmail.com
### License
MIT | 33.988095 | 163 | 0.783187 | eng_Latn | 0.981626 |
48f47ee1a0abd6bc514aebb27c9ffecce50e425d | 14,560 | md | Markdown | README.md | vungocbinh2009/thongke.dapan | 19cca1ee1b796f153c7c3eb0eebb179b642888d2 | [
"MIT"
] | 1 | 2022-03-30T10:05:47.000Z | 2022-03-30T10:05:47.000Z | README.md | vungocbinh2009/thongke.dapan | 19cca1ee1b796f153c7c3eb0eebb179b642888d2 | [
"MIT"
] | null | null | null | README.md | vungocbinh2009/thongke.dapan | 19cca1ee1b796f153c7c3eb0eebb179b642888d2 | [
"MIT"
] | null | null | null | # thongke.dapan
My R package; it generates LaTeX code to add to my exam template :)
# Install
``` r
# Install devtools
install.packages("devtools")
# Install thongke before thongke_dapan
devtools::install_github("vungocbinh2009/thongke")
# Install thongke.dapan
devtools::install_github("vungocbinh2009/thongke_dapan")
# Other package dependencies: whisker, xtable
```
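If the `whisker` and `xtable` dependencies are not pulled in automatically during installation, they can be installed directly (a sketch):

``` r
install.packages(c("whisker", "xtable"))
```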
# How to use
### Descriptive statistics
``` r
library(thongke)
library(thongke.dapan)
library(testthat)
library(magrittr)
```
##
## Attaching package: 'magrittr'
## The following objects are masked from 'package:testthat':
##
## equals, is_less_than, not
``` r
test_that("Test một số hàm trong file others.R", {
data <- rep(x = c(1, 2, 3, 4, 5), times = c(1, 2, 3, 4, 5))
data %>% answer_mean(answer = "Giá trị trung bình là: ") %>% cat() %>% print()
data %>% answer_var(answer = "Giá trị phương sai là: ") %>% cat() %>% print()
data %>% answer_var(answer = "Giá trị phương sai là: ", with_mean = FALSE) %>% cat() %>% print()
data %>% answer_sd(answer = "Giá trị độ lệch chuẩn là: ") %>% cat() %>% print()
expect_equal(1, 1)
})
```
### Parameter estimation
``` r
library(thongke)
library(thongke.dapan)
library(testthat)
library(magrittr)
test_that("Test các hàm answer_estimate_*, answer_sample_*", {
generate_data(
estimate_mean_norm,
list(sigma = 3, n = 36, alpha = 0.05, mean = 66, mode="two.side")
) %>%
answer_estimate_mean_norm(
sd_symbol = "\\sigma",
answer = "Khoảng tin cậy 95\\% là:",
) %>%
cat() %>%
print()
# Answer: 65.02 - 66.98
print("===================================================")
generate_data(
estimate_mean_norm,
list(sigma = 3, n = 36, alpha = 0.05, mean = 66, mode="min")
) %>%
answer_estimate_mean_norm(
sd_symbol = "\\sigma",
answer = "Khoảng tin cậy 95\\% là:"
) %>%
cat() %>%
print()
print("===================================================")
generate_data(
estimate_mean_norm,
list(sigma = 3, n = 36, alpha = 0.05, mean = 66, mode="max")
) %>%
answer_estimate_mean_norm(
sd_symbol = "\\sigma",
answer = "Khoảng tin cậy 95\\% là:"
) %>%
cat() %>%
print()
print("===================================================")
generate_data(
estimate_mean_t,
list(mean = 39.8, alpha = 0.01, n = 15, s = sqrt(0.144), mode="two.side")
) %>%
answer_estimate_mean_t(
answer = "Khoảng tin cậy 99\\% là"
) %>%
cat() %>%
print()
# Answer: 39.5023 - 40.0977
print("===================================================")
generate_data(
estimate_mean_t,
list(mean = 39.8, alpha = 0.01, n = 15, s = sqrt(0.144), mode="min")
) %>%
answer_estimate_mean_t(
answer = "Khoảng tin cậy 99\\% là"
) %>%
cat() %>%
print()
print("===================================================")
generate_data(
estimate_mean_t,
list(mean = 39.8, alpha = 0.01, n = 15, s = sqrt(0.144), mode="max")
) %>%
answer_estimate_mean_t(
answer = "Khoảng tin cậy 99\\% là"
) %>%
cat() %>%
print()
print("===================================================")
generate_data(
estimate_var,
list(n = 30, s = 0.032, alpha = 0.025, mode="two.side")
) %>%
answer_estimate_var(
answer = "Khoảng tin cậy 97,5\\% là:"
) %>%
cat() %>%
print()
# Answer: 0.000649 - 0.001851
print("===================================================")
generate_data(
estimate_var,
list(n = 30, s = 0.032, alpha = 0.025, mode="min")
) %>%
answer_estimate_var(
answer = "Khoảng tin cậy 97,5\\% là:"
) %>%
cat() %>%
print()
print("===================================================")
generate_data(
estimate_var,
list(n = 30, s = 0.032, alpha = 0.025, mode="max")
) %>%
answer_estimate_var(
answer = "Khoảng tin cậy 97,5\\% là:"
) %>%
cat() %>%
print()
print("===================================================")
generate_data(
estimate_prop,
list(n = 100, f = 0.6, alpha = 0.1, mode="two.side")
) %>%
answer_estimate_prop(
answer = "Khoảng tin cậy 99\\% là:"
) %>%
cat() %>%
print()
# Answer: 0.52 - 0.68
print("===================================================")
generate_data(
estimate_prop,
list(n = 100, f = 0.6, alpha = 0.1, mode="min")
) %>%
answer_estimate_prop(
answer = "Khoảng tin cậy 99\\% là:"
) %>%
cat() %>%
print()
print("===================================================")
generate_data(
estimate_prop,
list(n = 100, f = 0.6, alpha = 0.1, mode="max")
) %>%
answer_estimate_prop(
answer = "Khoảng tin cậy 99\\% là:"
) %>%
cat() %>%
print()
print("===================================================")
generate_data(
sample_size_mean,
list(sigma = 3, alpha = get_alpha(1.64), eps = 0.5)
) %>%
answer_sample_size_mean(
sd_symbol = "\\sigma",
conclusion = function(value) {
return(sprintf("Vậy kích thước mẫu tối thiểu để thỏa mãn yêu cầu đề bài là %d", ceiling(value)))
}
) %>%
cat() %>%
print()
# Answer: 96.826
print("===================================================")
generate_data(
sample_size_prop_1,
list(f = 0.64, alpha = get_alpha(1.64), eps = 0.02)
) %>%
answer_sample_size_prop_1(
conclusion = function(value) {
return(sprintf("Vậy kích thước mẫu tối thiểu để thỏa mãn yêu cầu đề bài là %d", ceiling(value)))
}
) %>%
cat() %>%
print()
# Answer: 1549.2
print("===================================================")
generate_data(
sample_size_prop_2,
list(eps = 0.02, alpha = get_alpha(1.64))
) %>%
answer_sample_size_prop_2(
conclusion = function(value) {
return(sprintf("Vậy kích thước mẫu tối thiểu để thỏa mãn yêu cầu đề bài là %d", ceiling(value)))
}
) %>%
cat() %>%
print()
# Answer: 1681
expect_equal(1, 1)
})
```
### Hypothesis testing
``` r
library(thongke)
library(thongke.dapan)
library(testthat)
library(magrittr)
test_that("Test các hàm answer_test_*", {
generate_data(
test_mean_norm,
list(sigma = 5.2, alpha = 0.05, n = 100, mean = 27.56, mean_0 = 26, mode = "neq")
) %>%
answer_test_mean_norm(
intro = "Gọi $\\mu$ là ...",
sd_symbol = "\\sigma",
conclusion_h0 = "Chưa đủ cơ sở để cho rằng ...",
conclusion_h1 = "Có thể cho rằng ..."
) %>%
cat() %>%
print()
# Answer: T=3 - c=1.96 - reject H0
print("===================================================")
data <- c(19, 18, 22, 20, 16, 25)
generate_data(
test_mean_t,
list(mean = mean(data), mean_0 = 21.5, mode = "neq", n = 6, alpha = 0.05, s = sqrt(var(data)))
) %>%
answer_test_mean_t(
intro = "Gọi $\\mu$ là ...",
conclusion_h0 = "Chưa đủ cơ sở để cho rằng ...",
conclusion_h1 = "Có thể cho rằng ..."
) %>%
cat() %>%
print()
# Answer: T=-1.16 - c=2.571 - accept H0
print("===================================================")
generate_data(
test_prop,
list(f = 0.4, p_0 = 0.45, alpha = 0.05, n = 200, mode = "neq")
) %>%
answer_test_prop(
intro = "Gọi $p$ là ...",
conclusion_h0 = "Chưa đủ cơ sở để cho rằng ...",
conclusion_h1 = "Có thể cho rằng ..."
) %>%
cat() %>%
print()
# Answer: T=-1.43, c=1.96, accept H0
print("===================================================")
generate_data(
test_goodness_of_fit,
list(
expected = c(100, 100, 100, 100, 100, 100),
actual = c(106, 92, 97, 105, 88, 112),
alpha = 0.05
)
) %>%
answer_test_goodness_of_fit(
h0 = "Nội dung của $H_0$",
col_names = c("Mặt 1", "Mặt 2", "Mặt 3", "Mặt 4", "Mặt 5", "Mặt 6"),
conclusion_h0 = "Chưa đủ cơ sở để cho rằng ...",
conclusion_h1 = "Có thể cho rằng ..."
) %>%
cat() %>%
print()
# Answer: T=4.22, c=11.070, accept H0
print("===================================================")
generate_data(
test_2_mean_norm,
list(
n1 = 40, n2 = 50, mean1 = 130, mean2 = 140,
sigma1 = sqrt(80), sigma2 = sqrt(100),
alpha = 0.01, mode = "neq"
)
) %>%
answer_test_2_mean_norm(
intro = "Gọi $\\mu_1$ và $\\mu_2$ lần lượt là ...",
sd_symbol = "\\sigma",
conclusion_h0 = "Chưa đủ cơ sở để cho rằng ...",
conclusion_h1 = "Có thể cho rằng ..."
) %>%
cat() %>%
print()
# Expected: T=5, c=2.58, reject H0
print("===================================================")
generate_data(
test_2_mean_t,
list(
mode = "neq", alpha = 0.01, mean1 = 4.8, n1 = 10,
s1 = 1.1, mean2 = 4.3, n2 = 12, s2 = 0.9
)
) %>%
answer_test_2_mean_t(
intro = "Gọi $\\mu_1$ và $\\mu_2$ lần lượt là ...",
conclusion_h0 = "Chưa đủ cơ sở để cho rằng ...",
conclusion_h1 = "Có thể cho rằng ..."
) %>%
cat() %>%
print()
# Expected: T=1.174, c=2.845, accept H0
print("===================================================")
generate_data(
test_2_prop,
list(
f1 = 0.42, f2 = 0.46, n1 = 100, n2 = 200,
alpha = 0.05, mode = "neq"
)
) %>%
answer_test_2_prop(
intro = "Gọi $p_1$ và $p_2$ lần lượt là ...",
conclusion_h0 = "Chưa đủ cơ sở để cho rằng ...",
conclusion_h1 = "Có thể cho rằng ..."
) %>%
cat() %>%
print()
# Expected: T=-0.66, c=1.96, accept H0
print("===================================================")
generate_data(
test_k_prop,
list(
m_i = c(79, 82, 77, 83, 76, 81),
n_i = c(100, 100, 100, 100, 100, 100),
alpha = 0.05
)
) %>%
answer_test_k_prop(
h0 = "Nội dung của $H_0$",
col_names = c("A", "B", "C", "D", "E", "F"),
row_names = c("Có A", "Không A"),
conclusion_h0 = "Chưa đủ cơ sở để cho rằng ...",
conclusion_h1 = "Có thể cho rằng ..."
) %>%
cat() %>%
print()
# Expected: T=2.42, c=11.07, accept H0
print("===================================================")
generate_data(
test_independent,
list(
matrix = matrix(data = c(328, 122, 77, 33), ncol = 2, nrow = 2),
alpha = 0.05
)
) %>%
answer_test_independent(
h0 = "Nội dung của $H_0$",
col_names = c("Loại 1", "Loại 2"),
row_names = c("Loại A", "Loại B"),
conclusion_h0 = "Chưa đủ cơ sở để cho rằng ...",
conclusion_h1 = "Có thể cho rằng ..."
) %>%
cat() %>%
print()
# Expected: T=0.368, c=3.841, accept H0
expect_equal(1, 1)
})
```
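The expected results in the comments can be checked by hand. For instance, for the first one-sample z-test and the later two-sample z-test (plugging in the values passed to `generate_data` above):

```latex
% One-sample z-test with known sigma: reject H0 since T > c
T = \frac{|\bar{x}-\mu_0|}{\sigma/\sqrt{n}}
  = \frac{|27.56-26|}{5.2/\sqrt{100}} = \frac{1.56}{0.52} = 3,
\qquad c = z_{0.975} = 1.96.

% Two-sample z-test with known sigmas: reject H0 since T > c
T = \frac{|\bar{x}_1-\bar{x}_2|}{\sqrt{\sigma_1^2/n_1 + \sigma_2^2/n_2}}
  = \frac{|130-140|}{\sqrt{80/40 + 100/50}} = \frac{10}{2} = 5,
\qquad c = z_{0.995} \approx 2.58.
```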
### Correlation and simple linear regression
``` r
library(thongke)
library(thongke.dapan)
library(testthat)
library(magrittr)
test_that("Test 2 hàm trong file regression.R", {
generate_data(
calculate_sum,
list(
x = c(80, 85, 88, 90, 95, 92, 82, 75, 78, 85),
y = c(2.4, 2.8, 3.3, 3.1, 3.7, 3, 2.5, 2.3, 2.8, 3.1)
)
) %>%
answer_calculate_sum(
intro = "Gọi $X$ và $Y$ lần lượt là ..."
) %>%
cat() %>%
print()
generate_data(
correlation,
list(
x = c(80, 85, 88, 90, 95, 92, 82, 75, 78, 85),
y = c(2.4, 2.8, 3.3, 3.1, 3.7, 3, 2.5, 2.3, 2.8, 3.1)
)
) %>%
answer_correlation() %>%
cat() %>%
print()
# Expected: r = 0.858
print("===================================================")
generate_data(
linear_regression,
list(
x = c(400, 600, 500, 600, 400, 500),
y = c(44, 47, 48, 48, 43, 46)
)
) %>%
answer_linear_regression(
intro = "Gọi $X$ và $Y$ lần lượt là ..."
) %>%
cat() %>%
print()
# Expected: y = 0.02x + 36; a 700-page book is predicted to cost 50 thousand (VND)
print("===================================================")
generate_data(
linear_regression_predict,
list(
x = c(400, 600, 500, 600, 400, 500),
y = c(44, 47, 48, 48, 43, 46),
value = 700
)
) %>%
answer_linear_regression_predict(
answer = "Giá sách là:",
value_unit = "nghìn đồng"
) %>%
cat() %>%
print()
expect_equal(1, 1)
})
```
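The regression answer in the comment can be verified from the sums of the example data `x = c(400, 600, 500, 600, 400, 500)`, `y = c(44, 47, 48, 48, 43, 46)`:

```latex
\bar{x} = 500,\quad \bar{y} = 46,\quad
\sum x_i y_i = 138800,\quad \sum x_i^2 = 1540000,
\\
b = \frac{\sum x_i y_i - n\bar{x}\bar{y}}{\sum x_i^2 - n\bar{x}^2}
  = \frac{138800 - 6\cdot 500\cdot 46}{1540000 - 6\cdot 500^2}
  = \frac{800}{40000} = 0.02,
\\
a = \bar{y} - b\bar{x} = 46 - 0.02\cdot 500 = 36,
\qquad \hat{y}(700) = 0.02\cdot 700 + 36 = 50.
```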
### Other features
``` r
library(thongke)
library(thongke.dapan)
library(testthat)
library(magrittr)
test_that("Test một số hàm trong file others.R", {
# You can create a custom answer
answer_custom(
templates = c(
"Đáp án 1 {{data1}}",
"Đáp án 2 {{data2}}",
"Đáp án 3 {{data3}}"),
scores = c(1.25, 0.5, 1),
data = list(
data1 = "Haha",
data2 = "hihi",
data3 = "huhu"
)
) %>%
cat() %>%
print()
answer_1 <- generate_data(
linear_regression,
list(
x = c(400, 600, 500, 600, 400, 500),
y = c(44, 47, 48, 48, 43, 46)
)
) %>%
answer_linear_regression(
intro = "Gọi $X$ và $Y$ lần lượt là ..."
)
# Đáp số: y=0.02x + 36, giá trị sách 700 trang là 50 nghìn
answer_2 <- generate_data(
linear_regression_predict,
list(
x = c(400, 600, 500, 600, 400, 500),
y = c(44, 47, 48, 48, 43, 46),
value = 700
)
) %>%
answer_linear_regression_predict(
answer = "Giá sách là:",
value_unit = "nghìn đồng"
)
# You can also merge answers.
answer_merge(c(answer_1, answer_2)) %>%
cat() %>%
print()
expect_equal(1, 1)
})
```
# Note
- If you want to set comma as the decimal separator, add this code:
``` r
options(OutDec=",")
```
- In LaTeX, use the `icomma` package to display numbers with a comma as the
  decimal separator
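A minimal preamble example (assuming the `icomma` package from CTAN is installed):

```latex
\usepackage{icomma}
% In math mode, $3,14$ now typesets the comma as a decimal separator
% (no spurious space after it), matching R's OutDec="," output.
```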
# License
MIT License
Copyright (c) 2021-2022 vungocbinh2009
Permission is hereby granted, free of charge, to any person obtaining a
copy of this software and associated documentation files (the
“Software”), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be included
in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS
OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
# clickhouse-offline-build-plugin
### Why a new ClickHouse plugin?
The ClickHouse plugin bundled with Waterdrop uses JDBC to insert data into the ClickHouse cluster.
This can put a lot of pressure on the ClickHouse cluster, and retries can cause duplicate inserts.
### How does the new plugin work?
Essentially, this plugin uses ClickHouse's own `clickhouse local` command to build the `part` files locally, then uploads the built `part` files to HDFS.
A ClickHouse node then pulls the `part` files down from HDFS into the `detached` directory and calls `alter table t attach part 'part_name'` to load the data.
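On the ClickHouse side, the attach step boils down to something like the following (a sketch; the real part name comes from the files built by `clickhouse local`, and the data path depends on your installation):

```sql
-- Copy the built part from HDFS into the table's detached directory, e.g.:
--   /var/lib/clickhouse/data/test/ontime/detached/<part_name>/
-- Then load it without re-inserting any rows:
ALTER TABLE test.ontime ATTACH PART '<part_name>';
```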
### How to use it?
1. Build with Maven (`mvn clean package`) to produce clickhouse-offline-build-1.0.jar
2. Prepare plugins.tar.gz with the following directory layout
```
plugins/
plugins/clickhouse-offline-build/
plugins/clickhouse-offline-build/files/
plugins/clickhouse-offline-build/files/clickhouse
plugins/clickhouse-offline-build/files/config.xml
plugins/clickhouse-offline-build/files/build.sh
plugins/clickhouse-offline-build/lib/clickhouse-offline-build-1.0.jar
```
3. Prepare the environment following the official documentation: https://interestinglab.github.io/waterdrop-docs/#/zh-cn/v1/quick-start
4. Use the following output configuration
```
output {
io.github.interestinglab.waterdrop.output.clickhouse.ClickhouseOfflineBuild {
host = "${clickhouse_cluster}:8023"
database = "${database}"
table = "${tableName}"
username = "${account}"
password = "${password}"
hdfsUser = "${hdfsUser}"
tmpUploadPath = "${tmpUploadPath}"
defaultValues = {"a":1} #optional
}
}
```
### Running the example locally
0. Make sure ClickHouse is installed and the `test.ontime` table has been created
1. Download and extract Waterdrop following the official documentation: https://interestinglab.github.io/waterdrop-docs/#/zh-cn/v1/installation
2. Prepare a small amount of `ontime` data: https://clickhouse.tech/docs/en/getting-started/example-datasets/ontime/
3. Create the file config/application.conf
```shell
spark {
spark.streaming.batchDuration = 5
spark.app.name = "Waterdrop-sample"
spark.ui.port = 13000
spark.executor.instances = 2
spark.executor.cores = 1
spark.executor.memory = "1g"
}
input {
file {
path = "path_to_ontime_data.csv.gz"
format = "csv"
options.header = "true"
result_table_name = "ontime"
}
}
filter {
}
output {
io.github.interestinglab.waterdrop.output.clickhouse.ClickhouseOfflineBuild {
host = "127.0.0.1:8023"
database = "test"
table = "ontime"
username = "default"
password = "default_password"
hdfsUser = "test_user"
defaultValues = {"a":"b"}
tmpUploadPath = "hdfs://remote_hdfs_host:9000/clickhouse-build/test/ontime/"
}
}
```
4. Run the job locally
```shell
./bin/start-waterdrop.sh --config config/application.conf --deploy-mode client --master "local[2]"
```
5. On success, the data is at `hdfs://remote_hdfs_host:9000/clickhouse-build/test/ontime/`
6. On a ClickHouse node, run: `sh attach.sh test ontime hdfs://remote_hdfs_host:9000/clickhouse-build/test/ontime/`
7. Verify the data: `select count(*) from test.ontime`
---
title: Build a C# (.NET) App with CockroachDB
summary: Learn how to use CockroachDB from a simple C# (.NET) application with a low-level client driver.
toc: true
twitter: true
---
This tutorial shows you how to build a simple C# (.NET) application with CockroachDB using a PostgreSQL-compatible driver.
We have tested the [.NET Npgsql driver](http://www.npgsql.org/) enough to claim **beta-level** support, so that driver is featured here. If you encounter problems, please [open an issue](https://github.com/cockroachdb/cockroach/issues/new) with details to help us make progress toward full support.
## Before You Begin
Make sure you have already [installed CockroachDB](install-cockroachdb.html) and the <a href="https://www.microsoft.com/net/download/" data-proofer-ignore>.NET SDK</a> for your OS.
## Step 1. Create a .NET project
{% include copy-clipboard.html %}
~~~ shell
$ dotnet new console -o cockroachdb-test-app
~~~
{% include copy-clipboard.html %}
~~~ shell
$ cd cockroachdb-test-app
~~~
The `dotnet` command creates a new app of type `console`. The `-o` parameter creates a directory named `cockroachdb-test-app` where your app will be stored and populates it with the required files. The `cd cockroachdb-test-app` command puts you into the newly created app directory.
## Step 2. Install the Npgsql driver
Install the latest version of the [Npgsql driver](https://www.nuget.org/packages/Npgsql/) into the .NET project using the built-in NuGet package manager:
{% include copy-clipboard.html %}
~~~ shell
$ dotnet add package Npgsql
~~~
## Step 3. Start a single-node cluster
For the purpose of this tutorial, you need only one CockroachDB node running in insecure mode:
{% include copy-clipboard.html %}
~~~ shell
$ cockroach start \
--insecure \
--store=hello-1 \
--host=localhost
~~~
## Step 4. Create a user
In a new terminal, as the `root` user, use the [`cockroach user`](create-and-manage-users.html) command to create a new user, `maxroach`.
{% include copy-clipboard.html %}
~~~ shell
$ cockroach user set maxroach --insecure
~~~
## Step 5. Create a database and grant privileges
As the `root` user, use the [built-in SQL client](use-the-built-in-sql-client.html) to create a `bank` database.
{% include copy-clipboard.html %}
~~~ shell
$ cockroach sql --insecure -e 'CREATE DATABASE bank'
~~~
Then [grant privileges](grant.html) to the `maxroach` user.
{% include copy-clipboard.html %}
~~~ shell
$ cockroach sql --insecure -e 'GRANT ALL ON DATABASE bank TO maxroach'
~~~
## Step 6. Run the C# code
Now that you have a database and a user, you'll run code to create a table and insert some rows, and then you'll run code to read and update values as an atomic [transaction](transactions.html).
### Basic Statements
Replace the contents of `cockroachdb-test-app/Program.cs` with the following code:
{% include copy-clipboard.html %}
~~~ csharp
{% include {{ page.version.version }}/app/basic-sample.cs %}
~~~
Then run the code to connect as the `maxroach` user and execute some basic SQL statements, creating a table, inserting rows, and reading and printing the rows:
{% include copy-clipboard.html %}
~~~ shell
$ dotnet run
~~~
The output should be:
~~~
Initial balances:
account 1: 1000
account 2: 250
~~~
### Transaction (with retry logic)
Open `cockroachdb-test-app/Program.cs` again and replace the contents with the following code:
{% include copy-clipboard.html %}
~~~ csharp
{% include {{ page.version.version }}/app/txn-sample.cs %}
~~~
Then run the code to again connect as the `maxroach` user but this time execute a batch of statements as an atomic transaction to transfer funds from one account to another, where all included statements are either committed or aborted:
{% include copy-clipboard.html %}
~~~ shell
$ dotnet run
~~~
{{site.data.alerts.callout_info}}With the default <code>SERIALIZABLE</code> isolation level, CockroachDB may require the <a href="transactions.html#transaction-retries">client to retry a transaction</a> in case of read/write contention. CockroachDB provides a generic <strong>retry function</strong> that runs inside a transaction and retries it as needed. You can copy and paste the retry function from here into your code.{{site.data.alerts.end}}
The output should be:
~~~
Initial balances:
account 1: 1000
account 2: 250
Final balances:
account 1: 900
account 2: 350
~~~
However, if you want to verify that funds were transferred from one account to another, use the [built-in SQL client](use-the-built-in-sql-client.html):
{% include copy-clipboard.html %}
~~~ shell
$ cockroach sql --insecure -e 'SELECT id, balance FROM accounts' --database=bank
~~~
~~~
+----+---------+
| id | balance |
+----+---------+
| 1 | 900 |
| 2 | 350 |
+----+---------+
(2 rows)
~~~
## What's Next?
Read more about using the [.NET Npgsql driver](http://www.npgsql.org/).
{% include {{ page.version.version }}/app/see-also-links.md %}
---
title: "Taking your software on a static ride"
date: 2016-06-21 08:00:00
tags: [azure,static content,git]
---
If you haven't seen the change recently, I've changed, yet again, my blogging engine. However, this isn't a post about blogging or engines.
It's a post about static content.
### What was there before
Previously, I was on the [MiniBlog][1] blog engine written by [Mads Kristensen][2].
Despite a few issues I've found with it, the main argument for it is **Simple, flexible and powerful**. It's written on the GitHub repository after all.
### What is in place now
I currently run something called [Hexo][3]. It's a NodeJS static blog generator with support for themes and other modules if you are interested in customizing it.
### Why go static?
There is an inherent simplicity in having pure `.html` files as your content and simple folders as your *routing engine*. On top of that, most web servers can handle local files very efficiently, and they can output the proper caching HTTP headers without needing to reach other files or a database.
As for flexible, well, it's HTML files. If I want images or any other files, I just include them in the hierarchy of the blog.
You only need to configure your host a bit if you want to benefit from cached files.
### Caching
The `web.config` for this site contains the following:
```xml
<configuration>
<system.webServer>
<staticContent>
<clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="365:00:00" />
</staticContent>
</system.webServer>
</configuration>
```
You can find similar instructions for [nginx][4] or other servers/host pretty much everywhere.
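For nginx, a roughly equivalent rule would be the following (a sketch; adjust the location block to your own config layout):

```nginx
location / {
    # Let clients cache static files aggressively, like the IIS rule above.
    expires 365d;
    add_header Cache-Control "public";
}
```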
### Azure & GitHub
The first step is to create an Azure Web App. If you have a custom domain, you will need to bump your plan to at least Shared. But you are just testing it out, it works with the Free plan.
* Once created, go into your Web application and click on settings.
* Navigate to Deployment source
* Choose source
* Pick GitHub and setup your authorization and pick a project/branch
This will synchronize your GitHub branch with your website. Every time there's a change on your branch, a new deployment will start.
The last piece of the puzzle is to actually generate your site with your static site generator. In my scenario, I needed to run `hexo generate`. When going through this process, you can inject a custom script into your repository to run custom commands while deploying. Mine can be found [right here][5].
### Result?
The final result is that I have a pure HTML/JS/CSS site that is running on a shared hosting.
For all I know, it could run off of a [Raspberry Pi][6] or an [Arduino][7] plugged into the internet and nobody would know the difference. If my website ever becomes too popular for its own good, it will take a lot of visitors to slow it down. It is, after all, only serving static files.
### What changed in the architecture?
So what is a blog engine doing? It's reading post metadata (content, publishing date, authors, etc.) and converting it to HTML pages that the end-user is going to see. The blog engine runs on a web framework (ASP.NET MVC, Django, etc.), and the framework runs on a core language/runtime. Finally, it hands the content to the web server/host, which transfers it to the end-user.
What we've done is pre-generate all the possible results that the blog engine could ever produce and store them in static files (HTML). We've converted Dynamic to Static.
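The pre-generation idea can be sketched in a few lines. This is an illustrative toy generator, not Hexo's actual code; the post data and file layout here are made up:

```python
from pathlib import Path

# Illustrative only: a "blog engine" reduced to a one-shot generator.
# Post metadata goes in, static HTML files come out; the web server then
# serves plain files without touching a framework or a database.
posts = [
    {
        "slug": "taking-your-software-on-a-static-ride",
        "title": "Taking your software on a static ride",
        "body": "<p>Static content is simple to serve.</p>",
    },
]

def render(post):
    # What a dynamic engine would do per request, done once at build time.
    return (
        "<html><head><title>{title}</title></head>"
        "<body><h1>{title}</h1>{body}</body></html>"
    ).format(**post)

def generate(posts, out_dir):
    out = Path(out_dir)
    for post in posts:
        page_dir = out / post["slug"]          # folders act as the routing engine
        page_dir.mkdir(parents=True, exist_ok=True)
        (page_dir / "index.html").write_text(render(post), encoding="utf-8")

generate(posts, "public")
```

Serving is then just pointing any web server at the `public` folder.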
We had to drop a few features to get to this point, like comments, live editing, and a pretty management UI... in my case, this is worth it. I'm technologically savvy enough to create a new post in my system without slowing myself down. Like every architectural decision, it's all a matter of tradeoffs.
### When should I use static content?
When data doesn't need to change too often. Most CMS, blogs and even e-commerce product description fit this bill. This could also be when you don't want to expose your database to external servers and instead just push content files.
You should also use static content if you need to host files on a multitude of different server OSes.
### But what to do with less tech-savvy users?
The same process can still be reused. Maybe not with a GitHub/Azure workflow.
However, you could always use the Azure Blob Storage and rewrite, using UrlRewrite, your URLs to the blob storage directly.
```xml
<rewrite>
<rules>
<rule name="imagestoazure">
<match url="images/(.*)" />
<action type="Redirect" url="https://??????.vo.msecnd.net/images/{R:1}" />
</rule>
</rules>
</rewrite>
```
That way, you can offer a rich UI to your users, re-generate your content when it changes, and upload it asynchronously to the blob storage. Your original website on Azure? No need to even take it down. The content changes and the website keeps on forwarding requests.
### Conclusion
Yes, there are more moving pieces. But my belief is that it's only a tooling problem we have now. We have tools that were built to work with dynamic web sites. We're only starting to build them for static content.
Worse, we are thinking in MVC, cshtml and databases instead of just thinking about what the end client truly wants: a web page with some content that hasn't changed in a long time.
[1]: https://github.com/madskristensen/MiniBlog
[2]: http://madskristensen.net/
[3]: https://hexo.io/
[4]: https://serversforhackers.com/nginx-caching
[5]: https://github.com/MaximRouiller/blog.decayingcode.com/blob/master/deploy.cmd
[6]: https://www.raspberrypi.org/documentation/remote-access/web-server/nginx.md
[7]: https://startingelectronics.org/software/arduino/web-server/basic-01/
---
title: Start Kong Gateway
---
In this section, you'll learn how to install and manage your Kong Gateway instance. First, you'll start Kong Gateway to gain access to its Admin
API, where you'll manage entities including Services, Routes, and Consumers.
## Start Kong Gateway using Docker with a database
One quick way to get Kong Gateway up and running is by using [Docker with a PostgreSQL database](/gateway/{{page.kong_version}}/install-and-run/docker). We recommend this method to test out basic Kong Gateway functionality.
For a comprehensive list of installation options, see our [Install page](/gateway/{{page.kong_version}}/install-and-run/).
1. Create a Docker network:
```bash
docker network create kong-net
```
2. Run a PostgreSQL container:
```bash
docker run -d --name kong-database \
--network=kong-net \
-p 5432:5432 \
-e "POSTGRES_USER=kong" \
-e "POSTGRES_DB=kong" \
-e "POSTGRES_PASSWORD=kong" \
postgres:9.6
```
{% include_cached /md/enterprise/cassandra-deprecation.md %}
Data sent through the Admin API is stored in Kong's [datastore][datastore-section] (Kong
supports PostgreSQL and Cassandra).
3. Prep your database:
```bash
docker run --rm \
--network=kong-net \
-e "KONG_DATABASE=postgres" \
-e "KONG_PG_HOST=kong-database" \
-e "KONG_PG_USER=kong" \
-e "KONG_PG_PASSWORD=kong" \
-e "KONG_CASSANDRA_CONTACT_POINTS=kong-database" \
kong:latest kong migrations bootstrap
```
4. Start Kong:
```bash
docker run -d --name kong \
--network=kong-net \
-e "KONG_DATABASE=postgres" \
-e "KONG_PG_HOST=kong-database" \
-e "KONG_PG_USER=kong" \
-e "KONG_PG_PASSWORD=kong" \
-e "KONG_CASSANDRA_CONTACT_POINTS=kong-database" \
-e "KONG_PROXY_ACCESS_LOG=/dev/stdout" \
-e "KONG_ADMIN_ACCESS_LOG=/dev/stdout" \
-e "KONG_PROXY_ERROR_LOG=/dev/stderr" \
-e "KONG_ADMIN_ERROR_LOG=/dev/stderr" \
-e "KONG_ADMIN_LISTEN=0.0.0.0:8001, 0.0.0.0:8444 ssl" \
-p 8000:8000 \
-p 8443:8443 \
-p 127.0.0.1:8001:8001 \
-p 127.0.0.1:8444:8444 \
kong:latest
```
5. Navigate to `http://localhost:8001/`.
## Kong default ports
By default, Kong listens on the following ports:
- `8000`: listens for incoming `HTTP` traffic from your
clients, and forwards it to your upstream services.
- `8001`: [Admin API][API] listens for calls from the command line over `HTTP`.
- `8443`: listens for incoming HTTPS traffic. This port has a
similar behavior to `8000`, except that it expects `HTTPS`
traffic only. This port can be disabled via the configuration file.
- `8444`: [Admin API][API] listens for `HTTPS` traffic.
## Lifecycle commands
{:.note}
> **Note**: If you are using Docker, [`exec`](https://docs.docker.com/engine/reference/commandline/exec) into the Docker container to use these commands.
Stop Kong Gateway using the [stop][CLI] command:
```bash
kong stop
```
Reload Kong Gateway using the [reload][CLI] command:
```bash
kong reload
```
Start Kong Gateway using the [start][CLI] command:
```bash
kong start
```
## Next Steps
Now that you have Kong Gateway running, you can interact with the Admin API.
To begin, go to [Configuring a Service ›][configuring-a-service]
[configuration-loading]: /gateway/{{page.kong_version}}/reference/configuration/#configuration-loading
[CLI]: /gateway/{{page.kong_version}}/reference/cli
[API]: /gateway/{{page.kong_version}}/admin-api
[datastore-section]: /gateway/{{page.kong_version}}/reference/configuration/#datastore-section
[configuring-a-service]: /gateway/{{page.kong_version}}/get-started/quickstart/configuring-a-service
---
layout: post
title: 240
---
Today I witnessed the birth of a nation.
I don't feel capable of describing it: I'm no historian, nor a journalist. Just a shy student, the kind who buries himself in the library until you forget his presence, hidden behind shelves full of knowledge only a couple of years out of date. (I say that with some bitterness, because that couple of years makes a big difference. But that's beside the point.) I suppose I can only tell what I have seen.
It was in the library that it all began. From my corner I overheard a study group in the corner farthest from the librarian's desk. It caught my attention, because the group area is on the other side, but when I got closer the conversation was too interesting to stop.
Two conspirators had come from outside: one of them carried a plan, which an Architecture student was examining and commenting on.
"This must be the least guarded wall, and this the least secured exit."
I kept quiet. What was I going to do, call the police and die as a snitch? Interrupt the meeting and die for knowing too much? I preferred to stay out of it, and to know ahead of time what would happen.
And happen it did.
Since last week, the Prime Minister has been missing, and the outer regions have already proclaimed their independence. Ours is the farthest away, and although chaos reigns outside, the university still stands, and the library stays silent, with its intrigues and meddling students.
## Args
- `--client`: type of client to use, one of `grpc`, `yoshi`, `gcsio-grpc`, `gcsio-http`
- `--calls`: number of calls to execute
- `--bkt`: GCS bucket name
- `--obj`: GCS object name
- `--dp`: whether to use the DirectPath code path
- `--method`: type of call to the GCS server, one of `read`, `write`, `random`
- `--size`: total size in KB for read/write
- `--buffSize`: buffer size in KB for read
- `--conscrypt`: whether to enable Conscrypt
- `--threads`: whether to use a thread pool with a fixed number of threads
## Usage
Build:
```sh
./gradlew build
```
Example test run:
```sh
./gradlew run --args="--client=grpc --bkt=gcs-grpc-team-weiranf --obj=1kb --calls=100 --method=read"
```
```sh
./gradlew run --args="--client=http --bkt=gcs-grpc-team-weiranf --obj=upload-1kb --calls=100 --method=write --size=1"
```
```sh
./gradlew run --args="--client=gcsio-grpc --bkt=gcs-grpc-team-weiranf --obj=random-1gb --size=1048576 --buffSize=100 --method=random --calls=3"
```
UPGRADE FROM 1.10 to 2.0
========================
#### ActionBundle
- Class `Oro\Bundle\ActionBundle\Layout\Block\Type\ActionLineButtonsType` was removed -> block type `action_buttons` replaced with DI configuration.
- Added class `Oro\Bundle\ActionBundle\Layout\DataProvider\ActionButtonsProvider` - layout data provider.
- Default value for parameter `applications` in operation configuration renamed from `backend` to `default`.
#### WorkflowBundle
- Class `Oro\Bundle\WorkflowBundle\Model\WorkflowManager`
- constructor signature was changed; it now takes the following arguments:
- `WorkflowRegistry` $workflowRegistry,
- `DoctrineHelper` $doctrineHelper,
- `EventDispatcherInterface` $eventDispatcher,
- `WorkflowEntityConnector` $entityConnector
- method `getApplicableWorkflow` was removed -> new method `getApplicableWorkflows` with `$entity` instance was added instead.
- method `getApplicableWorkflowByEntityClass` was removed. Use `Oro\Bundle\WorkflowBundle\Model\WorkflowManager::getApplicableWorkflows` method instead.
- method `hasApplicableWorkflowByEntityClass` was removed. Use method `hasApplicableWorkflows` instead with an entity instance.
- method `getWorkflowItemByEntity` was removed -> new method `getWorkflowItem` with arguments `$entity` and `$workflowName` to retrieve an `WorkflowItem` instance for corresponding entity and workflow.
- method `getWorkflowItemsByEntity` was added to retrieve all `WorkflowItems` instances from currently active (running) workflows for the entity provided as single argument.
- method `hasWorkflowItemsByEntity` was added to get whether entity provided as single argument has any active (running) workflows with its `WorkflowItems`.
- method `setActiveWorkflow` was removed -> method `activateWorkflow` with just one argument as `$workflowName` should be used instead. The method now just ensures that provided workflow should be active.
- now the method emits event `Oro\Bundle\WorkflowBundle\Event\WorkflowEvents::WORKFLOW_ACTIVATED` if workflow was activated.
- method `deactivateWorkflow` changed its signature. Now it handle single argument as `$workflowName` to ensure that provided workflow is inactive.
- now the method emits event `Oro\Bundle\WorkflowBundle\Event\WorkflowEvents::WORKFLOW_DEACTIVATED` if workflow was deactivated.
- method `resetWorkflowData` was added with `WorkflowDefinition $workflowDefinition` as single argument. It removes from database all workflow items related to corresponding workflow.
- method `resetWorkflowItem` was removed
- Entity configuration (`@Config()` annotation) sub-node `workflow.active_workflow` was removed in favor of the `WorkflowDefinition` field `active`. Now, for proper workflow activation through configuration, you should use `defaults.active: true` in the corresponding workflow YAML config.
- Class `Oro\Bundle\WorkflowBundle\Model\Workflow` changed constructor signature. First argument `EntityConnector` was replaced by `DoctrineHelper`
- method `resetWorkflowData` was removed - use `Oro\Bundle\WorkflowBundle\Model\WorkflowManager::resetWorkflowData` instead.
- Repository `Oro\Bundle\WorkflowBundle\Entity\Repository\WorkflowItemRepository` signature was changed for method `resetWorkflowData` :
* it requires instance of `Oro\Bundle\WorkflowBundle\Entity\WorkflowDefinition` as first argument
* argument `excludedWorkflows` was removed;
- Changed signature of `@transit_workflow` action. Added `workflow_name` parameter as a third parameter in inline call. **Be aware** that previously the third parameter was `data`; now `data` is the fourth one.
- Service `oro_workflow.entity_connector` (`Oro\Bundle\WorkflowBundle\Model\EntityConnector.php`) removed;
- Parameter `oro_workflow.entity_connector.class` removed;
- Removed parameter `EntityConnector $entityConnector` from constructor of `Oro\Bundle\WorkflowBundle\EventListener\WorkflowItemListener`;
- Removed parameter `EntityConnector $entityConnector` from constructor of `Oro\Bundle\WorkflowBundle\Model\TriggerScheduleOptionsVerifier`;
- Removed form type `Oro\Bundle\WorkflowBundle\Form\Type\ApplicableEntitiesType`;
- Service `oro_workflow.entity_connector` (`Oro\Bundle\WorkflowBundle\Model\WorkflowEntityConnector`) was added with purpose to check whether entity can be used in workflow as related.
- Now an entity can have more than one active workflow.
- Workflow activation is now provided through the `WorkflowManager::activateWorkflow` and `WorkflowManager::deactivateWorkflow` methods, as well as via the boolean workflow YAML configuration node `defaults.active`, which loads the default activation state from configuration.
    **NOTE**: Make activations only through the corresponding `WorkflowManager` methods.
    Do **NOT** make direct changes through the `WorkflowDefinition::setActive` setter,
    as `WorkflowManager` is responsible for emitting the activation events described above.
- Added trait `Oro\Bundle\WorkflowBundle\Helper\WorkflowQueryTrait` with methods:
* `joinWorkflowItem` - to easily join workflowItem to an entity with QueryBuilder
* `joinWorkflowStep` - to easily join workflowStep to an entity with QueryBuilder through optionally specified workflowItem alias
    Note: `joinWorkflowStep` internally checks whether a workflowItem alias join is already present in the QueryBuilder instance and uses it, or creates a new one otherwise.
* `addDatagridQuery` - for datagrid listeners to join workflow fields (especially workflowStatus)
* Now an entity can have more than one active workflow.
* The entity config `workflow.active_workflow` node was changed to a `workflow.active_workflows` array, as it now supports multiple active workflows.
* Added single point of workflow activation/deactivation through `@oro_workflow.manager.system_config` (`Oro\Bundle\WorkflowBundle\Model\WorkflowSystemConfigManager`) that emits corresponding events (`'oro.workflow.activated'`, `'oro.workflow.deactivated'`).
* Activation or deactivation of a workflow now triggers removal of all data in the affected entity flows, so when a workflow is deactivated or reactivated its WorkflowItems will be removed from storage.
* Controller methods (REST as well) for activation/deactivation now take `workflowName` as the `WorkflowDefinition` identifier instead of the related entity class string.
* Steps retrieval for an entity now returns the steps for all currently active workflows of the related entity, each entry carrying a `workflow` node with the name of the workflow the steps in `stepsData` belong to. Example: `{"workflowName":{"workflow": "workflowName", "steps":["step1", "step2"], "currentStep": "step1"}}`
* User Interface. If an entity has multiple workflows currently active, the entity view page displays transition buttons for each transition available from all active workflows. The flow chart will show as many flows as there are workflows started for the entity.
* For workflow activation (on the workflows datagrid or the workflow view page) a popup is displayed with a field that lets the user pick the workflows that should not remain active and are supposed to be deactivated (e.g. replaced with the current workflow).
* Entity datagrids with a workflow steps column now show the list of currently started workflows with their steps; filtering by started workflows and their steps is available as well.
* Entity relations for fields `workflowItem` and `workflowStep` (e.g. implementation of `WorkflowAwareInterface`) are forbidden for related entity.
* Added `Oro\Bundle\WorkflowBundle\Provider\WorkflowVirtualRelationProvider` class for relations between entities and workflows. Actively used in reports.
* Added support for string identifiers in entities. Previously only integers were supported as primary keys for the workflow related entity.
* Removed `Oro\Bundle\WorkflowBundle\Model\EntityConnector` class and its usage.
* Removed the `Oro\Bundle\WorkflowBundle\Model\EntityConnector` dependency from the `Oro\Bundle\WorkflowBundle\Model\Workflow` class.
* Added `Oro\Bundle\WorkflowBundle\Model\WorkflowEntityConnector` class with purpose to check whether entity can be used in workflow as related.
* Entity `Oro\Bundle\WorkflowBundle\Entity\WorkflowItem` now has `entityClass` field with its related workflow entity class name.
* Service '@oro_workflow.manager' (class `Oro\Bundle\WorkflowBundle\Model\WorkflowManager`) was refactored in favor of multiple workflows support.
* Method `Oro\Bundle\WorkflowBundle\Model\WorkflowManager::getApplicableWorkflowByEntityClass` was deprecated and its usage will raise an exception. Usage of `Oro\Bundle\WorkflowBundle\Model\WorkflowManager::getApplicableWorkflows` is recommended instead.
* Interface `Oro\Bundle\WorkflowBundle\Entity\WorkflowAwareInterface` marked as deprecated. Its usage is forbidden.
* Trait `Oro\Bundle\WorkflowBundle\Entity\WorkflowAwareTrait` marked as deprecated. Its usage is forbidden.
* Updated class constructor `Oro\Bundle\WorkflowBundle\Model\Workflow`, first argument is `Oro\Bundle\EntityBundle\ORM\DoctrineHelper`.
* Removed class `Oro\Bundle\WorkflowBundle\Field\FieldProvider` and its usages.
* Removed class `Oro\Bundle\WorkflowBundle\Field\FieldGenerator` and its usages.
* Updated all Unit Tests to support new `Oro\Bundle\WorkflowBundle\Model\Workflow`
* Definition for `oro_workflow.prototype.workflow` was changed, removed `Oro\Bundle\WorkflowBundle\Model\EntityConnector` dependency
* Updated class constructor `Oro\Bundle\WorkflowBundle\Configuration\WorkflowDefinitionConfigurationBuilder`, removed second argument `$fieldGenerator`
* Updated REST callback `oro_api_workflow_entity_get`, now it uses `oro_entity.entity_provider` service to collect entities and fields
* Removed following services:
* oro_workflow.field_generator
* oro_workflow.exclusion_provider
* oro_workflow.entity_provider
* oro_workflow.entity_field_provider
* oro_workflow.entity_field_list_provider
* Removed the `Oro\Bundle\WorkflowBundle\Field\FieldGenerator` dependency from class `Oro\Bundle\WorkflowBundle\Model\EntityConnector`
* Removed the `Oro\Bundle\WorkflowBundle\Field\FieldGenerator` dependency from class `Oro\Bundle\WorkflowBundle\Datagrid\WorkflowStepColumnListener`; all required constants have now been moved to this class
* Added new method `getActiveWorkflowsByEntityClass`, which returns all workflows found for an entity class
* Added new method `hasActiveWorkflowsByEntityClass`, which indicates whether an entity class has one or more linked workflows
* Removed method `getActiveWorkflowByEntityClass` from `Oro\Bundle\WorkflowBundle\Model\WorkflowRegistry`; use `getActiveWorkflowsByEntityClass` instead
* Removed method `hasActiveWorkflowByEntityClass` from `Oro\Bundle\WorkflowBundle\Model\WorkflowRegistry`; use `hasActiveWorkflowsByEntityClass` instead
* Class `Oro\Bundle\WorkflowBundle\Form\EventListener\InitActionsListener` renamed to `Oro\Bundle\WorkflowBundle\Form\EventListener\FormInitListener`.
* Service 'oro_workflow.form.event_listener.init_actions' renamed to `oro_workflow.form.event_listener.form_init`.
* Fourth constructor argument of class `Oro\Bundle\WorkflowBundle\Form\Type\WorkflowAttributesType` changed from `InitActionsListener $initActionsListener` to `FormInitListener $formInitListener`.
* Added `preconditions` to process definition for use instead of `pre_conditions`
* Added `preconditions` to transition definition for use instead of `pre_conditions`
* Added `form_init` to transition definition for use instead of `init_actions`
* Added `actions` to transition definition for use instead of `post_actions`
* Definitions `pre_conditions`, `init_actions`, `post_actions` marked as deprecated
- Added workflow definition configuration node `exclusive_active_groups` to determine exclusiveness of the active state in cases of conflicting workflows in the system.
- Added workflow definition configuration node `exclusive_record_groups` to determine exclusiveness of a currently running workflow for a related entity by named group.
- Added a `WorkflowDefinition` property with workflow YAML configuration node `priority` to be able to regulate the order of workflow acceptance in cases of cross-functionality.
For example, with `workflow_record_group`, two workflows in one group with an auto start transition will be sorted by priority and only the one with the higher priority value will be started.
* Removed service `@oro_workflow.manager.system_config` and its class `Oro\Bundle\WorkflowBundle\Model\WorkflowSystemConfigManager`, as there is now no entity configuration for active workflows. Activation and deactivation of a workflow should now be managed through WorkflowManager (`Oro\Bundle\WorkflowBundle\Model\WorkflowManager` - `@oro_workflow.manager`)
* Method `getApplicableWorkflows` in `Oro\Bundle\WorkflowBundle\Model\WorkflowManager` now accepts ONLY entity instance. Class name support has been removed.
* Added new interface `WorkflowApplicabilityFilterInterface` with method `Oro\Bundle\WorkflowBundle\Model\WorkflowManager::addApplicabilityFilter(WorkflowApplicabilityFilterInterface $filter)` for ability to additionally filter applicable workflows for an entity.
Used now with the new class `Oro\Bundle\WorkflowBundle\Model\WorkflowExclusiveRecordGroupFilter`, which implements part of the `exclusive_record_groups` functionality.
* Added a `priority` property to `Oro\Bundle\WorkflowBundle\Entity\WorkflowDefinition` and to the workflow configuration to be able to configure priorities in workflow applications.
* Added `isActive` property to `Oro\Bundle\WorkflowBundle\Entity\WorkflowDefinition` instead of EntityConfig
* Added a `groups` property to `Oro\Bundle\WorkflowBundle\Entity\WorkflowDefinition` that contains `WorkflowDefinition::GROUP_TYPE_EXCLUSIVE_ACTIVE` and `WorkflowDefinition::GROUP_TYPE_EXCLUSIVE_RECORD` array nodes with the corresponding groups that the `WorkflowDefinition` belongs to.
* Added methods `getExclusiveRecordGroups` and `getExclusiveActiveGroups` to `Oro\Bundle\WorkflowBundle\Entity\WorkflowDefinition`
* `getName`, `getLabel` and `isActive` methods of `Oro\Bundle\WorkflowBundle\Model\Workflow` are now proxy methods to its `Oro\Bundle\WorkflowBundle\Entity\WorkflowDefinition` instance.
* Removed method `getStartTransitions` from `Oro\Bundle\WorkflowBundle\Model\WorkflowManager` - `$workflow->getTransitionManager()->getStartTransitions()` can be used instead
* Entity config `workflow.active_workflows` was removed. Use the workflow configuration boolean node `defaults.active` instead.
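For orientation, the YAML nodes mentioned above (`defaults.active`, `priority`, `exclusive_active_groups`, `exclusive_record_groups`) combine roughly as follows. This is a hedged sketch: the workflow name, entity class and group names are placeholders, and the exact nesting may differ between versions.

```yaml
workflows:
    acme_demo_flow:                               # placeholder workflow name
        entity: Acme\Bundle\DemoBundle\Entity\Quote   # placeholder entity class
        defaults:
            active: true                          # replaces the removed workflow.active_workflow entity config
        priority: 100                             # higher value wins when workflows compete
        exclusive_active_groups: [quoting]        # only one workflow in the group may be active
        exclusive_record_groups: [quote_flow]     # only one workflow in the group may run per record
```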
#### LocaleBundle:
- Added helper `Oro\Bundle\LocaleBundle\Helper\LocalizationQueryTrait` for adding needed joins to QueryBuilder
- Added provider `Oro\Bundle\LocaleBundle\Provider\CurrentLocalizationProvider` for providing current localization
- Added manager `Oro\Bundle\LocaleBundle\Manager\LocalizationManager` for providing localizations
- Added datagrid extension `Oro\Bundle\LocaleBundle\Datagrid\Extension\LocalizedValueExtension` for working with localized values in datagrids
- Added datagrid property `Oro\Bundle\LocaleBundle\Datagrid\Formatter\Property\LocalizedValueProperty`
- Added extension interface `Oro\Bundle\LocaleBundle\Extension\CurrentLocalizationExtensionInterface` for providing current localization
- Added twig filter `localized_value` to `Oro\Bundle\LocaleBundle\Twig\LocalizationExtension` for getting localized values in Twig
- Added ExpressionFunction `localized_value` to `Oro\Bundle\LocaleBundle\Layout\ExpressionLanguageProvider` - can be used in Layouts
- Added Localization Settings page in System configuration
- Updated `Oro\Bundle\LocaleBundle\Helper\LocalizationHelper`: it now uses `CurrentLocalizationProvider` to provide the current localization, and `getLocalizedValue()` was added to retrieve fallback values
#### Layout Component:
- Interface `Oro\Component\Layout\DataProviderInterface` was removed.
- Abstract class `Oro\Component\Layout\AbstractServerRenderDataProvider` was removed.
- Methods `Oro\Component\Layout\DataAccessorInterface::getIdentifier()` and `Oro\Component\Layout\DataAccessorInterface::get()` were removed.
- Added class `Oro\Component\Layout\DataProviderDecorator`.
- Added the possibility to use parameters in data providers; for details please check out the documentation [Layout data](./src/Oro/Bundle/LayoutBundle/Resources/doc/layout_data.md).
- Method `Oro\Component\Layout\ContextDataCollection::getIdentifier()` was removed.
- Twig method `layout_attr_merge` was renamed to `layout_attr_defaults`.
- BlockType classes were replaced with DI configuration for the listed block types: `external_resource`, `input`, `link`, `meta`, `ordered_list`, `script` and `style`. The corresponding block type classes were removed.
- Added interface `Oro\Component\Layout\Extension\Theme\ResourceProvider\ResourceProviderInterface`
- Added class `Oro\Component\Layout\Extension\Theme\ResourceProvider\ThemeResourceProvider` that implements `Oro\Component\Layout\Extension\Theme\ResourceProvider\ResourceProviderInterface`
- Added interface `Oro\Component\Layout\Extension\Theme\Visitor\VisitorInterface`
- Added class `Oro\Component\Layout\Extension\Theme\Visitor\ImportVisitor` that implements `Oro\Component\Layout\Extension\Theme\Visitor\VisitorInterface`
- Added method `Oro\Component\Layout\Extension\Theme\ThemeExtension::addVisitor` for adding visitors that implements `Oro\Component\Layout\Extension\Theme\Visitor\VisitorInterface`
- Added method `Oro\Component\Layout\LayoutUpdateImportInterface::getImport`.
- Added methods `Oro\Component\Layout\Model\LayoutUpdateImport::getParent` and `Oro\Component\Layout\Model\LayoutUpdateImport::setParent` that contains parent `Oro\Component\Layout\Model\LayoutUpdateImport` for nested imports.
- Renamed the option for `Oro\Component\Layout\Block\Type\BaseType` from `additional_block_prefix` to `additional_block_prefixes`; from now on it contains an array.
- Added methods `getRoot`, `getReplacement`, `getNamespace` and `getAdditionalBlockPrefixes` to `Oro\Component\Layout\ImportLayoutManipulator` for working with nested imports.
- Added method `Oro\Component\Layout\Templating\Helper\LayoutHelper::parentBlockWidget` for rendering parent block widget.
- Added method `getUpdateFileNamePatterns` to `Oro\Component\Layout\Loader\LayoutUpdateLoaderInterface`.
- Added method `getUpdateFilenamePattern` to `Oro\Component\Layout\Loader\Driver\DriverInterface`.
#### LayoutBundle
- Removed class `Oro\Bundle\LayoutBundle\CacheWarmer\LayoutUpdatesWarmer`.
- Added class `Oro\Bundle\LayoutBundle\EventListener\ContainerListener`, which registers the `onKernelRequest` event that helps to warm the cache for layout updates resources.
- Moved layout updates from container to `oro.cache.abstract`
- Added new Twig function `parent_block_widget` to `Oro\Bundle\LayoutBundle\Twig\LayoutExtension` for rendering parent block widget.
- Added interface `Oro\Component\Layout\Form\FormRendererInterface` to add fourth argument `$renderParentBlock` to method `searchAndRenderBlock` that tells renderer to search for widget in parent theme resources.
- Added interface `Oro\Bundle\LayoutBundle\Form\TwigRendererInterface` that extends new `Oro\Component\Layout\Form\FormRendererInterface`.
- Added interface `Oro\Component\Layout\Form\RendererEngine\FormRendererEngineInterface` that extends `Symfony\Component\Form\FormRendererEngineInterface` to add new method `switchToNextParentResource` needed for `parent_block_widget`.
- Added interface `Oro\Bundle\LayoutBundle\Form\TwigRendererEngineInterface` that extends new `Oro\Component\Layout\Form\RendererEngine\FormRendererEngineInterface` for using it everywhere in LayoutBundle instead of `Symfony\Bridge\Twig\Form\TwigRendererEngineInterface`.
- Added class `Oro\Bundle\LayoutBundle\Form\BaseTwigRendererEngine` that extends `Symfony\Bridge\Twig\Form\TwigRendererEngine` and implements new `Oro\Bundle\LayoutBundle\Form\TwigRendererEngineInterface`.
- Updated class `Oro\Bundle\LayoutBundle\Form\RendererEngine\TwigRendererEngine` to extend new `Oro\Bundle\LayoutBundle\Form\BaseTwigRendererEngine`.
- Updated class `Oro\Bundle\LayoutBundle\Form\RendererEngine\TemplatingRendererEngine` that extends `Symfony\Component\Form\Extension\Templating\TemplatingRendererEngine` and implements `Oro\Component\Layout\Form\RendererEngine\FormRendererEngineInterface`.
- Updated class `Oro\Bundle\LayoutBundle\Form\TwigRendererEngine` to extend new `Oro\Bundle\LayoutBundle\Form\BaseTwigRendererEngine`.
- Updated class `Oro\Bundle\LayoutBundle\Layout\TwigLayoutRenderer` to implement `Oro\Bundle\LayoutBundle\Form\TwigRendererInterface`.
#### ConfigBundle:
- Class `Oro\Bundle\ConfigBundle\Config\AbstractScopeManager` added `$scopeIdentifier` of type integer, null or object as an optional parameter for the following methods: `getSettingValue`, `getInfo`, `set`, `reset`, `getChanges`, `flush`, `save`, `calculateChangeSet`, `reload`
- Class `Oro\Bundle\ConfigBundle\Config\ConfigManager` added `$scopeIdentifier` of type integer, null or object as an optional parameter for the following methods: `get`, `getInfo`, `set`, `reset`, `flush`, `save`, `calculateChangeSet`, `reload`, `getValue`, `buildChangeSet`
- Class `Oro\Component\Config\Loader\FolderContentCumulativeLoader` now uses a list of regular expressions as its fourth argument instead of a list of file extensions. For example, if you passed `['yml', 'php']` as the fourth argument, you should replace it with `['/\.yml$/', '/\.php$/']`
#### DatagridBundle:
- Class `Oro/Bundle/DataGridBundle/Provider/ConfigurationProvider.php`
    - constructor signature was changed; now it takes the following arguments:
- `SystemAwareResolver` $resolver,
- `CacheProvider` $cache
- method `warmUpCache` was added to fill or refresh cache.
- method `loadConfiguration` was added to set raw configuration for all datagrid configs.
- method `getDatagridConfigurationLoader` was added to get loader for datagrid.yml files.
    - method `ensureConfigurationLoaded` was added to check whether the datagrid config needs to be loaded into the cache.
    - You can find an example of refreshing the datagrid cache in `Oro/Bundle/DataGridBundle/EventListener/ContainerListener.php`
- Added parameter `split_to_cells` to layout `datagrid` block type which allows to customize grid through layouts.
#### SecurityBundle
- Removed layout context configurator `Oro\Bundle\SecurityBundle\Layout\Extension\SecurityFacadeContextConfigurator`.
- Added layout context configurator `Oro\Bundle\SecurityBundle\Layout\Extension\IsLoggedInContextConfigurator`.
- Added layout data provider `\Oro\Bundle\SecurityBundle\Layout\DataProvider\CurrentUserProvider` with method `getCurrentUser`, from now use `=data['current_user'].getCurrentUser()` instead of `=context["logged_user"]`.
#### EntityExtendBundle
- `Oro\Bundle\EntityExtendBundle\Migration\EntityMetadataHelper`
- `getEntityClassByTableName` deprecated, use `getEntityClassesByTableName` instead
- removed property `tableToClassMap` in favour of `tableToClassesMap`
- `Oro\Bundle\EntityExtendBundle\Migration\ExtendOptionsBuilder`
    - constructor signature was changed; now it takes the following arguments:
`EntityMetadataHelper` $entityMetadataHelper,
`FieldTypeHelper` $fieldTypeHelper,
`ConfigManager` $configManager
- removed property `tableToEntityMap` in favour of `tableToEntitiesMap`
- renamed method `getEntityClassName` in favour of `getEntityClassNames`
- `Oro\Bundle\EntityExtendBundle\Migration\ExtendOptionsParser`
    - constructor signature was changed; now it takes the following arguments:
`EntityMetadataHelper` $entityMetadataHelper,
`FieldTypeHelper` $fieldTypeHelper,
`ConfigManager` $configManager
| 115.975369 | 356 | 0.815954 | eng_Latn | 0.875268 |
jQuery Unobtrusive Validation
=============================
The jQuery Unobtrusive Validation library complements jQuery Validation by adding support for specifying validation options as HTML5 `data-*` elements.
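For illustration, a form field annotated this way might look like the following. This is a hedged sketch: the field name and messages are placeholders, and the exact `data-val-*` attributes depend on which validators are in play.

```html
<!-- jQuery Validation reads the data-val-* attributes on this field -->
<input type="text" name="Email"
       data-val="true"
       data-val-required="The Email field is required."
       data-val-email="Please enter a valid email address." />
<!-- validation messages for the field are rendered into this span -->
<span data-valmsg-for="Email" data-valmsg-replace="true"></span>
```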
This project is part of ASP.NET Core. You can find samples, documentation and getting started instructions for ASP.NET Core at the [Home](https://github.com/aspnet/home) repo.
Remember to make your changes only to the src file. Use ".\build.cmd" to automatically generate the js file in the dist directory, minify the js file, create a .nupkg, and change the version in the package.json if needed.
To stage for a release, update the "version.props" file and run ".\build.cmd" (see Release Checklist [here](https://github.com/aspnet/jquery-validation-unobtrusive/wiki/Release-checklist)).
| 72.636364 | 216 | 0.762203 | eng_Latn | 0.950528 |
## Akka.Persistence.SqlServer
Akka Persistence journal and snapshot store backed by SQL Server database.
**WARNING: Akka.Persistence.SqlServer plugin is still in beta and it's mechanics described bellow may be still subject to change**.
### Setup
To activate the journal plugin, add the following lines to actor system configuration file:
```
akka.persistence.journal.plugin = "akka.persistence.journal.sql-server"
akka.persistence.journal.sql-server.connection-string = "<database connection string>"
```
Similar configuration may be used to setup a SQL Server snapshot store:
```
akka.persistence.snapshot-store.plugin = "akka.persistence.snapshot-store.sql-server"
akka.persistence.snapshot-store.sql-server.connection-string = "<database connection string>"
```
Remember that the connection string must be provided separately to the Journal and the Snapshot Store. To finish the setup, simply initialize the plugin using: `SqlServerPersistence.Init(actorSystem);`
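A minimal bootstrap sketch, assuming the HOCON above is held in a string; `ConfigurationFactory` and `ActorSystem` come from the core Akka package, and only `SqlServerPersistence.Init` is specific to this plugin. The system name is a placeholder.

```csharp
using Akka.Actor;
using Akka.Configuration;
using Akka.Persistence.SqlServer;

class Program
{
    static void Main()
    {
        // Journal plugin + connection string, as shown in the HOCON above.
        var config = ConfigurationFactory.ParseString(@"
            akka.persistence.journal.plugin = ""akka.persistence.journal.sql-server""
            akka.persistence.journal.sql-server.connection-string = ""<database connection string>""
        ");

        var system = ActorSystem.Create("sql-persistence-demo", config);
        SqlServerPersistence.Init(system);
    }
}
```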
### Configuration
Both journal and snapshot store share the same configuration keys (however, they reside in separate scopes, so they are defined distinctly for either journal or snapshot store):
- `class` (string with fully qualified type name) - determines class to be used as a persistent journal. Default: *Akka.Persistence.SqlServer.Journal.SqlServerJournal, Akka.Persistence.SqlServer* (for journal) and *Akka.Persistence.SqlServer.Snapshot.SqlServerSnapshotStore, Akka.Persistence.SqlServer* (for snapshot store).
- `plugin-dispatcher` (string with configuration path) - describes a message dispatcher for persistent journal. Default: *akka.actor.default-dispatcher*
- `connection-string` - connection string used to access SQL Server database. Default: *none*.
- `connection-timeout` - timespan determining default connection timeouts on database-related operations. Default: *30s*
- `schema-name` - name of the database schema, where journal or snapshot store tables should be placed. Default: *dbo*
- `table-name` - name of the table used by either journal or snapshot store. Default: *EventJournal* (for journal) or *SnapshotStore* (for snapshot store)
- `auto-initialize` - flag determining if journal or snapshot store related tables should be automatically created when they have not been found in the connected database. Default: *false*
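Putting the keys above together, a journal section spelled out in full might look like the following sketch; apart from the placeholder connection string, the values shown are the documented defaults.

```
akka.persistence.journal.sql-server {
    class = "Akka.Persistence.SqlServer.Journal.SqlServerJournal, Akka.Persistence.SqlServer"
    plugin-dispatcher = "akka.actor.default-dispatcher"
    connection-string = "<database connection string>"
    connection-timeout = 30s
    schema-name = dbo
    table-name = EventJournal
    auto-initialize = false
}
```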
### Custom SQL data queries
SQL Server persistence plugin defines a default table schema used for both journal and snapshot store.
**EventJournal table**:
```
+---------------+--------+------------+-----------+---------------+----------------+
| PersistenceId | CS_PID | SequenceNr | IsDeleted | PayloadType   | Payload        |
+---------------+--------+------------+-----------+---------------+----------------+
| nvarchar(200) | int    | bigint     | bit       | nvarchar(500) | varbinary(max) |
+---------------+--------+------------+-----------+---------------+----------------+
```
**SnapshotStore table**:
```
+---------------+--------+------------+-----------+-----------+---------------+-----------------+
| PersistenceId | CS_PID | SequenceNr | Timestamp | IsDeleted | SnapshotType  | Snapshot        |
+---------------+--------+------------+-----------+-----------+---------------+-----------------+
| nvarchar(200) | int    | bigint     | datetime2 | bit       | nvarchar(500) | varbinary(max)  |
+---------------+--------+------------+-----------+-----------+---------------+-----------------+
```
While most of the table columns map directly to persistence primitives and are required, CS_PID caches a PersistenceId checksum and serves only to improve performance.

Underneath, Akka.Persistence.SqlServer uses raw ADO.NET commands. You may choose not to use the dedicated built-in ones, but to create your own that better fit your use case. To do so, you have to create your own versions of `IJournalQueryBuilder` and `IJournalQueryMapper` (for custom journals) or `ISnapshotQueryBuilder` and `ISnapshotQueryMapper` (for custom snapshot stores) and then attach them inside the journal, just like in the example below:
```csharp
class MyCustomSqlServerJournal: Akka.Persistence.SqlServer.Journal.SqlServerJournal
{
public MyCustomSqlServerJournal() : base()
{
QueryBuilder = new MyCustomJournalQueryBuilder();
QueryMapper = new MyCustomJournalQueryMapper();
}
}
```
The final step is to set up your custom journal using akka config:
```
akka.persistence.journal.sql-server.class = "MyModule.MyCustomSqlServerJournal, MyModule"
``` | 59.289474 | 445 | 0.671327 | eng_Latn | 0.862109 |
---
title: Quiz
summary: Juego de preguntas Sistemas Web 2019-2020
tags:
- Web Development
date: ""
# Optional external URL for project (replaces project detail page).
external_link: ""
# Featured image
# To use, add an image named `featured.jpg/png` to your page's folder.
image:
caption: ""
focal_point: ""
preview_only: false
alt_text: "Quiz"
links:
- name: Web
url: https://wst03.000webhostapp.com/
url_code: https://github.com/juletx/quiz
url_pdf: "https://github.com/juletx/quiz/blob/master/Laborategiak/WS%20Azken%20Praktika%20Garapen%20Kuadernoa.pdf"
url_slides: ""
url_video: ""
---
# Quiz: question game
## Course: Web Systems 2019-2020
## Authors: T03: Julen Etxaniz and Aitor Zubillaga
## Files
- ```Gardenkiak```: Slides used for the in-class explanations.
- ```Laborategiak```: Lab assignment handouts, the final practical assignment and help material.
- ```index.php```: Located in the project root directory; it is the default file when the website is accessed. It redirects to ```Layout.php```.
- ```google-api-php-client-2.4.0```: Files needed for Google social login.
- ```html```
  - ```Head.html```: Things that go in the head of every page: the title, meta tags, CSS stylesheets and some JavaScript files.
  - ```Footer.html```: The footer shown on every page.
  - ```GetUserInfo.html```: Shows the users' information read from the ```Users.xml``` file.
- ```images```: Folder for storing the users' and questions' pictures.
- ```js```
  - ```AddQuestionAjax.js```: Ajax code for adding a question.
  - ```ChangeStateAjax.js```: Handles changing a user's state in the user management table.
  - ```ClientGeolocationAjax.js```: Shows the client's geolocation in the credits, using web services.
  - ```jquery-3.4.1.min.js```: The jQuery library, which we used in most of the other JS files.
  - ```LikeOrDislike.js```: Code for the user to give a like or dislike to a question in the Quiz game.
  - ```PlayQuizAjax.js```: Code for playing the Quiz game.
  - ```RemoveUserAjax.js```: Handles deleting users in the user management table.
  - ```ShowActiveMenu.js```: To make the menu highlight the current page's link, the page URL is checked and the active class is added to the matching link.
  - ```ShowImageInForm.js```: In the registration form, shows the chosen picture on the page itself, and removes it when the reset button is pressed.
  - ```ShowQuestionCountAjax.js```: Shows the XML question count.
  - ```ShowQuestionsAjax.js```: Shows the XML questions.
  - ```ShowUserCountAjax.js```: Shows the user count.
  - ```ValidateFieldsQuestion.js```: Code for validating the contents of the form controls before sending them to the web server.
  - ```VerifyChangePass.js```: Validates that the changed password is valid.
  - ```VerifyEnrollment.js```: Verifies that the email is enrolled in WS.
  - ```VerifyPassAjax.js```: Verifies that the password is valid.
- ```lib```: Contains the SOAP libraries needed to develop the client and the server of the password validation web service.
- ```php```
  - ```AddQuestionWithImage.php```: PHP code for adding a question that has an image. The user must have an open session; otherwise it cannot be accessed.
  - ```ChangePassword.php```: Code for changing the password.
  - ```ChangeState.php```: For denying/allowing a user access. Administrators only.
  - ```ClientGeolocation.php```: Information about the user's client.
  - ```ClientGetQuestion.php```: Client for retrieving a question.
  - ```ClientVerifyEnrollment.php```: Client for verifying that the email is enrolled in WS.
  - ```ClienVerifyPass.php```: Determines whether the password entered when creating a new user is valid or not. It relies on ```VerifyPassWS.php``` for this.
  - ```Config.php```: Configuration for Google social login.
  - ```Credits.php```: The project's authors and sponsors, and information about the server and the client.
  - ```DbConfig.php```: File that stores the database configuration data. It stores the server, user, password and name for connecting to the local and 000webhost databases. When the database has to be used, this file is included and those variables are used. This makes changes easier, since changing them in a single place is enough.
  - ```DecreaseGlobalCounter.php```: Decreases the user count.
  - ```GetQuestionWS.php```: Web service for retrieving a question.
  - ```HandlingAccounts.php```: For administrators to manage users (view their data, deny a user access, delete a user...).
  - ```HandlingQuizesAjax.php```: Code for managing quizzes.
  - ```Layout.php```: The general layout of the page used when viewing the ads as a list.
  - ```LogGoogle.php```: Controller for users to log in with Google social login.
  - ```LogIn.php```: Controller for users to open a session.
  - ```LogOut.php```: Controller for users to close their session.
  - ```Menus.php```: The navigation bar that appears at the top across the website.
  - ```RemoveUser.php```: For deleting a given user and all of their ads from the database. Administrators only.
  - ```SecurityAdmin.php```: Checks the session email to verify that it is not empty and belongs to the administrator, to let only the administrator in.
  - ```SecurityCode.php```: Security code for changing the password.
  - ```SecurityLoggedIn.php```: Verifies that the session email is not empty, to let only authenticated users in.
  - ```SecurityLoggedOut.php```: Verifies that the session email is empty, to let only anonymous users in.
  - ```SecurityUsers.php```: Checks the session email to verify that it is not empty and does not belong to the administrator, to let only regular users in.
- ```SendPasswordEmail.php```:
- ```ShowQuestionsWithImage.php```: DBeko galderak taula batean erakusteko PHP kodea. Taulak DBeko irudiak ditu.
- ```ShowResult.php```: Quiz jokoaren amaierako emaitza erakusten du.
- ```ShowXmlQuestions.php```:
- ```SignUp.php```: Erabiltzaile berri bat sortu ahal izateko formularioa.
- ```VerifyPassWS.php```: Jasotzen dituen pasahitzak baliozkoak diren hala ez determinatzen du. Horretarako ```txt/toppasswords.txt ``` fitxategiaz baliatzen da. Bertan munduan zehar gehien erabiltzen edota asmatzeko errazegiak diren pasahitzak daude gordeta. Jasotako pasahitza textu dokumentuan badago, ez baliozkotzat joko da.
- ```sql```
- ```quiz.sql```: Galderak, erabiltzaileak eta jokoaren emaitzak gordetzeko datu-basea.
- ```styles```
- ```Style.css```: Erabili dugun estilo orria. Hainbat elementuri estiloa emateko erabili dugu: taulak, iragarkiak, formularioetako inputak, argazkiak...
- ```txt```
- ```GetGeolocation.txt```: Geolokalizazioaren inplementazioaren azalpena.
- ```GetQuestion.txt```: Galderen web zerbitzuaren azalpena.
- ```ShowQuestionCountAjax.txt```: Galdera kopurua erakustearen azalpena.
- ```ShowUserCountAjax.txt```: Erabiltzaile kopurua erakustearen azalpena.
- ```toppasswords.txt```: Pasahitzaren baliozkatzearen web zerbitzuaren zerbitzaria garatzeko erabiltzen den pasahitz arrunten hiztegia.
- ```xml```
- ```Counter.xml```: Konektatutako erabiltzaile kopurua gordetzen du.
- ```Questions.xml```: Galderak gordetzen ditu.
- ```Questions.xsd```: Galderen eskema fitxategia.
- ```ShowXmlQuestions.xsl```: Galderak erakusteko estilo fitxategia.
- ```Users.xml```: Erabultzaileak gordetzen ditu.
title: Atomic Developer Documentation
language_tabs:
- shell: cURL
- javascript--nodejs: Node.JS
- ruby: Ruby
- python: Python
- go: Go
# toc_footers:
# - >-
# <a href="https://mermade.github.io/shins/asyncapi.html">See AsyncAPI
# example</a>
includes: []
search: true
highlight_theme: darkula
headingLevel: 2
---
# Overview
## Introduction
> Production Base URL
```
https://api.atomicfi.com
```
> Sandbox Base URL
```
https://sandbox-api.atomicfi.com
```
### Welcome to Atomic's developer documentation!
You'll find resources to help you integrate with [Transact](#transact-sdk), our web app that handles the heavy-lifting of integration, as well as a number of [REST](https://en.wikipedia.org/wiki/Representational_state_transfer)ful API endpoints.
We've tried our best to make this documentation comprehensive and helpful. If you have suggestions or find issues, [please let us know](mailto:engineering@atomicfi.com).
To get going quickly, we recommend using an API collaboration tool called [Postman](https://www.getpostman.com). You can use the button below to import our collection of endpoints. Be sure to set your `apiUrl`, `apiKey`, and `apiSecret` [environment variables](https://learning.getpostman.com/docs/postman/environments-and-globals/manage-environments/).
<div class="postman-run-button"
data-postman-action="collection/import"
data-postman-var-1="4d830a56bc79a690a118"></div>
<script type="text/javascript">
(function (p,o,s,t,m,a,n) {
!p[s] && (p[s] = function () { (p[t] || (p[t] = [])).push(arguments); });
!o.getElementById(s+t) && o.getElementsByTagName("head")[0].appendChild((
(n = o.createElement("script")),
(n.id = s+t), (n.async = 1), (n.src = m), n
));
}(window, document, "_pm", "PostmanRunObject", "https://run.pstmn.io/button.js"));
</script>
## Authentication
Atomic uses a combination of API keys and access tokens to authenticate requests.
You can retrieve and manage your API keys in the [Atomic Dashboard](https://dashboard.atomicfi.com/).
Your API keys carry many privileges, so be sure to keep them secure! Do not share your secret API keys in publicly accessible areas such as GitHub, client-side code, etc.
## Errors
Our API returns standard HTTP success or error status codes. For errors, we will also include extra information about what went wrong encoded in the response as JSON. The various HTTP status codes we might return are listed below.
| Code | Title | Description |
| ---- | --------------------- | ------------------------------- |
| 200 | OK | The request was successful. |
| 400 | Bad Request | The request was unacceptable. |
| 401 | Unauthorized | Missing or invalid credentials. |
| 404 | Not Found | The resource does not exist. |
| 422 | Validation error | A validation error occurred. |
| 50X | Internal Server Error | An error occurred with our API. |
# Products
## Payroll Connect
### Atomic's Payroll Connect products empower users to access and update their payroll data.
When users are authenticating with their payroll account, Atomic requires that the process be facilitated through [Transact](#transact-sdk).
### `deposit`
Update the destination bank account of paychecks
### `verify`
Verify a user’s income and employment status.
### `identify`
Reduce fraud and onboarding friction with verified profile data from a user’s employer.
### Testing
To aid in testing various user experiences, you may use any of these pre-determined "test" credentials for employer authentication. Any password will work, as long as the username is found in this list. When answering MFA questions, any answer will be accepted. If the authentication requires an email, simply append `@test.com` to the end of the chosen username.
| Username | Test Case |
| --------------------------------- | ---------------------------------------------------------------------------- |
| `test-good` | Test a successful authentication. |
| `test-bad` | Test an unsuccessful authentication. |
| `test-failure` | Test a failure that occurs after a successful authentication. |
| `test-distribution-not-supported` | Test a failure that occurs due to an unsupported distribution configuration. |
| `test-code-mfa` | Test an authentication that includes a device code based MFA flow. |
| `test-question-mfa` | Test an authentication that includes a question-based MFA flow. |
## Transfer
### Atomic's Transfer products facilitate the movement of assets and liabilities between financial institutions.
### `balance`
Move a user’s credit card or loan balance to a new account.
### Deployment options
#### Transact
[Transact](#transact-sdk) allows the user to easily choose the source of their transfer, while the destination is pre-configured during [Access Token](#create-access-token) creation.
#### Atomic Dashboard
The [Atomic Dashboard](https://dashboard.atomicfi.com/) may be used to initiate a transfer, as well as monitor its progress as it travels through various states of processing.
#### API
Atomic's API may be used to initiate a transfer by first creating an [Access Token](#create-access-token), and subsequently generating a [Task](#create-task). The status of the task may be monitored through [webhooks](#webhooks) and the [Atomic Dashboard](https://dashboard.atomicfi.com/).
### Testing
When testing `balance`, you may use `4111111111111111` as the origin account number. This will mimic a transfer.
# Transact SDK

### Atomic Transact SDK is a UI designed to securely handle interactions with our products, while performing the heavy-lifting of integration.
## Integration
> Initalize Transact via Javascript
> Sandbox URL
```html
<script src="https://cdn-sandbox.atomicfi.com/transact.js"></script>
```
> Production URL
```html
<script src="https://cdn.atomicfi.com/transact.js"></script>
```
```javascript
const startTransact = () => {
Atomic.transact({
// Replace with the server-side generated `publicToken`
publicToken: "PUBLIC_TOKEN",
// Could be either 'balance', `verify`, `identify`, or 'deposit'
product: "balance",
// Optionally theme Transact
theme: {
brandColor: "#FF0000",
overlayColor: "#000000",
},
// Optionally pass in fractional deposit settings
distribution: {
// Could be either 'total', 'fixed', or 'percent'
type: "fixed",
amount: 50,
// Could be either 'create', 'update', or 'delete'
action: "create",
},
// Optionally change the language. Could be either 'en' for English or 'es' for Spanish. Default is 'en'.
language: "en",
// Optionally pass data to Transact that will be returned to you in webhook events.
metadata: {
orderId: "123",
},
// Called when the user finishes the transaction
onFinish: function (data) {
// We recommend saving the `data` object which could be useful for support purposes
},
// Called when the user exits Transact prematurely
onClose: function () {},
});
};
// `startTransact` would typically be user-initiated, such as a button click or tap
startTransact();
```
An [AccessToken](#create-access-token) is required to initialize Transact. It is generated server-side using your API key, and the `publicToken` from the response is returned to your client-side code.
### Mobile & Web
Transact can be initialized by including our JavaScript SDK into your page, and then calling the `Atomic.transact` method.
Within the context of a mobile app, we recommend loading Transact into a web view (for example, `WKWebView` on iOS) by building a simple self-hosted wrapper page. The URL used in the webview will be `https://transact.atomicfi.com/initialize/BASE64_ENCODED_STRING`. The Base64-encoded string is the JSON-serialized object of the parameters used within the SDK (excluding the `onFinish` and `onClose` callbacks).
> Initialize Transact within a Webview
> Sandbox URL
```html
https://transact-sandbox.atomicfi.com/initialize/BASE64_ENCODED_STRING
```
> Production URL
```html
https://transact.atomicfi.com/initialize/BASE64_ENCODED_STRING
```
```javascript
// Object to be BASE64 Encoded
{
"publicToken": "PUBLIC_TOKEN",
"product": "deposit",
"theme": {
"brandColor": "#1b1464",
"overlayColor": "#CCCCCC"
},
"distribution": {
"type": "fixed",
"amount": 50,
"action": "create"
},
"deeplink": {
"step": "search-company"
},
"language": "en"
}
```
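As an illustrative sketch, the Base64-encoded string can be produced by serializing the parameter object to JSON and encoding it (the `config` contents below are just an example):

```javascript
// Example Transact configuration (illustrative values).
const config = {
  publicToken: "PUBLIC_TOKEN", // replace with your server-generated public token
  product: "deposit",
  theme: { brandColor: "#1b1464" },
};

// Base64-encode the JSON-serialized configuration. `Buffer` is Node.js;
// in a browser you could use `btoa(JSON.stringify(config))` instead.
const encoded = Buffer.from(JSON.stringify(config)).toString("base64");

// Production webview URL (use transact-sandbox.atomicfi.com for sandbox).
const url = `https://transact.atomicfi.com/initialize/${encoded}`;
```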
Here are examples for [Swift (iOS)](https://github.com/atomicfi/transact-ios), [Kotlin (Android)](https://github.com/atomicfi/transact-android), [React Native](https://github.com/atomicfi/transact-react-native), and [Flutter](https://github.com/atomicfi/transact-flutter)
<br />
<a href="https://github.com/atomicfi/transact-android"><img src="/source/images/kotlin.png" class="transact-example-platform" /></a>
<a href="https://github.com/atomicfi/transact-ios"><img src="/source/images/swift.png" class="transact-example-platform" /></a>
<a href="https://github.com/atomicfi/transact-react-native"><img src="/source/images/react-native.png" class="transact-example-platform" /></a>
<a href="https://github.com/atomicfi/transact-flutter"><img src="/source/images/flutter.png" class="transact-example-platform" /></a>
## Javascript SDK parameters
| Attribute | Description |
| --------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `publicToken` <h6>required</h6> | The public token returned during [AccessToken](#create-access-token) creation. |
| `product` <h6>required</h6> | The [product](#products) to initiate. Valid values include `balance` `deposit`, `verify`, or `identify`. |
| `inSdk` | Accepts a boolean. Default is `true`. When `false`, close buttons and [SDK Events](#event-listeners) are not broadcast. |
| `theme` | Optionally, provide hex or rgb values to customize Transact. |
| `theme.brandColor` | Accepts hex values. For example: `#FF0000` or `rgb(255, 255, 255)`. This property will mostly be applied to buttons. |
| `theme.overlayColor` | Accepts hex values. For example: `#000000` or `rgb(0, 0, 0)`. This property will change the overlay background color. This overlay is mainly only seen when Transact is used on a Desktop. |
| `deeplink` | Optionally, deeplink into a specific step. |
| `deeplink.step` <h6>required</h6> | Acceptable values: `search-company`, `search-payroll`, `login-company` or `login-payroll`. (If `login-company`, then the `companyId` is required. If `login-payroll`, then `connectorId` and `companyName` are required) |
| `deeplink.companyId` | Required if the step is `login-company`. Accepts the [ID](#company-search) of the company. |
| `deeplink.connectorId` | Required if the step is `login-payroll`. Accepts the [ID](#connector-search) of the connector. |
| `deeplink.companyName` | Required if the step is `search-payroll` or `login-payroll`. Accepts a string of the company name. |
| `distribution` | Optionally pass in enforced deposit settings. Enforcing deposit settings will eliminate company search results that do not support the distribution settings. |
| `distribution.type` | Can be `total` to indicate the remaining balance of their paycheck, `fixed` to indicate a specific dollar amount, or `percent` to indicate a percentage of their paycheck. |
| `distribution.amount` | When `distribution.type` is `fixed`, it indicates the dollar amount to be used. When `distribution.type` is `percent`, it indicates the percentage of their paycheck. Not required if `distribution.type` is `total`. |
| `distribution.action` | The operation to perform when updating the distribution. The default value is `create` which will add a new distribution. The value of `update` indicates an update to an existing distribution matched by the routing and account number. The value of `delete` removes a distribution matched by the routing and account number. |
| `language` | Optionally pass in a language. Acceptable values: `en` for English and `es` for Spanish. Default value is `en` |
| `linkedAccount` | Optionally pass the `_id` of a [LinkedAccount](#linkedaccount-object). When used, Transact will immediately begin authenticating upon opening. This parameter is used when the [LinkedAccount](#linkedaccount-object)'s `transactRequired` flag is set to `true`. |
| `search` | Optionally, enforce search queries. |
| `search.tags` | Filters companies by a specific tag. Possible values include `gig-economy`, `payroll-provider`, and `unemployment`. |
| `search.excludedTags` | Exclude companies by a specific tag. Possible values include `gig-economy`, `payroll-provider`, and `unemployment`. |
| `metadata` | Optionally pass data to Transact that will be returned to you in webhook events. |
| `handoff` | Handoff allows views to be handled outside of Transact. In place of the view, corresponding SDK events will be emitted that allow apps to respond and handle these views. Accepts an array. The three string values available are `exit-prompt`, `authentication-success`, and `high-latency`. See [Handoff Pages](#handoff-pages) for more detail. |
| `onFinish` | A function that is called when the user finishes the transaction. The function will receive a `data` object, which contains the `taskId`. |
| `onClose` | Called when the user exits Transact prematurely. |
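To make the `distribution` semantics concrete, here is a hedged sketch of how the three types resolve to a paycheck amount (the `resolveDistributionAmount` helper is our own illustration, not part of the SDK; Atomic applies these rules on the payroll side):

```javascript
// Illustrative only: how a distribution resolves against a paycheck amount.
function resolveDistributionAmount(paycheckAmount, distribution) {
  switch (distribution.type) {
    case "total":
      // Remaining balance of the paycheck; no `amount` needed.
      return paycheckAmount;
    case "fixed":
      // A specific dollar amount (capped here at the paycheck total
      // for illustration).
      return Math.min(distribution.amount, paycheckAmount);
    case "percent":
      // A percentage of the paycheck.
      return paycheckAmount * (distribution.amount / 100);
    default:
      throw new Error(`Unknown distribution type: ${distribution.type}`);
  }
}
```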
#### Handoff Pages
| Page | Description |
| ------------------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `exit-prompt` | If the user has reached the login page and decides to exit, they are prompted with a view that asks them to confirm their exit. When `exit-prompt` is passed into the `handoff` array, users will no longer be presented with the exit confirmation. The `atomic-transact-close` event will be triggered in its place. The event will also send the following data: `{handoff: 'exit-prompt'}` |
| `authentication-success` | When authentication is successful, the user is taken to the success view within Transact. When `authentication-success` is passed into the `handoff` array, users will no longer be taken to the success view. The `atomic-transact-finish` event will be triggered in its place. The event will also send the following data: `{handoff: 'authentication-success'}` |
| `high-latency` | When authentication takes much longer than expected, the user is taken to the the high latency view within Transact. When `high-latency` is passed into the `handoff` array, users will no longer be taken to this view. The `atomic-transact-close` event will be triggered in its place. The event will also send the following data: `{handoff: 'high-latency'}` |
## Event Listeners
When using the SDK, events will be emitted and passed to the native application. Such events allow native applications to react and perform functions as needed. Some events will be passed with a data object with additional information.
| Event | Description |
| ------------------------- | ----------------------------------------------------------------------------------------------------------- |
| `atomic-transact-close` | Triggered in several different instances. <br/><br/> 1. If a user does not find their employer and payroll provider, the data passed with the event will be `{reason: 'zero-search-results'}`. <br/><br/> 2. During the Transact process, if a user is prompted to keep waiting or exit and they choose to exit, the data passed with the event will be `{reason: 'task-pending'}`.<br/><br/> 3. At any point, if the user clicks on the `x`, the data passed with the event will be `{reason: 'unknown'}`. |
| `atomic-transact-finish` | Triggered when the user reaches the success screen and closes Transact. |
| `atomic-transact-open-url` | Triggered when external links are accessed. The data passed with the event will be `{url: Full URL path}`. |
| `atomic-transact-interaction` | Triggered on interactions within Transact. For example, when a user transitions to a news screen or presses the back button. The data passed with the event will be `{name: NAME OF THE EVENT, value: OBJECT CONTAINING EVENT VALUES}`. |
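As a sketch of how a host app might react to these events, the dispatcher below maps event names to app-level actions. The `handleTransactEvent` function and its return shapes are our own illustration (they are not part of the SDK); see the platform example repos for real wiring:

```javascript
// Route Transact SDK events to host-app actions (illustrative).
function handleTransactEvent(eventName, data = {}) {
  switch (eventName) {
    case "atomic-transact-close":
      // data.reason may be 'zero-search-results', 'task-pending', or 'unknown'.
      return { action: "dismiss-webview", reason: data.reason };
    case "atomic-transact-finish":
      // User completed the flow successfully.
      return { action: "show-success" };
    case "atomic-transact-open-url":
      // Open external links in the system browser rather than the webview.
      return { action: "open-browser", url: data.url };
    case "atomic-transact-interaction":
      // Forward interaction analytics, e.g. to your logging pipeline.
      return { action: "log", name: data.name, value: data.value };
    default:
      return { action: "ignore" };
  }
}
```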
#### Event Interactions
```javascript
{
  name: "EVENT_NAME",
  value: {
    // Default property
    customer: "Atomic",
    // Default property. Possible values are 'en' or 'es'
    language: "en",
    // Default property. Possible values are 'deposit', 'verify', 'identify', or 'balance'
    product: "deposit"
    // Additional properties will be included based on the event that has occurred.
  }
}
```
Each event, by default, will have `customer`, `product`, and `language` in the `value` object. Some events have additional data.
### Updated Events 7/21/2021
#### Pages
| Name | Description |
| ---------------------------------------------------- | ---------------------------------------------------------------------- |
| `Viewed Access Unauthorized Page` | User did not have a valid token |
| `Viewed Authentication Failed Page` | User viewed the page indicating authentication failed |
| `Viewed Authentication Paused Page` | User viewed the page indicating authentication paused |
| `Viewed Authentication Success Page` | User viewed the page indicating authentication was successful |
| `Viewed Distribution Confirmation Page` | User viewed the distribution confirmation page |
| `Viewed Expired Token Page` | User has an expired token |
| `Viewed Fixed Deposit Amount Page` | User viewed the fixed deposit amount page |
| `Viewed Fractional Deposit Error Page` | User viewed the fractional deposit error page |
| `Viewed High Latency Page` | User viewed the high latency page |
| `Viewed Learn How You Are Protected Page` | User viewed the learn how you are protected page |
| `Viewed Login Page` | User viewed the login page |
| `Viewed Login Help Page` | User viewed the login help page |
| `Viewed Login Recovery Page` | User viewed the login recovery page |
| `Viewed Manual Deposit Instructions Page` | User viewed manual deposit instructions page |
| `Viewed MFA Page` | User viewed the multi factor authentication page |
| `Viewed Percentage Deposit Amount Page` | User viewed the percentage deposit amount page |
| `Viewed Search By Company Page` | User viewed the search by company page |
| `Viewed Search By Configurable Connector Page` | User viewed the search by configurable connector/payroll provider page |
| `Viewed Search By Payroll Page` | User viewed the search by payroll page |
| `Viewed Select From Deposit Options Page` | User viewed the select from deposit options page |
| `Viewed Select From Multiple Accounts Page` | User viewed the select from multiple accounts page |
| `Viewed Select From Multiple Payroll Providers Page` | User viewed the select from available payroll providers page |
| `Viewed Terms And Conditions Are Required Page` | User viewed the terms and conditions required page |
| `Viewed Terms And Conditions Page` | User viewed the terms and conditions page |
| `Viewed Under Maintenance Page` | User viewed a company/payroll provider that is under maintenance |
| `Viewed Welcome Page` | User viewed the welcome page |
#### Other events
This includes a subset of our other events.
| Name | Description |
| ------------------- | ---------------------------------------- |
| `Search By Company` | User has searched for a company/employer |
| `Search By Payroll` | User has searched for a payroll provider |
### Old Events
These events will no longer be sent after 9/1/2021.
| Name | Description |
| ----------------------------------------- | -------------------------------------------------------- |
| `back` | Back button pressed |
| `step-initialize-welcome` | Welcome screen is initialized |
| `step-initialize-deeplink-search-company` | User deeplinks to the company search |
| `step-welcome` | User visits the Welcome screen |
| `step-search-company` | User visits the Company Search screen |
| `step-search-payroll` | User visits the Payroll Provider Search screen |
| `step-company-uses` | User visits the "Company X uses Payroll Provider X" screen |
| `step-phone-verification` | User prompted to verify their phone number |
| `step-phone-update` | User visits the Phone Update screen |
| `step-unauthorized` | User is unauthorized |
| `step-access-expired-token` | User is using an expired token |
| `authentication-login` | User visits the Payroll Provider Login screen |
| `authentication-login-help` | User visits the Forgot Credentials screen |
| `authentication-mfa-step` | User is presented with Multi Factor Authentication (MFA) |
| `authentication-success` | User completes the authentication process |
| `authentication-error` | User receives an error during authentication |
| `search` | User is searching an Company or Payroll Provider |
| `select-company` | User selects a company (aka employer) |
| `select-payroll` | User selects a payroll provider |
## Metadata
When initializing the [Transact SDK](#transact-sdk) or creating a Task [using a Linked Account](#use-a-linked-account) you can pass a `metadata` parameter. You can use this parameter to attach key-value data that will be returned in webhook events.
Metadata is useful for storing additional, structured information on a Task. As an example, you could store an order ID from your system to track your user's progress with a direct deposit. Metadata is not used by Atomic and won't be seen by your users.
Do not store any sensitive information (bank account numbers, card details, etc.) as metadata.
# Webhooks
### Webhooks are a useful way to receive automated updates, and send timely notification to your user as a transaction progresses.
You can configure webhook endpoints via the [Atomic Dashboard](https://dashboard.atomicfi.com/). Atomic will issue `POST` requests to the designated endpoint(s). We recommend using [Request Bin](https://requestbin.com/) as a way to inspect the payload of events during development without needing to stand up a server.
## Securing webhooks
To validate a webhook request came from Atomic, we suggest verifying the payloads with the `X-HMAC-Signature-Sha-256` header (which we pass with every request).
| Header | Description |
| -------------------------- | -------------------------------------------------------------------------------------- |
| `X-HMAC-Signature-Sha-256` | A SHA256 HMAC hexdigest computed with your API secret and the raw body of the request. |
## Event object
> Sample
```json
{
"eventType": "task-status-updated",
"eventTime": "2020-01-28T22:04:18.778Z",
"publicToken": "PUBLIC_TOKEN",
"product": "deposit",
"user": {
"_id": "5d8d3fecbf637ef3b11a877a",
"identifier": "YOUR_INTERNAL_GUID"
},
"company": {
"_id": "5d9a3fecbf637ef3b11ab442",
"name": "Home Depot",
"branding": {
"logo": {
"url": "https://atomicfi-public-production.s3.amazonaws.com/979115f4-34a0-44f5-901e-753a33337444_atomic-logo-dark.png"
}
}
},
"metadata": {
"orderId": "123"
},
"task": "5e30afde097146a8fc3d5cec",
"data": {
"previousStatus": "processing",
"status": "completed",
"distributionType": "fixed",
"distributionAmount": 25
}
}
```
| Attribute | Description |
| ------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `eventType` | Webhook event type for a task status change or authentication status change. Value will be one of the following: <table><tr><th>Value/Description</th></tr><tr><td>`task-status-updated`<br />Signifies an update to the task status. For example, processing to failed or processing to completed.</td></tr><tr><td>`task-status-patched`<br />Signifies a patch to a task's final status. For example, a task's status is reversed from completed to a status of failed.</td></tr><tr><td>`task-authentication-status-updated`<br />Signifies an update to the authentication status. For example, if authentication updates to true or false.</td></tr></table> |
| `eventTime` | The date and time of the event creation. |
| `publicToken` | [Public AccessToken](#create-access-token) used when initializing the Transact SDK |
| `user` | Object containing `_id` and `identifier`. `Identifier` will be your internal GUID. |
| `company` | Object containing `_id`, `name`, and `branding`. |
| `metadata` | Object containing the `metadata` transmitted to Atomic during [Transact initialization](#transact-sdk) or [using a Linked Account](#use-a-linked-account). |
| `task` | Contains the task ID. |
| `data` | Payload object containing `reason`, `previousStatus`, `status`, `distributionType` (if applicable), `distributionAmount` (if applicable), `amount` (if applicable), and [outputs](#outputs) (outputs differ depending on the product). |
## Event types
This is a list of all the types of events we currently send. We may add more at any time, so you shouldn't rely on only these types existing in your code.
Each event object includes a `data` property which contains unique data points for each type of event. Each event type below describes these unique data points.
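As a hedged sketch, a webhook receiver might branch on `eventType` like this (the `processWebhookEvent` function and its return strings are our own illustration, using the fields shown in the sample payloads below):

```javascript
// Illustrative webhook handler: branch on the event type.
function processWebhookEvent(event) {
  switch (event.eventType) {
    case "task-status-updated":
      return `Task ${event.task} moved from ${event.data.previousStatus} to ${event.data.status}`;
    case "task-authentication-status-updated":
      return `Task ${event.task} authenticated: ${event.data.authenticated}`;
    case "task-status-patched":
      return `Task ${event.task} patched to ${event.data.status}`;
    default:
      // New event types may be added at any time; don't fail on unknowns.
      return `Ignoring unknown event type: ${event.eventType}`;
  }
}
```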
## Task status updated
> Example `task-status-updated` event
```json
{
"_id": "5c1821dbc6b7baf3435e1d23",
"eventType": "task-status-updated",
"eventTime": "2021-01-13T19:43:26.097Z",
"product": "deposit",
"user": {
"_id": "5c17c632e1d8ca3b08b2586f",
"identifier": "YOUR_UNIQUE_GUID"
},
"company": {
"name": "Home Depot",
"branding": {
"logo": {
"url": "https://atomicfi-public-production.s3.amazonaws.com/979115f4-34a0-44f5-901e-753a33337444_atomic-logo-dark.png"
}
}
},
"task": "5d97e4abc90a0a0007993e9c",
"data": {
"previousStatus": "processing",
"status": "completed",
"distributionType": "fixed",
"distributionAmount": 25
}
}
```
The status of a [Task](#create-task) was changed. Possible statuses include:
| Status | Description |
| ------------ | -------------------------------- |
| `processing` | The task has started processing. |
| `failed` | The task failed to process. |
| `completed` | The task completed successfully. |
## Task authentication status updated
> Example `task-authentication-status-updated` event
```json
{
"_id": "5c1821dbc6b7baf3435e1d23",
"eventType": "task-authentication-status-updated",
"eventTime": "2021-01-13T19:43:26.097Z",
"product": "deposit",
"user": {
"_id": "5c17c632e1d8ca3b08b2586f",
"identifier": "YOUR_UNIQUE_GUID"
},
"company": {
"name": "Home Depot",
"branding": {
"logo": {
"url": "https://atomicfi-public-production.s3.amazonaws.com/979115f4-34a0-44f5-901e-753a33337444_atomic-logo-dark.png"
}
}
},
"task": "5d97e4abc90a0a0007993e9c",
"data": {
"authenticated": true
}
}
```
The authentication status of a [Task](#create-task) was updated. Possible `authenticated` statuses include:
| Status | Description |
| ------- | ------------------------------------ |
| `true` | The task authenticated successfully. |
| `false` | The task failed to authenticate. |
## Task status patched
> Example `task-status-patched` event
```json
{
"_id": "5c1821dbc6b7baf3435e1d23",
"eventType": "task-status-patched",
"eventTime": "2021-01-13T19:43:26.097Z",
"product": "deposit",
"user": {
"_id": "5c17c632e1d8ca3b08b2586f",
"identifier": "YOUR_UNIQUE_GUID"
},
"company": {
"name": "Home Depot",
"branding": {
"logo": {
"url": "https://atomicfi-public-production.s3.amazonaws.com/979115f4-34a0-44f5-901e-753a33337444_atomic-logo-dark.png"
}
}
},
"task": "5d97e4abc90a0a0007993e9c",
"data": {
"previousStatus": "failed",
"status": "completed",
"distributionType": "fixed",
"distributionAmount": 25
}
}
```
The status of a [Task](#create-task) was patched. Possible statuses include:
| Status | Description |
| ----------- | -------------------------------- |
| `failed` | The task failed to process. |
| `completed` | The task completed successfully. |
## Outputs
> Sample response for `verify`
```json
{
"task": "60abefde8a4445000956e30a",
"company": {
"branding": {
"logo": {
"url": "https://cdn-public.atomicfi.com/979115f4-34a0-44f5-901e-753a33337444_atomic-logo-dark.png"
}
},
"name": "Mocky"
},
"product": "verify",
"publicToken": "0cbb06d9-1610-4fa6-ae04-230e9ac3a60d",
"user": {
"_id": "5ebc3977cb64830007f48ca4",
"identifier": "YOUR_INTERNAL_GUID"
},
"data": {
"previousStatus": "processing",
"status": "completed",
"authenticated": true,
"outputs": {
"w2s": [
{
"_id": "60abeff60836730008616fb2",
"year": "2020-01-01T00:00:00.000Z",
"totalWages": 50000,
"form": {
"_id": "60abeff60836730008616faf",
"url": "https://atomicfi-task-output-file-sandbox.s3.amazonaws.com/customer-5e978dcf3abbf90008a00b7a/task-60abefde8a4445000956e30a/62ca2574-21d2-4151-a171-bffebf80ab1c.pdf?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=ASIA6EYFQTOWMIY2TUSM%2F20210524%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20210524T182709Z&X-Amz-Expires=3600&X-Amz-Security-Token=IQoJb3JpZ2luX2VjEFsaCXVzLWVhc3QtMSJGMEQCIBCAiYKuUVIYjCFolA%2Feeu%2BDu%2FNtlADYOOX7Uy09CVWuAiAT1umLzk5Xbs6vANPMFS9sRWWyLIRWfmZXbCLtXeW5QCrnAQj0%2F%2F%2F%2F%2F%2F%2F%2F%2F%2F8BEAIaDDk3MjI4NDc5NTgyMCIMssXdL%2FxgLWwt8%2BgnKrsB4jVv4rWWhviSPha5qwh1WPasjt%2FhLKr4dRI9%2FgUSQercXwg37%2FYjdK3%2BcVrvEgfnYkhw3U5YHUc9Aja4baMLONbQemmcZ5%2FI0ehwlkcTDdDspTHXDPacceD%2BcHBkczNZvgzp6j6RfZRiynhZVVBTWnSzJfhzmLkAR4uq8%2BaeCdA7chTGCuBxx3BbhUAAeEcPHS2c8t6mqsuwkPlqgSiFb%2FoGxisqRAZRCtIfNEHNiTHXpR8oFGVqjeLuBDDk36%2BFBjrhATRrp5dhRdXu5VgYUpRXrIFqg8FU5J8iDvAZT6WTzC60djT4cuGKUebJmnpF%2BJbzJ0NAHDnAXylnE%2F5UpHBjwKMZtgouvvHsgxYWIWddTQWP1ihySIaZuAotqjYSjLhBsIKpznabeWoWW5Zu5VJOcb1pfyzBYJVv7JQAxKglgR6L5y3cNuLRrs2cnNMmYKVGcitUd9roccBGhMh5ph3ChaLadrxEJPiyHOtIZcDRTDZDrMvTvLP1GCXtbirK8LDNWRs6KS2ldx4nHTirXGyUPO2k6MZJxImiXJCmShcmLP1XKg%3D%3D&X-Amz-Signature=1592e0c681cbba2f2c8e812a75f18bc4af238231cd9f76ad2994bad9d9eee026&X-Amz-SignedHeaders=host"
}
},
{
"_id": "60abeff60836730008616fb3",
"year": "2019-01-01T00:00:00.000Z",
"totalWages": 50000,
"form": {
"_id": "60abeff50836730008616faa",
"url": "https://atomicfi-task-output-file-sandbox.s3.amazonaws.com/customer-5e978dcf3abbf90008a00b7a/task-60abefde8a4445000956e30a/587c09a2-1b05-47ee-9ea7-fb49d587119f.pdf?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=ASIA6EYFQTOWMIY2TUSM%2F20210524%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20210524T182709Z&X-Amz-Expires=3600&X-Amz-Security-Token=IQoJb3JpZ2luX2VjEFsaCXVzLWVhc3QtMSJGMEQCIBCAiYKuUVIYjCFolA%2Feeu%2BDu%2FNtlADYOOX7Uy09CVWuAiAT1umLzk5Xbs6vANPMFS9sRWWyLIRWfmZXbCLtXeW5QCrnAQj0%2F%2F%2F%2F%2F%2F%2F%2F%2F%2F8BEAIaDDk3MjI4NDc5NTgyMCIMssXdL%2FxgLWwt8%2BgnKrsB4jVv4rWWhviSPha5qwh1WPasjt%2FhLKr4dRI9%2FgUSQercXwg37%2FYjdK3%2BcVrvEgfnYkhw3U5YHUc9Aja4baMLONbQemmcZ5%2FI0ehwlkcTDdDspTHXDPacceD%2BcHBkczNZvgzp6j6RfZRiynhZVVBTWnSzJfhzmLkAR4uq8%2BaeCdA7chTGCuBxx3BbhUAAeEcPHS2c8t6mqsuwkPlqgSiFb%2FoGxisqRAZRCtIfNEHNiTHXpR8oFGVqjeLuBDDk36%2BFBjrhATRrp5dhRdXu5VgYUpRXrIFqg8FU5J8iDvAZT6WTzC60djT4cuGKUebJmnpF%2BJbzJ0NAHDnAXylnE%2F5UpHBjwKMZtgouvvHsgxYWIWddTQWP1ihySIaZuAotqjYSjLhBsIKpznabeWoWW5Zu5VJOcb1pfyzBYJVv7JQAxKglgR6L5y3cNuLRrs2cnNMmYKVGcitUd9roccBGhMh5ph3ChaLadrxEJPiyHOtIZcDRTDZDrMvTvLP1GCXtbirK8LDNWRs6KS2ldx4nHTirXGyUPO2k6MZJxImiXJCmShcmLP1XKg%3D%3D&X-Amz-Signature=6f4e330438cb12c5cc7226342377b7bb610691c74f8def6d56b7b7c4db455988&X-Amz-SignedHeaders=host"
}
}
],
"statements": [
{
"_id": "60abeff60836730008616fb4",
"date": "2020-06-15T12:00:00.000Z",
"grossAmount": 1000,
"paystub": {
"_id": "60abeff50836730008616fad",
"url": "https://atomicfi-task-output-file-sandbox.s3.amazonaws.com/customer-5e978dcf3abbf90008a00b7a/task-60abefde8a4445000956e30a/a4ac9f90-902c-4ae4-8008-95f7ffff2159.pdf?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=ASIA6EYFQTOWMIY2TUSM%2F20210524%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20210524T182709Z&X-Amz-Expires=3600&X-Amz-Security-Token=IQoJb3JpZ2luX2VjEFsaCXVzLWVhc3QtMSJGMEQCIBCAiYKuUVIYjCFolA%2Feeu%2BDu%2FNtlADYOOX7Uy09CVWuAiAT1umLzk5Xbs6vANPMFS9sRWWyLIRWfmZXbCLtXeW5QCrnAQj0%2F%2F%2F%2F%2F%2F%2F%2F%2F%2F8BEAIaDDk3MjI4NDc5NTgyMCIMssXdL%2FxgLWwt8%2BgnKrsB4jVv4rWWhviSPha5qwh1WPasjt%2FhLKr4dRI9%2FgUSQercXwg37%2FYjdK3%2BcVrvEgfnYkhw3U5YHUc9Aja4baMLONbQemmcZ5%2FI0ehwlkcTDdDspTHXDPacceD%2BcHBkczNZvgzp6j6RfZRiynhZVVBTWnSzJfhzmLkAR4uq8%2BaeCdA7chTGCuBxx3BbhUAAeEcPHS2c8t6mqsuwkPlqgSiFb%2FoGxisqRAZRCtIfNEHNiTHXpR8oFGVqjeLuBDDk36%2BFBjrhATRrp5dhRdXu5VgYUpRXrIFqg8FU5J8iDvAZT6WTzC60djT4cuGKUebJmnpF%2BJbzJ0NAHDnAXylnE%2F5UpHBjwKMZtgouvvHsgxYWIWddTQWP1ihySIaZuAotqjYSjLhBsIKpznabeWoWW5Zu5VJOcb1pfyzBYJVv7JQAxKglgR6L5y3cNuLRrs2cnNMmYKVGcitUd9roccBGhMh5ph3ChaLadrxEJPiyHOtIZcDRTDZDrMvTvLP1GCXtbirK8LDNWRs6KS2ldx4nHTirXGyUPO2k6MZJxImiXJCmShcmLP1XKg%3D%3D&X-Amz-Signature=8181652f0d8af4ab7d3cf3e2aab8cdf563384134acdd4fa709e21edb805287b4&X-Amz-SignedHeaders=host"
}
},
{
"_id": "60abeff60836730008616fb5",
"date": "2020-06-30T12:00:00.000Z",
"grossAmount": 1000,
"paystub": {
"_id": "60abeff50836730008616fae",
"url": "https://atomicfi-task-output-file-sandbox.s3.amazonaws.com/customer-5e978dcf3abbf90008a00b7a/task-60abefde8a4445000956e30a/4ba670fe-378e-4228-a6df-d0708b73e5fa.pdf?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=ASIA6EYFQTOWMIY2TUSM%2F20210524%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20210524T182709Z&X-Amz-Expires=3600&X-Amz-Security-Token=IQoJb3JpZ2luX2VjEFsaCXVzLWVhc3QtMSJGMEQCIBCAiYKuUVIYjCFolA%2Feeu%2BDu%2FNtlADYOOX7Uy09CVWuAiAT1umLzk5Xbs6vANPMFS9sRWWyLIRWfmZXbCLtXeW5QCrnAQj0%2F%2F%2F%2F%2F%2F%2F%2F%2F%2F8BEAIaDDk3MjI4NDc5NTgyMCIMssXdL%2FxgLWwt8%2BgnKrsB4jVv4rWWhviSPha5qwh1WPasjt%2FhLKr4dRI9%2FgUSQercXwg37%2FYjdK3%2BcVrvEgfnYkhw3U5YHUc9Aja4baMLONbQemmcZ5%2FI0ehwlkcTDdDspTHXDPacceD%2BcHBkczNZvgzp6j6RfZRiynhZVVBTWnSzJfhzmLkAR4uq8%2BaeCdA7chTGCuBxx3BbhUAAeEcPHS2c8t6mqsuwkPlqgSiFb%2FoGxisqRAZRCtIfNEHNiTHXpR8oFGVqjeLuBDDk36%2BFBjrhATRrp5dhRdXu5VgYUpRXrIFqg8FU5J8iDvAZT6WTzC60djT4cuGKUebJmnpF%2BJbzJ0NAHDnAXylnE%2F5UpHBjwKMZtgouvvHsgxYWIWddTQWP1ihySIaZuAotqjYSjLhBsIKpznabeWoWW5Zu5VJOcb1pfyzBYJVv7JQAxKglgR6L5y3cNuLRrs2cnNMmYKVGcitUd9roccBGhMh5ph3ChaLadrxEJPiyHOtIZcDRTDZDrMvTvLP1GCXtbirK8LDNWRs6KS2ldx4nHTirXGyUPO2k6MZJxImiXJCmShcmLP1XKg%3D%3D&X-Amz-Signature=5fcf0eaca8c461eb3e1479ca15e23931ab0c9ed9e38eed3bb1292dffeb436a91&X-Amz-SignedHeaders=host"
}
}
],
"accounts": [
{
"_id": "60abeff60836730008616fb0",
"routingNumber": "123123123",
"accountNumber": "XXXX0000",
"type": "checking",
"distributionType": "percentage",
"distributionAmount": 80
},
{
"_id": "60abeff60836730008616fb1",
"routingNumber": "456456456",
"accountNumber": "XXXX1111",
"type": "savings",
"distributionType": "percentage",
"distributionAmount": 20
}
],
"employeeType": "fulltime",
"employmentStatus": "active",
"income": 45000,
"incomeType": "Biweekly",
"jobTitle": "Product Manager",
"startDate": "2017-04-19T12:00:00.000Z",
"address": "123 Street St.",
"city": "Provo",
"dateOfBirth": "1984-04-12T12:00:00.000Z",
"email": "jappleseed@example.org",
"firstName": "Jane",
"lastName": "Appleseed",
"phone": "8015551111",
"postalCode": 84606,
"ssn": "1112223333",
"state": "UT",
"weeklyHours": 40,
"payCycle": "weekly",
"employer": "ABC Company"
}
},
"eventType": "task-status-updated",
"eventTime": "2021-05-24T18:27:09.610Z"
}
```
### Outputs for the `verify` product
| Attribute | Description |
| ------------------ | ------------------------------------------------------------------------------------------------- |
| `income` | Employee's income, represented as a number. |
| `incomeType` | Employee's income type. Possible values are `yearly`, `monthly`, `weekly`, `daily`, and `hourly`. |
| `employeeType` | Employee type. Possible values are `parttime` and `fulltime`. |
| `employmentStatus` | Employment status. Possible values are `active` and `terminated`. |
| `jobTitle` | Employee's job title. |
| `startDate` | Employee's hire date. |
| `weeklyHours` | Number of hours worked per week. |
| `payCycle` | Payment period. Possible values are `monthly`, `semimonthly`, `biweekly`, and `weekly`. |
| `statements` | An array of [statements](#statement). |
| `accounts` | An array of bank [accounts](#deposit-account) on file for paycheck distributions. |
| `w2s` | An array of [w2s](#w2). |
### Nested object properties
#### Statement
> Sample statement
```json
{
"date": "2020-01-01T00:00:00.000Z",
"grossAmount": 1266.11,
"netAmount": 1014.29,
"paymentMethod": "deposit",
"paystub": {
"_id": "602414d84f9a1980cf5eafcc",
"url": "https://private-bucket.s3.amazonaws.com/5e978dcf3abbf90008a00b7a/602414d84f9a1980cf5eafcc/7773bc56-71b0-40a1-bbed-03d7b1434850.pdf?Expires=1612991865&Signature=jwSGIZEnOsNJrrQaUhLRnaNg5uQ%3D"
},
"hours": 32
}
```
##### Properties
| Name | Type | Description |
| ------------------------ | ------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `date` <h6>required</h6> | string | Date of the deposit. |
| `grossAmount` | number | Gross dollar amount of the deposit. |
| `netAmount` | number | Net dollar amount of the deposit. |
| `paymentMethod` | string | Method used for the payment. Possible values include `deposit` or `check`. |
| `paystub` | object | An object containing fields `_id` and `url`. The `url` field can be used to download a PDF and is valid for 1 hour. If the `url` expires, you can get a new one using the [generate file url](#generate-file-url) endpoint. |
| `hours` | number | Hours worked within the pay period. |
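Because each `paystub.url` (and W2 `form.url`) is a pre-signed link that is only valid for 1 hour, it can be useful to check how long a stored URL has left before attempting a download. A minimal sketch (the helper name is ours, not part of the Atomic API) that reads the `X-Amz-Date` and `X-Amz-Expires` query parameters visible on the sample URLs above:

```python
from datetime import datetime, timedelta, timezone
from urllib.parse import urlparse, parse_qs

def signed_url_expiry(url):
    """Return the UTC expiry time of a pre-signed S3 URL.

    Parses the X-Amz-Date (signing time) and X-Amz-Expires (lifetime in
    seconds) query parameters carried by the paystub and W2 form URLs.
    """
    params = parse_qs(urlparse(url).query)
    signed_at = datetime.strptime(params["X-Amz-Date"][0], "%Y%m%dT%H%M%SZ")
    signed_at = signed_at.replace(tzinfo=timezone.utc)
    return signed_at + timedelta(seconds=int(params["X-Amz-Expires"][0]))
```

If the computed expiry has passed, request a fresh URL via the [generate file url](#generate-file-url) endpoint rather than retrying the stale link.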
#### Deposit account
> Sample deposit account
```json
{
"accountNumber": "220000000",
"routingNumber": "110000000",
"type": "checking",
"distributionType": "percent",
"distributionAmount": "75"
}
```
##### Properties
| Name | Type | Description |
| --------------------------------- | ------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `accountNumber` <h6>required</h6> | string | Account number. |
| `routingNumber` | string | When the account is a bank account, this is the ABA routing number. |
| `type` | string | Type of account. Possible values include `checking` or `savings`. |
| `distributionType` | string | The type of distribution for the account. Possible values include `total`, `percent`, or `fixed`. |
| `distributionAmount` | number | The amount being distributed to the account. When `distributionType` is `percent`, the number represents a percentage of the total pay. When `distributionType` is `fixed`, this number represents a fixed dollar amount. This value is not set when `distributionType` is `total`. |
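The `distributionType` rules above can be made concrete with a small helper. This is an illustrative sketch (the function name is ours, not part of the Atomic API); note that the sample webhook payload earlier in this section shows `"percentage"` where this table says `percent`, so both spellings are accepted here.

```python
def deposited_amount(gross_pay, account):
    """Dollars routed to one account for a single paycheck.

    Applies the distributionType semantics from the table above:
    - total:   the entire paycheck (remainder/net balance)
    - percent: distributionAmount is a percentage of gross pay
    - fixed:   distributionAmount is a fixed dollar amount
    """
    dist = account.get("distributionType")
    if dist == "total":
        return gross_pay
    if dist in ("percent", "percentage"):  # sample payloads use "percentage"
        return gross_pay * account["distributionAmount"] / 100
    if dist == "fixed":
        return min(account["distributionAmount"], gross_pay)
    raise ValueError(f"unknown distributionType: {dist!r}")
```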
#### W2
> Sample W2
```json
{
"totalWages": "23226.8",
"year": "2019",
"form": {
"_id": "3046cb8222cb33122dbcb65c",
"url": "https://private-bucket.s3.amazonaws.com/5e978dcf3abbf90008a00b7a/602414d84f9a1980cf5eafcc/7773bc56-71b0-40a1-bbed-03d7b1434850.pdf?Expires=1612991865&Signature=jwSGIZEnOsNJrrQaUhLRnaNg5uQ%3D"
}
}
```
##### Properties
| Name | Type | Description |
| ------------ | ------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `totalWages` | number | Wages, tips and other compensation. Box 1 of the W2 form. |
| `year` | string | The tax year. |
| `form` | object | An object containing fields `_id` and `url`. The `url` field can be used to download a PDF and is valid for 1 hour. If the `url` expires, you can get a new one using the [generate file url](#generate-file-url) endpoint. |
> Sample response for `identify`
```json
{
"eventType": "task-status-updated",
"eventTime": "2020-01-28T22:04:18.778Z",
"product": "identify",
"user": {
"_id": "5d8d3fecbf637ef3b11a877a",
"identifier": "YOUR_INTERNAL_GUID"
},
"task": "5e30afde097146a8fc3d5cec",
"data": {
"previousStatus": "processing",
"status": "completed",
"outputs": {
"firstName": "Jane",
"lastName": "Appleseed",
"dateOfBirth": "6/4/98",
"email": "jane@doe.com",
"phone": "5558881111",
"ssn": "111223333",
"address": "123 Example St.",
"city": "Salt Lake City",
"state": "UT",
"postalCode": "84111"
}
}
}
```
### Outputs for the `identify` product
| Attribute | Description |
| ------------- | ----------------------- |
| `firstName` | First name. |
| `lastName` | Last name. |
| `dateOfBirth` | Date of birth. |
| `email` | Email address. |
| `phone` | Phone number. |
| `ssn` | Social security number. |
| `address` | Address street address. |
| `city` | Address city. |
| `state` | Address state. |
| `postalCode` | Address postal code. |
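Webhook consumers typically persist only a handful of fields from each `task-status-updated` event. A minimal sketch, assuming the payload shape shown in the sample responses above (the function name is ours, not part of the Atomic API):

```python
def summarize_task_event(event):
    """Extract the commonly-stored fields from a task-status-updated
    webhook payload (shape taken from the sample responses above)."""
    if event.get("eventType") != "task-status-updated":
        return None  # some other event type; handle elsewhere
    return {
        "task": event["task"],
        "user": event["user"]["identifier"],
        "status": event["data"]["status"],
        "previousStatus": event["data"].get("previousStatus"),
    }
```

Storing the task `_id` alongside your own user identifier makes it straightforward to correlate later status transitions with the original request.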
## Fail Reasons
| Attribute | Description |
| ------------------------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `account-lockout` | The account is locked out, most likely due to too many failed login attempts. |
| `account-setup-incomplete` | The user's account setup is not complete and will require additional information from the user. |
| `bad-credentials` | Either the username or password was incorrect. |
| `connection-error` | A network error occurred. |
| `distribution-not-supported` | The account did not support the distribution, e.g. they requested to add an account for a percentage of their paycheck, but can only do fixed amounts and remainder/net balance. |
| `enrolled-in-paycard` | The user is enrolled in a paycard program instead of direct deposit via their bank. |
| `expired` | The user's password has expired and they must create a new one. |
| `product-not-supported` | The account did not support the product. |
| `routing-number-not-supported` | The account did not support the routing number entered. |
| `session-timeout` | The session timed out. |
| `system-unavailable` | The system was unavailable, e.g. the site is undergoing maintenance or it is outside the window of scheduled availability for the site. |
| `transaction-pending` | There is a transaction in progress. Only one transaction may be run at a time. |
| `unknown-failure` | We encountered an unexpected error. |
| `user-abandon` | The user was asked an MFA question, but did not answer the question. |
# API Reference
## Create Access Token
> Code samples
```shell
curl --location --request POST "https://api.atomicfi.com/access-token" \
--header "Content-Type: application/json" \
--header "x-api-key: f0d0a166-96de-4898-8879-da309801968b" \
--header "x-api-secret: afcce08f-95bd-4317-9119-ecb8debae4f2" \
--data "{
\"identifier\": \"YOUR_INTERNAL_GUID\",
\"phone\": \"8016554444\",
\"email\": \"jdoe@example.org\",
\"names\": [
{
\"firstName\": \"Jane\",
\"lastName\": \"Doe\"
}
],
\"addresses\": [
{
\"line1\": \"123 Atomic Ave.\",
\"line2\": \"Apt. #1\",
\"city\": \"Salt Lake City\",
\"state\": \"UT\",
\"zipcode\": \"84044\",
\"country\": \"US\"
}
],
\"accounts\": [
{
\"accountNumber\": \"220000000\",
\"routingNumber\": \"110000000\",
\"type\": \"checking\",
\"title\": \"Premier Plus Checking\"
}
]
}"
```
```javascript--nodejs
var https = require('https');
var options = {
'method': 'POST',
  'hostname': 'api.atomicfi.com',
'path': '/access-token',
'headers': {
'Content-Type': 'application/json',
'x-api-key': 'f0d0a166-96de-4898-8879-da309801968b',
'x-api-secret': 'afcce08f-95bd-4317-9119-ecb8debae4f2'
}
};
var req = https.request(options, function (res) {
var chunks = [];
res.on("data", function (chunk) {
chunks.push(chunk);
});
res.on("end", function (chunk) {
var body = Buffer.concat(chunks);
console.log(body.toString());
});
res.on("error", function (error) {
console.error(error);
});
});
var postData = "{\n\t\"identifier\": \"YOUR_INTERNAL_GUID\",\n\t\"phone\": \"8016554444\",\n\t\"email\": \"jdoe@example.org\",\n\t\"names\": [\n\t\t{\n\t\t\t\"firstName\": \"Jane\",\n\t\t\t\"lastName\": \"Doe\"\n\t\t}\n\t],\n\t\"addresses\": [\n\t\t{\n\t\t\t\"line1\": \"123 Atomic Ave.\",\n\t\t\t\"line2\": \"Apt. #1\",\n\t\t\t\"city\": \"Salt Lake City\",\n\t\t\t\"state\": \"UT\",\n\t\t\t\"zipcode\": \"84044\",\n\t\t\t\"country\": \"US\"\n\t\t}\n\t],\n\t\"accounts\": [\n\t\t{\n\t\t\t\"accountNumber\": \"220000000\",\n\t\t\t\"routingNumber\": \"110000000\",\n\t\t\t\"type\": \"checking\",\n\t\t\t\"title\": \"Premier Plus Checking\"\n\t\t}\n\t]\n}";
req.write(postData);
req.end();
```
```ruby
require "uri"
require "net/http"
url = URI("https://api.atomicfi.com/access-token")
http = Net::HTTP.new(url.host, url.port)
request = Net::HTTP::Post.new(url)
request["Content-Type"] = "application/json"
request["x-api-key"] = "f0d0a166-96de-4898-8879-da309801968b"
request["x-api-secret"] = "afcce08f-95bd-4317-9119-ecb8debae4f2"
request.body = "{\n\t\"identifier\": \"YOUR_INTERNAL_GUID\",\n\t\"phone\": \"8016554444\",\n\t\"email\": \"jdoe@example.org\",\n\t\"names\": [\n\t\t{\n\t\t\t\"firstName\": \"Jane\",\n\t\t\t\"lastName\": \"Doe\"\n\t\t}\n\t],\n\t\"addresses\": [\n\t\t{\n\t\t\t\"line1\": \"123 Atomic Ave.\",\n\t\t\t\"line2\": \"Apt. #1\",\n\t\t\t\"city\": \"Salt Lake City\",\n\t\t\t\"state\": \"UT\",\n\t\t\t\"zipcode\": \"84044\",\n\t\t\t\"country\": \"US\"\n\t\t}\n\t],\n\t\"accounts\": [\n\t\t{\n\t\t\t\"accountNumber\": \"220000000\",\n\t\t\t\"routingNumber\": \"110000000\",\n\t\t\t\"type\": \"checking\",\n\t\t\t\"title\": \"Premier Plus Checking\"\n\t\t}\n\t]\n}"
response = http.request(request)
puts response.read_body
```
```python
import requests
url = 'https://api.atomicfi.com/access-token'
payload = "{\n\t\"identifier\": \"YOUR_INTERNAL_GUID\",\n\t\"phone\": \"8016554444\",\n\t\"email\": \"jdoe@example.org\",\n\t\"names\": [\n\t\t{\n\t\t\t\"firstName\": \"Jane\",\n\t\t\t\"lastName\": \"Doe\"\n\t\t}\n\t],\n\t\"addresses\": [\n\t\t{\n\t\t\t\"line1\": \"123 Atomic Ave.\",\n\t\t\t\"line2\": \"Apt. #1\",\n\t\t\t\"city\": \"Salt Lake City\",\n\t\t\t\"state\": \"UT\",\n\t\t\t\"zipcode\": \"84044\",\n\t\t\t\"country\": \"US\"\n\t\t}\n\t],\n\t\"accounts\": [\n\t\t{\n\t\t\t\"accountNumber\": \"220000000\",\n\t\t\t\"routingNumber\": \"110000000\",\n\t\t\t\"type\": \"checking\",\n\t\t\t\"title\": \"Premier Plus Checking\"\n\t\t}\n\t]\n}"
headers = {
'Content-Type': 'application/json',
'x-api-key': 'f0d0a166-96de-4898-8879-da309801968b',
'x-api-secret': 'afcce08f-95bd-4317-9119-ecb8debae4f2'
}
response = requests.request('POST', url, headers=headers, data=payload, allow_redirects=False)
print(response.text)
```
```go
package main
import (
  "fmt"
  "strings"
  "net/http"
  "io/ioutil"
)
func main() {
url := "https://api.atomicfi.com/access-token"
method := "POST"
payload := strings.NewReader("{\n \"identifier\": \"YOUR_INTERNAL_GUID\",\n \"phone\": \"8016554444\",\n \"email\": \"jdoe@example.org\",\n \"names\": [\n {\n \"firstName\": \"Jane\",\n \"lastName\": \"Doe\"\n }\n ],\n \"addresses\": [\n {\n \"line1\": \"123 Atomic Ave.\",\n \"line2\": \"Apt. #1\",\n \"city\": \"Salt Lake City\",\n \"state\": \"UT\",\n \"zipcode\": \"84044\",\n \"country\": \"US\"\n }\n ],\n \"accounts\": [\n {\n \"accountNumber\": \"220000000\",\n \"routingNumber\": \"110000000\",\n \"type\": \"checking\",\n \"title\": \"Premier Plus Checking\"\n }\n ]\n}")
client := &http.Client {
CheckRedirect: func(req *http.Request, via []*http.Request) error {
return http.ErrUseLastResponse
},
}
req, err := http.NewRequest(method, url, payload)
if err != nil {
fmt.Println(err)
}
req.Header.Add("Content-Type", "application/json")
req.Header.Add("x-api-key", "f0d0a166-96de-4898-8879-da309801968b")
req.Header.Add("x-api-secret", "afcce08f-95bd-4317-9119-ecb8debae4f2")
res, err := client.Do(req)
defer res.Body.Close()
body, err := ioutil.ReadAll(res.Body)
fmt.Println(string(body))
}
```
> Example request
```json
{
"identifier": "YOUR_INTERNAL_GUID",
"phone": "8016554444",
"email": "jdoe@example.org",
"names": [
{
"firstName": "Jane",
"lastName": "Doe"
}
],
"addresses": [
{
"line1": "123 Atomic Ave.",
"line2": "Apt. #1",
"city": "Salt Lake City",
"state": "UT",
"zipcode": "84044",
"country": "US"
}
],
"accounts": [
{
"accountNumber": "220000000",
"routingNumber": "110000000",
"type": "checking",
"title": "Premier Plus Checking"
}
]
}
```
An `AccessToken` grants access to Atomic's API resources for a specific user.
### HTTP Request
`POST /access-token`
### Authentication headers
| Name | Description |
| -------------- | ---------------------------------- |
| `x-api-key` | API Key for your Atomic account |
| `x-api-secret` | API Secret for your Atomic account |
### Request properties
| Name | Type | Description |
| ------------------------------ | --------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `identifier` <h6>required</h6> | string | A unique identifier (GUID) from your system that will be used to reference this user. |
| `accounts` <h6>required</h6> | [[Account](#account)] | An array of bank and/or credit card accounts. At least one bank account is required for a [Deposit](#payroll-connect) transaction, and at least one card account is required for a [Transfer](#transfer) transaction. |
| `names` | [[Name](#name)] | Account holder names, for reference. |
| `addresses` | [[Address](#address)] | Account holder addresses, for reference. |
### Nested object properties
#### Account
> Bank account example
```json
{
"accountNumber": "220000000",
"routingNumber": "110000000",
"type": "checking",
"title": "Premier Plus Checking"
}
```
> Card account example
```json
{
"accountNumber": "4111111111111111",
"title": "ABC Bank Platinum Card",
"type": "card",
"transferLimit": "50000"
}
```
##### Properties
| Name | Type | Description |
| --------------------------------- | ------ | ------------------------------------------------------------------------------------------------------------------------------- |
| `accountNumber` <h6>required</h6> | string | Account number. |
| `routingNumber` | string | When the account is a bank account, this is the ABA routing number. |
| `type` <h6>required</h6> | string | Type of account. Possible values include `card`, `checking`, or `savings`. This field is required for creating an access token. |
| `title` | string | A friendly name for the account that could be shown to the user. |
| `transferLimit` | string | A balance transfer limit (in dollars) that may be optionally imposed when executing a [Transfer](#transfer) transaction. |
#### Name
> Name example
```json
{
"firstName": "Jane",
"lastName": "Doe"
}
```
##### Properties
| Name | Type | Description |
| ----------- | ------ | ----------------------------- |
| `firstName` | string | First name of account holder. |
| `lastName` | string | Last name of account holder. |
#### Address
> Address example
```json
{
"line1": "123 Atomic Ave.",
"line2": "Apt. #1",
"city": "Salt Lake City",
"state": "UT",
"zipcode": "84044",
"country": "US"
}
```
##### Properties
| Name | Type | Description |
| --------- | ------ | ------------------------ |
| `line1` | string | First line of address. |
| `line2` | string | Second line of address. |
| `city` | string | City of address. |
| `state` | string | State of address. |
| `zipcode` | string | 5-digit zipcode. |
| `country` | string | Country of address. |
### Response
> Example response
```json
{
"data": {
"publicToken": "6e93549e-3571-4f57-b0f7-77b7cb0b5e48",
"token": "c00f869e-0fce-4d32-9277-a68658578ba2"
}
}
```
Successfully creating an `Access Token` will return a payload with a `data` object containing both a public `publicToken` and a private `token`. The `publicToken` may be used to initialize the [Transact SDK](#transact-sdk).
### Response Properties
| Name | Type | Description |
| ------------- | ------ | ---------------------------------------------------------------------------------- |
| `publicToken` | string | Public `AccessToken` that can be used to initialize [Transact SDK](#transact-sdk). |
| `token` | string | Private token that should be kept secret. |
## List Linked Accounts
> Code samples
```shell
curl --location --request GET "https://api.atomicfi.com/linked-account/list/{identifier}" \
  --header "x-api-key: f0d0a166-96de-4898-8879-da309801968b" \
  --header "x-api-secret: afcce08f-95bd-4317-9119-ecb8debae4f2"
```
```javascript--nodejs
var https = require('https');
var options = {
'method': 'GET',
  'hostname': 'api.atomicfi.com',
'path': '/linked-account/list/{identifier}',
'headers': {
'x-api-key': 'f0d0a166-96de-4898-8879-da309801968b',
'x-api-secret': 'afcce08f-95bd-4317-9119-ecb8debae4f2'
}
};
var req = https.request(options, function (res) {
var chunks = [];
res.on("data", function (chunk) {
chunks.push(chunk);
});
res.on("end", function (chunk) {
var body = Buffer.concat(chunks);
console.log(body.toString());
});
res.on("error", function (error) {
console.error(error);
});
});
req.end();
```
```ruby
require "uri"
require "net/http"
url = URI("https://api.atomicfi.com/linked-account/list/{identifier}")
http = Net::HTTP.new(url.host, url.port)
request = Net::HTTP::Get.new(url)
request["x-api-key"] = "f0d0a166-96de-4898-8879-da309801968b"
request["x-api-secret"] = "afcce08f-95bd-4317-9119-ecb8debae4f2"
response = http.request(request)
puts response.read_body
```
```python
import requests
url = 'https://api.atomicfi.com/linked-account/list/{identifier}'
payload = {}
headers = {
'x-api-key': 'f0d0a166-96de-4898-8879-da309801968b',
'x-api-secret': 'afcce08f-95bd-4317-9119-ecb8debae4f2'
}
response = requests.request('GET', url, headers=headers, data=payload, allow_redirects=False)
print(response.text)
```
```go
package main
import (
  "fmt"
  "net/http"
  "io/ioutil"
)
func main() {
url := "https://api.atomicfi.com/linked-account/list/{identifier}"
method := "GET"
client := &http.Client {
CheckRedirect: func(req *http.Request, via []*http.Request) error {
return http.ErrUseLastResponse
},
}
req, err := http.NewRequest(method, url, nil)
if err != nil {
fmt.Println(err)
}
req.Header.Add("x-api-key", "f0d0a166-96de-4898-8879-da309801968b")
req.Header.Add("x-api-secret", "afcce08f-95bd-4317-9119-ecb8debae4f2")
res, err := client.Do(req)
defer res.Body.Close()
body, err := ioutil.ReadAll(res.Body)
fmt.Println(string(body))
}
```
If enabled on your account, this endpoint can be used to list the accounts linked to a particular user.
### HTTP Request
`GET /linked-account/list/{identifier}`
### Authentication headers
| Name | Description |
| -------------- | ---------------------------------- |
| `x-api-key` | API Key for your Atomic account |
| `x-api-secret` | API Secret for your Atomic account |
### Request properties
| Name | Type | Description |
| ------------ | ------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `identifier` | string | The user's identifier, as sent to Atomic during [AccessToken](#create-access-token) creation. Passed as a path parameter in the URL (e.g. `/linked-account/list/{identifier}`), replacing `{identifier}` with your user's `identifier`. |
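Since the identifier is interpolated into the URL path, it should be URL-encoded before the request is made. A small sketch (the helper name is ours, not part of the Atomic API):

```python
from urllib.parse import quote

def linked_accounts_url(identifier, base="https://api.atomicfi.com"):
    """Build the List Linked Accounts URL for a given user identifier.

    The identifier is sent as a path parameter, so any characters that
    are not URL-safe must be percent-encoded.
    """
    return f"{base}/linked-account/list/{quote(identifier, safe='')}"
```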
### Response
> Example response
```json
{
"data": [
{
"_id": "5f7212103a40e91f95ba376d",
"valid": true,
"connector": {
"_id": "5d77f95207085632a58945c3",
"name": "ADP",
"branding": {}
},
"company": {
"_id": "5d77f9e1070856f3828945c6",
"name": "DoTerra",
"branding": {
"logo": {
"_id": "5eb62781b4b83f0008f638cc",
"url": "https://atomicfi-public-production.s3.amazonaws.com/979115f4-34a0-44f5-901e-753a33337444_atomic-logo-dark.png"
}
}
},
"lastSuccess": "2020-09-28T16:40:48.009Z",
"transactRequired": false
}
]
}
```
### LinkedAccount object
| Name | Type | Description |
| ------------------ | ------- | -------------------------------------------------------------------------------------------------- |
| `_id` | string | Unique identifier. |
| `valid` | boolean | Whether or not the account credentials were valid after the last attempted use. |
| `transactRequired` | boolean | Whether or not using the account requires the user to be present within [Transact](#transact-sdk). |
| `lastSuccess` | date | The datetime of the last successful usage of the account. |
| `lastFailure` | date | The datetime of the last failed usage of the account. |
| `company` | object | The [Company](#company-object) to which the account is linked. |
| `connector` | object | The [Connector](#connector-object) to which the account is linked. |
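When deciding whether a task can be created server-side (see [Use a Linked Account](#use-a-linked-account) below), the `valid` and `transactRequired` flags together determine whether the user needs to be present. A minimal sketch (the function name is ours, not part of the Atomic API):

```python
def background_usable(linked_accounts):
    """Filter linked accounts that can be used without the user present.

    An account qualifies when its credentials were valid on last use
    and it does not require the user to be in Transact.
    """
    return [
        account for account in linked_accounts
        if account.get("valid") and not account.get("transactRequired")
    ]
```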
## Use a Linked Account
> Verify samples
```shell
curl --location --request POST "https://api.atomicfi.com/task/create" \
--header "Content-Type: application/json" \
--header "x-api-key: f0d0a166-96de-4898-8879-da309801968b" \
--header "x-api-secret: afcce08f-95bd-4317-9119-ecb8debae4f2" \
--data "{
\"product\": \"verify\",
\"linkedAccount\": \"5d77f9e1070856f3828945c6\"
}"
curl --location --request POST "https://api.atomicfi.com/task/create" \
--header "Content-Type: application/json" \
--header "x-api-key: f0d0a166-96de-4898-8879-da309801968b" \
--header "x-api-secret: afcce08f-95bd-4317-9119-ecb8debae4f2" \
--data "{
\"product\": \"deposit\",
\"linkedAccount\": \"5d77f9e1070856f3828945c6\",
\"distribution\": {
\"type\": \"fixed\",
\"amount\": 50,
\"action\": \"create\"
}
}"
```
```javascript--nodejs
var https = require('https');
var options = {
'method': 'POST',
  'hostname': 'api.atomicfi.com',
'path': '/task/create',
'headers': {
'Content-Type': 'application/json',
'x-api-key': 'f0d0a166-96de-4898-8879-da309801968b',
'x-api-secret': 'afcce08f-95bd-4317-9119-ecb8debae4f2'
}
};
var req = https.request(options, function (res) {
var chunks = [];
res.on("data", function (chunk) {
chunks.push(chunk);
});
res.on("end", function (chunk) {
var body = Buffer.concat(chunks);
console.log(body.toString());
});
res.on("error", function (error) {
console.error(error);
});
});
var postData = "{\n  \"product\": \"verify\",\n  \"linkedAccount\": \"5d77f9e1070856f3828945c6\"\n}";
req.write(postData);
req.end();
```
```ruby
require "uri"
require "net/http"
url = URI("https://api.atomicfi.com/task/create")
http = Net::HTTP.new(url.host, url.port)
request = Net::HTTP::Post.new(url)
request["Content-Type"] = "application/json"
request["x-api-key"] = "f0d0a166-96de-4898-8879-da309801968b"
request["x-api-secret"] = "afcce08f-95bd-4317-9119-ecb8debae4f2"
request.body = "{\n  \"product\": \"verify\",\n  \"linkedAccount\": \"5d77f9e1070856f3828945c6\"\n}"
response = http.request(request)
puts response.read_body
```
```python
import requests
url = 'https://api.atomicfi.com/task/create'
payload = "{\n  \"product\": \"verify\",\n  \"linkedAccount\": \"5d77f9e1070856f3828945c6\"\n}"
headers = {
'Content-Type': 'application/json',
'x-api-key': 'f0d0a166-96de-4898-8879-da309801968b',
'x-api-secret': 'afcce08f-95bd-4317-9119-ecb8debae4f2'
}
response = requests.request('POST', url, headers=headers, data=payload, allow_redirects=False)
print(response.text)
```
```go
package main
import (
  "fmt"
  "strings"
  "net/http"
  "io/ioutil"
)
func main() {
url := "https://api.atomicfi.com/task/create"
method := "POST"
payload := strings.NewReader("{\n  \"product\": \"verify\",\n  \"linkedAccount\": \"5d77f9e1070856f3828945c6\"\n}")
client := &http.Client {
CheckRedirect: func(req *http.Request, via []*http.Request) error {
return http.ErrUseLastResponse
},
}
req, err := http.NewRequest(method, url, payload)
if err != nil {
fmt.Println(err)
}
req.Header.Add("Content-Type", "application/json")
req.Header.Add("x-api-key", "f0d0a166-96de-4898-8879-da309801968b")
req.Header.Add("x-api-secret", "afcce08f-95bd-4317-9119-ecb8debae4f2")
res, err := client.Do(req)
defer res.Body.Close()
body, err := ioutil.ReadAll(res.Body)
fmt.Println(string(body))
}
```
> Request body example
```json
{
"product": "verify",
"linkedAccount": "5d77f9e1070856f3828945c6"
}
```
To run a task against a Linked Account, create a `Task` request containing the `product` and the `_id` of the [LinkedAccount](#linkedaccount-object). A `Task` is created if the request succeeds.
### HTTP Request
`POST /task/create`
### Authentication headers
| Name | Description |
| -------------- | ---------------------------------- |
| `x-api-key` | API Key for your Atomic account |
| `x-api-secret` | API Secret for your Atomic account |
### Request properties
| Name | Type | Description |
| --------------------------------- | ------ | ----------------------------------------------------------------------------------------------- |
| `product` <h6>required</h6> | string | One of `verify`, `identify`, or `deposit`. |
| `linkedAccount` <h6>required</h6> | string | The `_id` of a [LinkedAccount](#linkedaccount-object) object. |
| `distribution` | object | The distribution of the deposit as defined in the [SDK parameters](#javascript-sdk-parameters). |
| `metadata` | object | Optionally pass [Metadata](#metadata) that will be returned to you in webhook events. |
### Response
> Example responses
```json
{
"data": {
"id": "5d3b23b155f500465c895f60",
"status": "queued"
}
}
```
Successfully creating a `Task` will return a payload with a `data` object containing the task `_id` and a `status` of `queued`. The `status` will change as the task progresses towards completion. The [Atomic Dashboard](https://dashboard.atomicfi.com/) can be used to track task progress. If you plan on implementing [webhooks](#webhooks), we recommend saving the task `_id` for reference.
### Response Properties
| Name | Type | Description |
| -------- | ------ | --------------------------------------------- |
| `_id` | string | Unique identifier for the `Task`. |
| `status` | string | [Status](#task-status-updated) of the `Task`. |
## Company Search
> Code samples
```shell
curl --location --request POST "https://api.atomicfi.com/company/search" \
--header "x-api-key: f0d0a166-96de-4898-8879-da309801968b" \
--header "x-api-secret: afcce08f-95bd-4317-9119-ecb8debae4f2" \
--data "{
\"product\": \"deposit\",
\"query\": \"ADP\"
}"
```
```javascript--nodejs
var https = require('https');
var options = {
'method': 'POST',
  'hostname': 'api.atomicfi.com',
'path': '/company/search',
'headers': {
'Content-Type': 'application/json',
'x-api-key': 'f0d0a166-96de-4898-8879-da309801968b',
'x-api-secret': 'afcce08f-95bd-4317-9119-ecb8debae4f2'
}
};
var req = https.request(options, function (res) {
var chunks = [];
res.on("data", function (chunk) {
chunks.push(chunk);
});
res.on("end", function (chunk) {
var body = Buffer.concat(chunks);
console.log(body.toString());
});
res.on("error", function (error) {
console.error(error);
});
});
var postData = "{\n \"product\": \"deposit\",\n \"query\": \"ADP\"\n}";
req.write(postData);
req.end();
```
```ruby
require "uri"
require "net/http"
url = URI("https://api.atomicfi.com/company/search")
http = Net::HTTP.new(url.host, url.port)
request = Net::HTTP::Post.new(url)
request["Content-Type"] = "application/json"
request["x-api-key"] = "e0d2f67e-dc98-45d8-8b22-db76cb52f732"
request["x-api-secret"] = "e0d2f67e-dc98-45d8-8b22-db76cb52f732"
request.body = "{\n \"product\": \"deposit\",\n \"query\": \"ADP\"\n}"
response = http.request(request)
puts response.read_body
```
```python
import requests
url = 'https://api.atomicfi.com/company/search'
payload = "{\n \"product\": \"deposit\",\n \"query\": \"ADP\"\n}"
headers = {
'x-api-key': 'e0d2f67e-dc98-45d8-8b22-db76cb52f732',
'x-api-secret': 'e0d2f67e-dc98-45d8-8b22-db76cb52f732'
}
response = requests.request('POST', url, headers=headers, data=payload, allow_redirects=False)
print(response.text)
```
```go
package main
import (
  "fmt"
  "strings"
  "net/http"
  "io/ioutil"
)
func main() {
url := "https://api.atomicfi.com/company/search"
method := "POST"
  payload := strings.NewReader("{\n \"product\": \"deposit\",\n \"query\": \"ADP\"\n}")
client := &http.Client {
CheckRedirect: func(req *http.Request, via []*http.Request) error {
return http.ErrUseLastResponse
},
}
  req, err := http.NewRequest(method, url, payload)
if err != nil {
fmt.Println(err)
}
req.Header.Add("x-api-key", "e0d2f67e-dc98-45d8-8b22-db76cb52f732")
req.Header.Add("x-api-secret", "e0d2f67e-dc98-45d8-8b22-db76cb52f732")
res, err := client.Do(req)
defer res.Body.Close()
body, err := ioutil.ReadAll(res.Body)
fmt.Println(string(body))
}
```
Searches for a `Company` using a text `query`. Searches can also be narrowed by passing in a specific `product`. The primary use case of this endpoint is for an autocomplete search component.
### HTTP Request
`POST /company/search`
### Authentication headers
#### Using API credentials (BACKEND)
| Name | Description |
| -------------- | ---------------------------------- |
| `x-api-key` | API Key for your Atomic account |
| `x-api-secret` | API Secret for your Atomic account |
#### Using public token (CLIENT-SIDE)
| Name | Description |
| ---------------- | ---------------------------------------------------------------------------- |
| `x-public-token` | Public token generated during [access token creation](#create-access-token). |
### Request properties
| Name | Type | Description |
| ------------------------- | ------ | -------------------------------------------------------------------------------------------------------------------- |
| `query` <h6>required</h6> | string | Filters companies by name. Uses fuzzy matching to narrow results. |
| `product` | string | Filters companies by a specific product. Possible values include `verify`, `identify`, and `deposit`. |
| `tags` | array | Filters companies by one or more tags. Possible values include `gig-economy`, `payroll-provider`, and `unemployment`. |
| `excludedTags` | array | Excludes companies matching any of the given tags. Possible values include `gig-economy`, `payroll-provider`, and `unemployment`. |
### Response
> Example response
```json
{
"data": [
{
"_id": "5d38f1e8512bbf71fb776015",
"name": "DoorDash",
"status": "operational",
"connector": {
"_id": "5d38f182512bbf0c06776013",
"availableProducts": ["deposit"]
},
"branding": {
"logo": {
"url": "https://atomicfi-public-production.s3.amazonaws.com/a8d7e778-b718-45e0-b639-2305e33e7f95_ADP.svg"
},
"color": "#E20000"
},
"tags": ["gig-economy"]
}
]
}
```
Successfully querying the `Company` search endpoint will return a payload with a `data` array of `Company` objects.
When a company is `under-maintenance`, users are unable to initiate an authentication until it has been restored.
### Company object
| Name | Type | Description |
| ----------------------------- | ------ | ---------------------------------------------------------------------------------------------------------------------------------------------- |
| `_id` | string | Unique identifier. |
| `name` | string | Company name. |
| `status` | string | Possible values include `operational` or `under-maintenance`. |
| `connector.availableProducts` | array | A list of compatible products. |
| `branding.logo.url` | string | Logo for the company, typically an `svg` if available. |
| `branding.color` | string | Branding color for the company. |
| `tags` | array | Categories to which a company is associated. Possible values include `gig-economy`, `payroll-provider`, `unemployment`, and `manual-deposits`. |
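As a sketch of how a client might consume this response (the helper below is hypothetical and not part of any Atomic SDK), the `data` array can be filtered down to companies that are operational and support a given product:

```python
def available_companies(companies, product):
    """Return names of Company objects that are operational and support `product`."""
    return [
        c["name"]
        for c in companies
        if c.get("status") == "operational"
        and product in c.get("connector", {}).get("availableProducts", [])
    ]

# Sample data shaped like the response above; "Acme" is a made-up entry.
companies = [
    {"name": "DoorDash", "status": "operational",
     "connector": {"availableProducts": ["deposit"]}},
    {"name": "Acme", "status": "under-maintenance",
     "connector": {"availableProducts": ["deposit"]}},
]
print(available_companies(companies, "deposit"))  # ['DoorDash']
```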
## Generate File URL
Generates a URL that can be used to download a user file (e.g., paystub PDFs). Each URL is valid for 1 hour.
> Code samples
```shell
# You can also use wget
curl --location --request GET "https://api.atomicfi.com/task/602414d84f9a1980cf5eafcc/file/602415224f9a1980cf5eb0a1/generate-url" \
--header "Content-Type: application/json" \
--header "x-api-key: f0d0a166-96de-4898-8879-da309801968b" \
--header "x-api-secret: afcce08f-95bd-4317-9119-ecb8debae4f2" \
```
```javascript--nodejs
var https = require('https');
var options = {
'method': 'GET',
  'hostname': 'api.atomicfi.com',
'path': '/task/602414d84f9a1980cf5eafcc/file/602415224f9a1980cf5eb0a1/generate-url',
'headers': {
'Content-Type': 'application/json',
'x-api-key': 'f0d0a166-96de-4898-8879-da309801968b',
'x-api-secret': 'afcce08f-95bd-4317-9119-ecb8debae4f2'
}
};
var req = https.request(options, function (res) {
var chunks = [];
res.on("data", function (chunk) {
chunks.push(chunk);
});
res.on("end", function (chunk) {
var body = Buffer.concat(chunks);
console.log(body.toString());
});
res.on("error", function (error) {
console.error(error);
});
});
req.end();
```
```ruby
require "uri"
require "net/http"
url = URI("https://api.atomicfi.com/task/602414d84f9a1980cf5eafcc/file/602415224f9a1980cf5eb0a1/generate-url")
http = Net::HTTP.new(url.host, url.port)
request = Net::HTTP::Get.new(url)
request["Content-Type"] = "application/json"
request["x-api-key"] = "f0d0a166-96de-4898-8879-da309801968b"
request["x-api-secret"] = "afcce08f-95bd-4317-9119-ecb8debae4f2"
response = http.request(request)
puts response.read_body
```
```python
import requests
url = 'https://api.atomicfi.com/task/602414d84f9a1980cf5eafcc/file/602415224f9a1980cf5eb0a1/generate-url'
headers = {
'Content-Type': 'application/json',
'x-api-key': 'f0d0a166-96de-4898-8879-da309801968b',
'x-api-secret': 'afcce08f-95bd-4317-9119-ecb8debae4f2'
}
response = requests.request('GET', url, headers=headers, allow_redirects=False)
print(response.text)
```
```go
package main
import (
  "fmt"
  "net/http"
  "io/ioutil"
)
func main() {
url := "https://api.atomicfi.com/task/602414d84f9a1980cf5eafcc/file/602415224f9a1980cf5eb0a1/generate-url"
method := "GET"
client := &http.Client {
CheckRedirect: func(req *http.Request, via []*http.Request) error {
return http.ErrUseLastResponse
},
}
  req, err := http.NewRequest(method, url, nil)
if err != nil {
fmt.Println(err)
}
req.Header.Add("Content-Type", "application/json")
req.Header.Add("x-api-key", "f0d0a166-96de-4898-8879-da309801968b")
req.Header.Add("x-api-secret", "afcce08f-95bd-4317-9119-ecb8debae4f2")
res, err := client.Do(req)
defer res.Body.Close()
body, err := ioutil.ReadAll(res.Body)
fmt.Println(string(body))
}
```
### HTTP Request
`GET /task/{taskId}/file/{fileId}/generate-url`
### Authentication headers
| Name | Description |
| -------------- | ---------------------------------- |
| `x-api-key` | API Key for your Atomic account |
| `x-api-secret` | API Secret for your Atomic account |
### Request properties
| Name | Type | Description |
| -------------------------- | ------ | ------------------- |
| `taskId` <h6>required</h6> | string | The id of the task. |
| `fileId` <h6>required</h6> | string | The id of the file. |
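The two path parameters are interpolated directly into the endpoint path; a minimal sketch (the helper name is hypothetical):

```python
def file_url_path(task_id, file_id):
    """Build the request path for the generate-url endpoint."""
    return f"/task/{task_id}/file/{file_id}/generate-url"

print(file_url_path("602414d84f9a1980cf5eafcc", "602415224f9a1980cf5eb0a1"))
```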
### Response
> Example response
```json
{
"url": "https://private-bucket.s3.amazonaws.com/5e978dcf3abbf90008a00b7a/602414d84f9a1980cf5eafcc/7773bc56-71b0-40a1-bbed-03d7b1434850.pdf?Expires=1612991865&Signature=jwSGIZEnOsNJrrQaUhLRnaNg5uQ%3D"
}
```
A successful request will return a URL that can be used to download the file.
### Response Properties
| Name | Type | Description |
| ----- | ------ | ------------------------------------------------------------------------- |
| `url` | string | A URL that can be used to download the file. The URL is valid for 1 hour. |
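Because each signed URL embeds an `Expires` timestamp in its query string, a client can check how much validity remains before downloading. A small sketch (hypothetical helper; the URL and signature below are placeholders):

```python
import time
from urllib.parse import urlparse, parse_qs

def seconds_until_expiry(signed_url, now=None):
    """Seconds left before the URL's Expires timestamp (negative if already past)."""
    query = parse_qs(urlparse(signed_url).query)
    expires = int(query["Expires"][0])
    if now is None:
        now = time.time()
    return expires - int(now)

url = ("https://private-bucket.s3.amazonaws.com/some/file.pdf"
       "?Expires=1612991865&Signature=placeholder")
print(seconds_until_expiry(url, now=1612991865 - 600))  # 600 seconds remaining
```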
---
title: "Norfolk.js"
date: 2016-12-17
description: "An awesome Meetup group in Norfolk focusing on all things JS!"
categories:
- program
banner: https://fvcproductions.files.wordpress.com/2015/11/norfolkjs-jan-2016-0041.jpg
---
## Name
[NorfolkJS](https://www.meetup.com/NorfolkJS "NorfolkJS")
## Location
Various - Norfolk, VA
## Date
December 15th, 2014 to March 21st, 2016
## Role
Member, Speaker
## About
I attended 9 Meetups in total.
1. [Don't commit your ugly code!](https://www.meetup.com/NorfolkJS/events/229326804/) - Mar 21, 2016
2. [JS Coding Bootcamps - wait, come again?](https://www.meetup.com/NorfolkJS/events/227490794/) - Jan 18, 2016
3. [It’s Components All The Way Down](https://www.meetup.com/NorfolkJS/events/226152804/) - Nov 16, 2015
4. [Test Your Code (Yes, even in Node.js)](https://www.meetup.com/NorfolkJS/events/225329829/) - Oct 19, 2015
5. [React.js and Tacos](https://www.meetup.com/NorfolkJS/events/222358449/) - May 18, 2015
6. [OMFG, Streams are Awesome!](https://www.meetup.com/NorfolkJS/events/221239139/) - Apr 23, 2015
7. [Sharpen Your Saw: Part Deux](https://www.meetup.com/NorfolkJS/events/219185162/) - Feb 24, 2015
8. [JavaScript Next: ECMAScript 6](https://www.meetup.com/NorfolkJS/events/219184709/) - Jan 19, 2015
9. [Introduction to Testing on Travis-CI](https://www.meetup.com/NorfolkJS/events/213364882/) - Dec 15, 2014
I presented on JS Coding Bootcamps at the [January 2016 Meetup](https://www.meetup.com/NorfolkJS/events/227490794/).
{% include figure url="https://fvcproductions.files.wordpress.com/2015/11/norfolkjs1.png" title="NorfolkJS - Circular Logo" %}
GAME2014-A1-NeginSaeidi-p2
Njs2 MONGO - A MongoDB database library for the Njs2 Framework
================================================
`@njs2/mongo` is a library package for the Njs2 framework that helps perform MongoDB database operations from an Njs2 project.
## Installation
Install the `@njs2/mongo` using below command.
```
npm i @njs2/mongo
```
# Vive_SpatialTrackingStudy
This repository contains the source code for the architecture of the system developed to study the spatial tracking performance of HTC Vive lighthouse tracking system under dynamic conditions using a Comau NS16. This repository includes the C++ source code of the data collection API, the Python path to PDL parser script, and the MATLAB scripts developed for post-processing, transformation, generation of the performance metrics, and also for the registration algorithm tested.

The architecture of the experimental setup to evaluate the VLTS is illustrated above (SystemArchitecture.jpg). It describes the integration of the two systems (COMAU NS 16 and VLTS) in four primary blocks. The requirement of this system is to provide pose and time data feedback synchronized in the same reference frame with a time latency of less than 22 ms.
1) The role of the first block is to prepare our ground truth system, the C16, to perform the required motion. The trajectories are designed as per the design of experiments. With the use of the Universal Robot Description Format (URDF) file, the generated trajectory is simulated in MATLAB to ensure the trajectories stay within the workspace. After verification, our Python PDL parser generates the path file and sends it to the C16.
2) The second block is the server setup for VLTS and C16. The custom application built with OpenVR SDK (C++) and WinC4G API (PDL-Comau custom scripting) acts as a server of a TCP/IP network to stream the data of VLTS and C16 respectively.
3) The third block highlights the data collection software developed. The data collection API receives both the data stream in parallel, inserts the time data for synchronization, and logs it in a ".txt" file.
4) The final block features the data post-processing. This process is done offline after the experimentation to reduce computation time and thereby latency while collecting the data. The stored data from both C16 and VLTS are synchronized and re-sampled at 100 Hz. Then the relative transformation and pose error are calculated. Finally, the performance metrics are evaluated.
% erl-jsv
# Introduction
JSV, for JSON Structural Validator, provides a way to validate the structure
and content of JSON data structures.
# Definitions
A definition indicates the data type and attributes of a value, modelled as a
set of constraints. Definitions are represented either as a `{Type,
Constraints}` tuple or simply as a `Type` atom when there is no constraint to
define.
Definitions are represented as one of the following values:
- `Type`: a type assertion without any constraint.
- `{Type, Constraints}`: a type assertion with a set of constraints.
- `{ref, DefinitionName}`: a reference to a definition in the current catalog.
- `{ref, CatalogName, DefinitionName}`: a reference to a definition from a
catalog.
- `{any, Definitions}`: a combination of multiple definitions where at least
one of the combination must match the value.
For example, the following definition represents arrays containing at least 3
strictly positive integers:
```erlang
{array, #{element => {integer, #{min => 1}},
min_length => 3}}
```
A value is said to be valid according to a definition if and only if:
- it matches the data type expected by the definition type;
- it satisfies all definition constraints.
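For instance, a definition that accepts either a UUID string or a strictly positive integer identifier can combine both using the `{any, Definitions}` form described above (a sketch, not taken from the library's test suite):

```erlang
Definition = {any, [uuid, {integer, #{min => 1}}]},
jsv:validate(42, Definition).
```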
# Types
Types are implemented as Erlang modules which implement the following
fundamental operations:
- Verifying constraints name and values.
- Formatting constraint violations.
- Validating value types.
- Validating values against constraints.
Types are organized in type maps which associate the atom representing the
type to the module implementing the type.
The default type map, returned by `jsv:default_type_map/0`, contains a set of
default types useful for various kinds of JSON data. Users are free to extend
it or replace it altogether.
# Catalogs
Catalogs are collections of definitions which can be referenced from any other
definitions.
Example:
```erlang
#{user_id => uuid,
user_name => {string, #{min_length => 2, max_length => 64}}}
```
Catalogs are passed to verification and validation functions.
Example:
```erlang
jsv:validate(42, {ref, api, score},
#{catalogs => #{api => #{score => integer}}}).
```
## Default types
### any
Represents any JSON value.
Constraints:
- `value`: a JSON value the value must be equal to.
### null
Represents a `null` JSON constant. This type does not support any constraint.
### boolean
Represents a JSON boolean value. This type does not support any constraint.
### integer
Represents a JSON number which was parsed as an integer.
Constraints:
- `min`: the minimal value of the integer.
- `max`: the maximal value of the integer.
### number
Represents a JSON number.
Constraints:
- `min`: the minimal value of the number.
- `max`: the maximal value of the number.
### string
Represents a JSON string.
Constraints:
- `min_length`: the minimal length of the string.
- `max_length`: the maximum length of the string.
- `prefix`: a prefix the string must start with.
- `suffix`: a suffix the string must end with.
- `values`: a list of possible values of the string.
Note that the length of a string is the number of characters in the string,
and not the number of bytes in its representation. Since erl-jsv relies on the
erl-json library, all JSON strings are represented by UTF-8 binaries: the
length of a string is the number of Unicode codepoints.
### array
Represents a JSON array.
Constraints:
- `element`: the definition that must apply to all elements.
- `unique_elements`: indicates whether the elements of the array must be
  unique.
- `min_length`: the minimal number of elements in the array.
- `max_length`: the maximum number of elements in the array.
### object
Represents a JSON object.
Constraints:
- `value`: the definition that must apply to all member values.
- `min_size`: the minimal number of members in the object.
- `max_size`: the maximum number of members in the object.
- `required`: a list of member names the object must contain.
- `members`: a map associating member name to definition; member values must
match the associated definitions if they are present in the value.
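For example, combining these constraints, a definition for user objects with a mandatory `name` member and an optional non-negative `age` could be written as:

```erlang
{object, #{members => #{name => {string, #{min_length => 1}},
                        age => {integer, #{min => 0}}},
           required => [name]}}
```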
### uuid
Represents a [UUID](https://tools.ietf.org/html/rfc4122) string. This type
does not support any constraint.
Example: `"21fba787-4471-422d-bc94-63521e1181da"`.
### ksuid
Represents a [KSUID](https://github.com/segmentio/ksuid) string. This type
does not support any constraint.
Example: `"1l0UE6izCgIw533MOupkAowglGJ"`.
### uri
Represents a [URI](https://tools.ietf.org/html/rfc3986) string. This type
does not support any constraint.
Example: `"http://example.com/foo/bar?a=1"`.
### time
Represents a partial time (or time of day) as defined in [RFC
3339](https://tools.ietf.org/html/rfc3339), without any fractional seconds.
Example: `"14:45:00"`.
Constraints:
- `min`: the minimal time of day represented by the string.
- `max`: the maximal time of day represented by the string.
### date
Represents a full date as defined in [RFC
3339](https://tools.ietf.org/html/rfc3339).
Example: `"2010-08-01"`.
Constraints:
- `min`: the minimal date represented by the string.
- `max`: the maximal date represented by the string.
### datetime
Represents a full date and time string as defined in [RFC
3339](https://tools.ietf.org/html/rfc3339).
Example: `"2010-08-01T14:45:00+01:00"`.
Note that due to limitations of the Erlang `calendar` module, fractional
seconds in the input string are ignored.
Constraints:
- `min`: the minimal datetime represented by the string.
- `max`: the maximal datetime represented by the string.
## Writing new types
**TODO**
# Canonical values
Validation functions return a canonical version of the input value. Canonical
versions are easier to work with in Erlang, and having them removes the need
for lots of the usual data massaging steps.
Values are canonicalized as follows:
- Date values: the canonical form is an Erlang value of type
`calendar:date()`. Example: `"2010-08-01"` is canonicalized to `{2010, 8,
1}`.
- Time values: the canonical form is an Erlang value of type
`calendar:time()`. Example: `"20:10:30"` is canonicalized to `{20, 10, 30}`.
- Datetime values: the canonical form is an Erlang value of type
`calendar:datetime()`. Example: `"2010-08-01T14:45:00+01:00"` is
canonicalized to `{{2010, 8, 1}, {13, 45, 0}}`.
- UUID and KSUID values: the canonical form is the binary representation.
- Arrays: the canonical form is an Erlang list containing the canonical form
of each element of the array.
- Objects: the canonical form is an Erlang map. For each member:
- If the key is part of the set of keys defined by a `members` constraint,
it is converted to an atom; if not, it is converted to a binary.
- If the definition contains a `value` constraint or if the key is part of
the set of keys defined by a `members` constraint, the value is converted
to its canonical version; if not, it is unaffected by canonicalization.
- Strings: the canonical form is either a binary, or an atom if the value has
a `values` constraint.
Other values are unaffected by canonicalization.
If the definition is of the form `{any, Definitions}`, the canonical value
produced by the first matching definition is returned.
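Putting these rules together, validating a small object against a definition with typed members yields an Erlang-friendly term. Assuming the default type map, the rules above give the shape sketched in the comment:

```erlang
Definition = {object, #{members => #{date => date,
                                     tags => {array, #{element => string}}}}},
jsv:validate(#{<<"date">> => <<"2020-12-04">>,
               <<"tags">> => [<<"a">>, <<"b">>]},
             Definition).
%% -> {ok, #{date => {2020,12,4}, tags => [<<"a">>, <<"b">>]}}
```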
# Errors
Validation errors are reported as maps containing information about the
precise value being incorrect and its location as a [JSON
pointer](https://tools.ietf.org/html/rfc6901). The `jsv:format_value_error/2`
and `jsv:format_value_errors/2` functions can be used to add textual
information to errors.
# Interface
## Definition verification
The `jsv:verify_definition/2` function verifies a definition, making sure
that:
- its structure is correct;
- referenced types exist in the type map passed to the function;
- constraints exist for the associated type;
- constraint values are valid.
Example:
```erlang
Definition = {array, #{element => {integer, #{min => 1}}}},
jsv:verify_definition(Definition, jsv:default_type_map()).
```
The `jsv:verify_catalog/2` function verifies all the definitions stored in a
catalog.
## Validation
The `jsv:validate/2` and `jsv:validate/3` functions validate a value against a
definition.
Example:
```erlang
Definition = {array, #{element => {integer, #{min => 1}}}},
jsv:validate([1, 1, 3, 5], Definition).
```
Validation functions start by verifying the definition. If verification fails,
a `{invalid_definition, Errors}` error is signaled. Verification can be
disabled with the `disable_verification` option. Note that invalid definitions
can cause obscure errors; definitions should always be verified, either
automatically by validation functions or separately by calling
`jsv:verify_definition/2`.
If validation succeeds, these functions return `{ok, CanonicalValue}` where
`CanonicalValue` is the canonized version of the value passed in input.
### Options
For `jsv:validate/3`, the following options are available:
- `type_map`: the type map to use (default: `jsv:default_type_map()`).
- `format_value_errors`: if validation fails, add textual information to error
values.
- `disable_verification`: do not verify the definition before validation.
- `catalogs`: a set of catalogs to use during verification and validation.
- `unknown_member_handling`: an atom indicating how to handle invalid keys in
objects with a `members` constraint; valid values are:
- `error`: return an error when at least one invalid member is found.
- `keep`: return invalid members in the canonical value.
- `remove`: silently remove invalid members from the canonical value.
The default value is `error`.
- `null_member_handling`: an atom indicating how to null object members; valid
values are:
- `keep`: keep null object members.
- `remove`: remove null object members before verification or generation.
The default value is `keep`.
## Generation
The `jsv:generate/2` and `jsv:generate/3` functions generate JSON values from
Erlang terms using a definition. This makes it easier to convert canonical
representations to valid JSON strings.
For example:
```erlang
Definition = {object, #{members => #{a => integer,
b => {string, #{values => [foo, bar]}},
c => ksuid,
d => date}}},
jsv:generate(#{a => 2,
b => foo,
c => <<12,87,174,185,56,8,84,101,32,110,233,137,39,77,248,128,10,113,46,87>>,
d => {2020, 12, 4}},
Definition).
```
Returns:
```erlang
{ok, #{a => 2,
b => <<"foo">>,
c => <<"1lBaURQi3YcGvvNkAD6vVrp6mGN">>,
d => <<"2020-12-04">>}}
```
During generation, value are handled according to their definition type:
- For type `any`, values are returned without any validation or conversion.
- For types `boolean`, `null`, `number` and `integer`, values are type checked
and returned without any conversion.
- For type `string`, values are type checked and converted to binaries. Atoms,
strings and binaries are accepted.
- For type `array`, values are type checked and returned with each array
element generated using the `value` constraint if it exists, or the `any`
type if it does not.
- For type `object`, values are type checked and returned with each key keep
unmodified and each value generated using the corresponding `members` or
`values` constraint if they exist, or the `any` type if they do not.
- For types `uuid` and `ksuid`, values are type checked, formatted and
converted to binaries.
- For type `uri`, values are type checked and returned without any
conversion. Only binaries are accepted.
- For types `time`, `date` and `datetime`, values are type checked, formatted,
and converted to binaries.
Note that generation function only check types and a minimal amount of
constraints, for two reasons:
- It would be quite complex due to the multiple necessary type conversions.
- The impact on performances would be significant. Paying this price to ensure
data are validated in input makes sense, but doing it for the output is not
as beneficial.
Constraints currently checked during generation are `members` and `required`
for objects.
### Options
For `jsv:generate/3`, the following options are available:
- `type_map`: the type map to use (default: `jsv:default_type_map()`).
- `disable_verification`: do not verify the definition before generation.
- `catalogs`: a set of catalogs to use during verification and generation.
- `unknown_member_handling`: an atom indicating how to handle unknown keys in
objects with a `members` constraint; valid values are:
- `error`: return an error when at least one invalid member is found.
- `keep`: return invalid members in the canonical value.
- `remove`: silently remove invalid members from the canonical value.
The default value is `error`.
# Secure XGBoost
[](https://travis-ci.org/mc2-project/secure-xgboost)


[](https://opensource.org/licenses/Apache-2.0)
[<img src="https://img.shields.io/badge/slack-contact%20us-blueviolet?logo=slack">](https://join.slack.com/t/mc2-project/shared_invite/zt-rt3kxyy8-GS4KA0A351Ysv~GKwy8NEQ)
[](CODE_OF_CONDUCT.md)
Secure XGBoost is a library that leverages secure enclaves and data-oblivious algorithms to enable the **collaborative training of and inference using [XGBoost](https://github.com/dmlc/xgboost) models on encrypted data**.
Data owners can use Secure XGBoost to train a model on a remote server, e.g., the cloud, _without_ revealing the underlying data to the remote server. Collaborating data owners can use the library to jointly train a model on their collective data without exposing their individual data to each other.

This project is currently under development as part of the broader [**MC<sup>2</sup>** effort](https://github.com/mc2-project/mc2) (i.e., **M**ultiparty **C**ollaboration and **C**oopetition) by the UC Berkeley [RISE Lab](https://rise.cs.berkeley.edu/).
**NOTE:** The Secure XGBoost library is a research prototype, and has not yet received independent code review.
## Table of Contents
* [Installation](#installation)
* [Usage](#usage)
* [Documentation](#documentation)
* [Additional Resources](#additional-resources)
* [Getting Involved](#getting-involved)
## Installation
The following instructions will create an environment from scratch. Note that Secure XGBoost has only been tested on Ubuntu 18.04, so **we recommend that you install everything on Ubuntu 18.04**.
Alternatively, you can use the provided [Docker image](https://hub.docker.com/repository/docker/mc2project/ubuntu-oe0.9) if you want to run everything in simulation mode locally. If you use Docker, you'll need to clone Secure XGBoost locally and mount it to the container's `/root/secure-xgboost/` directory [using the `-v` flag](https://stackoverflow.com/questions/23439126/how-to-mount-a-host-directory-in-a-docker-container) when starting the container.
1. Install the Open Enclave SDK (0.12.0) and the Intel SGX DCAP driver by following [these instructions](https://github.com/openenclave/openenclave/blob/master/docs/GettingStartedDocs/install_oe_sdk-Ubuntu_18.04.md). In Step 3 of the instructions, install Open Enclave version 0.12.0 by specifying the version:
```sh
sudo apt -y install clang-7 libssl-dev gdb libsgx-enclave-common libsgx-enclave-common-dev libprotobuf10 libsgx-dcap-ql libsgx-dcap-ql-dev az-dcap-client open-enclave=0.12.0
```
2. Configure the required environment variables.
```sh
source /opt/openenclave/share/openenclave/openenclaverc
```
3. Install CMake and other Secure XGBoost dependencies.
```sh
wget https://github.com/Kitware/CMake/releases/download/v3.15.6/cmake-3.15.6-Linux-x86_64.sh
sudo bash cmake-3.15.6-Linux-x86_64.sh --skip-license --prefix=/usr/local
sudo apt-get install -y libmbedtls-dev python3-pip
pip3 install numpy pandas sklearn numproto grpcio grpcio-tools requests
```
4. Clone Secure XGBoost.
```sh
git clone https://github.com/mc2-project/secure-xgboost.git
```
5. Before building, you may choose to configure the [build parameters](https://mc2-project.github.io/secure-xgboost/build.html#building-the-targets) in `CMakeLists.txt`, e.g., whether to perform training and inference obliviously. In particular, if running Secure XGBoost on a machine without enclave support, you'll have to set the `SIMULATE` parameter to `ON`.
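For example, enabling simulation mode amounts to flipping the `SIMULATE` parameter in `CMakeLists.txt`. The snippet below is an illustration of what that edit might look like; the exact syntax and surrounding options are described in the build parameters documentation linked above:

```cmake
# In CMakeLists.txt: run the enclave in simulation mode.
# Required on machines without SGX enclave support.
set(SIMULATE ON)
```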
6. Build Secure XGBoost and install the Python package.
```sh
cd secure-xgboost
mkdir build
cd build
cmake ..
make -j4
cd ../python-package
sudo python3 setup.py install
```
## Usage
To use Secure XGBoost, replace the XGBoost import.
```python
# import xgboost as xgb
import securexgboost as xgb
```
For ease of use, the Secure XGBoost API mirrors that of XGBoost as much as possible. While the block below demonstrates usage on a single machine, Secure XGBoost is designed for a client-server model of computation. More information can be found [here](https://mc2-project.github.io/secure-xgboost/about.html#system-architecture).
**Note**: If running Secure XGBoost in simulation mode, pass in `verify=False` to the `attest()` function.
```python
# Generate a key and use it to encrypt data
KEY_FILE = "key.txt"
xgb.generate_client_key(KEY_FILE)
xgb.encrypt_file("demo/data/agaricus.txt.train", "demo/data/train.enc", KEY_FILE)
xgb.encrypt_file("demo/data/agaricus.txt.test", "demo/data/test.enc", KEY_FILE)
# Initialize client and connect to enclave
xgb.init_client(user_name="user1",
sym_key_file="key.txt",
priv_key_file="config/user1.pem",
cert_file="config/user1.crt")
xgb.init_server(enclave_image="build/enclave/xgboost_enclave.signed", client_list=["user1"])
# Remote attestation to authenticate enclave
# If running in simulation mode, pass in `verify=False` below
xgb.attest(verify=True)
# Load the encrypted data and associate it with your user
dtrain = xgb.DMatrix({"user1": "demo/data/train.enc"})
dtest = xgb.DMatrix({"user1": "demo/data/test.enc"})
params = {
"objective": "binary:logistic",
"gamma": "0.1",
"max_depth": "3"
}
# Train a model
num_rounds = 5
booster = xgb.train(params, dtrain, num_rounds)
# Get encrypted predictions and decrypt them
predictions, num_preds = booster.predict(dtest)
```
## Documentation
For more background on enclaves and data-obliviousness, additional tutorials, and more details on build parameters and usage, please refer to the [documentation](https://mc2-project.github.io/secure-xgboost/).
## Additional Resources
* [CCS PPMLP Paper](https://arxiv.org/pdf/2010.02524.pdf)
* [Blog Post](https://towardsdatascience.com/secure-collaborative-xgboost-on-encrypted-data-ac7bc0ec7741)
* RISE Camp 2020 [Tutorial](https://github.com/mc2-project/risecamp/tree/risecamp2020) and [Walkthrough](https://youtu.be/-kK-YCjqABs?t=312)
* [Docker image for development](https://hub.docker.com/repository/docker/mc2project/ubuntu-oe0.12)
## Getting Involved
* mc2-dev@googlegroups.com: For questions and general discussion
* [Slack](https://join.slack.com/t/mc2-project/shared_invite/zt-rt3kxyy8-GS4KA0A351Ysv~GKwy8NEQ): A more informal setting for discussion
* [GitHub Issues](https://github.com/mc2-project/secure-xgboost/issues): For bug reports and feature requests.
* [Pull Requests](https://github.com/mc2-project/secure-xgboost/pulls): For code contributions.
| 51.218978 | 456 | 0.761009 | eng_Latn | 0.720734 |
48fb213591c32e2cd9462e5decb5d3bff0f96071 | 970 | md | Markdown | src/README.md | ilaoniu/docs-next-zh-cn | ad16175d7b338f4449fe5d033a91a775ec58b782 | [
"MIT"
] | 1,002 | 2020-09-08T01:05:48.000Z | 2022-03-30T14:12:27.000Z | src/README.md | ilaoniu/docs-next-zh-cn | ad16175d7b338f4449fe5d033a91a775ec58b782 | [
"MIT"
] | 391 | 2020-09-08T01:27:50.000Z | 2022-03-08T02:57:07.000Z | src/README.md | ilaoniu/docs-next-zh-cn | ad16175d7b338f4449fe5d033a91a775ec58b782 | [
"MIT"
] | 995 | 2020-09-08T02:54:14.000Z | 2022-03-31T09:23:01.000Z | ---
home: true
heroImage: /logo.png
heroText: Vue.js
tagline: The Progressive<br> JavaScript Framework
actionButtons:
- text: Why Vue.js?
link: /
extraClass: vuemastery-trigger primary
icon: fa fa-play-circle
  - text: Get Started
link: /guide/introduction
- text: GitHub
link: https://github.com/vuejs/vue-next
extraClass: github grey
icon: fa fa-github
target: _blank
features:
  - title: Approachable
    details: Already know HTML, CSS and JavaScript? Read the guide and start building things in no time!
  - title: Versatile
    details: An incrementally adoptable ecosystem that scales between a library and a full-featured framework.
  - title: Performant
    details: |
      20KB min+gzip runtime size<br>
      Blazing-fast virtual DOM<br>
      Minimal optimization effort
footer: |
  Released under the <a href="https://opensource.org/licenses/MIT" target="_blank" rel="noopener">MIT License</a><br>
  Copyright © 2014-2021 Evan You
socialIcons:
- type: GitHub
link: https://github.com/vuejs/vue-next
- type: Twitter
link: https://twitter.com/vuejs
- type: Medium
link: https://medium.com/the-vue-point
---
<common-vuemastery-video-modal/>
| 23.658537 | 98 | 0.668041 | yue_Hant | 0.249708 |
48fbfe8d5f817535ca290bb1454aa551e2d12984 | 770 | md | Markdown | database/mongodb/duplicate.md | YxxY/docs | 65a318a0ac79dbf204a2107095ee206e342a27f3 | [
"MIT"
] | null | null | null | database/mongodb/duplicate.md | YxxY/docs | 65a318a0ac79dbf204a2107095ee206e342a27f3 | [
"MIT"
] | 7 | 2021-03-01T20:08:57.000Z | 2022-02-26T01:34:09.000Z | database/mongodb/duplicate.md | YxxY/docs | 65a318a0ac79dbf204a2107095ee206e342a27f3 | [
"MIT"
] | null | null | null | ## Querying for duplicate documents
Find duplicate records, using mongo shell syntax as the example.
```shell
db.jobs.aggregate([
{
$group: { _id: {id: '$unique_id'},
count: {$sum: 1},
dups: {$addToSet: '$_id'}}
},
{
$match: {count: {$gt: 1}}
}
])
```
The idea is to group documents with the aggregation pipeline, using as the grouping key the field that uniquely identifies a document, i.e. the field that reveals duplicates. Here `unique_id` is used as an example; substitute your own field as appropriate.
Each matching document's default index `_id` is collected into an array while occurrences are counted; finally, groups with a count greater than 1 are listed. These are the duplicate records; if the output is empty, there are no duplicates.
## Removing duplicates
```shell
db.jobs.aggregate([
{
$group: { _id: {id: '$unique_id'},
count: {$sum: 1},
dups: {$addToSet: '$_id'}}
},
{
$match: {count: {$gt: 1}}
}
]).forEach(function(doc){
doc.dups.shift();
db.jobs.remove({_id: {$in: doc.dups}});
})
```
Deduplication then works from the previously collected default `_id` indexes of the duplicate documents: keep one and delete the rest.
The operation above keeps the first document; to keep the last one instead, simply change `doc.dups.shift()` to `doc.dups.pop()`. | 22 | 77 | 0.567532 | yue_Hant | 0.420278 |
48fc27801bba4efa9e7ae859928f21d78e1550ca | 770 | md | Markdown | _episodes/text-lesson_2.md | lhuntneuro/win-task-sharing-workshop | f8e3b5e75772d873bb989c31b1fac44197304d3c | [
"CC-BY-4.0"
] | null | null | null | _episodes/text-lesson_2.md | lhuntneuro/win-task-sharing-workshop | f8e3b5e75772d873bb989c31b1fac44197304d3c | [
"CC-BY-4.0"
] | null | null | null | _episodes/text-lesson_2.md | lhuntneuro/win-task-sharing-workshop | f8e3b5e75772d873bb989c31b1fac44197304d3c | [
"CC-BY-4.0"
] | null | null | null | ---
title: Make your own repository (part 1)
teaching: 0
exercises: 40
duration: null
summary: Start to create a simple repository for your task, put a copy of the
current version of your task in the repository, and make a simple readme file
questions: null
objectives:
- By the end of this section, you will have begun to create a repository for
your task on GitLab
- You will also add a simple readme file to help researchers who haven't used
the task before
keypoints: null
is-break: null
ukrn_wb_rules:
- allow-multiple
day: 1
order: 325000
missingDependencies: []
dependencies: []
originalRepository: UKRN-Open-Research/ukrn-wb-lesson-templates
---
## Content placeholder: create your own repository
Now it's time to create your own repositories!
| 27.5 | 79 | 0.763636 | eng_Latn | 0.996251 |
48fc54acbba06816152980e3beef3f088d8d8023 | 667 | md | Markdown | README.md | mtmccrea/sparta-site | e59ba5ab72cdbeb63e5465f73b267a281b6b2e66 | [
"MIT"
] | null | null | null | README.md | mtmccrea/sparta-site | e59ba5ab72cdbeb63e5465f73b267a281b6b2e66 | [
"MIT"
] | 15 | 2021-08-17T13:18:33.000Z | 2021-10-02T09:28:30.000Z | README.md | mtmccrea/sparta-site | e59ba5ab72cdbeb63e5465f73b267a281b6b2e66 | [
"MIT"
] | 1 | 2022-03-01T16:22:38.000Z | 2022-03-01T16:22:38.000Z |
# sparta-site
A website for the spatial audio VST plug-ins installer, which includes the SPARTA suite, COMPASS suite, HO-DirAC suite, CroPaC-Binaural decoder, and the HO-SIRR application.
## Deployment
A github action is automatically run every time a commit is pushed to the master branch, which builds and deploys the website.
The website is hosted here: [https://leomccormack.github.io/sparta-site/](https://leomccormack.github.io/sparta-site/)
## Contributing
We are by no means experts in website design, so if you happen to be a website, Hugo, or Doks guru and spot anything that can be improved, please feel free to do so and submit a pull request : )
| 44.466667 | 195 | 0.775112 | eng_Latn | 0.995446 |
48fc6011adad21bd1f028eeaddcf3dd67705e94f | 778 | md | Markdown | _posts/2004-07-19-o-winxp-sp2-zdnet-ru.md | alexriabtsev/alexriabtsev.github.io | be2bdeaf4c983cb1a531a9dc39d17a6250d4a6e3 | [
"MIT"
] | null | null | null | _posts/2004-07-19-o-winxp-sp2-zdnet-ru.md | alexriabtsev/alexriabtsev.github.io | be2bdeaf4c983cb1a531a9dc39d17a6250d4a6e3 | [
"MIT"
] | null | null | null | _posts/2004-07-19-o-winxp-sp2-zdnet-ru.md | alexriabtsev/alexriabtsev.github.io | be2bdeaf4c983cb1a531a9dc39d17a6250d4a6e3 | [
"MIT"
] | null | null | null | ---
id: 140
title: About WinXP SP2 @ Zdnet.RU
date: 2004-07-19T10:12:00+02:00
author: alexrb
layout: post
guid: http://alexrb.name/?p=140
permalink: /2004/07/o-winxp-sp2-zdnet-ru/
lj_itemid:
- "134"
lj_permalink:
- http://alexrb-aka-ral.livejournal.com/34457.html
lj_current_music:
- 'Rachmaninoff - I. Allegro agitato'
ljID:
- "1831"
ljURL:
- http://alexrb-aka-ral.livejournal.com/468925.html
post_views_count:
- "1"
categories:
- Lost-and-found
tags:
- "2004"
---
[Service Pack 2: fixing the unfixable](http://zdnet.ru/?ID=453940) and, in the same place, a link to [Startup Phases for x86-based Systems](http://www.microsoft.com/resources/documentation/Windows/XP/all/reskit/en-us/Default.asp?url=/resources/documentation/Windows/XP/all/reskit/en-us/prmc_str_reii.asp) | 29.923077 | 300 | 0.732648 | yue_Hant | 0.363294 |
48fcd9dff78b957a824ecb0ba8bbfcaf8e283393 | 3,093 | md | Markdown | README.md | aleksey-nikolaev/endianness | f549d979c37f2fb7d06abb145ba6f34a05c92f08 | [
"MIT"
] | 2 | 2020-10-30T14:24:50.000Z | 2020-10-30T16:05:54.000Z | README.md | aleksey-nikolaev/endianness | f549d979c37f2fb7d06abb145ba6f34a05c92f08 | [
"MIT"
] | null | null | null | README.md | aleksey-nikolaev/endianness | f549d979c37f2fb7d06abb145ba6f34a05c92f08 | [
"MIT"
] | 2 | 2021-09-29T09:49:04.000Z | 2022-01-21T11:42:38.000Z | # Endianness
Endianness is a very simple, lightweight, header-only library designed to reduce conversions between big-endian and little-endian data.
* **Requires C++17 (standard library)**
### Build and Install
#### Usage after install
Endianness library can be built and installed as any other cmake project. After that add to your CMake project file `CMakeLists.txt`:
```cmake
find_package(Endianness)
```
Then [link](https://cmake.org/cmake/help/latest/command/target_link_libraries.html) the [imported target](https://cmake.org/cmake/help/latest/manual/cmake-buildsystem.7.html#imported-targets) `ENDIAN::Endian` to your target:
```
target_link_libraries(<target> ENDIAN::Endian)
```
#### Usage as submodule
If you prefer a submodule instead of installing, add to your `CMakeLists.txt`:
```cmake
add_subdirectory(<Endianness source realative path>)
target_link_libraries(<target> ENDIAN::Endian)
```
### Debug Visualisation
For a better debugging experience in Microsoft Visual Studio, you can use [Endian.natvis](https://github.com/aleksey-nikolaev/natvis-collection/blob/master/Endian.natvis).
This `natvis` file swaps the bytes of big-endian data for a more readable view.
### Usage
* You can use `Endian` values in `constexpr` expressions.
* It is possible to use `Endian` as a key in [unordered associative containers](https://en.cppreference.com/w/cpp/container). Avoid using it with containers that require the less-than operator for comparison; only the `==` and `!=` operators work without a byte swap.
* Not necessary, but useful aliases:
```cpp
#include <endianness.h>
#include <stdint.h>
using int16le = endianness::LittleEndian<int16_t>;
using uint16le = endianness::LittleEndian<uint16_t>;
using int32le = endianness::LittleEndian<int32_t>;
using uint32le = endianness::LittleEndian<uint32_t>;
using int64le = endianness::LittleEndian<int64_t>;
using uint64le = endianness::LittleEndian<uint64_t>;
using int16be = endianness::BigEndian<int16_t>;
using uint16be = endianness::BigEndian<uint16_t>;
using int32be = endianness::BigEndian<int32_t>;
using uint32be = endianness::BigEndian<uint32_t>;
using int64be = endianness::BigEndian<int64_t>;
using uint64be = endianness::BigEndian<uint64_t>;
```
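For intuition, the byte swap a big-endian wrapper performs on a little-endian host can be sketched as a plain function. This is a self-contained illustration, not the library's actual implementation:

```cpp
#include <cstdint>

// Reverse the byte order of a 32-bit value. This is the conversion a
// BigEndian<uint32_t>-style wrapper applies on a little-endian host when
// the stored value is read back as a native integer.
constexpr std::uint32_t swap_bytes32(std::uint32_t v)
{
    return ((v & 0x000000FFu) << 24) |
           ((v & 0x0000FF00u) << 8)  |
           ((v & 0x00FF0000u) >> 8)  |
           ((v & 0xFF000000u) >> 24);
}

// Works at compile time, matching the library's constexpr support.
static_assert(swap_bytes32(0x12345678u) == 0x78563412u,
              "byte swap is a compile-time operation");
```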
### Example
* Type conversion without byte swaps:
```cpp
constexpr uint32be MASK = 0xF0F01234; //Compile-time expressions
#pragma pack(push, 1)
struct FileHeader
{
static constexpr uint32be SIGNATURE = 'A123'; //Compile-time expressions
uint32be signature;
uint32be tag;
};
#pragma pop()
...
std::unordered_set<uint32be> tags{'qwer','tyui','opas','dfgh'};
std::unordered_map<uint32be, int> hashmap;
auto fileHeader = reinterpret_cast<const FileHeader*>(pointer);
if (fileHeader && fileHeader->signature == FileHeader::SIGNATURE){
uint32be tag = fileHeader->tag;
if(tags.count(tag)){
tag &= MASK;
++hashmap[tag];
}
}
```
* With byte swap:
```cpp
// no byte swap on big-endian platform
uint16be a = 7;
uint32be b = a;// uint16be -> uint16_t -> uint32_t -> uint32be
int c = a + b;
bool d = (a < 32) // ok: uint16be -> uint16_t : d = true
//bool e = (a != 3) // error: need explicitly cast a to int or 3 to uint16be
bool e = (a != (uint16be)3) // ok: e = true
```
| 36.388235 | 252 | 0.738118 | eng_Latn | 0.65146 |
48fd20d434c1660bbd510992e230e2c9a3a849de | 3,603 | md | Markdown | core/encoding/cyrillic-to-latin/vb/readme.md | joseocampo/samples | 777603ce078af205ef915de5082cf15f30fa30a9 | [
"CC-BY-4.0",
"MIT"
] | 2,197 | 2018-04-01T06:11:52.000Z | 2022-03-31T15:51:15.000Z | core/encoding/cyrillic-to-latin/vb/readme.md | joseocampo/samples | 777603ce078af205ef915de5082cf15f30fa30a9 | [
"CC-BY-4.0",
"MIT"
] | 2,376 | 2018-04-01T06:19:23.000Z | 2022-03-24T13:04:46.000Z | core/encoding/cyrillic-to-latin/vb/readme.md | joseocampo/samples | 777603ce078af205ef915de5082cf15f30fa30a9 | [
"CC-BY-4.0",
"MIT"
] | 4,415 | 2018-03-31T03:34:29.000Z | 2022-03-31T16:09:27.000Z | ---
languages:
- vb
products:
- dotnet-core
page_type: sample
name: ".NET Core Cyrillic to Latin Transliteration Utility (Visual Basic)"
urlFragment: "cyrillic-transliteration-vb"
description: "A .NET Core console application written in Visual Basic that uses the encoding fallback functionality to transliterate Cyrillic to Latin characters."
---
cyrillic-to-latin is a command-line utility that transliterates modern Cyrillic characters
to their Latin equivalents. It uses a modified Library of Congress system for
transliteration. Its syntax is:
```
CyrillicToLatin <sourceFile> <destinationFile>
```
where *sourceFile* is the path and filename of a text file that contains modern Cyrillic
characters, and *destinationFile* is the name of the text file that will store the
original text with its Cyrillic characters replaced by transliterated Latin characters.
If a file path is included in *destinationFile* and any portion of that path does
not exist, the utility terminates.
The specific mappings of upper- and lower-case Cyrillic characters
to Latin characters are listed in the constructor of the `CyrillicToLatinFallback`
class, where the entries of a case mapping table named `table` are defined.
The utility illustrates the extensibility of character encoding in the .NET
Framework. An encoding system consists of an encoder and a decoder. The encoder is
responsible for translating a sequence of characters into a sequence of bytes. The
decoder is responsible for translating the sequence of bytes into a sequence of
characters. .NET Core supports ASCII as well as the standard Unicode
encodings and allows the [Encoding](https://docs.microsoft.com/dotnet/api/system.text.encoding) class to be overridden to support otherwise
unsupported encodings. It also allows an encoder and a decoder's handling of
unmapped characters and bytes to be customized. Broadly, an encoder or a decoder can handle data that it cannot map by throwing an exception or by using some alternate mapping. For more information, see [Character Encoding in .NET Framework](https://docs.microsoft.com/dotnet/standard/base-types/character-encoding).
The transliteration utility works by instantiating an [Encoding](https://docs.microsoft.com/dotnet/api/system.text.encoding) object that represents ASCII encoding, which supports ASCII characters in the range from U+0000 to U+007F. Because modern Cyrillic characters occupy the range from U+0410 to U+044F, they do not automatically map to ASCII encoding. When the utility instantiates its Encoding object, it passes its constructor an instance of a class named `CyrillicToLatinFallback` that is derived from [EncoderFallback](https://docs.microsoft.com/dotnet/api/system.text.encoderfallback). This class maintains an internal table that maps modern Cyrillic characters to one or more Latin characters.
When the encoder encounters a character that it cannot encode, it calls the fallback
object's [CreateFallbackBuffer](https://docs.microsoft.com/dotnet/api/system.text.encoderfallback.createfallbackbuffer) method. This method instantiates a `CyrillicToLatinFallbackBuffer` object (a subclass of the [EncoderFallbackBuffer](https://docs.microsoft.com/dotnet/api/system.text.encoderfallbackbuffer) class) and passes its constructor
the modern Cyrillic character mapping table. It then passes the `CyrillicToLatinFallbackBuffer`
object's [Fallback](https://docs.microsoft.com/dotnet/api/system.text.encoderfallbackbuffer.fallback) method each character that it is unable to encode, and if a mapping is available, the method can provide a suitable replacement.
| 81.886364 | 699 | 0.815709 | eng_Latn | 0.995222 |
48fd64cce1f1118400cbe90ce19356bbbbcb3a41 | 3,661 | md | Markdown | translations/pt-BR/content/developers/webhooks-and-events/creating-webhooks.md | szunami/docs | a31eaa337bfc62d825ce7bb30fc3fc9042e49e72 | [
"CC-BY-4.0",
"MIT"
] | 9 | 2021-04-02T15:46:42.000Z | 2022-03-06T17:21:45.000Z | translations/pt-BR/content/developers/webhooks-and-events/creating-webhooks.md | HBTG789/docs | 8b20583382eda88f8a479fd4754e56f9ec260f5c | [
"CC-BY-4.0",
"MIT"
] | 52 | 2022-01-28T19:09:04.000Z | 2022-03-30T18:16:41.000Z | translations/pt-BR/content/developers/webhooks-and-events/creating-webhooks.md | HBTG789/docs | 8b20583382eda88f8a479fd4754e56f9ec260f5c | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-07-26T21:08:36.000Z | 2021-07-26T21:08:36.000Z | ---
title: Creating webhooks
intro: 'Learn to build a webhook, choosing the events your webhook will listen for on {% data variables.product.prodname_dotcom %} and how to set up a server to receive and manage the webhook payload.'
redirect_from:
- /webhooks/creating
versions:
free-pro-team: '*'
enterprise-server: '*'
github-ae: '*'
---
Now that we understand [the basics of webhooks][webhooks-overview], let's go through the process of building out our own webhook integration. In this tutorial, we'll create a repository webhook that will be responsible for listing out how popular our repository is, based on the number of issues it receives per day.
Creating a webhook is a two-step process. You'll first need to set up how you want your webhook to behave through {% data variables.product.product_name %}: which events it should listen to. After that, you'll set up your server to receive and manage the payload.
{% data reusables.webhooks.webhooks-rest-api-links %}
### Setting up a webhook
You can install webhooks on an organization or on a specific repository.
To set up a webhook, go to the settings page of your repository or organization. From there, click **Webhooks**, then **Add webhook**.
Alternatively, you can choose to build and manage a webhook [through the Webhooks API][webhook-api].
Webhooks require a few configuration options before you can make use of them. We'll go through each of these settings below.
### Payload URL
{% data reusables.webhooks.payload_url %}
Since we're developing locally for our tutorial, we'll set it to `http://localhost:4567/payload`. We'll explain why in the [Configuring your server](/webhooks/configuring/) docs.
### Content type
{% data reusables.webhooks.content_type %} For this tutorial, the default content type of `application/json` is fine.
### Secret
{% data reusables.webhooks.secret %}
### SSL verification
{% data reusables.webhooks.webhooks_ssl %}
### Active
By default, webhook deliveries are "Active." You can choose to disable the delivery of webhook payloads by deselecting "Active."
### Events
Events are at the core of webhooks. These webhooks fire whenever a certain action is taken on the repository, which your server's payload URL intercepts and acts upon.
A full list of webhook events, and when they execute, can be found in [the webhooks API][hooks-api] reference.
Since our webhook is dealing with issues in a repository, we'll click **Let me select individual events** and then **Issues**. Make sure you select **Active** to receive issue events for triggered webhooks. You can also select all events using the default option.
When you're finished, click **Add webhook**. Phew! Now that you've created the webhook, it's time to set up our local server to test the webhook. Head over to [Configuring your server](/webhooks/configuring/) to learn how to do that.
#### Wildcard event
To configure a webhook for all events, use the wildcard (`*`) character to specify the webhook events. When you add the wildcard event, we'll replace any existing events you have configured with the wildcard event and send you payloads for all supported events. You'll also automatically get any new events we might add in the future.
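As a sketch, the same webhook can also be created programmatically through the repository hooks REST API mentioned above. The token, owner, and repository names below are placeholders:

```sh
curl -X POST \
  -H "Authorization: token YOUR_TOKEN" \
  -H "Accept: application/vnd.github.v3+json" \
  https://api.github.com/repos/OWNER/REPO/hooks \
  -d '{
    "name": "web",
    "active": true,
    "events": ["issues"],
    "config": {
      "url": "http://localhost:4567/payload",
      "content_type": "json"
    }
  }'
```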
[webhooks-overview]: /webhooks/
[webhook-api]: /rest/reference/repos#hooks
[hooks-api]: /webhooks/#events
| 52.3 | 390 | 0.781754 | por_Latn | 0.999335 |
48fe07fd0e655c652e0ecf0be4017f811da5c93b | 91 | md | Markdown | README.md | pocc/faster-ui | 37cc02ec7d9bc4bb1815da28974e06631a7d4fa7 | [
"Apache-2.0"
] | null | null | null | README.md | pocc/faster-ui | 37cc02ec7d9bc4bb1815da28974e06631a7d4fa7 | [
"Apache-2.0"
] | null | null | null | README.md | pocc/faster-ui | 37cc02ec7d9bc4bb1815da28974e06631a7d4fa7 | [
"Apache-2.0"
] | null | null | null | # faster-ui
[Website](https://pocc.github.io/faster-ui) for the Faster Login Chrome extension
| 22.75 | 77 | 0.758242 | yue_Hant | 0.235185 |
48fec2d0c248e8d02b87e1b0cb6bd5cc55c0c9d0 | 1,599 | md | Markdown | _posts/2017-05-28-Mostra 820 anni di fondazione di Villafranca.md | BandaVillaSCecilia/BandaVillaSCecilia.github.io | afe49796dc9273bfa7714f458c4c84ea29662847 | [
"MIT"
] | null | null | null | _posts/2017-05-28-Mostra 820 anni di fondazione di Villafranca.md | BandaVillaSCecilia/BandaVillaSCecilia.github.io | afe49796dc9273bfa7714f458c4c84ea29662847 | [
"MIT"
] | null | null | null | _posts/2017-05-28-Mostra 820 anni di fondazione di Villafranca.md | BandaVillaSCecilia/BandaVillaSCecilia.github.io | afe49796dc9273bfa7714f458c4c84ea29662847 | [
"MIT"
] | 1 | 2018-02-20T10:52:30.000Z | 2018-02-20T10:52:30.000Z | ---
layout: single
---
As part of the exhibition celebrating the 820th anniversary of the founding of Villafranca Piemonte, the Banda Musicale takes part with a panel illustrating the history of our Association, from the dawn of the earliest historical documents up to the present day. Also on display are some historical uniforms of musicians and majorettes, together with repertoire instruments.
{: .text-justify}
{% include figure image_path="/assets/images/posts/20170528/edit.jpg" alt="Excerpt from 'L'Edit' concerning the Band" caption="Excerpt from 'L'Edit' concerning the Band" %}
The exhibition can be visited in the rooms of the former Monastery in Villafranca Piemonte.
{: .text-justify}
<style>
.map-responsive{
overflow:hidden;
padding-bottom:56.25%;
position:relative;
height:0;
}
.map-responsive iframe{
left:0;
top:0;
height:100%;
width:100%;
position:absolute;
}
</style>
<div class="map-responsive">
<iframe src="https://www.google.com/maps/embed?pb=!1m18!1m12!1m3!1d707.9995295961988!2d7.504339429266105!3d44.78084369869369!2m3!1f0!2f0!3f0!3m2!1i1024!2i768!4f13.1!3m3!1m2!1s0x47881e4359d6d5b3%3A0xdd5ab44c7c47771c!2sVia+S.+Francesco+d'Assisi%2C+2%2C+10068+Villafranca+Piemonte+TO!5e0!3m2!1sen!2sit!4v1497002819784" width="600" height="450" frameborder="0" style="border:0" allowfullscreen></iframe>
</div>
If you haven't already, join the **Telegram channel** [@BandaVillafrancaPiemonte](https://t.me/BandaVillafrancaPiemonte) to stay up to date with the latest news! Follow us on our social channels; every like and share is appreciated!
{: .text-justify}
| 45.685714 | 403 | 0.764853 | ita_Latn | 0.86076 |
48fec669cd9b34306ee1a2e7c35d53fc3ce7fd1a | 422 | md | Markdown | markdown/org/docs/patterns/jaeger/options/frontpocketdepth/fr.md | SeaZeeZee/freesewing | 8589e7b7ceeabd738c4b69ac59980acbebfea537 | [
"MIT"
] | null | null | null | markdown/org/docs/patterns/jaeger/options/frontpocketdepth/fr.md | SeaZeeZee/freesewing | 8589e7b7ceeabd738c4b69ac59980acbebfea537 | [
"MIT"
] | 2 | 2022-02-04T13:28:21.000Z | 2022-02-04T14:07:38.000Z | markdown/org/docs/patterns/jaeger/options/frontpocketdepth/fr.md | SeaZeeZee/freesewing | 8589e7b7ceeabd738c4b69ac59980acbebfea537 | [
"MIT"
] | null | null | null | ---
title: "Front pocket depth"
---

La profondeur des poches avant, comme facteur de l'espace entre la taille et l'ourlet.
## Effet de cette option sur le motif

| 35.166667 | 203 | 0.781991 | fra_Latn | 0.984519 |
5b00c6d4afc69cc265e6dcea0aadf74384c6c4bd | 337 | markdown | Markdown | _posts/2014-07-18-project-10.markdown | spgarn/Dona-ana2 | 0f60d848dd808e55485a0ebb2a5f8ffe7fbff1c6 | [
"MIT"
] | null | null | null | _posts/2014-07-18-project-10.markdown | spgarn/Dona-ana2 | 0f60d848dd808e55485a0ebb2a5f8ffe7fbff1c6 | [
"MIT"
] | null | null | null | _posts/2014-07-18-project-10.markdown | spgarn/Dona-ana2 | 0f60d848dd808e55485a0ebb2a5f8ffe7fbff1c6 | [
"MIT"
] | null | null | null | ---
layout: default
modal-id: 10
title: Bénie Bendo
date: 2014-07-15
img: backup.png
alt: image-alt
project-date: 2019 -
client: Medicin
category: Kongo
description: Through contributions to dona-ana, "namn" has been able to study and thereby benefit society in the long term. We need more of this to help build a more stable region.
---
| 24.071429 | 178 | 0.756677 | swe_Latn | 0.999274 |
5b01287c3f2e809241d844490220dbe256858ad0 | 105 | md | Markdown | _posts/0000-01-02-mawe-24.md | mawe-24/github-slideshow | 88b7bf5955d26c01a7c3f8b19c928d967feaead5 | [
"MIT"
] | null | null | null | _posts/0000-01-02-mawe-24.md | mawe-24/github-slideshow | 88b7bf5955d26c01a7c3f8b19c928d967feaead5 | [
"MIT"
] | 3 | 2021-04-09T18:59:15.000Z | 2021-04-09T20:33:00.000Z | _posts/0000-01-02-mawe-24.md | mawe-24/github-slideshow | 88b7bf5955d26c01a7c3f8b19c928d967feaead5 | [
"MIT"
] | null | null | null | ---
layout: slide
title: "Welcome to our second slide!"
---
This is interesting; enjoying this and having fun!
| 17.5 | 44 | 0.714286 | eng_Latn | 0.992793 |
5b012f5d0a1006d791869be0030cf729725600de | 23,843 | md | Markdown | README.md | todd-a-jacobs/ruby-next | 54082af37c60d4a077807b199c289c1c31948b54 | [
"MIT"
] | null | null | null | README.md | todd-a-jacobs/ruby-next | 54082af37c60d4a077807b199c289c1c31948b54 | [
"MIT"
] | null | null | null | README.md | todd-a-jacobs/ruby-next | 54082af37c60d4a077807b199c289c1c31948b54 | [
"MIT"
] | null | null | null | [](https://cultofmartians.com/tasks/ruby-next-cli-rewriters.html#task)
[](https://rubygems.org/gems/ruby-next) [](https://github.com/ruby-next/ruby-next/actions)
[](https://github.com/ruby-next/ruby-next/actions?query=workflow%3A%22TruffleRuby+Build%22)
[](https://github.com/ruby-next/ruby-next/actions?query=workflow%3A%22TruffleRuby+Build%22)
# Ruby Next
<img align="right" height="184"
title="Ruby Next logo" src="./assets/images/logo.svg">
Ruby Next is a **transpiler** and a collection of **polyfills** for supporting the latest and upcoming Ruby features (APIs and syntax) in older versions and alternative implementations. For example, you can use pattern matching and `Kernel#then` in Ruby 2.5 or [mruby][].
Who might be interested in Ruby Next?
- **Ruby gems maintainers** who want to write code using the latest Ruby version but still support older ones.
- **Application developers** who want to give new features a try without waiting for the final release (or, more often, for the first patch).
- **Users of non-MRI implementations** such as [mruby][], [JRuby][], [TruffleRuby][], [Opal][], [RubyMotion][], [Artichoke][], [Prism][].
Ruby Next also aims to help the community to assess new, _experimental_, MRI features by making it easier to play with them.
That's why Ruby Next implements the `master` features as fast as possible.
Read more about the motivation behind the Ruby Next in this post: [Ruby Next: Make all Rubies quack alike](https://evilmartians.com/chronicles/ruby-next-make-all-rubies-quack-alike).
<table style="border:none;">
<tr>
<td>
<a href="https://evilmartians.com/?utm_source=ruby-next">
<img src="https://evilmartians.com/badges/sponsored-by-evil-martians.svg" alt="Sponsored by Evil Martians" width="236" height="54">
</a>
</td>
<td>
<a href="http://www.digitalfukuoka.jp/topics/169">
<img src="http://www.digitalfukuoka.jp/javascripts/kcfinder/upload/images/excellence.jpg" width="200">
</a>
</td>
</tr>
</table>
## Posts
- [Ruby Next: Make all Rubies quack alike](https://evilmartians.com/chronicles/ruby-next-make-all-rubies-quack-alike)
## Talks
- [Ruby Next: Make old Rubies quack like a new one](https://noti.st/palkan/j3i2Dr/ruby-next-make-old-rubies-quack-like-a-new-one) (RubyConf 2019)
## Examples
- Ruby gems
- [action_policy](https://github.com/palkan/action_policy)
- [anyway_config](https://github.com/palkan/anyway_config)
- [graphql-fragment_cache](https://github.com/DmitryTsepelev/graphql-ruby-fragment_cache)
- Rails applications
- [anycable_rails_demo](https://github.com/anycable/anycable_rails_demo)
- mruby
- [ACLI](https://github.com/palkan/acli)
_Please, submit a PR to add your project to the list!_
## Table of contents
- [Overview](#overview)
- [Quick Start](#quick-start)
- [Polyfills](#using-only-polyfills)
- [Transpiling](#transpiling)
- [Modes](#transpiler-modes)
- [CLI](#cli)
- [Using in gems](#integrating-into-a-gem-development)
- [Runtime usage](#runtime-usage)
- [Bootsnap integration](#using-with-bootsnap)
- [`ruby -ruby-next`](#uby-next)
- [Logging & Debugging](#logging-and-debugging)
- [RuboCop](#rubocop)
- [Using with EOL Rubies](#using-with-eol-rubies)
- [Proposed & edge features](#proposed-and-edge-features)
## Overview
Ruby Next consists of two parts: **core** and **language**.
Core provides **polyfills** for Ruby core classes APIs via Refinements (default strategy) or core extensions (optionally or for refinement-less environments).
Language is responsible for **transpiling** edge Ruby syntax into older versions. This can be done
programmatically or via the CLI; it can also happen at runtime.
Currently, Ruby Next supports Ruby versions 2.2+, including JRuby 9.2.8+ and TruffleRuby 20.1+ (with some limitations). Support for EOL versions (<2.5) slightly differs though ([see below](#using-with-eol-rubies)).
Please, [open an issue](https://github.com/ruby-next/ruby-next/issues/new/choose) or join the discussion in the existing ones if you would like us to support older Ruby versions.
## Quick start
The quickest way to start experimenting with Ruby Next is to install the gem and run a sample script. For example:
```sh
# Install Ruby Next globally
$ gem install ruby-next
# Call ruby with -ruby-next flag
$ ruby -ruby-next -e "
def greet(val) =
case val
in hello: hello if hello =~ /human/i
'🙂'
in hello: 'martian'
'👽'
end
puts greet(hello: 'martian')
"
=> 👽
```
## Using only polyfills
First, install a gem:
```ruby
# Gemfile
gem "ruby-next-core"
# gemspec
spec.add_dependency "ruby-next-core"
```
**NOTE:** we use a different _distribution_ gem, `ruby-next-core`, to provide a zero-dependency, polyfills-only version.
Then, all you need is to load the Ruby Next:
```ruby
require "ruby-next"
```
And activate the refinement in every file where you want to use it\*:
```ruby
using RubyNext
```
Ruby Next only refines core classes when necessary; thus, this line wouldn't have any effect on edge Ruby.
**NOTE:** Even if the runtime already contains a monkey-patch with the backported functionality, we consider the method as _dirty_ and activate the refinement for it. Thus, you always have predictable behaviour. That's why we recommend using refinements for gems development.
Alternatively, you can go with monkey-patches. Just add this line:
```ruby
require "ruby-next/core_ext"
```
The following _rule of thumb_ is recommended when choosing between refinements and monkey-patches:
- Use refinements for library development (to avoid conflicts with other code)
- Using core extensions could be considered for application development (no need to think about `using RubyNext`); this approach could potentially lead to conflicts with dependencies (if these dependencies are not using refinements 🙂)
- Use core extensions if refinements are not supported by your platform
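For comparison, the core-extension (monkey-patch) flavor of the same idea looks roughly like this (again a sketch, not Ruby Next's actual generated code):

```ruby
# The patch applies globally, but only when the method is missing,
# so newer Rubies keep their native implementation.
class Array
  unless method_defined?(:intersect?) # built-in since Ruby 3.1
    def intersect?(other)
      !(self & other).empty?
    end
  end
end

p [1, 2, 3].intersect?([3, 4]) # => true
p [1, 2].intersect?([5, 6])    # => false
```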
**NOTE:** _Edge_ APIs (i.e., from the Ruby's master branch) are included by default.
[**The list of supported APIs.**][features_core]
## Transpiling
Ruby Next allows you to transpile\* edge Ruby syntax to older versions.
Transpiler relies on two libraries: [parser][] and [unparser][].
**NOTE:** The "official" parser gem only supports the latest stable Ruby version, while Ruby Next aims to support edge and experimental Ruby features. To enable them, you should use our version of Parser (see [instructions](#using-ruby-next-parser) below).
Installation:
```ruby
# Gemfile
gem "ruby-next"
# gemspec
spec.add_dependency "ruby-next"
```
```sh
# or install globally
gem install ruby-next
```
[**The list of supported syntax features.**][features_syntax]
### Transpiler modes
Ruby Next currently provides two different modes of generating transpiled code: _AST_ and _rewrite_.
In the AST mode, we parse the source code into an AST, modify it, and **generate new code from the AST** (using [unparser][unparser]). Thus, the transpiled code is functionally identical to the original but has different formatting.
In the rewrite mode, we apply changes to the source code itself, thus, keeping the original formatting of the unaffected code (in a similar way to RuboCop's autocorrect feature).
The main benefit of the rewrite mode is that it preserves the original code line numbers and layout, which is especially useful in debugging.
By default, we use the rewrite mode. If you find a bug in the rewrite mode which is not reproducible in the AST mode, please let us know.
You can change the transpiler mode:
- From code by setting `RubyNext::Language.mode = :ast` or `RubyNext::Language.mode = :rewrite`.
- Via the `RUBY_NEXT_TRANSPILE_MODE=ast` environment variable.
- Via CLI option ([see below](#cli)).
**NOTE:** For the time being, Unparser doesn't support Ruby 3.0 AST nodes, so we always use rewrite mode in Ruby 3.0+.
## CLI
Ruby Next ships with a command-line interface (`ruby-next`) which provides the following functionality:
### `ruby-next nextify`
This command allows you to transpile a file or directory into older Rubies (see, for example, the "Integrating into a gem development" section below).
It has the following interface:
```sh
$ ruby-next nextify
Usage: ruby-next nextify DIRECTORY_OR_FILE [options]
-o, --output=OUTPUT Specify output directory or file or stdout
--min-version=VERSION Specify the minimum Ruby version to support
--single-version Only create one version of a file (for the earliest Ruby version)
--edge Enable edge (master) Ruby features
--proposed Enable proposed/experimental Ruby features
--transpile-mode=MODE Transpiler mode (ast or rewrite). Default: ast
--[no-]refine Do not inject `using RubyNext`
--list-rewriters List available rewriters
--rewrite=REWRITERS... Specify particular Ruby features to rewrite
-h, --help Print help
-V Turn on verbose mode
--dry-run Print verbose output without generating files
```
The behaviour depends on whether you transpile a single file or a directory:
- When transpiling a directory, the `.rbnext` subfolder is created within the target folder with subfolders for each supported Ruby version (e.g., `.rbnext/2.6`, `.rbnext/2.7`, `.rbnext/3.0`). If you want to create only a single version (the smallest), you can also pass the `--single-version` flag. In that case, no version directory is created (i.e., transpiled files go into `.rbnext`).
- When transpiling a file and providing the output path as a _file_ path, only a single version is created. For example:
```sh
$ ruby-next nextify my_ruby.rb -o my_ruby_next.rb -V
Ruby Next core strategy: refine
Generated: my_ruby_next.rb
```
### `ruby-next core_ext`
This command can be used to generate a Ruby file with a configurable set of core extensions.
Use this command if you want to backport new Ruby features to Ruby implementations not compatible with RubyGems.
It has the following interface:
```sh
$ ruby-next core_ext
Usage: ruby-next core_ext [options]
-o, --output=OUTPUT Specify output file or stdout (default: ./core_ext.rb)
-l, --list List all available extensions
--min-version=VERSION Specify the minimum Ruby version to support
-n, --name=NAME Filter extensions by name
-h, --help Print help
-V Turn on verbose mode
--dry-run Print verbose output without generating files
```
The most common use-case is to backport the APIs required by pattern matching. You can do this, for example,
by including only the monkey-patches containing `"deconstruct"` in their names:
```sh
ruby-next core_ext -n deconstruct -o pattern_matching_core_ext.rb
```
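Conceptually, the generated file is a bundle of guarded monkey-patches. A simplified sketch of what the `deconstruct`-related extensions boil down to (the real generated code is more careful than this):

```ruby
# Backport the Struct pattern-matching hooks when they are missing.
unless Struct.method_defined?(:deconstruct)
  class Struct
    def deconstruct
      to_a
    end
  end
end

unless Struct.method_defined?(:deconstruct_keys)
  class Struct
    def deconstruct_keys(keys)
      hash = to_h
      keys ? hash.slice(*keys) : hash
    end
  end
end

point = Struct.new(:x, :y).new(1, 2)
p point.deconstruct                             # => [1, 2]
p point.deconstruct_keys(nil) == { x: 1, y: 2 } # => true
```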
To list all available (or matching, if `--min-version` or `--name` is specified) monkey-patches, use the `-l` switch:
```sh
$ ruby-next core_ext -l --name=filter --name=deconstruct
2.6 extensions:
- ArrayFilter
- EnumerableFilter
- HashFilter
2.7 extensions:
- ArrayDeconstruct
- EnumerableFilterMap
- EnumeratorLazyFilterMap
- HashDeconstructKeys
- StructDeconstruct
```
### CLI configuration file
You can define CLI options in the `.rbnextrc` file located in the root of your project to avoid adding them every time you run `ruby-next`.
The configuration file is a YAML file with commands as keys and options as multiline strings:
```yml
# ./.rbnextrc
nextify: |
  --transpile-mode=rewrite
--edge
```
## Integrating into a gem development
We recommend _pre-transpiling_ source code to work with older versions before releasing it.
This is how you can do that with Ruby Next:
- Write source code using the modern/edge Ruby syntax.
- Generate transpiled code by calling `ruby-next nextify ./lib` (e.g., before releasing or pushing to VCS).
This will produce a `lib/.rbnext` folder containing the transpiled files (e.g., `lib/.rbnext/2.6`, `lib/.rbnext/2.7`). The version in the path indicates which Ruby version is required for the original functionality. Only the source files containing new syntax are added to this folder.
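On disk, the result looks roughly like this (illustrative file names):

```
lib/
├── my_gem.rb
└── .rbnext/
    ├── 2.7/
    │   └── my_gem.rb
    └── 3.0/
        └── my_gem.rb
```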
**NOTE:** Do not edit these files manually, nor run linters/type checkers/whatever against these files.
- Add the following code to your gem's _entrypoint_ (the file that is required first and contains other `require`-s):
```ruby
require "ruby-next/language/setup"
RubyNext::Language.setup_gem_load_path
```
The `setup_gem_load_path` does the following:
- Resolves the current ruby version.
- Checks whether there are directories corresponding to the current and earlier\* Ruby versions within the `.rbnext` folder.
- Adds the path to this directory to the `$LOAD_PATH` before the path to the gem's directory.
That's why we need an _entrypoint_: all the subsequent `require` calls will load the transpiled files instead of the original ones
due to the way feature resolution works in Ruby (scanning the `$LOAD_PATH` and halting as soon as a matching file is found).
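The `$LOAD_PATH` trick itself can be demonstrated with plain Ruby, no gems involved (a toy sketch in which a `.rbnext` copy of a feature shadows the original one):

```ruby
require "tmpdir"

Dir.mktmpdir do |root|
  transpiled_dir = File.join(root, ".rbnext")
  Dir.mkdir(transpiled_dir)

  File.write(File.join(root, "greeting.rb"), %(GREETING = "original"\n))
  File.write(File.join(transpiled_dir, "greeting.rb"), %(GREETING = "transpiled"\n))

  # The transpiled directory is prepended, so it is scanned first:
  $LOAD_PATH.unshift(root)
  $LOAD_PATH.unshift(transpiled_dir)

  # Ruby halts at the first match, i.e. the "transpiled" copy:
  require "greeting"
end

puts GREETING # => "transpiled"
```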
**NOTE:** `require_relative` should be avoided due to the way we _hijack_ the features loading mechanism.
If you're using the [runtime mode](#runtime-usage) along with `setup_gem_load_path` (e.g., in tests), the transpiled files are ignored (i.e., we do not modify `$LOAD_PATH`).
\* Ruby Next avoids storing duplicates; instead, only the code for the earlier version is created and is assumed to be used with other versions. For example, if the transpiled code is the same for Ruby 2.5 and Ruby 2.6, only the `.rbnext/2.7/path/to/file.rb` is kept. That's why multiple entries are added to the `$LOAD_PATH` (`.rbnext/2.6`, `.rbnext/2.7`, and `.rbnext/3.0` in the specified order for Ruby 2.5, and `.rbnext/2.7` and `.rbnext/3.0` for Ruby 2.6).
### Transpiled files vs. VCS vs. installing from source
It's a best practice to not keep generated files in repositories. In the case of Ruby Next, that's the `lib/.rbnext` folder.
We recommend adding this folder only to the gem package (i.e., it should be added to your `spec.files`) and ignore it in your VCS (e.g., `echo ".rbnext/" >> .gitignore`). That would make transpiled files available in releases without polluting your repository.
What if someone decides to install your gem from the VCS source? They would likely face some syntax errors due to the missing transpiled files.
To solve this problem, Ruby Next _tries_ to transpile the source code when you call `#setup_gem_load_path`. It does this by calling `bundle exec ruby-next nextify <lib_dir> -o <next_dir>`. We make the following assumptions:
- We are in the Bundler context (since that's the most common way of installing gems from source).
- Our Gemfile contains `ruby-next` gem.
- We use [`.rbnextrc`](#cli-configuration-file) for transpiling options.
If the command fails, we warn the end user.
This feature, _auto-transpiling_, is **disabled** by default (will likely be enabled in future versions). You can enable it by calling `RubyNext::Language.setup_gem_load_path(transpile: true)`.
## Runtime usage
It is also possible to transpile Ruby source code at runtime via Ruby Next.
All you need is to `require "ruby-next/language/runtime"` as early as possible to hijack `Kernel#require` and friends.
You can also automatically inject `using RubyNext` to every\* loaded file by also adding `require "ruby-next/core/runtime"`.
Since the runtime mode requires Kernel monkey-patching, it should be used carefully. For example, we use it in Ruby Next tests—works perfectly. But think twice before enabling it in production.
Consider using [Bootsnap](#using-with-bootsnap) integration, 'cause its monkey-patching has been bullet-proofed 😉.
\* Ruby Next doesn't hijack every required file but _watches_ only the configured directories: `./app/`, `./lib/`, `./spec/`, `./test/` (relative to the `pwd`). You can configure the watch dirs:
```ruby
RubyNext::Language.watch_dirs << "path/to/other/dir"
```
### Eval & similar
By default, we do not hijack `Kernel.eval` and similar methods due to some limitations (e.g., there is no easy and efficient way to access the caller's scope, or _binding_, and some evaluations rely on local variables).
If you want to support transpiling in `eval`-like methods, opt-in explicitly by activating the refinement:
```ruby
using RubyNext::Language::Eval
```
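The caller-scope limitation is easy to see with plain `eval`, no gem required: the evaluated code can only see the caller's local variables if the caller's `binding` is passed along.

```ruby
def run_snippet(code, caller_binding)
  # A transpiling eval would rewrite `code` first; here we just evaluate it.
  eval(code, caller_binding)
end

answer = 41
puts run_snippet("answer + 1", binding) # => 42
```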
## Using with Bootsnap
[Bootsnap][] is a great tool to speed up your application load, and it's included in the default Rails Gemfile. It patches Ruby's mechanism of loading source files to make it possible to cache the intermediate representation (_iseq_).
Ruby Next provides a specific integration which allows adding a transpiling step to this process, thus making the transpiler overhead as small as possible, because the cached and **already transpiled** version is used if no changes were made.
To enable this integration, add the following line after the `require "bootsnap/setup"`:
```ruby
require "ruby-next/language/bootsnap"
```
**NOTE:** There is no way to invalidate the cache when you upgrade Ruby Next (e.g., due to the bug fixes), so you should do this manually.
## `uby-next`
_This is [not a typo](https://github.com/ruby-next/ruby-next/pull/8), that’s the way `ruby -ruby-next` works: it’s equal to `ruby -r uby-next`, and [`uby-next.rb`](https://github.com/ruby-next/ruby-next/blob/master/lib/uby-next.rb) is a special file that activates the runtime mode._
You can also enable runtime mode by requiring `uby-next` while running a Ruby executable:
```sh
ruby -ruby-next my_ruby_script.rb
# or
RUBYOPT="-ruby-next" ruby my_ruby_script.rb
# or
ruby -ruby-next -e "puts [2, 4, 5].tally"
```
**NOTE:** running Ruby scripts directly or executing code via `-e` option is not supported in TruffleRuby. You can still use `-ruby-next` to transpile required files, e.g.:
```sh
ruby -ruby-next -r my_ruby_script.rb -e "puts my_method"
```
## Logging and debugging
Ruby Next prints some debugging information when it fails to load a file in the runtime mode (and falls back to the built-in loading mechanism).
You can disable these warnings either by providing the `RUBY_NEXT_WARN=false` env variable or by setting `RubyNext.silence_warnings = true` in your code.
You can also enable transpiled source code debugging by setting the `RUBY_NEXT_DEBUG=true` env variable. When it's set, Ruby Next prints the transpiled code before loading it.
You can use a file pattern as the value for the env var to limit the output: for example, `RUBY_NEXT_DEBUG=my_script.rb`.
## RuboCop
Since Ruby Next provides support for features not available in RuboCop yet, you need to add a patch for compatibility.
In your `.rubocop.yml`, add the following:
```yml
require:
- ruby-next/rubocop
```
You must set `TargetRubyVersion: next` to make RuboCop use a Ruby Next parser.
Alternatively, you can load the patch from the command line by running: `rubocop -r ruby-next/rubocop ...`.
We recommend using the latest RuboCop version, 'cause it has support for new nodes built-in.
Also, when pre-transpiling source code with `ruby-next nextify`, we suggest ignoring the transpiled files:
```yml
AllCops:
Exclude:
- 'lib/.rbnext/**/*'
```
**NOTE:** you need `ruby-next` gem available in the environment where you run RuboCop (having `ruby-next-core` is not enough).
## Using with EOL Rubies
We currently provide support for Ruby 2.2, 2.3 and 2.4.
**NOTE:** By "support" here we mean using `ruby-next` CLI and runtime transpiling. Transpiled code may run on Ruby 2.0+.
Ruby Next itself relies on 2.5 features and contains polyfills only for version 2.5+ (and that won't change).
Thus, to make it work with <2.5 we need to backport some APIs ourselves.
The recommended way of doing this is to use [backports][] gem. You need to load backports **before Ruby Next**.
When using runtime features, you should do the following:
```ruby
# first, require backports up to 2.5
require "backports/2.5"
# then, load Ruby Next
require "ruby-next"
# if you need 2.6+ APIs, add Ruby Next core_ext
require "ruby-next/core_ext"
# then, load runtime transpiling
require "ruby-next/language/runtime"
# or
require "ruby-next/language/bootsnap"
```
To load backports while using `ruby-next nextify` command, you must configure the environment variable:
```sh
RUBY_NEXT_CORE_STRATEGY=backports ruby-next nextify lib/
```
**NOTE:** Make sure you have `backports` gem installed globally or added to your bundle (if you're using `bundle exec ruby-next ...`).
**NOTE:** For Ruby 2.2, safe navigation operator (`&.`) and squiggly heredocs (`<<~TXT`) support is provided.
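For reference, here is what those two features do, sketched in plain Ruby (an illustration of their semantics, not the transpiler's actual output):

```ruby
user = nil
# `user&.name` returns nil when the receiver is nil instead of raising:
safe_name = user.nil? ? nil : user.name
p safe_name # => nil

# `<<~TXT` strips the common leading indentation from the heredoc body:
text = <<~TXT
  hello
    world
TXT
p text # => "hello\n  world\n"
```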
**IMPORTANT:** Unparser `~> 0.4.8` is required to run the transpiler on Ruby <2.4.
## Proposed and edge features
Ruby Next aims to bring edge and proposed features to Ruby community before they (hopefully) reach an official Ruby release.
This includes:
- Features already merged to [master](https://github.com/ruby/ruby) (_edge_)
- Features proposed in [Ruby bug tracker](https://bugs.ruby-lang.org/) (_proposed_)
- Features that were once merged to master but later reverted.
These features are disabled by default; you must opt in in one of the following ways:
- Add `--edge` or `--proposed` option to `nextify` command when using CLI.
- Enable programmatically when using a runtime mode:
```ruby
# It's important to load language module first
require "ruby-next/language"
require "ruby-next/language/edge"
# or
require "ruby-next/language/proposed"
# and then activate the runtime mode
require "ruby-next/language/runtime"
# or require "ruby-next/language/bootsnap"
```
- Set `RUBY_NEXT_EDGE=1` or `RUBY_NEXT_PROPOSED=1` environment variable.
### Supported edge features
- `Array#intersect?` ([#15198](https://bugs.ruby-lang.org/issues/15198))
- Shorthand Hash/kwarg notation (`data = {x:, y:}` or `foo(x:, y:)`) ([#15236](https://bugs.ruby-lang.org/issues/15236)).
- Pinning instance, class and global variables and expressions in pattern matching ([#17724](https://bugs.ruby-lang.org/issues/17724), [#17411](https://bugs.ruby-lang.org/issues/17411)).
- Anonymous blocks `def b(&); c(&); end` ([#11256](https://bugs.ruby-lang.org/issues/11256)).
### Supported proposed features
- _Method reference_ operator (`.:`) ([#13581](https://bugs.ruby-lang.org/issues/13581)).
## Contributing
Bug reports and pull requests are welcome on GitHub at [https://github.com/ruby-next/ruby-next](https://github.com/ruby-next/ruby-next).
See also the [development guide](./DEVELOPMENT.md).
## License
The gem is available as open source under the terms of the [MIT License](https://opensource.org/licenses/MIT).
[features]: ./SUPPORTED_FEATURES.md
[features_core]: ./SUPPORTED_FEATURES.md#Core
[features_syntax]: ./SUPPORTED_FEATURES.md#Syntax
[mruby]: https://mruby.org
[JRuby]: https://www.jruby.org
[TruffleRuby]: https://github.com/oracle/truffleruby
[Opal]: https://opalrb.com
[RubyMotion]: http://www.rubymotion.com
[Artichoke]: https://github.com/artichoke/artichoke
[Prism]: https://github.com/prism-rb/prism
[parser]: https://github.com/whitequark/parser
[unparser]: https://github.com/mbj/unparser
[next_parser]: https://github.com/ruby-next/parser
[Bootsnap]: https://github.com/Shopify/bootsnap
[rubocop]: https://github.com/rubocop-hq/rubocop
[backports]: https://github.com/marcandre/backports
# SVG Icon WebComponent
[](https://www.npmjs.com/package/@jamescoyle/svg-icon)
[](https://www.npmjs.com/package/@jamescoyle/svg-icon)
A basic webcomponent for rendering a single path SVG icon. This component makes it easy to use SVG path based icon packs such as [MaterialDesignIcons](https://materialdesignicons.com/) and [SimpleIcons](https://simpleicons.org/).
# Usage
1. Install the package from NPM
```
npm install @jamescoyle/svg-icon
```
2. Import the component into your application
```
import '@jamescoyle/svg-icon'
```
3. Use the icon in your markup
```
<svg-icon type="mdi" path="M...z"></svg-icon>
```
# Attributes
| Name | Default | Description |
| ------- | ----------- | ------------------------------------------------------------------------------------------------------------------------------------ |
| type | null | This sets the size and viewbox to match the recommended size for the icon pack specified. |
| path | null | Required. An SVG path to render as an icon |
| size | 24 | The width and height of the SVG element |
| viewbox | "0 0 24 24" | The `viewBox` of the SVG element |
| flip | null | One of "horizontal", "vertical", or "both". Flips the icon in the specified direction(s). |
| rotate | 0deg | Rotates the icon by the specified value. Can be any valid [CSS angle](https://developer.mozilla.org/en-US/docs/Web/CSS/angle) value. |
# Styling
By default the icon will inherit the current font color of the container it is placed within. You can easily provide a specific color using an inline style on the element (`style="color: red"`) or can target the tag as normal with CSS rules.
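For example (the `path` value is shortened here; substitute a real SVG path from your icon pack):

```html
<!-- Inline style on the element -->
<svg-icon type="mdi" path="M...z" style="color: tomato"></svg-icon>

<!-- Or target the tag with a regular CSS rule -->
<style>
  nav svg-icon {
    color: #336699;
  }
</style>
```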
# Accessibility
You should make use of aria attributes to improve accessibility for users that use screen reading technology. You can use `aria-labelledby` to create a link between an icon and its label. A descriptive `aria-label` can be used to allow screen readers to announce an icon if there is no visual label to accompany it.
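A sketch of both patterns (the `path` values are shortened placeholders, and the attributes are standard ARIA rather than anything specific to this component):

```html
<!-- Icon linked to a visible label -->
<span id="save-label">Save</span>
<svg-icon type="mdi" path="M...z" aria-labelledby="save-label"></svg-icon>

<!-- Standalone icon with a descriptive label for screen readers -->
<svg-icon type="mdi" path="M...z" role="img" aria-label="Save document"></svg-icon>
```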
# Project 0
Web Programming with Python and JavaScript
# Project Title
Project Title - project0
This website is about myself and was created using HTML5, SASS, Bootstrap, and Font Awesome.
It has four .html pages, one .css file, and one .scss file.
1. index.html
2. experience.html
3. about.html
4. contact.html
5. project0.css
6. project0.scss
1. index.html
The home page (index.html) has brief information about myself and my programming skills, displayed using Bootstrap's jumbotron and an unordered list. In the header section I used Bootstrap's navbar component to display the names of all the pages and linked them using `<a>` anchor tags. In the footer section I used Font Awesome to display social-media icons and their links. In the bottom right corner I used an id attribute to jump back to the top, and also used a media query to show less text on small devices.
2. experience.html
The experience page (experience.html) contains information about projects I have done and a table listing the programming languages and software I used in various projects.
3. about.html
The about page (about.html) is created using Bootstrap's cards and responsive image class, and has a media query to display content on smaller devices.
4. contact.html
The contact page contains address information, and an `<iframe>` tag is used to embed a map.
A form is included for aesthetics.
5. project0.css
The .css file contains various CSS properties (background-color, margin, border-collapse, background-size, display, etc.), CSS selectors (#id, .class, ~, >, h1, etc.) and a mobile-responsive @media query.
6. project0.scss
The .scss file is created using the main features of SCSS, such as nesting, inheritance, and SCSS variables.
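For illustration, the SCSS features mentioned above look like this (an illustrative sketch, not the actual project file):

```scss
$brand-color: #336699;

%card-base {
  padding: 1rem;
  border-radius: 4px;
}

.about-card {
  @extend %card-base;       // inheritance
  background: $brand-color; // variable

  a {                       // nesting: compiles to `.about-card a`
    color: lighten($brand-color, 40%);
  }
}

@media (max-width: 576px) {
  .extra-text {
    display: none;
  }
}
```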
## Getting Started
These instructions will get you a copy of the project up and running on your local machine for development and testing purposes. See deployment for notes on how to deploy the project on a live system.
* A browser is needed to run project0
* Open the project0 folder.
* Right-click (Windows) or double-click (Mac) "index.html" file and select "Open with" from the
action menu.
* Choose between any of the browsers that are installed on your computer.
### Prerequisites
The latest version of any of the following browsers, which support HTML5 features.
```
Apple Safari
Google Chrome
Mozilla Firefox
Opera
```
## Built With
* [HTML5](https://www.w3schools.com/html/html5_intro.asp) - Markup Language
* [CSS](https://www.w3schools.com/css/) - Style Sheet Language
* [Bootstrap](https://getbootstrap.com/docs/4.2/getting-started/introduction/) - The web framework used
* [Font Awesome](https://fontawesome.com/icons) - Font and Icon Toolkit
* [SASS](https://sass-lang.com/guide) - Style Sheet Language
## Authors
* **Vikram Vaibhav** - *Initial work* - [vikramvaibhav](https://github.com/vikramvaibhav)
## License
This project is licensed under the MIT License.
![MIT License](https://img.shields.io/github/license/herberthenrique/learn-rustlings)
# rustlings solutions 🦀❤️
My answers for [rustlings](https://github.com/rust-lang/rustlings) questions
---
title: How to prevent ungrouping columns even from the group panel
description: How to prevent ungrouping columns even from the group panel
type: how-to
page_title: How to prevent ungrouping columns even from the group panel - RadGrid
slug: grid-prevent-ungrouping-columns-even-from-the-group-panel
res_type: kb
---
## DESCRIPTION
Usually, implementing grouping for RadGrid goes along with enabling the **ShowGroupPanel** property. This container functions as the placeholder where the user can drag columns inside or outside in order to group or ungroup the grid content by the corresponding field: [GridGroupPanel](https://docs.telerik.com/devtools/aspnet-ajax/controls/grid/functionality/grouping/overview#gridgrouppanel)
The groups can also be set initially in the [markup](https://docs.telerik.com/devtools/aspnet-ajax/controls/grid/functionality/grouping/group-by-expressions/declarative-definition) or in the [code-behind](https://docs.telerik.com/devtools/aspnet-ajax/controls/grid/functionality/grouping/group-by-expressions/programmatic-definition). You may prefer this grouping structure to remain unchanged, and hence the user shouldn't be able to drag a field out of the group panel to ungroup.

Currently, even if the following properties remain disabled, the user can drag a column and ungroup the grid content:
````XML
<ClientSettings AllowDragToGroup="false">
</ClientSettings>
<GroupingSettings ShowUnGroupButton="false" />
````
## SOLUTION
Override the built-in drag handler to remove the ungrouping functionality
````JavaScript
<script type="text/javascript">
Telerik.Web.UI.GridGroupPanelItem.prototype._onMouseDownHandler = function (e) { }
function pageLoad(app, args) {
// this part is to remove the dragging message and cursor
$(".rgGroupItem").removeAttr("title style");
}
</script>
````
[![Mentioned in Awesome CDK](https://awesome.re/mentioned-badge.svg)](https://github.com/kolomied/awesome-cdk)

[](https://badge.fury.io/js/cdk-dia)
# 🎡 CDK-Dia - Automated diagrams for CDK infrastructure
_Cdk-dia diagrams your CDK-provisioned infrastructure using the Graphviz dot language._
## Example
This diagram was automatically generated from an AWS CDK stack:
<p align="center">
<img src="docs/diagram.png" />
</p>
## Getting started - Typescript / Javascript
Add cdk-dia to your CDK project
```sh
$ npm install cdk-dia
```
Install Graphviz
```sh
$ brew install graphviz
```
* If you don't use brew: Graphviz installation in many environments is [well documented](https://graphviz.org/download/).
* make sure Graphviz's dot binary is available in your PATH.
Synthesize your CDK application
```sh
$ cdk synth
```
Generate a CDK-DIA diagram
```sh
$ npx cdk-dia
```
<br/>
## Getting started - any other CDK language
Globally install cdk-dia
```sh
$ npm install cdk-dia -g
```
Install Graphviz
```sh
$ brew install graphviz
```
* If you don't use brew: Graphviz installation in many environments is [well documented](https://graphviz.org/download/).
* make sure Graphviz's dot binary is available in your PATH.
Synthesize your CDK application
```sh
$ cdk synth
```
Generate a CDK-DIA diagram
```sh
$ cdk-dia
```
<br/>
## Customize diagrams
In some cases it is useful to be able to tweak a diagram. For this purpose, CDK-DIA includes customizers/decorators
that you can apply to your CDK constructs in order to adjust the generated diagram.
### Limitations:
* Customization and decorators are currently only supported for Typescript/Javascript CDK projects.
* In order to customize, you have to add cdk-dia as an npm project dependency (globally installing it using `npm i -g` won't allow you to use the `CdkDiaDecorator` class)
### Example:
Consider the following diagram of a 3-Tier CDK Stack:
<img src="docs/decorator_example_collapsed.png" />
In this diagram, CDK-DIA collapsed the DBTier (this is done automatically for any CDK Level 2 (L2) construct) in order to
create a diagram which contains the most important details.
One can use a decorator in order to customize the diagram and prevent CDK-DIA from collapsing the Construct.
This is done by implementing CDK's `IInspectable` interface and using CDK-DIA's decorator, for example:
<img src="docs/decoration_example_diff.png" />
This results in a diagram where the DB-Tier is not collapsed, providing more detail:
<img src="docs/decorator_example_non-collapsed.png" />
* A full example of the above can be found at [examples/decoration-example](examples/decoration-example)
## CLI arguments
* ```npx cdk-dia --help``` - Get possible arguments
* ```npx cdk-dia --stacks stackOne stackFour``` - only diagram chosen aws-cdk stacks
* ```npx cdk-dia --stacks pipelinestack/prod/database``` - choose stacks by path (nested stacks, pipeline stacks)
# machine-learning-exercises
Draft: Machine Learning (Deep vs Reinforcement) experiments presented as Jupyter-notebooks (Scala)
[Example1](https://github.com/dk14/machine-learning-exercises/blob/master/octaveAndRintegrationExample.ipynb). Integration with Octave and R
[Example2](http://nbviewer.jupyter.org/github/dk14/machine-learning-exercises/blob/4b74493bf2f3c854ad79652e548a8dd000eb989a/MultiBandit.ipynb). Simple bandit (slot-machine) algorithm for reinforcement learning
[Example3](https://github.com/dk14/machine-learning-exercises/blob/master/MNIST.ipynb). Multi-layer network trained on MNIST digits (using deeplearning4j library)
Resources:
- "Reinforcement Learning: An Introduction" Richard S. Sutton and Andrew G. Barto
- "Deep Learning" Ian Goodfellow et al.
P.S. It's best to view the examples in nbviewer, so you get plot renderings for the Scala scripts.
# WaterBody
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**name** | **String** | | [optional]
**distance** | [**ShoreLineDistance**](ShoreLineDistance.md) | | [optional]
**type** | [**Type**](Type.md) | | [optional]
# Fetch Requests
The Bundles API supports providing documentation for API Definitions. You can provide the specification
during the call to the Director or use a Fetch Request. Fetch Request is a type in the Director API that contains all the information needed to
fetch the specification from the given URL. It can be provided during the `addBundle`, `addAPIDefinitionToBundle` and `updateAPIDefinition` mutations.
If a Fetch Request is specified, the Director makes a synchronous call to the specified URL and downloads the specification.
You can find information on whether the call is successful in the `Status` field. It contains the condition of the
Fetch Request, a timestamp, and a message that states whether something went wrong when fetching the specification.
If there is an error while fetching the specification, the mutation is continued, but an appropriate Fetch Request status is set.
In case the specification must be fetched again, you can use one of the `refetchSpec` mutations to update the specification.
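The lifecycle described above can be sketched as follows. This is an illustrative model only, not the actual Director implementation; `fetchFn` stands in for the HTTP call, and the helper name and status field names are assumptions chosen to mirror the description above.

```javascript
// Illustrative sketch of a Fetch Request lifecycle: fetch the spec,
// record a status, and keep going even if the fetch fails.
async function resolveFetchRequest(url, fetchFn) {
  const status = { condition: 'INITIAL', timestamp: null, message: '' };
  let spec = null;
  try {
    // From the mutation's perspective this call happens synchronously.
    spec = await fetchFn(url);
    status.condition = 'SUCCEEDED';
    status.message = 'Specification fetched successfully';
  } catch (err) {
    // The mutation continues, but the status records what went wrong.
    status.condition = 'FAILED';
    status.message = 'Error while fetching specification: ' + err.message;
  }
  status.timestamp = new Date().toISOString();
  return { spec, status };
}
```

A `refetchSpec` mutation would then amount to running the same steps again for an existing API definition.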
---
layout: assignments
title: Assignments
permalink: /assignments/
---
## Evaluation
Your final grade will be made up of:
- Five homework assignments (65%).
- Final project (25%).
- Course presentation (10%).
**Homework assignments & late-day policy**: students will be allowed a total of five late days per semester;
each additional late day will incur a 10% penalty. For each assignment, students need to build a website describing the results
and submit the source code to Canvas. The code should be easy for TAs to run.
**Final project (2-3 people per group)**: students are required to build a website describing the results and
algorithm of the final project. A final project presentation is also needed.
**Course presentation (2-3 people per group)**: students will choose one paper from the suggested
readings and present it to the entire class (feel free to reuse the slides available online, but please credit them
accordingly). If other students have questions about the paper, the presenters need to answer them in the Q&A
session (live) and on Piazza (online).
**Academic Integrity & Collaboration**: Students are encouraged to discuss the assignment with peers, but
each student must write their own code and submit their own work. If your work benefits from another
student’s discussion or idea, please credit it on your website/README. You absolutely should not share or
copy code. Additionally, you should not use any external code unless explicitly permitted. Plagiarism is strongly prohibited and will lead to failure of this course.
---
You can download the assignments here. Also check out each assignment page for any additional info.
# Required shortcuts
When using the script to provide the whole menu, you can specify shortcuts that are required for the skin to appear in the main menu. If there is no main menu item with the specified action, an additional shortcut will be created. Users will be unable to delete these shortcuts whilst using your skin.
These are specified in your skin's [overrides.xml](./overrides.md) file.
`<requiredshortcut label="[label]" thumbnail="[thumbnail]" icon="[icon]">[action]</requiredshortcut>`
| Property | Optional | Description |
| :------: | :------: | ----------- |
| `[label]` | | The display name of the shortcut (can be a localised string) |
| `[thumbnail]` | Yes | The thumbnail associated with the shortcut |
| `[icon]` | Yes | The icon associated with the shortcut |
| `[action]` | | The action associated with this shortcut |
## Notes
The type property will be set to the id of your skin (skin.name) to indicate that it is specific to the skin.
***Quick links*** - [Readme](../../../README.md) - [Getting Started](../started/Getting Started.md) - [Advanced Usage](./Advanced Usage.md)
# For Contributors Only
This document provides some general guidelines and starter tips for contributors to follow. If any part of this document seems archaic or even downright terrible, please feel free to suggest changes through a pull request.
## Getting Started
Assuming you have Git installed, all you need to do to get started is:
* Run `git clone https://github.com/tanvig14/Phishy.git` in your terminal.
## Workflow
The master branch should always contain a usable version of the project. This means pushing directly to the master branch is not allowed; all modifications must be made through pull requests.
To add a new feature (assuming you are inside the project directory):
* Run `git checkout -b feature` where feature is the name of the new branch in which you will make your changes.
* Implement your changes.
* After you've made your changes, run `git add <file1> <file2> ... <fileN>` to stage the changes to those files. Generally, you shouldn't use `git add .`; staging files explicitly means you can be sure you're not accidentally including files you didn't mean to change, and it encourages cleaner commits.
* Now you can commit the staged changes.
* If your commit message is small then simply run `git commit -m "A short description of the changes"`.
* If you need to write a longer commit message, run `git commit` and write the changes in the commit file.
* Push your changes
* If you're pushing to the remote branch for the first time, run `git push --set-upstream origin feature`
* Otherwise, run `git push`
* Repeat the previous 4 steps as necessary on the new `feature` branch and when the changes are ready to merge into master, create a pull request from the Github website for [this repository](https://github.com/tanvig14/Phishy.git). Remember to write good titles and descriptions when you open pull requests.
* All pull requests must be reviewed by at least one person including [@dev-ved30](https://github.com/dev-ved30).
* Once the feature branch has been merged into master, do the following:
* Switch back to the master branch with `git checkout master`
* Pull the latest changes with `git pull`
* Delete your local copy of the feature branch with `git branch -D feature`
* **NOTES**
* Whenever possible create small pull requests. Smaller additions and modifications are easier to review, so it's better not to open pull requests that make multiple huge changes all at once.
* If you're in the middle of implementing some feature on a branch and a pull request is closed in the master branch,
you should (assuming you're on your feature branch) run `git pull origin master` and then resolve any merge conflicts that arise.
## Project Structure
In this section, we will briefly explain the project structure for Phishy.
* Phishy/API: Contains our Flask `server.py` file. The `data` directory inside API contains some of our logging files.
* Phishy/data: Contains the data we used to train and validate our SVM model in .arff and .csv formats. This data was acquired through the `UC Irvine Machine Learning Repository`.
* Phishy/Docs: Contains the documentation for all our python code.
* Phishy/models: Contains the models we are using for our classification problem in .pkl format along with the performance stats in text files.
* Phishy/src: Contains most of the source code for the project. `extract.py` contains all the feature extraction functions. `train.py` and `train_weighted.py` contains the code we used to train the model.
* Phishy/WebSite: Contains all the code for our official website.
---
title: Create a flow by using Dynamics 365 (online) | Microsoft Docs
description: Create useful workflows by using the Dynamics 365 connector and Microsoft Flow
services: ''
suite: flow
documentationcenter: na
author: Mattp123
manager: anneta
editor: ''
tags: ''
ms.service: flow
ms.devlang: na
ms.topic: article
ms.tgt_pltfrm: na
ms.workload: na
ms.date: 02/06/2017
ms.author: matp
search.app:
- Flow
search.audienceType:
- flowmaker
- enduser
ms.openlocfilehash: 9a054ab8179d4c2a06cbab95cd2633088bbf7458
ms.sourcegitcommit: 44bc9de9f06b64615731ceb60a4f46cfcd45b167
ms.translationtype: HT
ms.contentlocale: th-TH
ms.lasthandoff: 09/17/2018
ms.locfileid: "45727216"
---
# <a name="create-a-flow-by-using-dynamics-365-online"></a>Create a flow by using Dynamics 365 (online)
With the Dynamics 365 connector, you can create flows that start when an event occurs in Dynamics 365 or another service, and that then perform an action in Dynamics 365 or another service.
In Microsoft Flow, you can set up automated workflows between your favorite apps and services to synchronize files, get notifications, collect data, and more. For more information, see [Get started with Microsoft Flow](getting-started.md).
> [!IMPORTANT]
> For a flow trigger to fire, the Dynamics 365 customer-engagement entity that the flow uses must have **change tracking** enabled. More information: [Enable change tracking to control data synchronization](https://docs.microsoft.com/dynamics365/customer-engagement/admin/enable-change-tracking-control-data-synchronization)
## <a name="create-a-flow-from-a-template"></a>Create a flow from a template
You can create a flow by using one of the many available templates, such as these examples:
* When an object is created in Dynamics 365, create an item in SharePoint
* Create Dynamics 365 leads from an Excel table
* Copy Dynamics 365 accounts to customers in Dynamics 365 for Operations
To create a flow from a template, follow these steps:
1. Sign in to the [Microsoft Flow website](https://flow.microsoft.com/).
2. Click or tap **Services**, and then click or tap **Dynamics 365**.
3. Several templates are available. To get started, select the template that you want.
## <a name="create-a-task-from-a-lead"></a>Create a task from a lead
If no template meets your needs, create a flow from scratch. This walkthrough shows how to create a task in Dynamics 365 whenever a lead is created in Dynamics 365.
1. Sign in to the [Microsoft Flow website](https://flow.microsoft.com/).
2. Click or tap **My flows**, and then click or tap **Create from blank**.
3. In the list of flow triggers, click or tap **Dynamics 365 - When a record is created**.
4. If prompted, sign in to Dynamics 365.
5. Under **Organization Name**, select the Dynamics 365 instance that you want the flow to listen to.
6. Under **Entity Name**, select the entity that you want to listen to; it acts as the trigger that starts the flow.
For this walkthrough, select **Leads**.

> [Important] For the flow to trigger on a Dynamics 365 entity, change tracking should be enabled in the entity definition. See [Enable change tracking to control data synchronization](https://docs.microsoft.com/dynamics365/customer-engagement/admin/enable-change-tracking-control-data-synchronization)
7. Click or tap **New step**, and then click or tap **Add an action**.
8. Click or tap **Dynamics 365 – Create a new record**.
9. Under **Organization Name**, select the Dynamics 365 instance in which you want the flow to create the record. Note that it doesn't have to be the same instance in which the event was triggered.
10. Under **Entity Name**, select the entity in which a record will be created when the event occurs.
For this walkthrough, select **Tasks**.
11. A **Subject** box appears. When you click or tap it, a dynamic-content pane appears, where you can select one of these fields:
* **Last Name**. If you select this field, the lead's last name is inserted into the **Subject** field of the task when it's created.
* **Topic**. If you select this field, the **Topic** field for the lead is inserted into the **Subject** field of the task when it's created.
For this walkthrough, select **Topic**.

> **Tip:** In the dynamic-content pane, click or tap **See more** to show additional fields associated with the entity. For example, you can also populate the **Subject** field of the task with the lead's **Company Name**, **Customer**, **Description**, or **Email** field.
>
>
12. Click or tap **Create flow**.
## <a name="create-a-wunderlist-task-from-a-dynamics-365-task"></a>Create a Wunderlist task from a Dynamics 365 task
This walkthrough shows how to create a task in [Wunderlist](https://www.wunderlist.com) whenever a task is created in Dynamics 365. Wunderlist is an internet-based service that you can use to create to-do lists, add reminders, and track tasks.
1. Sign in to the [Microsoft Flow website](https://flow.microsoft.com/).
2. Click or tap **My flows**, and then click or tap **Create from blank**.
3. In the list of flow triggers, click or tap **Dynamics 365 - When a record is created**.
4. Under **Organization Name**, select the Dynamics 365 instance that you want the flow to listen to.
5. Under **Entity Name**, select the entity that you want to listen to; it acts as the trigger that starts the flow.
For this walkthrough, select **Tasks**.
6. Click or tap **New step**, and then click or tap **Add an action**.
7. Type *create a task*, and then click or tap **Wunderlist – Create a task**.
8. Under **List ID**, select **inbox**.
9. Under **Title**, select **Subject** in the dynamic-content pane.
10. Click or tap **Create flow**.
## <a name="trigger-based-logic"></a>Trigger-based logic
Triggers such as **When a record is created**, **When a record is updated**, and **When a record is deleted** start your flow within a few minutes of the event occurring. In some cases, your flow can take up to 2 hours to trigger.
When the trigger occurs, the flow receives a single notification, but the flow operates on the data that exists when it actually runs. For example, if your flow triggers when a new record is created, and you update the record twice before the flow runs, your flow runs only once, with the latest data.
## <a name="specify-advanced-options"></a>Specify advanced options
When you add a step to a flow, you can click or tap **Show advanced options** to add a filter query or an order-by query that controls how data is filtered in the flow.
For example, you can use a filter query to retrieve only active contacts, and you can order them by last name. To do this, enter the OData filter query **statuscode eq 1**, and select **Last Name** from the dynamic-content pane. For more information about filter and order-by queries, see [MSDN: $filter](https://msdn.microsoft.com/library/gg309461.aspx#Anchor_1) and [MSDN: $orderby](https://msdn.microsoft.com/library/gg309461.aspx#Anchor_2).
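To make the two query options concrete, the sketch below shows how a filter query and an order-by query combine into OData query-string parameters. This is illustrative only; it is not something you type into Microsoft Flow, and the helper name is an assumption made for this example.

```javascript
// Hypothetical helper: combine a $filter and a $orderby expression into
// the query string that an OData request would carry.
function buildODataQuery({ filter, orderBy }) {
  const parts = [];
  if (filter) parts.push('$filter=' + encodeURIComponent(filter));
  if (orderBy) parts.push('$orderby=' + encodeURIComponent(orderBy));
  return parts.length > 0 ? '?' + parts.join('&') : '';
}

// Active contacts, ordered by last name:
const query = buildODataQuery({ filter: 'statuscode eq 1', orderBy: 'lastname' });
// query is '?$filter=statuscode%20eq%201&$orderby=lastname'
```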

### <a name="best-practices-when-using-advanced-options"></a>Best practices when using advanced options
When you add a value to a field, you must match the field type, whether you type the value or select one from the dynamic-content pane.
| Field type | How to use | Where to find it | Name | Data type |
| --- | --- | --- | --- | --- |
| Text fields | Text fields require a single line of text or dynamic content that is a text-type field. Examples include the **Category** and **Sub-Category** fields. | **Settings** > **Customizations** > **Customize the System** > **Entities** > **Task** > **Fields** | **category** | **Single Line of Text** |
| Integer fields | Some fields require an integer or dynamic content that is an integer-type field. Examples include **Percent Complete** and **Duration**. | **Settings** > **Customizations** > **Customize the System** > **Entities** > **Task** > **Fields** | **percentcomplete** | **Whole Number** |
| Date fields | Some fields require a date entered in mm/dd/yyyy format or dynamic content that is a date-type field. Examples include **Created On**, **Start Date**, **Actual Start**, **Last on Hold Time**, **Actual End**, and **Due Date**. | **Settings** > **Customizations** > **Customize the System** > **Entities** > **Task** > **Fields** | **createdon** | **Date and Time** |
| Fields that require both a record ID and a lookup type | Some fields that reference another entity record require both a record ID and a lookup type. | **Settings** > **Customizations** > **Customize the System** > **Entities** > **Account** > **Fields** | **accountid** | **Primary Key** |
### <a name="more-examples-of-fields-that-require-both-a-record-id-and-lookup-type"></a>More examples of fields that require both a record ID and lookup type
To elaborate on the previous table, here are more examples of fields that don't work with values selected from the dynamic-content list. Instead, these fields require both a record ID and a lookup type to be entered into the field in PowerApps.
* **Owner** and **Owner Type**
* The **Owner** field must contain a valid user or team record ID.
* **Owner Type** must be either **systemusers** or **teams**.
* **Customer** and **Customer Type**
* The **Customer** field must contain a valid account or contact record ID.
* **Customer Type** must be either **accounts** or **contacts**.
* **Regarding** and **Regarding Type**
* The **Regarding** field must contain a valid record ID, such as an account or contact record ID.
* **Regarding Type** must be the lookup type for the record, such as **accounts** or **contacts**.
This example adds the account record that matches the record ID to the **Regarding** field of a task.

This example also assigns the task to a user, based on the user's record ID.

To find the record ID, see [Find the record's ID](#find-the-records-id) later in this topic.
> **Important:** A field should not contain a value if the field's description says "For internal use only." These fields include **Traversed Path**, **Additional Parameters**, and **Time Zone Rule Version Number**.
>
>
## <a name="find-the-records-id"></a>Find the record's ID
1. In the Dynamics 365 web application, open a record, such as an account record.
2. On the action toolbar, click or tap **Pop Out**.
 (หรือคลิกหรือแตะ **ส่งการเชื่อมโยงทางอีเมล** เพื่อคัดลอก URL แบบเต็มไปยังโปรแกรมอีเมลเริ่มต้นของคุณ)
In the address bar of your web browser, the URL contains the record ID between the encoded characters %7b and %7d.

## <a name="related-topics"></a>Related topics
[Troubleshoot a flow](fix-flow-failures.md)
[Flow Q&A in your organization](organization-q-and-a.md)
[Frequently asked questions](frequently-asked-questions.md)
---
layout: page
title: HTTP State Source
id: state-source-http
section: State Sources
---
Provides a simple way of making HTTP requests.
{% sample %}
classic
=======
var UsersAPI = Marty.createStateSource({
type: 'http',
id: 'UsersAPI',
createUser: function (user) {
this.post({ url: '/users', body: user })
.then(function (res) {
UserSourceActionCreators.receiveUser(res.body);
});
}
});
es6
===
class UsersAPI extends Marty.HttpStateSource {
createUser(user) {
this.post({ url: '/users', body: user })
.then((res) => UserSourceActionCreators.receiveUser(res.body));
}
}
{% endsample %}
<h2 id="baseUrl">baseUrl</h2>
An (optional) base url to prepend to any urls.
<h2 id="requestOptions">request(options)</h2>
Starts an HTTP request with the given <code>method</code> and <code>options</code>. We use the [fetch](https://github.com/github/fetch) polyfill; however, you can override ``request()`` with your own implementation. The only requirement is that it returns a <code>Promise</code>.
{% sample %}
classic
=======
var UsersAPI = Marty.createStateSource({
type: 'http',
id: 'UsersAPI',
createUser: function (user) {
this.request({
url: '/users',
method: 'POST',
body: { name: 'foo' },
contentType: 'application/json'
});
}
});
es6
===
class UsersAPI extends Marty.HttpStateSource {
createUser(user) {
this.request({
url: '/users',
method: 'POST',
body: { name: 'foo' }
});
}
}
{% endsample %}
<h3>Options</h3>
<table class="table table-bordered table-striped">
<thead>
<tr>
<th style="width: 100px;">Name</th>
<th style="width: 100px;">type</th>
<th style="width: 50px;">default</th>
<th>description</th>
</tr>
</thead>
<tbody>
<tr>
<td>url</td>
<td>string</td>
<td></td>
<td>Url of resource</td>
</tr>
<tr>
<td>method</td>
<td>string</td>
<td>get</td>
<td>http method</td>
</tr>
<tr>
<td>headers</td>
<td>object</td>
<td>{}</td>
<td>http headers</td>
</tr>
<tr>
<td>body</td>
<td>string | object</td>
<td></td>
<td>entity body for `PATCH`, `POST` and `PUT` requests.</td>
</tr>
<tr>
<td>contentType</td>
<td>string</td>
<td>application/json</td>
<td>Content type of request</td>
</tr>
<tr>
<td>dataType</td>
<td>string</td>
<td>json</td>
<td>The type of data that you're expecting back from the server. <code>xml</code>, <code>json</code>, <code>script</code>, or <code>html</code></td>
</tr>
</tbody>
</table>
<h2 id="getUrl">get(url)</h2>
Same as <code>request({ method: 'GET', url: url })</code>
<h2 id="getOptions">get(options)</h2>
Same as <code>request(_.extend(options, { method: 'GET' }))</code>
<h2 id="postUrl">post(url)</h2>
Same as <code>request({ method: 'POST', url: url })</code>
<h2 id="postOptions">post(options)</h2>
Same as <code>request(_.extend(options, { method: 'POST' }))</code>
<h2 id="putUrl">put(url)</h2>
Same as <code>request({ method: 'PUT', url: url })</code>
<h2 id="putOptions">put(options)</h2>
Same as <code>request(_.extend(options, { method: 'PUT' }))</code>
<h2 id="deleteUrl">delete(url)</h2>
Same as <code>request({ method: 'DELETE', url: url })</code>
<h2 id="deleteOptions">delete(options)</h2>
Same as <code>request(_.extend(options, { method: 'DELETE' }))</code>
<h2 id="hooks">Hooks</h2>
Hooks allow you to make changes to requests before they are sent, as well as to responses when they are received. This can be useful when you want to do things like automatically converting all JSON responses to immutable objects.
Hooks are object literals which have 3 optional keys: ``before``, ``after`` and ``priority``. If ``before`` is present, it will be called with the request as its argument. If ``after`` is present, it will be called with the response as its argument after the response is received. Setting a priority allows you to alter the order in which hooks are executed (the smaller the number, the earlier the hook runs).
{% highlight js %}
var Marty = require('marty');
Marty.HttpStateSource.addHook({
priority: 1,
before(req) {
req.headers['Foo'] = 'bar';
},
after(res) {
return res.json();
}
});
{% endhighlight %}
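To make the ordering semantics concrete, here is a standalone sketch of how a pipeline like this behaves. It is illustrative only, not Marty's actual implementation: hooks run in ascending `priority` order, `before` hooks mutate the request, and a value returned from an `after` hook replaces the response passed to the next hook.

```javascript
// Minimal, self-contained model of a before/after hook pipeline.
function runHooks(hooks, req, res) {
  // Lower priority numbers run earlier.
  const ordered = hooks.slice().sort((a, b) => (a.priority || 0) - (b.priority || 0));
  ordered.forEach((hook) => {
    if (hook.before) hook.before(req); // before hooks mutate the request in place
  });
  let result = res;
  ordered.forEach((hook) => {
    if (hook.after) {
      const next = hook.after(result);
      if (next !== undefined) result = next; // a returned value replaces the response
    }
  });
  return result;
}

const exampleReq = { headers: {} };
const exampleRes = runHooks(
  [
    { priority: 2, before: (r) => { r.headers.Second = true; } },
    { priority: 1, before: (r) => { r.headers.First = true; }, after: (body) => body + '!' }
  ],
  exampleReq,
  'ok'
);
// exampleRes is now 'ok!'
```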
<h3 id="addHook">addHook</h3>
Registers the hook in the pipeline
{% highlight js %}
var Marty = require('marty');
Marty.HttpStateSource.addHook({
priority: 1,
before: function (req) {
req.headers['Foo'] = 'bar';
},
after: function (res) {
return res.json();
}
});
{% endhighlight %}
<h3 id="removeHook">removeHook</h3>
Removes the hook from the pipeline.
{% highlight js %}
var Marty = require('marty');
var ParseJSON = require('marty/http/hooks/parseJSON');
Marty.HttpStateSource.removeHook(ParseJSON);
{% endhighlight %}
---
layout: post
title: JSP - JSTL
category: JSP
permalink: /jsp/:year/:month/:day/:title/
tags: [JSP]
comments: true
---
---
## JSP - JSTL
---
1. **JSTL (JSP Standard Tag Library)**
* JSTL and custom tags were introduced to remove Java code from JSP pages.
* JSTL: provides the most commonly used JSP features as tags; the library must be installed separately.
* Custom tags: tags that developers create as needed; frameworks such as Spring provide pre-built ones.
* JSTL tag libraries

<br>
* Library download
* https://tomcat.apache.org/download-taglibs.cgi#Standard-1.2.5

* http://xalan.apache.org/xalan-j/index.html

<br>
2. **Core tags**
* The page must declare <%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %>
* Supports variable declarations, conditionals, loops, catch blocks, URL handling, and more.

```jsp
<c:set var="contextPath" value="${pageContext.request.contextPath}" scope="page"/>
<!-- improves code reuse, maintainability, and convenience -->
<a href="${contextPath}/name.jsp">Go</a>
```
```jsp
<c:if test="${condition}">
<h1>Hello</h1>
</c:if>
```
```jsp
<!-- similar to an if~else chain -->
<c:choose>
<c:when test="${condition1}">
</c:when>
<c:when test="${condition2}">
</c:when>
<c:when test="${condition3}">
</c:when>
<c:otherwise></c:otherwise>
</c:choose>
```
```jsp
<!-- the status variable exposes the index, count, first, and last properties -->
<c:forEach var="varName" begin="start" end="end" step="increment" varStatus="status">
${status.count}
</c:forEach>
```
```jsp
<!-- similar to the enhanced for loop -->
<c:forEach var="item" items="${list}">
${item.name}
</c:forEach>
```
```jsp
<!-- builds a URL from parameters -->
<c:url var="varName" value="urlPath">
<!-- optional -->
<c:param name="name" value="value" />
</c:url>
```
```jsp
<!-- similar to response.sendRedirect() -->
<c:redirect url="urlPath">
<!-- optional -->
<c:param name="name" value="value" />
</c:redirect>
```
<br>
3. **FMT (internationalization) tags**
* Tags for multiple languages and locales.
* The page must declare <%@ taglib prefix="fmt" uri="http://java.sun.com/jsp/jstl/fmt" %>

* Formatting tags

* \<formatNumber> tag attributes

* \<formatDate> tag attributes

4. **String-handling functions**
* The page must declare <%@ taglib prefix="fn" uri="http://java.sun.com/jsp/jstl/functions" %>

<br>
---
## References
---
KG IT Bank lecture materials
처음해보는 JSP&Servlet 웹 프로그래밍 (JSP & Servlet Web Programming for Beginners)
자바 웹을 다루는 기술 (Techniques for Handling the Java Web)
# Portfolio React (old)
The first major rebuild of my portfolio site, converting it to a React app.
Now surpassed by the current site at [robynveitch.com](https://robynveitch.com).
## Live Demo
[oddert.github.io](https://oddert.github.io/)
## Installation
```
$ git clone https://github.com/Oddert/Portfolio-React.git
$ cd Portfolio-React
$ npm i
```
## Development Environment
```
$ npm start
```
## For Production Build
```
$ npm run build
```
## Scripts
| script | command | action |
|--------|---------|--------|
| start | npm run server-start | runs the server for production |