# py-data-api - Data API Client for Python
[](https://travis-ci.org/koxudaxi/py-data-api)
[](https://badge.fury.io/py/pydataapi)
[](https://pypi.python.org/pypi/pydataapi)
[](https://codecov.io/gh/koxudaxi/py-data-api)

py-data-api is a user-friendly client that supports SQLAlchemy models.
The package also includes a DB API 2.0 client and SQLAlchemy dialects.
## Features
- A user-friendly client which supports SQLAlchemy models
- SQLAlchemy Dialects (experimental)
- DB API 2.0 compatible client [PEP 249](https://www.python.org/dev/peps/pep-0249/)
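For readers unfamiliar with PEP 249, the interface shape that the DB API 2.0 client follows looks like this — illustrated here with the stdlib `sqlite3` driver rather than pydataapi itself (with pydataapi you would obtain the connection from its own driver; the cursor/execute/fetch calls are the standard DB API 2.0 surface):

```python
# PEP 249 interface shape, demonstrated with the stdlib sqlite3 driver.
# With pydataapi you would obtain the connection from its driver instead;
# the cursor/execute/fetch calls below are the standard DB API 2.0 surface.
import sqlite3

connection = sqlite3.connect(':memory:')
cursor = connection.cursor()
cursor.execute('CREATE TABLE pets (id INTEGER PRIMARY KEY, name TEXT)')
cursor.execute('INSERT INTO pets (name) VALUES (?)', ('dog',))
connection.commit()

cursor.execute('SELECT id, name FROM pets')
rows = cursor.fetchall()
print(rows)  # [(1, 'dog')]

cursor.close()
connection.close()
```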
## What's AWS Aurora Serverless's Data API?
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/data-api.html
## This project is in an experimental phase
Warning: Some interfaces may change.
## How to install
pydataapi requires Python 3.6.1 or later
```bash
$ pip install pydataapi
```
## Example
```python
from typing import List

from sqlalchemy import Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import Query
from sqlalchemy.sql import Insert

from pydataapi import DataAPI, transaction, Result, Record


class Pets(declarative_base()):
    __tablename__ = 'pets'
    id = Column(Integer, primary_key=True, autoincrement=True)
    name = Column(String(255, collation='utf8_unicode_ci'), default=None)


database: str = 'test'
resource_arn: str = 'arn:aws:rds:us-east-1:123456789012:cluster:serverless-test-1'
secret_arn: str = 'arn:aws:secretsmanager:us-east-1:123456789012:secret:serverless-test1'


def example_with_statement():
    # DataAPI supports with statement for handling transaction
    with DataAPI(database=database, resource_arn=resource_arn, secret_arn=secret_arn) as data_api:
        # start transaction

        insert: Insert = Insert(Pets, {'name': 'dog'})
        # INSERT INTO pets (name) VALUES ('dog')

        # `execute` accepts SQL statement as str or SQL Alchemy SQL objects
        result: Result = data_api.execute(insert)
        print(result.number_of_records_updated)
        # 1

        query = Query(Pets).filter(Pets.id == 1)
        result: Result = data_api.execute(query)  # or data_api.execute('select id, name from pets')
        # SELECT pets.id, pets.name FROM pets WHERE pets.id = 1

        # `Result` like a Result object in SQL Alchemy
        print(result.scalar())
        # 1

        print(result.one())
        # [Record<id=1, name='dog'>]

        # `Result` is Sequence[Record]
        records: List[Record] = list(result)
        print(records)
        # [Record<id=1, name='dog'>]

        # Record is Sequence and Iterator
        record = records[0]
        print(record[0])
        # 1
        print(record[1])
        # dog

        for column in record:
            print(column)
            # 1 ...

        # show record as dict()
        print(record.dict())
        # {'id': 1, 'name': 'dog'}

        # batch insert
        insert: Insert = Insert(Pets)
        data_api.batch_execute(insert, [
            {'id': 2, 'name': 'cat'},
            {'id': 3, 'name': 'snake'},
            {'id': 4, 'name': 'rabbit'},
        ])

        result = data_api.execute('select * from pets')
        print(list(result))
        # [Record<id=1, name='dog'>, Record<id=2, name='cat'>, Record<id=3, name='snake'>, Record<id=4, name='rabbit'>]

        # result is a sequence object
        for record in result:
            print(record)
            # Record<id=1, name='dog'> ...

        # commit


def example_decorator():
    pet_names: List[str] = ['dog', 'cat', 'snake']
    add_pets(pet_names)


@transaction(database=database, resource_arn=resource_arn, secret_arn=secret_arn)
def add_pets(data_api: DataAPI, pet_names: List[str]) -> None:
    # start transaction
    for pet_name in pet_names:
        data_api.execute(Insert(Pets, {'name': pet_name}))
    # some logic ...
    # commit


def example_simple_execute():
    data_api = DataAPI(resource_arn=resource_arn, secret_arn=secret_arn, database=database)
    result: Result = data_api.execute('show tables')
    print(result.scalar())
    # Pets


def example_rollback():
    with DataAPI(resource_arn=resource_arn, secret_arn=secret_arn) as data_api:
        data_api.execute(Insert(Pets, {'name': 'dog'}))
        # you can rollback by Exception
        raise Exception


def example_rollback_with_custom_exception():
    class OriginalError(Exception):
        pass

    with DataAPI(resource_arn=resource_arn, secret_arn=secret_arn, rollback_exception=OriginalError) as data_api:
        data_api.execute(Insert(Pets, {'name': 'dog'}))
        # some logic ...

        # rollback when happen `rollback_exception`
        raise OriginalError  # rollback
        # raise Exception <- DataAPI don't rollback


def example_driver_for_sqlalchemy():
    from sqlalchemy.engine import create_engine, ResultProxy  # ResultProxy: SQLAlchemy 1.x

    engine = create_engine(
        'mysql+pydataapi://',
        connect_args={
            'resource_arn': 'arn:aws:rds:us-east-1:123456789012:cluster:dummy',
            'secret_arn': 'arn:aws:secretsmanager:us-east-1:123456789012:secret:dummy',
            'database': 'test'}
    )

    result: ResultProxy = engine.execute("select * from pets")
    print(result.fetchall())
```
## Contributing to pydataapi
Contributions to `pydataapi` are welcome.
### How to contribute
[https://koxudaxi.github.io/py-data-api/contributing](https://koxudaxi.github.io/py-data-api/contributing)
## Related projects
### local-data-api
A Data API server for local development
https://github.com/koxudaxi/local-data-api
## PyPi
[https://pypi.org/project/pydataapi](https://pypi.org/project/pydataapi)
## Source Code
[https://github.com/koxudaxi/py-data-api](https://github.com/koxudaxi/py-data-api)
## Documentation
[https://koxudaxi.github.io/py-data-api](https://koxudaxi.github.io/py-data-api)
## License
py-data-api is released under the MIT License. http://www.opensource.org/licenses/mit-license
# Kirby Mapnotator
Annotate maps and generate GeoJSON in Kirby by drawing markers, paths and shapes.

## Overview
> This plugin is completely free and published under the MIT license. However, if you are using it in a commercial project and want to help me keep up with maintenance, please consider making a donation of your choice through [GitHub Sponsors](https://github.com/sponsors/sylvainjule) or [Paypal](https://www.paypal.me/sylvainjl) or purchasing your license(s) through [my affiliate link](https://a.paddle.com/v2/click/1129/36369?link=1170).
- [1. Installation](#1-installation)
- [2. Setup](#2-setup)
- [3. Tile-servers](#3-tile-servers)
* [3.1. Open-source / free tiles](#31-open-source-free-tiles)
* [3.2. Mapbox tiles](#32-mapbox-tiles)
- [4. Geocoding service](#4-geocoding-service)
* [4.1. Open-source API (Nominatim)](#41-open-source-nominatim)
* [4.2. Mapbox API](#42-mapbox-api)
- [5. Per-field options](#5-per-field-options)
- [6. Global options](#6-global-options)
- [7. Front-end usage](#7-front-end-usage)
- [8. Credits](#8-credits)
- [9. License](#9-license)
<br/>
## 1. Installation
Download and copy this repository to ```/site/plugins/mapnotator```
Alternatively, you can install it with composer: ```composer require sylvainjule/mapnotator```
<br/>
## 2. Setup
Out of the box, the field uses open-source services for both geocoding (Nominatim) and tile rendering (Positron), with no API key required.
Keep in mind that **these services are bound by strict usage policies**, always double-check if your usage is compatible. Otherwise, please set-up the field to use Mapbox, see details below.
```yaml
mymap:
  label: My map
  type: mapnotator
```
<br/>
## 3. Tile-servers
#### 3.1. Open-source / free tiles

You can pick one of the 4 free tile servers included:
1. ~~`wikimedia` ([Terms of Use](https://foundation.wikimedia.org/wiki/Maps_Terms_of_Use))~~ → Public usage is now forbidden
2. `openstreetmap` ([Terms of Use](https://wiki.openstreetmap.org/wiki/Tile_usage_policy))
3. `positron` (default, [Terms of Use](https://carto.com/legal/) [Under *Free Basemaps Terms of Service*])
4. `voyager` ([Terms of Use](https://carto.com/legal/) [Under *Free Basemaps Terms of Service*])
```yaml
mymap:
  type: mapnotator
  tiles: positron
```
You can also set this globally in your installation's main `config.php`, then you won't have to configure it in every blueprint:
```php
return array(
    'sylvainjule.mapnotator.tiles' => 'positron',
);
```
#### 3.2. Mapbox tiles

1. ~~mapbox.outdoors~~ → `mapbox/outdoors-v11` (default mapbox theme)
2. ~~mapbox.streets~~ → `mapbox/streets-v11`
3. ~~mapbox.light~~ → `mapbox/light-v10`
4. ~~mapbox.dark~~ → `mapbox/dark-v10`
In case your usage doesn't fall under the above policies (or if you don't want to rely on those services), you can set up the field to use Mapbox tiles.
You will have to set both the `id` of the tiles you want to use and your mapbox `public key` in your installation's main `config.php`:
```php
return array(
    'sylvainjule.mapnotator.mapbox.id'    => 'mapbox/outdoors-v11',
    'sylvainjule.mapnotator.mapbox.token' => 'pk.vdf561vf8...',
);
```
You can now explicitely state in your blueprint that you want to use Mapbox tiles:
```yaml
mymap:
  type: mapnotator
  tiles: mapbox
```
You can also set this globally in your installation's main `config.php`, then you won't have to configure it in every blueprint:
```php
return array(
    'sylvainjule.mapnotator.tiles' => 'mapbox',
);
```
<br/>
## 4. Geocoding services
#### 4.1. Open-source API (Nominatim)
This is the default geocoding service. It doesn't require any additional configuration, but please double-check if your needs fit the [Nominatim Usage Policy](https://operations.osmfoundation.org/policies/nominatim/).
```yaml
mymap:
  type: mapnotator
  geocoding: nominatim
```
#### 4.2. Mapbox API
In case your usage doesn't fall under the above policy (or if you don't want to use Nominatim), you can set up the field to use the Mapbox API.
If you haven't already, you will have to set your mapbox `public key` in your installation's main `config.php`:
```php
return array(
    'sylvainjule.mapnotator.mapbox.token' => 'pk.vdf561vf8...',
);
```
You can now explicitely state in your blueprint that you want to use Mapbox as a geocoding service:
```yaml
mymap:
  type: mapnotator
  geocoding: mapbox
```
With the Mapbox API comes the ability to autocomplete your search. It is activated by default; you can deactivate it by setting the `autocomplete` option to `false`.
```yaml
mymap:
  type: mapnotator
  geocoding: mapbox
  autocomplete: false
```
You can also set this globally in your installation's main `config.php`, then you won't have to configure it in every blueprint:
```php
return array(
    'sylvainjule.mapnotator.geocoding' => 'mapbox',
);
```
<br>
## 5. Per-field options
#### 5.1. `center`
The coordinates of the center of the map, if the field has no stored value. Default is `{lat: 48.864716, lon: 2.349014}` (Paris, FR).
Once the field has at least one shape drawn, it will automatically find its initial center in order to display all the shapes.
```yaml
mymap:
  type: mapnotator
  center:
    lat: 48.864716
    lon: 2.349014
```
#### 5.2. `zoom`
The `min`, `default` and `max` zoom values, where `default` will be the one used on every first-load of the map. Default is: `{min: 2, default: 12, max: 18}`.
Once the field has at least one shape drawn, it will automatically find its initial zoom level in order to display all the shapes.
```yaml
mymap:
  type: mapnotator
  zoom:
    min: 2
    default: 12
    max: 18
```
#### 5.3. `shapes`
The shapes your editors are allowed to draw on the map. They are all activated by default:
```yaml
mymap:
  type: mapnotator
  shapes:
    - marker
    - polyline
    - rectangle
    - polygon
    - circle
    - circleMarker
```
#### 5.4. `tools`
The tools / shape modifiers your editors are allowed to use. They are all activated by default:
```yaml
mymap:
  type: mapnotator
  tools:
    - edit
    - drag
    - cut
    - remove
    - rotate
```
#### 5.5. `size`
The height of the field. Default is `full`, which will make the field fill the entire height of the viewport. Options are:
- `full` (entire viewport height)
- `large` (fits all buttons in the toolbar)
- `medium` (fits 8 buttons in the toolbar)
- `small` (fits 6 buttons in the toolbar)
#### 5.6. `color`
You can change the shapes / markers color by setting this option to any valid color value. Default is blue (`#2281f7`).
```yaml
mymap:
  type: mapnotator
  color: '#2281f7'
```
<br>
## 6. Global options
The same options are available globally, which means you can set them all in your installation's `config.php` file and don't worry about setting it up individually afterwards:
```php
return array(
    'sylvainjule.mapnotator.token'        => '',
    'sylvainjule.mapnotator.id'           => 'mapbox.outdoors',
    'sylvainjule.mapnotator.tiles'        => 'positron',
    'sylvainjule.mapnotator.zoom.min'     => 2,
    'sylvainjule.mapnotator.zoom.default' => 12,
    'sylvainjule.mapnotator.zoom.max'     => 18,
    'sylvainjule.mapnotator.center.lat'   => 48.864716,
    'sylvainjule.mapnotator.center.lon'   => 2.349014,
    'sylvainjule.mapnotator.shapes'       => ['marker', 'polyline', 'rectangle', 'polygon', 'circle', 'circleMarker'],
    'sylvainjule.mapnotator.tools'        => ['edit', 'drag', 'cut', 'remove', 'rotate'],
    'sylvainjule.mapnotator.size'         => 'full',
    'sylvainjule.mapnotator.geocoding'    => 'nominatim',
    'sylvainjule.mapnotator.autocomplete' => true,
    'sylvainjule.mapnotator.color'        => '#2281f7',
);
```
<br/>
## 7. Front-end usage
The GeoJSON is stored as YAML and therefore needs to be decoded with the `yaml` method.
```php
$location = $page->mymap()->yaml();
```
You can then encode it to JSON using Kirby's toolkit:
```php
$json = Json::encode($location);
```
#### 7.1. circle and circleMarker
The GeoJSON syntax doesn't support circles or circleMarkers. They are stored as a point by default (see [this Medium post](https://medium.com/geoman-blog/how-to-handle-circles-in-geojson-d04dcd6cb2e6) for more details).
Therefore, the field stores additional properties alongside their coordinates, to allow you to recreate them in your projects:
```json
{
    "type": "Feature",
    "properties": {
        "shape": "CircleMarker"
    },
    "geometry": {
        "type": "Point",
        "coordinates": [6.862806, 47.967742]
    }
},
{
    "type": "Feature",
    "properties": {
        "shape": "Circle",
        "radius": 241.85391410521
    },
    "geometry": {
        "type": "Point",
        "coordinates": [6.84809, 47.969121]
    }
}
```
When importing the GeoJSON into your project, you will need to check for those properties in order to transform them into the appropriate shapes. With Leaflet, for example, it would look like:
```javascript
L.geoJSON(myGeoJSON, {
    pointToLayer: (feature, latlng) => {
        if (feature.properties.shape == 'Circle') {
            return new L.Circle(latlng, feature.properties.radius);
        }
        else if (feature.properties.shape == 'CircleMarker') {
            return new L.CircleMarker(latlng);
        }
        else {
            return new L.Marker(latlng);
        }
    }
}).addTo(myMap)
```
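Outside of a mapping library, the same `properties` check applies. A minimal plain-JavaScript sketch (using the two example features from above) that separates circles from ordinary points:

```javascript
// Plain-JS sketch: split the stored features by their `shape` property.
// The two features below are the Circle / CircleMarker examples from above.
const features = [
  { type: 'Feature', properties: { shape: 'CircleMarker' },
    geometry: { type: 'Point', coordinates: [6.862806, 47.967742] } },
  { type: 'Feature', properties: { shape: 'Circle', radius: 241.85391410521 },
    geometry: { type: 'Point', coordinates: [6.84809, 47.969121] } },
];

// Circles need their stored radius; everything else can stay a point.
const circles = features.filter(f => f.properties.shape === 'Circle');
console.log(circles.length, circles[0].properties.radius);
// 1 241.85391410521
```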
<br/>
## 8. Credits
**Services:**
- [Openstreetmap](https://www.openstreetmap.org/#map=5/46.449/2.210), [Carto](https://carto.com/) or [Mapbox](https://www.mapbox.com/) as tile servers.
- [Nominatim](https://nominatim.openstreetmap.org/) or [Mapbox Search](https://www.mapbox.com/search/) as a geocoding API
- [Leaflet](https://leafletjs.com/) as a mapping library.
- [Geoman](https://geoman.io/) as a GeoJSON editor.
<br/>
## 9. License
MIT
# flux-continuous-deployment-demo
This is a demo of Continuous Deployment with Flux using the feature of [automated deployment of new container images](https://docs.fluxcd.io/en/stable/references/automated-image-update.html).
## Introduction
In a typical GitOps flow, you need to build a new Docker image and update the manifest to deploy the application.

In continuous deployment flow, you only need to build a new Docker image. Flux will update the manifest when a newer image is found.

## Demo
This demo uses the following components:
- Application repository: https://github.com/int128/hellopage
- Google Cloud Build
- Google Container Registry: https://gcr.io/int128-1313/github.com/int128/hellopage
- Manifest repository: https://github.com/int128/flux-continuous-deployment-demo
You can use your own components by replacing URLs in [`helmfile.yaml`](helmfile.yaml).
### 1. Set up the tools
You need to install the following tools:
- Docker
- Kind
- Helmfile
- fluxctl
To check if the commands are available:
```sh
make check
```
### 2. Provision a cluster
Run make.
```sh
make
```
It will create a cluster and deploy the following components:
1. `Deployment`, `Service` and `Ingress` for the demo app
1. NGINX Ingress
1. Flux
Open http://hellopage-127-0-0-1.nip.io:30080 and make sure you can access the demo app.
### 3. Configure Git access
```sh
export KUBECONFIG=output/kubeconfig.yaml
```
Open https://github.com/int128/flux-continuous-deployment-demo/settings/keys and add the deploy key with write access.
You can get the deploy key as follows:
```console
% fluxctl identity
ssh-rsa ...
```
Make sure that Flux recognizes the deployment.
```console
% fluxctl list-workloads -n hellopage
WORKLOAD                        CONTAINER  IMAGE                                                        RELEASE  POLICY
hellopage:deployment/hellopage  app        gcr.io/int128-1313/github.com/int128/hellopage:dev-81f12fd   ready    automated

% fluxctl list-images -n hellopage
WORKLOAD                        CONTAINER  IMAGE                                             CREATED
hellopage:deployment/hellopage  app        gcr.io/int128-1313/github.com/int128/hellopage
                                           '-> dev-81f12fd  14 Jun 20 07:11 UTC
```
You can see the Flux logs for debugging:
```sh
make logs-flux
```
### 4. Deploy a new version
Open https://github.com/int128/hellopage and create a commit.
Google Cloud Build will build an image and push it to GCR.
Flux will then push a commit to this repository that updates the image tag of the deployment.
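The automation is driven by Flux v1 annotations on the Deployment manifest. The shape is roughly as follows (taken from Flux's automated image update docs; the actual manifest in this repository may differ in details):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hellopage
  annotations:
    fluxcd.io/automated: "true"   # let Flux update the image tag automatically
    fluxcd.io/tag.app: glob:*     # the POLICY pattern shown by `fluxctl list-workloads`
```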
You can see the image tags which Flux scans.
```console
% fluxctl list-images -n hellopage
WORKLOAD                        CONTAINER  IMAGE                                             CREATED
hellopage:deployment/hellopage  app        gcr.io/int128-1313/github.com/int128/hellopage
                                           '-> dev-7be21e9  15 Jun 20 01:52 UTC
                                               dev-81f12fd  14 Jun 20 07:11 UTC
```
You can see the new version within a minute.
### 5. Clean up
```sh
make delete-cluster
```
## Troubleshoot
You can see the Flux logs for debugging:
```sh
make logs-flux
```
When Flux finds a newer image, it writes logs like:
```
ts=2020-06-15T01:53:47.6752392Z caller=images.go:17 component=sync-loop msg="polling for new images for automated workloads"
ts=2020-06-15T01:53:47.7193163Z caller=images.go:111 component=sync-loop workload=hellopage:deployment/hellopage container=app repo=gcr.io/int128-1313/github.com/int128/hellopage pattern=glob:* current=gcr.io/int128-1313/github.com/int128/hellopage:dev-81f12fd info="added update to automation run" new=gcr.io/int128-1313/github.com/int128/hellopage:dev-7be21e9 reason="latest dev-7be21e9 (2020-06-15 01:52:55.214282133 +0000 UTC) > current dev-81f12fd (2020-06-14 07:11:00.193482088 +0000 UTC)"
```
When Flux pushes a commit, it writes logs like:
```
ts=2020-06-15T01:53:47.7215553Z caller=loop.go:141 component=sync-loop jobID=d23d293c-cf44-52b9-0624-e9e6a62462b7 state=in-progress
ts=2020-06-15T01:53:47.8430268Z caller=releaser.go:59 component=sync-loop jobID=d23d293c-cf44-52b9-0624-e9e6a62462b7 type=release updates=1
ts=2020-06-15T01:53:52.4599673Z caller=daemon.go:292 component=sync-loop jobID=d23d293c-cf44-52b9-0624-e9e6a62462b7 revision=dbf62188f5f4426c1ad6b8043383800b1a4903fb
ts=2020-06-15T01:53:52.4605235Z caller=daemon.go:701 component=daemon event="Commit: dbf6218, hellopage:deployment/hellopage" logupstream=false
ts=2020-06-15T01:53:52.4608724Z caller=loop.go:153 component=sync-loop jobID=d23d293c-cf44-52b9-0624-e9e6a62462b7 state=done success=true
ts=2020-06-15T01:53:54.3104503Z caller=loop.go:133 component=sync-loop event=refreshed url=ssh://git@github.com/int128/continuous-deployment-flux-demo branch=master HEAD=dbf62188f5f4426c1ad6b8043383800b1a4903fb
```
| 35.573427 | 495 | 0.706703 | eng_Latn | 0.680345 |
2c9e0e7a1758ffa0f7e0b74fa533735134c6a36c | 1,619 | md | Markdown | spiceaidocs/content/en/deep-learning-ai/_index.md | ewgenius/docs | 4fae919321389ecdc59d173e786360ea61fc3f26 | [
"Apache-2.0"
] | null | null | null | spiceaidocs/content/en/deep-learning-ai/_index.md | ewgenius/docs | 4fae919321389ecdc59d173e786360ea61fc3f26 | [
"Apache-2.0"
] | null | null | null | spiceaidocs/content/en/deep-learning-ai/_index.md | ewgenius/docs | 4fae919321389ecdc59d173e786360ea61fc3f26 | [
"Apache-2.0"
] | null | null | null | ---
type: docs
title: "Deep Learning AI"
linkTitle: "Deep Learning AI"
weight: 40
---
The Spice.ai engine learns and provides recommendations to your application using a type of AI called deep reinforcement learning.
Reinforcement learning (RL) is a general framework where agents learn to perform actions in an environment so as to maximize a reward according to a policy. In deep reinforcement learning, the policy is trained by a neural network based on a deep learning algorithm.
The agent and environment continuously interact with each other. At each time step, the agent takes an action based on its current observation and its policy (aka its brain), and receives a reward and the next observation from the environment. The goal is to improve the policy so as to maximize the sum of rewards (score).
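The interaction loop described above can be sketched in a few lines of Python. This is a toy illustration with a random policy and a made-up environment, not Spice.ai's actual interface:

```python
# Toy sketch of the agent-environment loop: at each time step the agent
# acts on the current observation, and the environment returns a reward
# plus the next observation. Not Spice.ai's actual API.
import random

class ToyEnv:
    """Reward is 1 when the action matches a hidden target, else 0."""
    def step(self, action):
        reward = 1 if action == 1 else 0
        next_observation = random.random()
        return next_observation, reward

def policy(observation):
    # A real deep RL policy would be a neural network; here it is random.
    return random.choice([0, 1])

env = ToyEnv()
observation, score = 0.0, 0
for t in range(100):                        # each time step
    action = policy(observation)            # agent acts per its policy
    observation, reward = env.step(action)  # environment responds
    score += reward                         # goal: maximize total reward
print(score)
```

A learning algorithm (such as VPG or DQL below) replaces the random policy with one that is updated from the collected rewards.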
Spice.ai provides a standard interface against which deep learning algorithms can be implemented. At launch, Spice.ai supports two deep reinforcement learning algorithms, and more will be added over time.
By default, Spice.ai will use [Vanilla Policy Gradient]({{<ref "deep-learning-ai/vpg">}}). To use a different algorithm, set the environment variable `SPICE_DEEPRL_ALGORITHM` to one of the following values:
| SPICE_DEEPRL_ALGORITHM | Algorithm |
| ---------------------- | ----------------------------------------------------------- |
| vpg | [Vanilla Policy Gradient]({{<ref "deep-learning-ai/vpg">}}) |
| dql | [Deep Q-Learning]({{<ref "deep-learning-ai/dql">}}) |
**Example**
```bash
SPICE_DEEPRL_ALGORITHM=dql spice run
```
| 57.821429 | 316 | 0.683755 | eng_Latn | 0.994064 |
2c9f865e383b86b703af65cf534e8c7e0ba92cba | 15,747 | md | Markdown | _posts/S/2015-01-20-sd1~GA20ox2.md | tiantian-chen/tiantian-chen.github.io | 1b85da907278ea16f08ea41926cd423268340d00 | [
"MIT"
] | null | null | null | _posts/S/2015-01-20-sd1~GA20ox2.md | tiantian-chen/tiantian-chen.github.io | 1b85da907278ea16f08ea41926cd423268340d00 | [
"MIT"
] | null | null | null | _posts/S/2015-01-20-sd1~GA20ox2.md | tiantian-chen/tiantian-chen.github.io | 1b85da907278ea16f08ea41926cd423268340d00 | [
"MIT"
] | null | null | null | ---
layout: post
title: "sd1,GA20ox2"
description: ""
category: genes
tags: [dwarf, ga, gibberellin, growth, shoot, grain protein content, grain protein, transcription factor, gibberellin biosynthesis, height, plant height]
---
* **Information**
+ Symbol: sd1,GA20ox2
+ MSU: [LOC_Os01g66100](http://rice.plantbiology.msu.edu/cgi-bin/ORF_infopage.cgi?orf=LOC_Os01g66100)
+ RAPdb: [Os01g0883800](http://rapdb.dna.affrc.go.jp/viewer/gbrowse_details/irgsp1?name=Os01g0883800)
* **Publication**
+ [Identification and characterization of dwarf 62, a loss-of-function mutation in DLT/OsGRAS-32 affecting gibberellin metabolism in rice](http://www.ncbi.nlm.nih.gov/pubmed?term=Identification and characterization of dwarf 62, a loss-of-function mutation in DLT/OsGRAS-32 affecting gibberellin metabolism in rice%5BTitle%5D), 2010, Planta.
+ [A role of OsGA20ox1 , encoding an isoform of gibberellin 20-oxidase, for regulation of plant stature in rice](http://www.ncbi.nlm.nih.gov/pubmed?term=A role of OsGA20ox1 , encoding an isoform of gibberellin 20-oxidase, for regulation of plant stature in rice%5BTitle%5D), 2004, Plant Mol Biol.
+ [Green revolution: a mutant gibberellin-synthesis gene in rice](http://www.ncbi.nlm.nih.gov/pubmed?term=Green revolution: a mutant gibberellin-synthesis gene in rice%5BTitle%5D), 2002, Nature.
+ [Positional Cloning of Rice Semidwarfing Gene, sd-1: Rice "Green Revolution Gene" Encodes a Mutant Enzyme Involved in Gibberellin Synthesis](http://www.ncbi.nlm.nih.gov/pubmed?term=Positional Cloning of Rice Semidwarfing Gene, sd-1: Rice "Green Revolution Gene" Encodes a Mutant Enzyme Involved in Gibberellin Synthesis%5BTitle%5D), 2002, DNA Research.
+ [Overexpression of a GRAS protein lacking the DELLA domain confers altered gibberellin responses in rice](http://www.ncbi.nlm.nih.gov/pubmed?term=Overexpression of a GRAS protein lacking the DELLA domain confers altered gibberellin responses in rice%5BTitle%5D), 2005, Plant J.
+ [The rice SPINDLY gene functions as a negative regulator of gibberellin signaling by controlling the suppressive function of the DELLA protein, SLR1, and modulating brassinosteroid synthesis](http://www.ncbi.nlm.nih.gov/pubmed?term=The rice SPINDLY gene functions as a negative regulator of gibberellin signaling by controlling the suppressive function of the DELLA protein, SLR1, and modulating brassinosteroid synthesis%5BTitle%5D), 2006, Plant J.
+ [OsGSR1 is involved in crosstalk between gibberellins and brassinosteroids in rice](http://www.ncbi.nlm.nih.gov/pubmed?term=OsGSR1 is involved in crosstalk between gibberellins and brassinosteroids in rice%5BTitle%5D), 2009, Plant J.
+ [The rice YABBY1 gene is involved in the feedback regulation of gibberellin metabolism](http://www.ncbi.nlm.nih.gov/pubmed?term=The rice YABBY1 gene is involved in the feedback regulation of gibberellin metabolism%5BTitle%5D), 2007, Plant Physiol.
+ [Semidwarf sd-1, "green revolution" rice, contains a defective gibberellin 20-oxidase gene](http://www.ncbi.nlm.nih.gov/pubmed?term=Semidwarf sd-1, "green revolution" rice, contains a defective gibberellin 20-oxidase gene%5BTitle%5D), 2002, Proc Natl Acad Sci U S A.
+ [Control of grain protein contents through SEMIDWARF1 mutant alleles: sd1 increases the grain protein content in Dee-geo-woo-gen but not in Reimei.](http://www.ncbi.nlm.nih.gov/pubmed?term=Control of grain protein contents through SEMIDWARF1 mutant alleles: sd1 increases the grain protein content in Dee-geo-woo-gen but not in Reimei.%5BTitle%5D), 2014, Mol Genet Genomics.
+ [Intragenic recombination between two non-functional semi-dwarf 1 alleles produced a functional SD1 allele in a tall recombinant inbred line in rice.](http://www.ncbi.nlm.nih.gov/pubmed?term=Intragenic recombination between two non-functional semi-dwarf 1 alleles produced a functional SD1 allele in a tall recombinant inbred line in rice.%5BTitle%5D), 2017, PLoS One.
+ [Ethylene-gibberellin signaling underlies adaptation of rice to periodic flooding.](http://www.ncbi.nlm.nih.gov/pubmed?term=Ethylene-gibberellin signaling underlies adaptation of rice to periodic flooding.%5BTitle%5D), 2018, Science.
+ [High resolution insight into recombination events at the SD1 locus in rice.](http://www.ncbi.nlm.nih.gov/pubmed?term=High resolution insight into recombination events at the SD1 locus in rice.%5BTitle%5D), 2018, Plant J.
+ [Generation of semi-dwarf rice Oryza sativa L. lines by CRISPR/Cas9-directed mutagenesis of OsGA20ox2 and proteomic analysis of unveiled changes caused by mutations.](http://www.ncbi.nlm.nih.gov/pubmed?term=Generation of semi-dwarf rice Oryza sativa L. lines by CRISPR/Cas9-directed mutagenesis of OsGA20ox2 and proteomic analysis of unveiled changes caused by mutations.%5BTitle%5D), 2019, 3 Biotech.
* **Genbank accession number**
+ [AB077025](http://www.ncbi.nlm.nih.gov/nuccore/AB077025)
+ [AF465255](http://www.ncbi.nlm.nih.gov/nuccore/AF465255)
+ [AF465256](http://www.ncbi.nlm.nih.gov/nuccore/AF465256)
+ [AY114310](http://www.ncbi.nlm.nih.gov/nuccore/AY114310)
* **Key message**
+ The suppressive function of OsSPY in GA signaling was supported by the findings that the dwarfism was partially rescued and OsGA20ox2 (GA20 oxidase) expression was reduced in GA-deficient and GA-insensitive mutants by the knockdown of OsSPY function
+ Moreover, overexpression of SLRL1 in normal rice plants induced a dwarf phenotype with an increased level of OsGA20ox2 gene expression and diminished the GA-induced shoot elongation, suggesting that SLRL1 acts as a repressor of GA signaling
+ Furthermore, OsGSR1 RNAi plants show a reduced sensitivity to GA treatment, an increased expression of the GA biosynthetic gene OsGA20ox2, which is feedback inhibited by GA signaling, and an elevated level of endogenous GA: together, these suggest that OsGSR1 is a positive regulator of GA signaling
+ OsGA20ox2 (SD1) is well known as the Green Revolution gene, and a loss-of-function mutation in this locus causes semi-dwarfism
+ The short stature of IR8 is due to a mutation in the plant's sd1 gene, and here we identify this gene as encoding an oxidase enzyme involved in the biosynthesis of gibberellin, a plant growth hormone
+ The expression levels of gibberellin (GA) biosynthetic genes including OsCPS1, OsKS1, OsKO1, OsKAO, OsGA20ox2/SD1 and OsGA2ox3 were significantly increased in d62 mutant
+ In this report, we show that a rice (Oryza sativa) YABBY1 (YAB1) gene had a similar expression pattern as key rice GA biosynthetic genes GA3ox2 and GA20ox2
+ Control of grain protein contents through SEMIDWARF1 mutant alleles: sd1 increases the grain protein content in Dee-geo-woo-gen but not in Reimei.
+ When submerged, plants carrying the deepwater rice-specific SD1 haplotype amplify a signaling relay in which the SD1 gene is transcriptionally activated by an ethylene-responsive transcription factor, OsEIL1a
+ Here, we identify the gibberellin biosynthesis gene, SD1 (SEMIDWARF1), whose loss-of-function allele catapulted the rice Green Revolution, as being responsible for submergence-induced internode elongation
+ Here, physical separation of two defects allows recombination to generate the wild-type SD1 gene, for which plant height can then be used as a reporter
* **Connection**
+ __DLT~OsGRAS-32~D62~GS6~SMOS2__, __sd1~GA20ox2__, [Identification and characterization of dwarf 62, a loss-of-function mutation in DLT/OsGRAS-32 affecting gibberellin metabolism in rice](http://www.ncbi.nlm.nih.gov/pubmed?term=Identification and characterization of dwarf 62, a loss-of-function mutation in DLT/OsGRAS-32 affecting gibberellin metabolism in rice%5BTitle%5D), The expression levels of gibberellin (GA) biosynthetic genes including OsCPS1, OsKS1, OsKO1, OsKAO, OsGA20ox2/SD1 and OsGA2ox3 were significantly increased in d62 mutant
+ __OsKOS4~OsKO1__, __sd1~GA20ox2__, [Identification and characterization of dwarf 62, a loss-of-function mutation in DLT/OsGRAS-32 affecting gibberellin metabolism in rice](http://www.ncbi.nlm.nih.gov/pubmed?term=Identification and characterization of dwarf 62, a loss-of-function mutation in DLT/OsGRAS-32 affecting gibberellin metabolism in rice%5BTitle%5D), The expression levels of gibberellin (GA) biosynthetic genes including OsCPS1, OsKS1, OsKO1, OsKAO, OsGA20ox2/SD1 and OsGA2ox3 were significantly increased in d62 mutant
+ __OsCPS~OsCPS1__, __sd1~GA20ox2__, [Identification and characterization of dwarf 62, a loss-of-function mutation in DLT/OsGRAS-32 affecting gibberellin metabolism in rice](http://www.ncbi.nlm.nih.gov/pubmed?term=Identification and characterization of dwarf 62, a loss-of-function mutation in DLT/OsGRAS-32 affecting gibberellin metabolism in rice%5BTitle%5D), The expression levels of gibberellin (GA) biosynthetic genes including OsCPS1, OsKS1, OsKO1, OsKAO, OsGA20ox2/SD1 and OsGA2ox3 were significantly increased in d62 mutant
+ __OsKS1__, __sd1~GA20ox2__, [Identification and characterization of dwarf 62, a loss-of-function mutation in DLT/OsGRAS-32 affecting gibberellin metabolism in rice](http://www.ncbi.nlm.nih.gov/pubmed?term=Identification and characterization of dwarf 62, a loss-of-function mutation in DLT/OsGRAS-32 affecting gibberellin metabolism in rice%5BTitle%5D), The expression levels of gibberellin (GA) biosynthetic genes including OsCPS1, OsKS1, OsKO1, OsKAO, OsGA20ox2/SD1 and OsGA2ox3 were significantly increased in d62 mutant
+ __OsGA20ox1~GNP1~SDSFL1__, __sd1~GA20ox2__, [A role of OsGA20ox1 , encoding an isoform of gibberellin 20-oxidase, for regulation of plant stature in rice](http://www.ncbi.nlm.nih.gov/pubmed?term=A role of OsGA20ox1 , encoding an isoform of gibberellin 20-oxidase, for regulation of plant stature in rice%5BTitle%5D), This result indicates that not only OsGA20ox2 but also OsGA20ox1 affects plant stature
+ __OsSPY__, __sd1~GA20ox2__, [The rice SPINDLY gene functions as a negative regulator of gibberellin signaling by controlling the suppressive function of the DELLA protein, SLR1, and modulating brassinosteroid synthesis](http://www.ncbi.nlm.nih.gov/pubmed?term=The rice SPINDLY gene functions as a negative regulator of gibberellin signaling by controlling the suppressive function of the DELLA protein, SLR1, and modulating brassinosteroid synthesis%5BTitle%5D), The suppressive function of OsSPY in GA signaling was supported by the findings that the dwarfism was partially rescued and OsGA20ox2 (GA20 oxidase) expression was reduced in GA-deficient and GA-insensitive mutants by the knockdown of OsSPY function
+ __OsGSR1~GW6~OsGASR7__, __sd1~GA20ox2__, [OsGSR1 is involved in crosstalk between gibberellins and brassinosteroids in rice](http://www.ncbi.nlm.nih.gov/pubmed?term=OsGSR1 is involved in crosstalk between gibberellins and brassinosteroids in rice%5BTitle%5D), Furthermore, OsGSR1 RNAi plants show a reduced sensitivity to GA treatment, an increased expression of the GA biosynthetic gene OsGA20ox2, which is feedback inhibited by GA signaling, and an elevated level of endogenous GA: together, these suggest that OsGSR1 is a positive regulator of GA signaling
+ __d18~OsGA3ox2__, __sd1~GA20ox2__, [The rice YABBY1 gene is involved in the feedback regulation of gibberellin metabolism](http://www.ncbi.nlm.nih.gov/pubmed?term=The rice YABBY1 gene is involved in the feedback regulation of gibberellin metabolism%5BTitle%5D), In this report, we show that a rice (Oryza sativa) YABBY1 (YAB1) gene had a similar expression pattern as key rice GA biosynthetic genes GA3ox2 and GA20ox2
+ __OsYABBY1~OsYAB1__, __sd1~GA20ox2__, [The rice YABBY1 gene is involved in the feedback regulation of gibberellin metabolism](http://www.ncbi.nlm.nih.gov/pubmed?term=The rice YABBY1 gene is involved in the feedback regulation of gibberellin metabolism%5BTitle%5D), In this report, we show that a rice (Oryza sativa) YABBY1 (YAB1) gene had a similar expression pattern as key rice GA biosynthetic genes GA3ox2 and GA20ox2
+ __sd1~GA20ox2__, __SDT__, [Regulation of OsmiR156h through Alternative Polyadenylation Improves Grain Yield in Rice.](http://www.ncbi.nlm.nih.gov/pubmed?term=Regulation of OsmiR156h through Alternative Polyadenylation Improves Grain Yield in Rice.%5BTitle%5D), Most importantly, pyramiding of the sdt allele and the green revolution gene sd1 enhances grain yield by about 20% in hybrid rice breeding
+ __OsYABBY4__, __sd1~GA20ox2__, [The rice YABBY4 gene regulates plant growth and development through modulating the gibberellin pathway.](http://www.ncbi.nlm.nih.gov/pubmed?term=The rice YABBY4 gene regulates plant growth and development through modulating the gibberellin pathway.%5BTitle%5D), We report on an important role for OsYABBY4 in negative control of the expression of a GA biosynthetic gene by binding to the promoter region of the gibberellin 20-oxidase 2 gene (GA20ox2), which is a direct target of SLR1 (the sole DELLA protein negatively controlling GA responses in rice)
+ __HTD1~OsCCD7~D17__, __sd1~GA20ox2__, [A Strigolactone Biosynthesis Gene Contributed to the Green Revolution in Rice.](http://www.ncbi.nlm.nih.gov/pubmed?term=A Strigolactone Biosynthesis Gene Contributed to the Green Revolution in Rice.%5BTitle%5D), We found that the HTD1 gene had been widely utilized and co-selected with Semidwarf 1 (SD1), both contributing to the improvement of plant architecture in modern rice varieties since the Green Revolution in the 1960s
+ __OsZFP7~ZFP207__, __sd1~GA20ox2__, [A Cys2/His2 zinc finger protein acts as a repressor of green revolution gene SD1/OsGA20ox2 in rice Oryza sativa L.](http://www.ncbi.nlm.nih.gov/pubmed?term=A Cys2/His2 zinc finger protein acts as a repressor of green revolution gene SD1/OsGA20ox2 in rice Oryza sativa L.%5BTitle%5D), Moreover, ZFP207 repressed the expression of OsGA20ox2 via binding to its promoter region.
+ __OsZFP7~ZFP207__, __sd1~GA20ox2__, [A Cys2/His2 zinc finger protein acts as a repressor of green revolution gene SD1/OsGA20ox2 in rice Oryza sativa L.](http://www.ncbi.nlm.nih.gov/pubmed?term=A Cys2/His2 zinc finger protein acts as a repressor of green revolution gene SD1/OsGA20ox2 in rice Oryza sativa L.%5BTitle%5D), Taken together, ZFP207 acts as a transcriptional repressor of SD1/OsGA20ox2 and it may play a critical role in plant growth and development through fine-tuning GA biosynthesis in rice.
+ __OsZFP7~ZFP207__, __sd1~GA20ox2__, [A Cys2/His2 zinc finger protein acts as a repressor of green revolution gene SD1/OsGA20ox2 in rice Oryza sativa L.](http://www.ncbi.nlm.nih.gov/pubmed?term=A Cys2/His2 zinc finger protein acts as a repressor of green revolution gene SD1/OsGA20ox2 in rice Oryza sativa L.%5BTitle%5D), Here we report a Cys2/His2 zinc finger protein ZFP207 acting as a transcriptional repressor of OsGA20ox2.
[//]: # * **Key figures**
You need to add the apiurl parameter after importing
apiurl is the address of your Bark server, for example
https://xxx.com/YYYYYYYY
YYYYYYYY stands for your key
You can usually find it in the Bark app
The complete push URL looks like the following
https://xxx.com/YYYYYYYY/自动复制推送内容?automaticallyCopy=1
### Reward Learning
- 6 demos, all pairs, 100 epochs, weight_decay=1.0
```
python3 linear_model.py --num_demos 6 --all_pairs --num_epochs 100 --weight_decay 1.0 --reward_model_path models/6demosallpairs_100epochs_1weightdecay.params > reward_learning_outputs/linear/6demosallpairs_100epochs_1weightdecay.txt
```
### RL Training
- Modify `FeedingLinearRewardEnv`'s path to use `/home/jtien/assistive-gym/trex/models/linear/6demosallpairs_100epochs_1weightdecay.params`
```
python3 -m assistive_gym.learn --env "FeedingLinearRewardSawyer-v0" --algo ppo --seed 0 --train --train-timesteps 1000000 --save-dir ./trained_models_reward_learning/linear/weightdecay/stress_test/6demos
```
### Evaluation
- Evaluate (seed=1) on ground truth reward:
```
python3 -m assistive_gym.learn --env "FeedingSawyer-v1" --algo ppo --evaluate --eval-episodes 100 --seed 1 --verbose --load-policy-path ./trained_models_reward_learning/linear/weightdecay/stress_test/6demos/ppo/FeedingLinearRewardSawyer-v0/checkpoint_53/checkpoint-53 > trex/rl/eval/linear/weightdecay/stress_test/learnedpolicy_truereward.txt
```
- Evaluate (seed=1) on learned reward:
```
python3 -m assistive_gym.learn --env "FeedingLinearRewardSawyer-v0" --algo ppo --evaluate --eval-episodes 100 --seed 1 --verbose --load-policy-path ./trained_models_reward_learning/linear/weightdecay/stress_test/6demos/ppo/FeedingLinearRewardSawyer-v0/checkpoint_53/checkpoint-53 > trex/rl/eval/linear/weightdecay/stress_test/learnedpolicy_learnedreward.txt
```
] | null | null | null | 
# Disc 11 by Zhycorp
> A dedicated open-source Discord bot for Zhycorp based on [our Discord bot template](https://github.com/zhycorp/discord-bot-template) with more features. Easy to use, and with no coding required.
<a href="https://zhycorp.net/discord"><img src="https://img.shields.io/discord/332877090003091456?color=5865F2&logo=discord&logoColor=white" alt="Discord Server" /></a>
<a href="https://discord.com/oauth2/authorize?client_id=690736793682968576&permissions=53857345&scope=bot"><img src="https://img.shields.io/static/v1?label=Invite%20Me&message=Disc%2011%230606&plastic&color=5865F2&logo=discord"></a>
<img src="https://badgen.net/badge/icon/typescript?icon=typescript&label">
<a href="https://github.com/zhycorp/disc-11/actions?query=workflow%3A%22Lint+code+%26+compile+test%22"><img src="https://github.com/zhycorp/disc-11/workflows/Lint%20code%20&%20compile%20test/badge.svg" alt="CI Status" /></a>
## Features
- Interaction support.
- Basic music commands.
- Basic moderation commands.
- Configurable, and easy to use.
- A production-ready project, set up the bot without coding.
## General Setup
1. Download and install [Node.js](https://nodejs.org) version `16.6.0` and [Python](https://python.org) version `3.6.0` or above
2. Open `.env_example` file and rename it to `.env`
3. Install required and optional dependencies
```sh
$ npm install
```
4. Compile the file
```sh
$ npm run build
```
5. If you want to save disk space, prune the dev dependencies
```sh
$ npm prune --production
```
6. Finally, you can start the bot
```sh
$ npm start
```
## Hosting Setup
### Heroku
You can host this bot to make it stay online on Heroku.
<a href="https://heroku.com/deploy?template=https://github.com/V3XXXX/disc-11"><img src="https://www.herokucdn.com/deploy/button.svg" alt="Deploy to Heroku"></a>
### Glitch
You can also use Glitch for this project, which features its own code editor.
> Watch the tutorial video on YouTube!
>
> ▶️ **https://youtu.be/ILutlBl_Xyk**
1. Star and fork this project
2. Go to [glitch.com](https://glitch.com) and make an account
3. Click **New Project** then **Import from GitHub**, specify the pop-up field with `https://github.com/<your-name>/disc-11` (without `<>`)
4. Wait a while; this process takes a few minutes
5. Find `.env` file and delete it, find `.env_example` file and rename it back to `.env`
6. After specifying `.env`, open **Tools** > **Terminal**
7. Type `refresh`, and track the process from **Logs**
8. To make the bot stay online, please watch [this video](https://youtu.be/K2nqthN1xKQ?t=551) carefully.
<a href="https://glitch.com/edit/#!/import/github/zhycorp/disc-11"><img src="https://cdn.glitch.com/2703baf2-b643-4da7-ab91-7ee2a2d00b5b%2Fremix-button.svg" alt="Remix on Glitch"></a>
> © 2021 Zhycorp Development
# Changelog for language-choucho
## Unreleased changes
+++
title = "Plugin Interface"
weight = 20
+++
All Choria plugins have to implement the same basic *plugin.Pluggable* interface that looks like this ([godoc](https://godoc.org/github.com/choria-io/go-choria/plugin)):
```go
// Pluggable is a Choria Plugin
type Pluggable interface {
// PluginInstance is any structure that implements the plugin, should be right type for the kind of plugin
PluginInstance() interface{}
// PluginName is a human friendly name for the plugin
PluginName() string
// PluginType is the type of the plugin, to match plugin.Type
PluginType() Type
// PluginVersion is the version of the plugin
PluginVersion() string
}
```
And you need a function in your package that produces an instance of the above interface:
```go
func ChoriaPlugin() plugin.Pluggable
```
Thus when you add your plugin to the plugin system like below in `packager/user_plugins.yaml`:
```yaml
---
myplugin: github.com/mycorp/myplugin
```
The system will call your *myplugin.ChoriaPlugin()* that should produce a *plugin.Pluggable*. An example of this can be found in the [Golang MCO RPC compatibility layer](https://godoc.org/github.com/choria-io/mcorpc-agent-provider/mcorpc/golang).
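To make the contract concrete, here is a minimal, self-contained sketch of what such a plugin package might look like. Note that the `Type` definition, the `AgentProviderPlugin` constant, and the `myPlugin` struct are hypothetical stand-ins for illustration only — a real plugin would use the actual `plugin.Type` values and types from the go-choria codebase.

```go
package main

import "fmt"

// Type is a hypothetical stand-in for plugin.Type; the real
// constants live in the go-choria plugin package.
type Type int

const AgentProviderPlugin Type = iota

// Pluggable mirrors the interface shown above.
type Pluggable interface {
	PluginInstance() interface{}
	PluginName() string
	PluginType() Type
	PluginVersion() string
}

// myPlugin is a hypothetical plugin implementation.
type myPlugin struct{}

func (p *myPlugin) PluginInstance() interface{} { return p }
func (p *myPlugin) PluginName() string          { return "myplugin" }
func (p *myPlugin) PluginType() Type            { return AgentProviderPlugin }
func (p *myPlugin) PluginVersion() string       { return "0.0.1" }

// ChoriaPlugin is the entry point the plugin system calls.
func ChoriaPlugin() Pluggable { return &myPlugin{} }

func main() {
	p := ChoriaPlugin()
	fmt.Printf("registered %s version %s\n", p.PluginName(), p.PluginVersion())
}
```

Because `*myPlugin` has all four methods, it satisfies `Pluggable` implicitly; the packager-generated code only ever needs to call `ChoriaPlugin()`.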
---
title: SUBSTRING in the Azure Cosmos DB query language
description: Learn about the SUBSTRING SQL system function in Azure Cosmos DB.
author: ginamr
ms.service: cosmos-db
ms.topic: conceptual
ms.date: 09/13/2019
ms.author: girobins
ms.custom: query-reference
ms.openlocfilehash: d4462fc407093b23510bddfae4d9f55d68f8c0fa
ms.sourcegitcommit: f915d8b43a3cefe532062ca7d7dbbf569d2583d8
ms.translationtype: MT
ms.contentlocale: nl-NL
ms.lasthandoff: 03/05/2020
ms.locfileid: "78303695"
---
# <a name="substring-azure-cosmos-db"></a>SUBSTRING (Azure Cosmos DB)
 Returns part of a string expression, starting at the specified character's zero-based position, and continuing up to the specified length or to the end of the string.
## <a name="syntax"></a>Syntax
```sql
SUBSTRING(<str_expr>, <num_expr1>, <num_expr2>)
```
## <a name="arguments"></a>Arguments
*str_expr*
   Is a string expression.
*num_expr1*
   Is a numeric expression denoting the start character. A value of 0 is the first character of *str_expr*.
*num_expr2*
   Is a numeric expression denoting the maximum number of characters of *str_expr* to return. A value of 0 or less results in an empty string.
## <a name="return-types"></a>Return types
  Returns a string expression.
## <a name="examples"></a>Examples
  The following example returns the substring of "abc" starting at 1 and for a length of 1 character.
```sql
SELECT SUBSTRING("abc", 1, 1) AS substring
```
Here is the result set.
```json
[{"substring": "b"}]
```
## <a name="remarks"></a>Remarks
This system function will benefit from a [range index](index-policy.md#includeexclude-strategy) if the starting position is `0`.
## <a name="next-steps"></a>Next steps
- [String functions in Azure Cosmos DB](sql-query-string-functions.md)
- [System functions in Azure Cosmos DB](sql-query-system-functions.md)
- [Introduction to Azure Cosmos DB](introduction.md)
# alpha_lab
---
tags:
- FAQ
title: How long does it take for the COVID-19 vaccine to take effect
date: 2021-01-06
slug: em-quanto-tempo-surte-efeito-vacina-covid19
published: false
---
The mRNA vaccines require two doses. While people will have some immunity after the first dose, protection will be most likely about one week after receipt of the second dose. | 34.3 | 175 | 0.778426 | eng_Latn | 0.977942 |
2ca29c7fca8187097e77713398fa39c0d6c8c48c | 5,697 | md | Markdown | Readme.md | aische/type-of-sound | 483ac33e22edc534b87c17ea8ae180ee39e31c90 | [
"BSD-3-Clause"
] | 1 | 2017-12-14T13:57:33.000Z | 2017-12-14T13:57:33.000Z | Readme.md | aische/type-of-sound | 483ac33e22edc534b87c17ea8ae180ee39e31c90 | [
"BSD-3-Clause"
] | null | null | null | Readme.md | aische/type-of-sound | 483ac33e22edc534b87c17ea8ae180ee39e31c90 | [
"BSD-3-Clause"
] | null | null | null | # Type of sound
In this experiment I'm trying to define synthesizers only as types (without values). There are two approaches: In the first approach, the synthesizer types look like expressions. In the second approach, the synthesizer types look more like imperative programs that manipulate a stack (resp. a list zipper) with signals.
## Sound1
The basic idea of this project is to have type classes like this:
class Sound s where
sound :: Proxy s -> [Double]
class Sound1 s where
sound1 :: Proxy s -> [Double] -> [Double]
class Sound2 s where
sound2 :: Proxy s -> [Double] -> [Double] -> [Double]
and a lot of types that instanciate some of these type classes:
data Sine s
instance Sound s => Sound (Sine s) where
sound p = map sin $ sound (proxy1a p)
instance Sound1 Sine where
sound1 p = map sin
A synthesizer can then be defined as a type:
type Freq = 1 :%: 20
type Harmonic n = (Mult (1 :%: n) (Sine (Phasor (Mult (n :%: 1) Freq))))
type family Saw (n :: Nat) where
Saw 0 = Harmonic 1
Saw 1 = Harmonic 1
Saw n = Add (Harmonic n) (Saw (n-1))
type Example = Mult (1 :%: 2) (Saw 11)
saveWaveMono "out.wav" 10 $ sound (Proxy :: Proxy Example)
In this example a synthesizer is defined using the "Saw" type family. It creates a sawtooth wave by adding a the harmonics with amplitudes reciprocal to their frequency. The expanded type looks like this:
ghci> :kind! Example
Example :: *
= Mult
(1 :%: 2)
(Add
(Harmonic 10)
(Add
(Harmonic 9)
(Add
(Harmonic 8)
(Add
(Harmonic 7)
(Add
(Harmonic 6)
(Add
(Harmonic 5)
(Add
(Harmonic 4)
(Add
(Harmonic 3)
(Add (Harmonic 2)
(Harmonic 1))))))))))
The sawtooth synth uses the same frequency type for all harmonics. The frequency signal will be computed for each harmonic. This is not desired, especially if the frequency signal is the result of an expensive computation. A second version of the sawtooth generator is shown below. Here the frequency signal is shared by using a combinator "Both".
type Harmonic2 n = C (Mult (1 :%: n)) (C Sine (C Phasor (Mult (n :%: 1))))
type family Saw2 (n :: Nat) where
Saw2 0 = Harmonic2 1
Saw2 1 = Harmonic2 1
Saw2 n = Both Add (Harmonic2 n) (Saw2 (n-1))
data Both (p :: t1 -> t2 -> t3) (f :: Type -> t1) (g :: Type -> t2) (c :: Type)
instance (Sound2 p, Sound1 f, Sound1 g, Sound c) => Sound (Both p f g c) where
sound p =
let
c = sound (proxy4of4 p)
in
sound2 (proxy1of4 p) (sound1 (proxy2of4 p) c) (sound1 (proxy3of4 p) c)
Using combinators to share signals becomes tedious because different combinators for different arities are needed (at least this is how I interpret the error messages I got). So if there is more than one signal that should be shared, a different "Both" combinator is needed. I did not explore that further and tried a second approach:
## Sound2
The second approach uses a list zipper and operations that transform it:
data Signals (l :: Nat) (r :: Nat) = Signals [[Double]] [[Double]]
The building blocks of the synthesizer types are not used to build expressions like in the first approach. They are commands that operate on the two stacks of the list zipper. The sound class looks like this:
class Sound t s where
type SoundType t s
sound :: Proxy t -> s -> SoundType t s
and the Sound instance for Add:
data Add
instance (2 <= ar) => Sound Add (Signals al ar) where
type SoundType Add (Signals al ar) = (Signals al (ar - 1))
sound _ (Signals al (x : y : ar)) = Signals al (add x y : ar)
This is not as beautiful as using expression-like types, but it is more powerful. Signals can be shared without limitations and without using complicated combinators. On the other hand, one has to know the state of the stack all the time...
type Harmonic n = Do [ n :%: 1, Mult, Phasor, Sine, 1 :%: n, Mult ]
type family Saw (n :: Nat) where
Saw 0 = Harmonic 1
Saw 1 = Harmonic 1
Saw n = Do [ Dup, MoveR, Harmonic n, MoveL, Saw (n-1), Add ]
type Synth1 =
Do
[ FilterLFO -- push filter cutoff lfo
, MixLFO -- push lfo for mixing sawtooth and rect
, FreqLFO -- push frequency modulation lfo
, 1 :%: 60 -- push frequency
, Mult -- multiply frequency with frequency modulation lfo
, DupN 2 -- duplicate frequency and mix-lfo
, Saw 10 -- create sawtooth from frequency
, MoveR -- move sawtooth to the left stack
, 1 :%: 1 -- push 1
, Sub -- subtract mix-lfo from 1
, MoveL -- get sawtooth back on right stack
, Mult -- multiply sawtooth with (inverted) mix-lfo
, MoveR -- move sawtooth to the left stack
, Rect 10 -- create rectangle wave
, Mult -- multiply rectangle with mix-lfo
, MoveL -- get sawtooth back on right stack
, Add -- add sawtooth and rectangle waves
, MoveR --
, Swap -- swap wave and filter-lfo
, MoveL --
, LowpassQN 5 -- apply filter to wave
, 1 :%: 2 -- push 1/2
, Mult -- multiply wave with 1/2
]
| 39.020548 | 347 | 0.589784 | eng_Latn | 0.997539 |
2ca30b8fd7580c44847921fd2bde8d1e2059ca55 | 4,666 | md | Markdown | best_practice_coding_tips.md | snowdj/book-4 | 0ad978e1f595ee5ef2f4b48ae363f006be82d16d | [
"CC-BY-4.0"
] | 6 | 2018-09-06T04:02:11.000Z | 2020-12-16T15:12:13.000Z | best_practice_coding_tips.md | snowdj/book-4 | 0ad978e1f595ee5ef2f4b48ae363f006be82d16d | [
"CC-BY-4.0"
] | 1 | 2018-07-31T04:07:05.000Z | 2018-08-24T07:33:22.000Z | best_practice_coding_tips.md | snowdj/book-4 | 0ad978e1f595ee5ef2f4b48ae363f006be82d16d | [
"CC-BY-4.0"
] | 6 | 2018-09-05T16:43:39.000Z | 2020-06-04T00:57:03.000Z | # Coding Best Practices 1
---
**Overview.** Now that we know a bit about how to code, we discuss how to code in an **effective** and **readable** manner.
**Pythons.** Jupyter, comments, debugging,
---
## Structuring code within Jupyter
Lets take a quick step back and discuss how to **use** Jupyter to create effective, readable code. Often we are writing much longer programs. In this case it would be silly to have every single cell be composed of just one line. Why? Lots of reasons, but the most important reasons is that that reading the code AND understanding it's context within the whole program would be very hard to do. **And readable code is something that we should aspire to.**
**Use a code cell for a certain task.** For example, one code cell is dedicated to reading in the data. Another code cell performs one manipulation of the data (which could take multiple lines of code), another code cell performs a different manipulation, and then another is dedicated to plotting a figure. So each code cell is accomplishing a specific task at hand. Why not put everything in one code cell? Again, this is problematic regarding readability.
Applying this practice is also important because chopping up the code into different cells helps debugging. For example, you may have aspects of your code that works, but other parts are having problems. Trust us, this will happen often -- there is no avoiding it. So rather than running the whole program over and over again to debug only a subset, you have that subset of code in its own cell and then work on that on its own.
Let's give it a try. Type these commands into **one** code cell:
```python
a = 'some'
b = 'thing'
c = a + b
print('c =', c)
```
When we run the code, we see that the first three lines produce no output. The last one produces the output `c = something`. What we have now is one code cell performing one distinct task.
## Using Markdown to facilitate readability
One of the nice features of Jupyter is that it can mix nicely formatted text via Markdown with code. This allows you to describe, at a high-level the steps that your analysis is walking through. The "high-level" aspect of this is nice in that you can structure your notebook where, say your Manager, can follow your steps without having to delve deep into the details of the code. Moreover, this allows us to "tell the story" when working with data.
For example, in the cell above the code discussed above, you could insert a cell saying something to the effect:
```Markdown
# Creating a word by addition
Here is an example where I am using the addition function on strings to create the word something
```
**Exercise.** Do this. Remind yourself how to create a markdown cell and insert the markdown code above the "something" code cell.
## Add comments to your code
We also want to mechanically explain what individual lines of code our doing. One of the rules of good code is that **we explain what we've done---in the code**. Think about this aspect as writing and explaining code that one of our classmates can understand without help. These explanations are referred to as comments.
Add a comment with the hash character (#). Anything in a line after a hash is a comment, meaning it's ignored by Python. Here are some examples:
```python
# everything that appears after this symbol (in the same line) is a comment!
# comments help PEOPLE understand the code, but PYTHON ignores them!
# we're going to add 4 and 5
4 + 5 # here we're doing it
print(4+5) # here we're printing it
```
We often put comments like this in our code. Not quite this basic, but close. One of the unwritten "laws of nature" in programming is that code is read much more often than it is written. See these links...they're awesome:
[1](http://docs.python-guide.org/en/latest/writing/style/) [2](https://blogs.msdn.microsoft.com/oldnewthing/20070406-00/?p=27343) [3](https://blog.codinghorror.com/when-understanding-means-rewriting/).
Writing informative comments will not only lead to others thanking you for saving them time, but you will find that you thank yourself very frequently.
Final point on comments: **Update your comments as you update your code!** Nothing is worse than finding comments that contradict the code. So as you modify your code, modify your comments. Again, so when you take a break and return to what your doing, you know what's up.
**Exercise moving forward.** Practice writing comments **all the time**. Whenever you learn something new, write a comment explaining it in your code. It feels tedious, but the best coders always explain their work. It's a good habit to develop.
# Manuscript
The peer-reviewed manuscript is
> Didion JP, Martin M, Collins FS. (2017) Atropos: specific, sensitive, and speedy trimming of sequencing reads. PeerJ 5:e3720 https://doi.org/10.7717/peerj.3720
See the [manuscript folder](manuscript/README.md) for details on how the manuscript was created.
The version of Atropos used in the peer-reviewed manuscript can be found at: https://github.com/jdidion/atropos/releases/tag/1.1.5. Note that additional tools have been added, tool versions (including Atropos) have been updated, and the workflow has been modified since publication. These changes will eventually be reflected in an updated preprint on BioRxiv.
# Overview
The scripts in this directory will enable you to re-run the analyses in the Atropos paper. The workflows defined here run the benchmarks and generate the figures and tables shown in the paper.
We have created [Docker](https://www.docker.com/) images for all of the software tools used, as well as data volumes containing all the raw data and resources. These images can be used directly on Mac, Windows, and some linux platforms using the Docker engine. On unsupported linux platforms (namely RedHat and derivatives, such as Scientific Linux), [Singularity](http://singularity.lbl.gov/) or [Udocker](https://github.com/indigo-dc/udocker) can be used to execute the containers directly from Docker Hub. The versions of the tools used in the paper are noted in the Dockerfile headers, and also in the supplementary data.
Our workflows are written in [Nextflow](https://www.nextflow.io/index.html), primarily because it supports Docker, Singularity, and Udocker, which we need to run benchmarks on both desktop and RedHat-based HPC cluster. We also provide [CWL](http://www.commonwl.org/) tool definitions to simplify the development of alternate workflows.
Each workflow (.nf file) runs the analysis for one data type (RNA-Seq, WGBS, or simulated DNA-Seq). We provide a configuration file with profiles we used for both the local and cluster executions. Our cluster runs SGE, so you may need to alter the cluster configuration files for your environment.
# 1. Install software
* You will need a [Docker](https://www.docker.com/) engine if you want to build the containers yourself. If you only want to run the containers, you can use either Docker, [Singularity](http://singularity.lbl.gov/), or [Udocker](https://github.com/indigo-dc/udocker).
* [Nextflow](https://www.nextflow.io/index.html), which requires Java 7+.
# 2. Build containers
All of the containers defined in the 'containers' subdirectory have already been built and pushed to Docker Hub, with two exceptions: the data containers for the STAR indexes (data/hg37/star-index and data/hg38/star-index) are too large to be pushed to Docker Hub or Quay.io. Thus, you will unfortunately need to build at least one of them yourself. We use GRCh38 in the paper, so to build that container, clone data/hg38/star-index and run the build.sh script in that directory.
First, the default Docker repository size (32G) is too small to build the star index containers, so you need to increase the repository size. This requires that you're running the [Docker "Edge" build](https://store.docker.com/editions/community/docker-ce-desktop-mac). Now increase the disk size following the instructions [here](https://forums.docker.com/t/increase-docker-container-disk-space-on-os-x/26725/2).
For full reproducibility, you are free to build the containers yourself, but you'll need to create your own account on Docker Hub, and you'll need to update the scripts to push/pull containers from your own repository. Build all the tool containers, then build all the data containers.
In general, for each tool container, run the following sequence of commands:
```
# Build the container
docker build -f Dockerfile -t <your repo>/<tool name> .

# Upload the container to your Docker Hub repo
docker push <your repo>/<tool name>
```
For each data container, run the following sequence of commands:
```
# Download the data and build the docker container
./build.sh

# Upload the container to your Docker Hub repo
docker push <your repo>/<data name>
```
Note that you can create a .tar archive of any container using the `docker save` command, and you can load a saved container using the `docker load` command. This is especially useful for the star index container(s).
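The save/load round trip might look like this (the archive and image names here are illustrative):

```
# On a machine that has the image: write it to a tar archive
docker save -o star-index.tar <your repo>/star-index

# On the target machine: load the archive into the local Docker engine
docker load -i star-index.tar
```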
On a technical note, we use Phusion (https://hub.docker.com/r/phusion/baseimage/) as the base image for the containers for the tools we benchmark. This is primarily for convenience and comparability (i.e. removing base image as a variable); you could build smaller images using Alpine with a little extra work.
# 3. Run the workflows
Clone the files in the 'workflow' directory, including the 'bin' subdirectory. In the 'workflow' directory, first edit the nextflow.config file and customize it to your own computing environment.
On our cluster, we run the scripts from a subdirectory under /scratch. At runtime, /scratch is replaced with /spin1/scratch, hence the beforeScript command to cd back to /scratch to avoid confusing Nextflow.
If you are running Docker, you'll likely need to increase the number of CPUs and memory limit to match what you've configured. This can be found in the Docker preferences on the "Advanced" tab.
Now run:
```
./run-workflows.sh <env>
```
Where <env> is either 'local' or 'cluster'. Note that the first time you run this it will download several Docker images requiring ~[XX] GB of disk space.
All results will be placed in the 'results' subdirectory (unless you change the path in nextflow.config).
Note that when re-running the workflow and comparing the results to those shown in the manuscript, there will be some variability in the performance metrics, but the relative rankings of the tools should not change significantly -- please let us know if you find otherwise!
# Non-Docker Systems
Unfortunately, the ideals of easily reproducible research don't yet match up with reality. Ideally, if you wanted to run this workflow on a system that doesn't support Docker (which includes any RedHat-based Linux system and most HPC environments), you could transparently use Singularity. In reality, Nextflow doesn't support Singularity's ability to automatically pull and convert images from a Docker Hub. Nor would you want it to; Singularity does not use a daemon or other caching system, and would thus fetch a separate copy of every image for every process instance. This will be addressed in Singularity v2.3, but for now you need to manually convert all the Docker images to Singularity images on a computer running Docker, then copy them to the HPC environment. We expect, but can't guarantee, that this had minimal effect on the measurement of relative performance between the desktop and cluster.
## 1. Fix docker2singularity bug and build container
First you need to clone the docker2singularity repository (https://github.com/singularityware/docker2singularity) and edit the docker2singularity.sh file to change the 'if' statement on/around line 178 to:
```
buildname=$(head -n 1 /etc/issue)
if [[ $buildname =~ Buildroot|Alpine ]] ; then
```
Now build the container:
```
docker build -f Dockerfile -t <docker2singularity container_name> .
```
## 2. Convert and transfer images
Make sure you've manually built the star index container as described in #2 above, and that it shows up when you run
```
docker images
```
From the 'containers' directory, run:
```
./docker2singularity.sh \
<docker2singularity container_name> <remote host> <remote dir>
```
# TODO
* Update #3 with an estimate of the total disk space requirement.
* Use [docker-builder](https://pypi.python.org/pypi/docker_builder) to auto-update containers when dependencies change.
* Look at using TopHat2 or Kallisto pseudo-alignment rather than STAR for RNA-Seq benchmarks. This would enable the RNA-Seq benchmark to be run on our desktop with 32 GB RAM.
* These papers do a nice job of benchmarking trimmers. Consider adding some more benchmarks.
* http://bmcbioinformatics.biomedcentral.com/articles/10.1186/1471-2105-16-S1-S2
* https://bmcbioinformatics.biomedcentral.com/articles/10.1186/s12859-015-0454-y
* Make machine info work on OSX (currently requires /proc/cpuinfo and /proc/meminfo) | 73.324561 | 912 | 0.778323 | eng_Latn | 0.996776 |
# gulp-bem-template
## Installation
* install [Node.js](https://nodejs.org/en/) (if needed) and [Yarn](https://yarnpkg.com/en/docs/install): ```npm install --global yarn```
* download the boilerplate in a console using [Git](https://git-scm.com/downloads): ```git clone https://github.com/Apnoea/gulp-bem-template.git```
* go into the downloaded folder: ```cd gulp-bem-template```
* install the required dependencies: ```yarn```
* to start working, run the command: ```yarn start``` (development mode)
* to build the project, run the command: ```yarn build``` (build mode)
If you did everything correctly, a browser with a local server should open.
Build mode optimizes the project: it compresses images and minifies CSS and JS files for uploading to a server.
## File structure
```
gulp-bem-template
├── build
├── gulp-tasks
├── src
│   ├── blocks
│   ├── fonts
│   ├── images
│   ├── js
│   ├── layouts
│   └── styles
├── .bem-template-pug.js
├── .bem-template-scss.js
├── .bemrc.js
├── .editorconfig
├── .eslintignore
├── .eslintrc.json
├── .gitattributes
├── .gitignore
├── .gitlab-ci.yml
├── .pug-lint.json
├── .stylelintrc.json
├── gulpfile.js
├── package.json
└── webpack.config.js
```
* The ```build``` folder - the folder the local development server runs from (when you run ```yarn start```)
* The ```gulp-tasks``` folder - the folder with the Gulp tasks
* The ```src``` folder - used during development:
  * blocks: ```src/blocks```
  * fonts: ```src/fonts```
  * images: ```src/images```
  * JS files: ```src/js```
  * the main site layout: ```src/layouts```
  * SCSS files: ```src/styles```
  * site pages: ```src/*.pug```
## Commands
* ```yarn start``` - start the development server
* ```yarn build``` - build the project with optimization, without starting a server
* ```yarn script``` - build the JS only
* ```yarn bem``` - add a BEM block
## Usage recommendations
### Project blocks
* the project's blocks live in the ```src/blocks``` folder
* blocks created with the ```yarn bem``` command are automatically included in the file ```src/blocks/mixins.pug```
* a block's folder contains its markup and style files and, when needed, its scripts
### Project pages
* the project's pages live in the root of ```src/*.pug```
### Fonts
* fonts live in the ```src/fonts``` folder
* use the [formats](https://caniuse.com/#search=woff) ```.woff2``` and ```.woff```
* fonts are registered in the file ```src/styles/utils/fonts.scss```
* local fonts can be converted with [this service](https://transfonter.org/)
### Images
* images live in the ```src/images``` folder
* images are automatically minified, preserving the folder structure
* ```.svg``` images are combined into the sprite ```build/img/sprite.svg```
* individual ```.svg``` files are available at ```build/img/svgs```
* images in nested folders get a prefix with the folder name, for example ```main--icon_circle.svg```
### Third-party libraries
* all third-party libraries are installed into the ```node_modules``` folder
* to install one, use the command ```yarn add package_name```
* to include a library's JS files, import them at the very beginning of your JS file, for example:
```javascript
import $ from 'jquery'
```
* to include a library's style files, import them into the file ```src/styles/layouts/style.scss```
* library JS and style files must not be modified directly
## Contacts
* Telegram: [@Alex K](https://t.me/Apnoea)
---
title: JOptionPane.setValue()
permalink: /Java/JOptionPane/setValue/
date: 2021-01-11
key: Java.J.JOptionPane
category: Java
tags: ['java se', 'javax.swing', 'java.desktop', 'metodo java', 'Java 1.2']
sidebar:
nav: java
---
{% include w3api/datos.html clase=site.data.Java.J.JOptionPane.metodos valor="setValue" %}
## Description
{{_dato.description }}
## Syntax
~~~java
@BeanProperty(preferred=true, description="The option pane\'s value object.") public void setValue(Object newValue)
~~~
## Parameters
* **Object newValue**, {% include w3api/param_description.html metodo=_dato parametro="Object newValue" %}
## Parent Class
[JOptionPane](/Java/JOptionPane/)
## Example
~~~java
{{ _dato.code}}
~~~
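As a self-contained illustration (separate from the generated example above), the sketch below sets a value programmatically, the way a dialog button press would:

```java
import javax.swing.JOptionPane;

public class SetValueDemo {
    public static void main(String[] args) {
        // Build an option pane without showing a dialog window.
        JOptionPane pane = new JOptionPane("Continue?",
                JOptionPane.QUESTION_MESSAGE, JOptionPane.YES_NO_OPTION);
        // setValue records the chosen value, as pressing a button would.
        pane.setValue(JOptionPane.YES_OPTION);
        // getValue returns what was set: the Integer boxed from YES_OPTION.
        System.out.println(pane.getValue());
    }
}
```

Running it prints the `Integer` boxed from `JOptionPane.YES_OPTION` (0).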
## Articles
<ul>
{%- for _ldc in _dato.ldc -%}
<li>
<a href="{{_ldc['url'] }}">{{ _ldc['nombre'] }}</a>
</li>
{%- endfor -%}
</ul>
<!-- <meta>
{
"title": "Kubernetes CSI for Packet",
"description": "Kubernetes CSI is intended to allow clusters to provision, & attach PersistentVolumes as Kubernetes StorageClasses",
"tag": ["Kubernetes", "CSI", "Container Storage Interface"],
"seo-title": "Kubernetes CSI for Bare Metal - Packet Technical Guides",
"seo-description": "Kubernetes CSI (Container Storage Interface) for Packet",
"og-title": "Kubernetes CSI for Packet.",
"og-description": "The K8 Container Storage Interface (CSI) plugin allows you to attach storage system files to containerized workloads. Learn how to leverage CSI in this how-to guide."
}
</meta> -->
The [Kubernetes CSI](https://kubernetes.io/blog/2019/01/15/container-storage-interface-ga/) is intended to allow clusters to provision, and attach `PersistentVolumes` as Kubernetes `StorageClasses` from a variety of storage providers via this standard. In this case, Packet's [Elastic Block Storage](https://www.packet.com/developers/docs/storage/ebs/).
## Requirements:
You’ll need to clone the Packet CSI [repository](https://github.com/packethost/csi-packet) to get the necessary yaml deployment files. The files are located in the `deploy/kubernetes` [folder](https://github.com/packethost/csi-packet/tree/master/deploy/kubernetes).
Deploying the CSI driver will also require the creation of a `Secret`.
### Version
Recommended versions of Packet CSI based on your Kubernetes version:
* Packet CSI version v1.0.0 supports Kubernetes >=1.13.0
## Deployment
### Token
To run the Packet CSI, you need your Packet API key and project ID that your cluster is running in.
If you are already logged in, you can create one by clicking on your profile in the upper right then "API keys".
To get project ID click into the project that your cluster is under and select "project settings" from the header.
Under General you will see "Project ID". Once you have this information you will be able to fill in the config needed for the CCM.
#### Create config
Copy [deploy/template/secret.yaml](https://github.com/packethost/csi-packet/blob/master/deploy/template/secret.yaml) to releases/packet-cloud-config.yaml:
```bash
cp deploy/template/secret.yaml packet-cloud-config.yaml
```
Replace the placeholder in the copy with your token. When you're done, the packet-cloud-config.yaml should look something like this:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: packet-cloud-config
  namespace: kube-system
stringData:
  cloud-sa.json: |
    {
      "apiKey": "abc123abc123abc123",
      "projectID": "abc123abc123abc123"
    }
```
Then run:
```bash
kubectl apply -f packet-cloud-config.yaml
```
You can confirm that the secret was created in the `kube-system` with the following:
```bash
$ kubectl -n kube-system get secrets packet-cloud-config
NAME                  TYPE     DATA   AGE
packet-cloud-config   Opaque   1      2m
```
### CSI
You can apply the rest of the CSI by running:
```bash
kubectl -n kube-system apply -f deploy/kubernetes/setup.yaml
kubectl -n kube-system apply -f deploy/kubernetes/node.yaml
kubectl -n kube-system apply -f deploy/kubernetes/controller.yaml
```
or by using the Packet [Helm Chart for CSI](https://github.com/packet-labs/helm-charts/).
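Once the controller is running, volumes are requested through a `StorageClass` that references the Packet provisioner. A rough sketch follows — the provisioner string and class name here are assumptions, so check the example manifests in the csi-packet repository for the exact values:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: packet-storage        # illustrative name
provisioner: csi.packet.net   # assumed driver name; verify against the repo
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: demo-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: packet-storage
  resources:
    requests:
      storage: 10Gi
```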
Now that we are familiar with the environment, let's install Portworx! The first step is to deploy a dedicated KVDB in this environment.
The KVDB can be built-in / Portworx-managed for small clusters, or we can set up our own. For this lab, let's set up a single-node etcd cluster in this Kubernetes environment.
Note: A sample etcd pod definition file called px-etcd.yaml is created for you in the root home directory on the master node.
Update the YAML file with the ip address of node03 and create the deployment.
Once done, run the below command to install etcd.
`kubectl create -f px-etcd.yaml`{{execute}}
**Warning:** Do not change anything else in the px-etcd.yaml file provided other than what has been mentioned below.
# Name: px-etcd
# Status: Running
# Node: node03
`kubectl -n kube-system get pods px-etcd -o wide`{{execute}}
# Zitt-ecommerce api
A fictional e-commerce API built with Node.js and deployed on [heroku](https://)
## Built With...
- [Contentful](https://www.contentful.com/) - headless CMS use to create customer orders
- [Express Js](https://expressjs.com/) - Fast, unopinionated, minimalist web framework for Node js
- [Stripe](https://stripe.com/en-ca) - for credit card/payment processing
- [Mongodb Atlas](https://stripe.com/en-ca) - for storing customer data
<!-- ## License -->
<!-- This project is licensed under the MIT License - see the [LICENSE.md](LICENSE.md) file for details -->
---
title: Working with Dimensions
description: "You can use dimensions to categorize entries, for example by department or project, so that you can track and analyze data more easily."
documentationcenter:
author: bholtorf
ms.prod: dynamics-nav-2018
ms.topic: article
ms.devlang: na
ms.tgt_pltfrm: na
ms.workload: na
ms.search.keywords: analysis, history, track
ms.date: 06/14/2017
ms.author: bholtorf
ms.translationtype: HT
ms.sourcegitcommit: 4fefaef7380ac10836fcac404eea006f55d8556f
ms.openlocfilehash: f312a30686566cc5bf123b473c0d2b93d0fadd89
ms.contentlocale: de-at
ms.lasthandoff: 10/16/2017
---
# <a name="working-with-dimensions"></a>Working with Dimensions
To make analysis easier in documents such as sales orders, you can use dimensions. Dimensions are attributes and values that categorize entries so that you can track and analyze them. For example, you can set up dimensions to indicate which project or department an entry came from.
This lets you use dimensions instead of setting up separate general ledger accounts for each department and project, giving you a rich opportunity for analysis without creating a complicated chart of accounts. For more information, see [Business Intelligence](bi.md).
Another example is to set up a dimension named *Department* and use this dimension and a dimension value when you post sales documents. This also lets you use business intelligence tools to see which department sold the items.
The more dimensions you set up and use, the more detailed the reports you can base your business decisions on. For example, a single sales entry can include several pieces of dimension information, such as:
* The account the item sale was posted to
* Where the item was sold
* Who sold it
* The kind of customer who bought it
## <a name="analyzing-by-dimensions"></a>Analyzing by Dimensions
The Dimensions functionality plays an important role in business intelligence, such as when defining analysis views. For more information, see [How to: Analyze Data by Dimensions](bi-how-analyze-data-dimension.md).
> [!TIP]
> As a quick way to analyze transactional data by dimensions, you can filter totals in the chart of accounts and entries in all **Entries** windows by dimensions. Look for the **Set Dimension Filter** action.
## <a name="dimension-sets"></a>Dimension Sets
A dimension set is a unique combination of dimension values. It is stored as dimension set entries in the database. Each dimension set entry represents a single dimension value. The dimension set is identified by a common dimension set ID that is assigned to each dimension set entry that belongs to the set.
When you create a journal line, document header, or document line, you can specify a combination of dimension values. Instead of explicitly storing each dimension value in the database, a dimension set ID is assigned to the journal line, document header, or document line to specify the dimension set.
## <a name="setting-up-dimensions"></a>Setting Up Dimensions
You can define the dimensions and dimension values that you want to use to categorize journals and documents, such as sales orders and purchase orders. You set up dimensions in the **Dimensions** window, where you create one line for each dimension, such as *Project*, *Department*, *Area*, and *Salesperson*.
You also set up values for dimensions. For example, values could represent the departments of your company. Dimension values can be set up in a hierarchical structure similar to the chart of accounts, so that data can be broken down into different levels of granularity and subsets of dimension values can be totaled. You can define as many dimensions and dimension values as you need in your company, and you can set up an unlimited number of dimension values for each dimension.
You can set up several global and shortcut dimensions:
* **Global dimensions** are used as filters, for example in reports and batch jobs. You can use only two global dimensions, so choose dimensions that you use frequently.
* **Shortcut dimensions** are available as fields on journal lines and document lines. You can create up to six of them.
### <a name="setting-up-default-dimensions-for-customers-vendors-and-other-accounts"></a>Setting Up Default Dimensions for Customers, Vendors, and Other Accounts
You can set up a default dimension for a specific account. The dimension is copied to the journal or document when you enter the account number on the line, but you can change or delete the code on the line if necessary. You can also make a dimension required for posting an entry with a specific account.
### <a name="translating-the-names-of-dimensions"></a>Translating the Names of Dimensions
When you create a dimension, and especially a shortcut dimension, what you are actually creating is a custom field or column header. If your business is international, you can provide translations for the name of the dimension. Documents that include dimensions then use the translated name where applicable.
### <a name="example-of-dimension-setup"></a>Example of Dimension Setup
Let's say your company wants to track transactions based on organizational structure and geographic locations. You can set up two dimensions in the **Dimensions** window:
* **AREA**
* **DEPARTMENT**
| Code | Name | Code Caption | Filter Caption |
| --- | --- | --- | --- |
| AREA |Area |Area Code |Area Filter |
| DEPARTMENT |Department |Department Code |Department Filter |
For **AREA**, add the following dimension values:
| Code | Name | Dimension Value Type |
| --- | --- | --- |
| 10 |Americas |Begin-Total |
| 20 |North America |Standard |
| 30 |Pacific |Standard |
| 40 |South America |Standard |
| 50 |Americas, total |End-Total |
| 60 |Europe |Begin-Total |
| 70 |EU |Standard |
| 80 |Non-EU |Standard |
| 90 |Europe, total |End-Total |
To break down the two main geographic areas, Americas and Europe, add subcategories for regions as needed by indenting the dimension values. This lets you report on sales or expenses in the regions and get totals for the larger geographic areas. You can also choose countries or regions as your dimension values, or counties or cities, depending on your business.
> [!NOTE]
> To set up a hierarchy, the codes must be in alphabetical order. This includes the codes of the default dimension values in [!INCLUDE[d365fin](includes/d365fin_md.md)].
For **DEPARTMENT**, add the following dimension values:
| Code | Name | Dimension Value Type |
| --- | --- | --- |
| ADMIN |Administration |Standard |
| PROD |Production |Standard |
| SALES |Sales |Standard |
With this setup, you then add your two dimensions as the two global dimensions in the **General Ledger Setup** window. This means that global dimensions can be used as filters for general ledger entries in all reports, account schedules, and batch jobs. Both global dimensions are also available as shortcut dimensions on journal lines and document headers.
## <a name="using-dimensions"></a>Using Dimensions
In a document such as a sales order, you can add dimension information both for an individual document line and for the document itself. For example, in the **Sales Order** window, you can enter dimension values for the first two shortcut dimensions directly on the document, and you can add more dimension information by choosing the **Dimensions** button.
If you work in a journal instead, you can add dimension information in the same way if you have set up shortcut dimensions directly as fields on journal lines.
You can set up default dimensions for accounts or account types so that dimensions and dimension values are filled in automatically.
## <a name="see-also"></a>See Also
[Business Intelligence](bi.md)
[Finance](finance.md)
[How to: Analyze Data by Dimensions](bi-how-analyze-data-dimension.md)
[Working with [!INCLUDE[d365fin](includes/d365fin_md.md)]](ui-work-product.md)
<properties
pageTitle="Azure AD Connect sync: Configure filtering | Microsoft Azure"
description="Explains how to configure filtering in Azure AD Connect sync."
services="active-directory"
documentationCenter=""
authors="andkjell"
manager="femila"
editor=""/>
<tags
ms.service="active-directory"
ms.workload="identity"
ms.tgt_pltfrm="na"
ms.devlang="na"
ms.topic="article"
ms.date="09/13/2016"
ms.author="andkjell;markvi"/>
# Azure AD Connect sync: Configure Filtering
With filtering, you can control which objects should appear in Azure AD from your on-premises directory. The default configuration takes all objects in all domains in the configured forests. In general, this is the recommended configuration. End users using Office 365 workloads, such as Exchange Online and Skype for Business, benefit from a complete Global Address List so they can send email and call everyone. With the default configuration, they would get the same experience they would with an on-premises implementation of Exchange or Lync.
In some cases, it is required to make some changes to the default configuration. Here are some examples:
- You plan to use the [multi-Azure AD-directory topology](active-directory-aadconnect-topologies.md#each-object-only-once-in-an-azure-ad-directory). Then you need to apply a filter to control which object should be synchronized to a particular Azure AD directory.
- You run a pilot for Azure or Office 365 and you only want a subset of users in Azure AD. In the small pilot, it is not important to have a complete Global Address List to demonstrate the functionality.
- You have many service accounts and other non-personal accounts you do not want in Azure AD.
- For compliance reasons you do not delete any user accounts on-premises. You only disable them. But in Azure AD you only want active accounts to be present.
This article covers how to configure the different filtering methods.
> [AZURE.IMPORTANT]Microsoft does not support modification or operation of the Azure AD Connect sync outside of those actions formally documented. Any of these actions may result in an inconsistent or unsupported state of Azure AD Connect sync and as a result, Microsoft cannot provide technical support for such deployments.
## Basics and important notes
In Azure AD Connect sync, you can enable filtering at any time. If you start with a default configuration of directory synchronization and then configure filtering, the objects that are filtered out are no longer synchronized to Azure AD. As a result of this change, any objects in Azure AD that were previously synchronized but were then filtered are deleted in Azure AD.
Before you start making changes to filtering, make sure you [disable the scheduled task](#disable-scheduled-task) so you do not accidentally export changes that you have not yet verified to be correct.
Since filtering can remove many objects at the same time, you want to make sure your new filters are correct before you start exporting any changes to Azure AD. After you have completed the configuration steps, it is strongly recommended that you follow the [verification steps](#apply-and-verify-changes) before you export and make changes to Azure AD.
To protect you from deleting many objects by accident, the feature [prevent accidental deletes](active-directory-aadconnectsync-feature-prevent-accidental-deletes.md) is on by default. If you delete many objects due to filtering (500 by default), you need to follow the steps in this article to allow the deletes to go through to Azure AD.
If you use a build from before November 2015 ([1.0.9125](active-directory-aadconnect-version-history.md#1091250)), make a change to the filter configuration, and use password synchronization, then you need to trigger a full sync of all passwords after you have completed the configuration. For steps on how to trigger a password full sync, see [Trigger a full sync of all passwords](active-directory-aadconnectsync-implement-password-synchronization.md#trigger-a-full-sync-of-all-passwords). If you are on build 1.0.9125 or later, the regular **full synchronization** action also calculates whether passwords should be synchronized, and this extra step is no longer required.
If **user** objects were inadvertently deleted in Azure AD because of a filtering error, you can recreate the user objects in Azure AD by removing your filtering configurations and then synchronize your directories again. This action restores the users from the recycle bin in Azure AD. However, you cannot undelete other object types. For example, if you accidentally delete a security group and it was used to ACL a resource, the group and its ACLs cannot be recovered.
Azure AD Connect only deletes objects that it has at some point considered to be in scope. If there are objects in Azure AD that were created by another sync engine and these objects are not in scope, adding filtering does not remove them. For example, if you start with a DirSync server that created a complete copy of your entire directory in Azure AD, and you install a new Azure AD Connect sync server in parallel with filtering enabled from the beginning, Azure AD Connect does not remove the extra objects created by DirSync.
The filtering configuration is retained when you install or upgrade to a newer version of Azure AD Connect. It is always a best practice to verify that the configuration was not inadvertently changed after an upgrade to a newer version before running the first synchronization cycle.
If you have more than one forest, then the filtering configurations described in this topic must be applied to every forest (assuming you want the same configuration for all of them).
### Disable scheduled task
To disable the built-in scheduler that triggers a synchronization cycle every 30 minutes, follow these steps:
1. Go to a PowerShell prompt.
2. Run `Set-ADSyncScheduler -SyncCycleEnabled $False` to disable the scheduler.
3. Make the changes as documented in this topic.
4. Run `Set-ADSyncScheduler -SyncCycleEnabled $True` to enable the scheduler again.
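Taken together, the steps above follow a simple pause-change-resume pattern. The following sketch assumes it runs on the Azure AD Connect server, where the ADSync PowerShell module that provides `Set-ADSyncScheduler` is available:

```powershell
# Pause the built-in scheduler so that no synchronization cycle runs
# while you change the filtering configuration.
Set-ADSyncScheduler -SyncCycleEnabled $False

# ... make and verify your filtering changes here ...

# Optionally, confirm the current scheduler settings before resuming.
Get-ADSyncScheduler

# Resume the scheduler after the changes have been verified and exported.
Set-ADSyncScheduler -SyncCycleEnabled $True
```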
**If you use an Azure AD Connect build before 1.1.105.0**
To disable the scheduled task that triggers a synchronization cycle every 3 hours, follow these steps:
1. Start **Task Scheduler** from the start menu.
2. Directly under **Task Scheduler Library**, find the task named **Azure AD Sync Scheduler**, right-click, and select **Disable**.

3. You can now make configuration changes and run the sync engine manually from the **synchronization service manager** console.
After you have completed all your filtering changes, don't forget to come back and **Enable** the task again.
## Filtering Options
The following filtering configuration types can be applied to the Directory Synchronization tool:
- [**Group based**](active-directory-aadconnect-get-started-custom.md#sync-filtering-based-on-groups): Filtering based on a single group can only be configured on initial install using the installation wizard. It is not further covered in this topic.
- [**Domain-based**](#domain-based-filtering): This option enables you to select which domains that synchronize to Azure AD. It also allows you to add and remove domains from the sync engine configuration if you make changes to your on-premises infrastructure after you installed Azure AD Connect sync.
- [**Organizational-Unit–based**](#organizational-unitbased-filtering): This filtering option enables you to select which OUs synchronize to Azure AD. This option is for all object types in selected OUs.
- [**Attribute–based**](#attribute-based-filtering): This option allows you to filter objects based on attribute values on the objects. You can also have different filters for different object types.
You can use multiple filtering options at the same time. For example, you can use OU-based filtering to only include objects in one OU and at the same time attribute-based filtering to filter the objects further. When you use multiple filtering methods, the filters use a logical AND between the filters.
## Domain-based filtering
This section provides you with the steps to configure your domain filter. If you have added or removed domains in your forest after you have installed Azure AD Connect, you also have to update the filtering configuration.
The preferred way to change domain-based filtering is to run the installation wizard and change [domain and OU filtering](active-directory-aadconnect-get-started-custom.md#domain-and-ou-filtering). The installation wizard automates all the tasks documented in this topic.
You should follow these steps only if you are for some reason unable to run the installation wizard.
Domain-based filtering configuration consists of these steps:
- [Select the domains](#select-domains-to-be-synchronized) that should be included in the synchronization.
- For each added and removed domain, adjust the [run profiles](#update-run-profiles).
- [Apply and verify changes](#apply-and-verify-changes).
### Select domains to be synchronized
**To set the domain filter, do the following steps:**
1. Sign in to the server that is running Azure AD Connect sync by using an account that is a member of the **ADSyncAdmins** security group.
2. Start **Synchronization Service** from the start menu.
3. Select **Connectors** and in the **Connectors** list, select the Connector with the type **Active Directory Domain Services**. From **Actions**, select **Properties**.

4. Click **Configure Directory Partitions**.
5. In the **Select directory partitions** list, select and unselect the domains as needed. Verify that only the partitions you want to synchronize are selected.

If you have changed your on-premises AD infrastructure and added or removed domains from the forest, then click the **Refresh** button to get an updated list. When you refresh, you are asked for credentials. Provide any credentials with read access to your on-premises Active Directory. It does not have to be the user that is pre-populated in the dialog box.

6. When you are done, close the **Properties** dialog by clicking **OK**. If you have removed domains from the forest, a message pops up saying that a domain was removed and that the configuration will be cleaned up.
7. Continue to adjust the [run profiles](#update-run-profiles).
### Update Run Profiles
If you have updated your domain filter, you also need to update the run profiles.
1. In the **Connectors** list, make sure the Connector you changed in the previous step is selected. From **Actions**, select **Configure Run Profiles**.

You need to adjust the following profiles:
- Full Import
- Full Synchronization
- Delta Import
- Delta Synchronization
- Export
For each of the five profiles, take the following steps for each **added** domain:
1. Select the run profile and click **New Step**.
2. On the **Configure Step** page, in the **Type** drop-down, select the step type with the same name as the profile you are configuring. Then click **Next**.

3. On the **Connector Configuration** page, in the **Partition** drop-down, select the name of the domain you have added to your domain filter.

4. To close the **Configure Run Profile** dialog, click **Finish**.
For each of the five profiles, take the following steps for each **removed** domain:
1. Select the run profile.
2. If the **Value** of the **Partition** attribute is a GUID, select the run step and click **Delete Step**.

The result should be that each domain you want to synchronize is listed as a step in each run profile.
To close the **Configure Run Profiles** dialog, click **OK**.
- To complete the configuration, [Apply and verify changes](#apply-and-verify-changes).
## Organizational-unit–based filtering
The preferred way to change OU-based filtering is to run the installation wizard and change [domain and OU filtering](active-directory-aadconnect-get-started-custom.md#domain-and-ou-filtering). The installation wizard automates all the tasks documented in this topic.
You should follow these steps only if you are for some reason unable to run the installation wizard.
**To configure organizational-unit–based filtering, do the following steps:**
1. Sign in to the server that is running Azure AD Connect sync by using an account that is a member of the **ADSyncAdmins** security group.
2. Start **Synchronization Service** from the start menu.
3. Select **Connectors** and in the **Connectors** list, select the Connector with the type **Active Directory Domain Services**. From **Actions**, select **Properties**.

4. Click **Configure Directory Partitions**, select the domain you want to configure, and then click **Containers**.
5. When prompted, provide any credentials with read access to your on-premises Active Directory. It does not have to be the user that is pre-populated in the dialog box.
6. In the **Select Containers** dialog box, clear the OUs that you don’t want to synchronize with the cloud directory, and then click **OK**.

- The **Computers** container should be selected for your Windows 10 computers to be successfully synchronized to Azure AD. If your domain-joined computers are located in other OUs, make sure those are selected.
- The **ForeignSecurityPrincipals** container should be selected if you have multiple forests with trusts. This container allows cross-forest security group membership to be resolved.
- The **RegisteredDevices** OU should be selected if you have enabled the device writeback feature. If you use another writeback feature, such as group writeback, make sure these locations are selected.
- Select any other OU where Users, iNetOrgPersons, Groups, Contacts, and Computers are located. In the picture, all these are located in the ManagedObjects OU.
7. When you are done, close the **Properties** dialog by clicking **OK**.
8. To complete the configuration, [Apply and verify changes](#apply-and-verify-changes).
## Attribute-based filtering
Make sure you are on the November 2015 ([1.0.9125](active-directory-aadconnect-version-history.md#1091250)) or later build for these steps to work.
Attribute based filtering is the most flexible way to filter objects. You can use the power of [declarative provisioning](active-directory-aadconnectsync-understanding-declarative-provisioning.md) to control almost every aspect of when an object should be synchronized to Azure AD.
Filtering can be applied both on the [inbound](#inbound-filtering) from Active Directory to the metaverse and [outbound](#outbound-filtering) from the metaverse to Azure AD. It is recommended to apply filtering on inbound since that is easiest to maintain. Outbound filtering should only be used if it is required to join objects from more than one forest before the evaluation can take place.
### Inbound filtering
Inbound filtering uses the default configuration, where objects going to Azure AD must not have the metaverse attribute **cloudFiltered** set to a value in order to be synchronized. If this attribute's value is set to **True**, then the object is not synchronized. By design, it should not be set to **False**. To make sure other rules have the ability to contribute a value, this attribute is only supposed to have the values **True** or **NULL** (absent).
In the inbound filtering, you use the power of **scope** to determine which objects should or should not be synchronized. This is where you make adjustments to fit your own organization's requirements. The scope module has **group** and **clause** to determine if a sync rule should be in scope. A **group** contains one or many **clause**. There is a logical AND between multiple clauses and a logical OR between multiple groups.
Let us look at an example:

This should be read as **(department = IT) OR (department = Sales AND c = US)**.
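To make the AND/OR semantics concrete, the evaluation of that example can be sketched as follows. This is an illustration only; the actual evaluation happens inside the sync engine's scope module, not in your own code, and the sample user is hypothetical:

```powershell
# Hypothetical user with the attributes referenced by the scoping filter.
$user = @{ department = 'Sales'; c = 'US' }

# Clauses within a group are combined with a logical AND.
$group1 = ($user.department -eq 'IT')                             # (department = IT)
$group2 = ($user.department -eq 'Sales') -and ($user.c -eq 'US')  # (department = Sales AND c = US)

# Groups are combined with a logical OR; the rule is in scope if any group matches.
$inScope = $group1 -or $group2    # $true for this sample user
```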
In the samples and steps below, you use the user object as an example, but you can use this for all object types.
In the samples below, the precedence value starts at 500. This value ensures that these rules are evaluated after the out-of-box rules (lower precedence, higher numeric value).
#### Negative filtering, "do not sync these"
In the following example, you filter out (do not synchronize) all users where **extensionAttribute15** has the value **NoSync**.
1. Sign in to the server that is running Azure AD Connect sync by using an account that is a member of the **ADSyncAdmins** security group.
2. Start **Synchronization Rules Editor** from the start menu.
3. Make sure **Inbound** is selected and click **Add New Rule**.
4. Give the rule a descriptive name, such as "*In from AD – User DoNotSyncFilter*". Select the correct forest, **User** as the **CS object type**, and **Person** as the **MV object type**. As **Link Type**, select **Join** and in precedence type a value currently not used by another Synchronization Rule (for example 500), and then click **Next**.

5. In **Scoping filter**, click **Add Group**, click **Add Clause**, and in attribute select **ExtensionAttribute15**. Make sure the Operator is set to **EQUAL** and type the value **NoSync** in the Value box. Click **Next**.

6. Leave the **Join** rules empty, and then click **Next**.
7. Click **Add Transformation**, select the **FlowType** to **Constant**, select the Target Attribute **cloudFiltered** and in the Source text box, type **True**. Click **Add** to save the rule.

8. To complete the configuration, [Apply and verify changes](#apply-and-verify-changes).
#### Positive filtering, "only sync these"
Expressing positive filtering can be more challenging because you also have to consider objects that are not obvious candidates for synchronization, such as conference rooms.
The positive filtering option requires two sync rules: one (or several) with the correct scope of objects to synchronize, and a second catch-all sync rule that filters out all objects that have not yet been identified as objects that should be synchronized.
In the following example, you only synchronize user objects where the department attribute has the value **Sales**.
1. Sign in to the server that is running Azure AD Connect sync by using an account that is a member of the **ADSyncAdmins** security group.
2. Start **Synchronization Rules Editor** from the start menu.
3. Make sure **Inbound** is selected and click **Add New Rule**.
4. Give the rule a descriptive name, such as "*In from AD – User Sales sync*". Select the correct forest, **User** as the **CS object type**, and **Person** as the **MV object type**. As **Link Type**, select **Join** and in precedence type a value currently not used by another Synchronization Rule (for example 501), and then click **Next**.

5. In **Scoping filter**, click **Add Group**, click **Add Clause**, and in attribute select **department**. Make sure the Operator is set to **EQUAL** and type the value **Sales** in the Value box. Click **Next**.

6. Leave the **Join** rules empty, and then click **Next**.
7. Click **Add Transformation**, select the **FlowType** to **Constant**, select the Target Attribute **cloudFiltered** and in the Source text box, type **False**. Click **Add** to save the rule.

This is a special case where you set cloudFiltered explicitly to False.
We now have to create the catch-all sync rule.
8. Give the rule a descriptive name, such as "*In from AD – User Catch-all filter*". Select the correct forest, **User** as the **CS object type**, and **Person** as the **MV object type**. As **Link Type**, select **Join** and in precedence type a value currently not used by another Synchronization Rule (for example 600). This precedence value is higher (lower precedence) than the previous sync rule, but it also leaves some room so you can add more filtering sync rules later when you want to start synchronizing additional departments. Click **Next**.

9. Leave **Scoping filter** empty, and click **Next**. An empty filter indicates the rule should be applied to all objects.
10. Leave the **Join** rules empty, and then click **Next**.
11. Click **Add Transformation**, select the **FlowType** to **Constant**, select the Target Attribute **cloudFiltered** and in the Source text box, type **True**. Click **Add** to save the rule.

12. To complete the configuration, [Apply and verify changes](#apply-and-verify-changes).
If you need to, you can create more rules of the first type to include more objects in your synchronization.
### Outbound filtering
In some cases, it is necessary to do the filtering only after the objects have joined in the metaverse. For example, it might be required to look at the mail attribute from the resource forest and the userPrincipalName attribute from the account forest to determine whether an object should be synchronized. In these cases, you create the filtering on the outbound rule.
In this example, you change the filtering so only users where both mail and userPrincipalName end with @contoso.com are synchronized:
1. Sign in to the server that is running Azure AD Connect sync by using an account that is a member of the **ADSyncAdmins** security group.
2. Start **Synchronization Rules Editor** from the start menu.
3. Under **Rules Type**, click **Outbound**.
4. Find the rule named **Out to AAD – User Join SOAInAD**. Click **Edit**.
5. In the pop-up, answer **Yes** to create a copy of the rule.
6. On the **Description** page, change precedence to an unused value, for example 50.
7. Click **Scoping filter** on the left-hand navigation. Click **Add clause**, in Attribute select **mail**, in Operator select **ENDSWITH**, and in Value type **@contoso.com**. Click **Add clause**, in Attribute select **userPrincipalName**, in Operator select **ENDSWITH**, and in Value type **@contoso.com**.
8. Click **Save**.
9. To complete the configuration, [Apply and verify changes](#apply-and-verify-changes).
## Apply and verify changes
After you have made your configuration changes, these must be applied to the objects already present in the system. It could also be that objects not currently in the sync engine should be processed and the sync engine needs to read the source system again to verify its content.
If you changed configuration using **domain** or **organizational-unit** filtering, then you need to do **Full import** followed by **Delta synchronization**.
If you changed configuration using **attribute** filtering, then you need to do **Full synchronization**.
Take the following steps:
1. Start **Synchronization Service** from the start menu.
2. Select **Connectors** and in the **Connectors** list, select the Connector where you made a configuration change earlier. From **Actions**, select **Run**.

3. In the **Run profiles**, select the operation mentioned in the previous section. If you need to run two actions, run the second after the first one has completed (the **State** column is **Idle** for the selected Connector).
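If you prefer to script these runs instead of using the UI, the ADSync module on the server also exposes `Invoke-ADSyncRunProfile`. A sketch with a placeholder connector name; substitute your own Connector:

```powershell
# Replace 'contoso.com' with the name of your Active Directory Connector.
# After a domain or OU filtering change:
Invoke-ADSyncRunProfile -ConnectorName 'contoso.com' -RunProfileName 'Full Import'
Invoke-ADSyncRunProfile -ConnectorName 'contoso.com' -RunProfileName 'Delta Synchronization'

# After an attribute filtering change:
Invoke-ADSyncRunProfile -ConnectorName 'contoso.com' -RunProfileName 'Full Synchronization'
```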
After the synchronization, all changes are staged to be exported. Before you actually make the changes in Azure AD, you want to verify that all these changes are correct.
1. Start a cmd prompt and go to `%ProgramFiles%\Microsoft Azure AD Sync\bin`
2. Run: `csexport "Name of Connector" %temp%\export.xml /f:x`
The name of the Connector can be found in Synchronization Service. It has a name similar to "contoso.com – AAD" for Azure AD.
3. Run: `CSExportAnalyzer %temp%\export.xml > %temp%\export.csv`
4. You now have a file in %temp% named export.csv that can be examined in Microsoft Excel. This file contains all changes that are about to be exported.
5. Make necessary changes to the data or configuration and run these steps again (Import, Synchronize, and Verify) until the changes that are about to be exported are expected.
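The verification loop can also be scripted. A sketch, assuming an example connector name ("contoso.com - AAD" is a placeholder; use the name shown in Synchronization Service):

```powershell
# Run on the Azure AD Connect server.
Set-Location "$env:ProgramFiles\Microsoft Azure AD Sync\bin"

# Export the staged changes for the Azure AD Connector to an XML file.
.\csexport.exe "contoso.com - AAD" "$env:TEMP\export.xml" /f:x

# Convert the XML into a CSV that is easy to review in Excel.
.\CSExportAnalyzer.exe "$env:TEMP\export.xml" > "$env:TEMP\export.csv"

# Open the CSV containing the pending export changes.
Invoke-Item "$env:TEMP\export.csv"
```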
When you are satisfied, export the changes to Azure AD.
1. Select **Connectors** and in the **Connectors** list, select the Azure AD Connector. From **Actions**, select **Run**.
2. In the **Run profiles**, select **Export**.
3. If your configuration changes delete many objects, then you see an error on the export when the number is more than the configured threshold (by default 500). If you see this error, then you need to temporarily disable the feature
[prevent accidental deletes](active-directory-aadconnectsync-feature-prevent-accidental-deletes.md).
Now it is time to enable the scheduler again.
1. Start **Task Scheduler** from the start menu.
2. Directly under **Task Scheduler Library**, find the task named **Azure AD Sync Scheduler**, right-click, and select **Enable**.
## Next steps
Learn more about the [Azure AD Connect sync](active-directory-aadconnectsync-whatis.md) configuration.
Learn more about [Integrating your on-premises identities with Azure Active Directory](active-directory-aadconnect.md).
============
This is our collection of PHP-Snippets, they are incredibly easy to use.
You can either include just the ones you want like normal or you can include "include_functions.php" and it will include the rest for you.
Let's look at an example.
We have the following string.
    $my_string = "Kakadua means Cockatoo in Swedish";
We want to check if the string contains the word Cockatoo by using our function string_contain($string, $substring)
    <?php
    include('PHP-Snippets/include_functions.php');

    $my_string = "Kakadua means Cockatoo in Swedish";

    if (string_contain($my_string, 'Cockatoo')) {
        echo "The string contains the word Cockatoo";
    } else {
        echo "The string does not contain the word Cockatoo";
    }
    ?>
All scripts have PHPDoc written for apigen
http://www.apigen.org/
2ca92339669cd0e09d94eb4a4a19a562770baff9 | 70 | md | Markdown | README.md | SergeiKulishov/RemonlineConnection | 366c717e17768952c1dbdf056afc877aae06c476 | [
"MIT"
] | null | null | null | README.md | SergeiKulishov/RemonlineConnection | 366c717e17768952c1dbdf056afc877aae06c476 | [
"MIT"
] | null | null | null | README.md | SergeiKulishov/RemonlineConnection | 366c717e17768952c1dbdf056afc877aae06c476 | [
"MIT"
] | null | null | null | # RemonlineConnection
Library for making access to remonline.ru API
| 23.333333 | 47 | 0.814286 | eng_Latn | 0.580918 |
2ca9375df814d5b158f361d3d211aa6cb95df8c5 | 1,808 | md | Markdown | blog/stories/2020/04/17/a141010.md | scripting/Scripting-News | 348c428614b115fe390513defc285aceeedd4f09 | [
"MIT"
] | 93 | 2016-06-02T15:40:14.000Z | 2022-02-02T20:02:08.000Z | blog/stories/2020/04/17/a141010.md | scripting/Scripting-News | 348c428614b115fe390513defc285aceeedd4f09 | [
"MIT"
] | 231 | 2016-06-02T15:21:23.000Z | 2022-02-18T20:48:20.000Z | blog/stories/2020/04/17/a141010.md | scripting/Scripting-News | 348c428614b115fe390513defc285aceeedd4f09 | [
"MIT"
] | 11 | 2017-06-27T11:58:01.000Z | 2021-06-21T00:55:07.000Z | # Glitch, day 3
Okay it took a lot of hacking, discomfort, trial and error and confusion, because the Glitch model is so different from the <a href="https://www.digitalocean.com/products/droplets/">one</a> I'm used to, but I did finally get a simple editor app running on a Glitch server.
It's a <a href="http://macwrite.org/glitch/">variant</a> of MacWrite. Hooks up to an instance of "nodeStorage" running on the Glitch server, which in turn connects to Twitter for identity, and it stores the files in the folder Glitch gives me. This is not meant to be something useful, just a testbed to learn with.
Screen shot: Simple text editor running on Glitch.
One thing I've had trouble with is knowing when it's running what version of my app because they appear to relaunch it on every editing change. I understand they do this for newbie programmers, so it's something they don't have to learn to get to Hello World. I might do it that way. But as a programmer for many years, I would like to control that. Ideally from the terminal. Run the app the way I do on my Linux server. <code>node appname.js</code>.
A little prior art, <a href="https://en.wikipedia.org/wiki/Turbo_Pascal">Turbo Pascal</a> was very explicit in when you ran what version of the app. You had to stop the app manually, and then restart it after a change. I was already a professional programmer at the time, so I can't vouch for how that worked for newbies, but a lot of people learned to program in TP, so it worked for some of them. In other words, not sure the way Glitch does it is even the right way to go for beginners. You want there to be as little magic as possible, imho.
Not sure what my next experiment will be.
The thread on GitHub <a href="https://github.com/scripting/Scripting-News/issues/168">continues</a>.
title: Was ist der Sprachübersetzungsdienst?
titleSuffix: Azure Cognitive Services
description: Verwenden Sie die Sprachübersetzungsdienst-API, um Ihre Anwendungen mit Sprache-in-Sprache- und Sprache-in-Text-Übersetzungen auszustatten.
services: cognitive-services
author: Jann-Skotdal
manager: nitinme
ms.service: cognitive-services
ms.subservice: translator-speech
ms.topic: overview
ms.date: 3/5/2018
ms.author: v-jansko
ROBOTS: NOINDEX,NOFOLLOW
ms.openlocfilehash: 24014bb06a779c214f18f966dfb1d26d61adee8d
ms.sourcegitcommit: 8ca6cbe08fa1ea3e5cdcd46c217cfdf17f7ca5a7
ms.translationtype: HT
ms.contentlocale: de-DE
ms.lasthandoff: 02/22/2019
ms.locfileid: "56674853"
---
# <a name="what-is-translator-speech-api"></a>Was ist die Sprachübersetzungs-API?
[!INCLUDE [Deprecation note](../../../includes/cognitive-services-translator-speech-deprecation-note.md)]
Mit der Sprachübersetzungs-API können Sie Anwendungen, Tools und Lösungen, die eine Sprachübersetzung für mehrere Sprachen erfordern, unabhängig vom Zielbetriebssystem oder den Entwicklungssprachen mit umfassenden Sprachübersetzungen in Echtzeit ausstatten. Die API kann sowohl für Sprache-zu-Sprache-Übersetzungen als auch für Sprache-zu-Text-Übersetzungen verwendet werden.
Die Textübersetzungs-API ist ein Azure-Dienst und gehört zur [API-Sammlung von Azure Cognitive Services](https://docs.microsoft.com/azure/). Hierbei handelt es sich um eine Sammlung von Machine Learning- und KI-Algorithmen in der Cloud, die Sie direkt in Ihren Entwicklungsprojekten verwenden können.
Mit der Sprachübersetzungs-API streamen Clientanwendungen Audio an den Dienst und empfangen einen Stream mit text- und audiobasierten Ergebnissen. Diese umfassen den erkannten Text in der Ausgangssprache und die entsprechende Übersetzung in der Zielsprache. Zur Generierung der Textergebnisse wird auf den eingehenden Audiostream eine auf neuronalen Netzwerken basierende automatische Spracherkennung (Automatic Speech Recognition, ASR) angewendet. Eine unformatierte ASR-Ausgabe wird mithilfe einer neuen Technik namens TrueText weiter verbessert, um die Benutzerabsicht noch besser zu erfassen. So entfernt TrueText beispielsweise Elemente, die den Textfluss stören (etwa „Hmm“ und Husten), sowie Wortwiederholungen und sorgt für eine ordnungsgemäße Interpunktion und Großschreibung. Es besteht auch die Möglichkeit, anstößige Ausdrücke zu maskieren oder auszuschließen. Die Erkennungs- und Übersetzungsengines werden speziell für die Verarbeitung von Konversationen trainiert.
Der Sprachübersetzungsdienst verwendet die Erkennung von Stille, um das Ende einer Äußerung zu bestimmen. Nach einer Sprechpause gibt der Dienst mittels Streaming ein Endergebnis der abgeschlossenen Äußerung zurück. Der Dienst kann auch Teilergebnisse zurückgeben, die Zwischeninformationen zu Erkennungen und Übersetzungen einer noch nicht abgeschlossenen Äußerung liefern.
Bei Sprache-zu-Sprache-Übersetzungen bietet der Dienst die Möglichkeit, Sprache aus dem gesprochenen Text in den Zielsprachen zu synthetisieren (Text-to-Speech). Das Audio der Sprachsynthese wird im vom Client angegebenen Format erstellt. Verfügbare Formate sind WAV und MP3.
Die Sprachübersetzungs-API verwendet für die Bereitstellung eines Vollduplex-Kommunikationskanals zwischen dem Client und dem Server das WebSocket-Protokoll.
## <a name="about-microsoft-translator"></a>Informationen zu Microsoft Translator
Microsoft Translator ist ein cloudbasierter Übersetzungsdienst. Das Herzstück dieses Diensts bilden die [Textübersetzungs-API](https://www.microsoft.com/en-us/translator/translatorapi.aspx) und die Sprachübersetzungs-API, die in verschiedensten Produkten und Diensten von Microsoft zum Einsatz kommen und in Anwendungen und Workflows von Tausenden von Unternehmen auf der ganzen Welt genutzt werden, die mit ihren Inhalten ein globales Publikum erreichen möchten.
Weitere Informationen zum Microsoft Translator-Dienst finden Sie [hier](https://www.microsoft.com/en-us/translator/home.aspx).
## <a name="microsoft-translator-neural-machine-translation-nmt"></a>Neuronale maschinelle Übersetzungen (NMT) von Microsoft Translator
Die Sprachübersetzungs-API verwendet sowohl die ältere statistische Maschinenübersetzung (Statistical Machine Translation, SMT) als auch die neuere neuronale maschinelle Übersetzung (Neural Machine Translation, NMT), um Übersetzungen bereitzustellen.
Die Leistung der statistischen Maschinenübersetzung hat ihren Zenit erreicht: Die Übersetzungsqualität lässt sich bei generischen Systemen mit SMT nicht mehr nennenswert verbessern. Dafür ist eine neue Übersetzungstechnologie auf dem Vormarsch, die auf künstlicher Intelligenz und neuronale Netzwerken (NN) basiert.
Im Vergleich zu SMT liefert NMT bessere Übersetzungen – nicht nur im Hinblick auf die grundsätzliche Übersetzungsqualität, sondern auch im Hinblick auf Textfluss und Natürlichkeit.
Der Hauptgrund für diesen Textfluss besteht darin, dass NMT bei der Übersetzung von Wörtern den gesamten Kontext eines Satzes berücksichtigt. SMT berücksichtigt dagegen nur den unmittelbaren Kontext weniger Wörter vor und nach jedem Wort.
NMT-Modelle sind das Herzstück der API und für Endbenutzer nicht sichtbar. Sie machen sich einzig durch Folgendes bemerkbar:
* Höhere Übersetzungsqualität – insbesondere für Sprachen wie Chinesisch, Japanisch und Arabisch
* Inkompatibilität mit den vorhandenen Hub-Anpassungsfeatures (zur Verwendung mit der Textübersetzungs-API von Microsoft)
Alle unterstützten Sprachen für die Sprachübersetzung basieren auf NMT. Daher kommt bei allen Sprache-zu-Sprache-Übersetzungen NMT zum Einsatz.
Bei Sprache-zu-Text-Übersetzungen kann je nach Sprachpaar eine Kombination aus SMT und NMT verwendet werden. Wenn die Zielsprache von NMT unterstützt wird, wird die gesamte Übersetzung über NMT abgewickelt. Wenn die Zielsprache nicht von NMT unterstützt wird, wird für die Übersetzung eine Kombination aus NMT und SMT mit Englisch als „Pivot“ zwischen den beiden Sprachen verwendet.
Die unterstützten Sprachen finden Sie auf [Microsoft.com](https://www.microsoft.com/en-us/translator/languages.aspx).
Weitere Informationen zur Funktionsweise von NMT finden Sie [hier](https://www.microsoft.com/en-us/translator/mt.aspx#nnt).
## <a name="next-steps"></a>Nächste Schritte
> [!div class="nextstepaction"]
> [Registrieren](translator-speech-how-to-signup.md)
> [!div class="nextstepaction"]
> [Programmieren](quickstarts/csharp.md)
## <a name="see-also"></a>Weitere Informationen
- [Dokumentationsseite zu Cognitive Services](https://docs.microsoft.com/azure/)
- [Produktseite zu Cognitive Services](https://azure.microsoft.com/services/cognitive-services/)
- [Lösungs- und Preisinformationen](https://www.microsoft.com/en-us/translator/home.aspx)
| 91.635135 | 979 | 0.831146 | deu_Latn | 0.995424 |
2caa05e4d34a9f2f80e4969703d80768c4151d91 | 1,389 | md | Markdown | README.md | bertrandmartel/external-ip | 74da75a922dacb898ca2163684614c09b153c5cd | [
"MIT"
] | null | null | null | README.md | bertrandmartel/external-ip | 74da75a922dacb898ca2163684614c09b153c5cd | [
"MIT"
] | null | null | null | README.md | bertrandmartel/external-ip | 74da75a922dacb898ca2163684614c09b153c5cd | [
"MIT"
] | null | null | null | # API exposing inbound/outbound IP
[](https://github.com/bertrandmartel/external-ip/actions?workflow=build%20and%20deploy)
[](https://goreportcard.com/report/github.com/bertrandmartel/external-ip)
[](https://hub.docker.com/r/bertrandmartel/external-ip)
[](LICENSE.md)
A small server written in Go exposing a single API that displays the inbound & outbound IP
* The outbound IP(s) are retrieved using the [ipify API](https://www.ipify.org/)
* The inbound IP(s) are retrieved using the [Google DNS API](https://dns.google.com/)
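As an illustrative sketch (not code from this repo, which is written in Go), the inbound lookup boils down to parsing the JSON that the Google DNS API returns, where A records appear as `type: 1` entries in an `Answer` array; the sample response below is a hypothetical, trimmed example:

```python
import json

def inbound_ips(dns_json):
    """Extract A-record IPs from a Google DNS JSON API response."""
    payload = json.loads(dns_json)
    # Type 1 entries are A records; other record types (CNAME, etc.) are skipped.
    return [a["data"] for a in payload.get("Answer", []) if a.get("type") == 1]

# Example response, trimmed to the fields this sketch actually uses.
sample = '{"Status": 0, "Answer": [{"name": "example.com.", "type": 1, "TTL": 60, "data": "93.184.216.34"}]}'
print(inbound_ips(sample))  # → ['93.184.216.34']
```

A response with no `Answer` array (e.g. a lookup failure) simply yields an empty list.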
Environment variables:
| name | description |
|----------|-------------|
| PORT | server port |
| HOSTNAME | hostname to check inbound ip |
## Using Docker
* DockerHub
```
docker run -p 4242:4242 -e PORT=4242 -e HOSTNAME=example.com -it bertrandmartel/external-ip
```
* locally
```
docker build . -t external-ip
docker run -p 4242:4242 -e PORT=4242 -e HOSTNAME=example.com -it external-ip
```
## Using Go
```
go install
go run ./main.go
```
or
```
go install
go build
./external-ip
```
## Dependencies
* [echo](https://echo.labstack.com/) | 26.711538 | 190 | 0.711303 | yue_Hant | 0.471494 |
2caa0a9001ad5c5a8e951657eb46ae7631d702c9 | 2,091 | md | Markdown | README.md | Robbie-Bridgwater/Professional-ReadME-generator | 821a37da58ed3b216529e7b5ac73401438643b48 | [
"MIT"
] | null | null | null | README.md | Robbie-Bridgwater/Professional-ReadME-generator | 821a37da58ed3b216529e7b5ac73401438643b48 | [
"MIT"
] | null | null | null | README.md | Robbie-Bridgwater/Professional-ReadME-generator | 821a37da58ed3b216529e7b5ac73401438643b48 | [
"MIT"
] | 1 | 2021-02-28T09:57:12.000Z | 2021-02-28T09:57:12.000Z | # Professional-README-generator
A simple command line application that allows you to generate a README.md file.
## Contents
Section | Description
------------ | -------------
[Deployment](#Walk-Through) | Link to a Video Walk-Through
[Technologies](#Technologies) | Technologies Used
[Installation](#Installation) | Installation Information
[Usage](#Usage) | How to use the application
[Screenshots](#Screenshots) | Screenshots of the deployed application
[Licence](#licence) | Licence for the source code
[Questions](#Questions?) | Where you can reach me
## Walk-Through
OPEN VIDEO WALK-THROUGH ---> [HERE](https://drive.google.com/file/d/1b4cG3TcNyJN-yYBaiMFcU7u-GVawT1B5/view)
## User Story
- AS A developer
- I WANT a README generator
- SO THAT I can quickly create a professional README for a new project
## Technologies
- JavaScript
## Installation
To run this application locally, do the following:
- (i) Clone this repository from GitHub
- (ii) This app contains a package.json so you just need to run `npm i` from the root directory to install the relative node packages
- (iii) run `npm start` in the terminal from the root directory
## Usage
- After running `npm start` in the terminal, inquirer will display prompts in the console that will allow you to generate a README
- A video walkthrough can be found at the top of the README
## Languages Used
- JavaScript
## Screenshots
An example of a readME generated by this application --->


## License
[](https://lbesson.mit-license.org/)
> This project was created under the standard MIT licence.
> [Learn more about this licence.](https://lbesson.mit-license.org/)
## Questions?
Please contact me through my GitHub provided below if you have any questions relating to how the application works or any of my other projects
My GitHub username is Robbie-Bridgwater
Link to my GitHub Profile ---> https://github.com/Robbie-Bridgwater
| 32.671875 | 142 | 0.747967 | eng_Latn | 0.951522 |
2caa4128194082e01d4ababb380e1e1ce1f17ce4 | 96 | md | Markdown | .github/ISSUE_TEMPLATE/ci-cd.md | visuanalytics/visuanalytics | f9cce7bc9e3227568939648ddd1dd6df02eac752 | [
"MIT"
] | 3 | 2020-08-24T19:02:09.000Z | 2021-05-27T20:22:41.000Z | .github/ISSUE_TEMPLATE/ci-cd.md | SWTP-SS20-Kammer-2/Data-Analytics | 23f71b49efed53bba2887d68e389c732566e1932 | [
"MIT"
] | 342 | 2020-08-13T10:24:23.000Z | 2021-08-12T14:01:52.000Z | .github/ISSUE_TEMPLATE/ci-cd.md | visuanalytics/visuanalytics | f9cce7bc9e3227568939648ddd1dd6df02eac752 | [
"MIT"
] | 8 | 2020-09-01T07:11:18.000Z | 2021-04-09T09:02:11.000Z | ---
name: CI/CD
about: Changes to the CI/CD
title: "[CI/CD]"
labels: CI/CD
assignees: ''
---
| 8.727273 | 27 | 0.59375 | eng_Latn | 0.755144 |
2caa4b17ae095332e8e076503199b74dc0d90a71 | 378 | md | Markdown | docs/CustomerCard.md | COMBASE/cloud-api-v3-js-client | 2c09782e8729f1238793cd25cc53e558516340fb | [
"MIT"
] | null | null | null | docs/CustomerCard.md | COMBASE/cloud-api-v3-js-client | 2c09782e8729f1238793cd25cc53e558516340fb | [
"MIT"
] | null | null | null | docs/CustomerCard.md | COMBASE/cloud-api-v3-js-client | 2c09782e8729f1238793cd25cc53e558516340fb | [
"MIT"
] | 1 | 2021-07-13T00:07:15.000Z | 2021-07-13T00:07:15.000Z | # KoronacloudApiV3.CustomerCard
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**_number** | **String** | | [optional]
**type** | **String** | | [optional]
<a name="TypeEnum"></a>
## Enum: TypeEnum
* `CREDIT` (value: `"CREDIT"`)
* `DEBIT` (value: `"DEBIT"`)
* `FRIENDSBONUS` (value: `"FRIENDSBONUS"`)
| 16.434783 | 60 | 0.497354 | yue_Hant | 0.614611 |
2cabffc88c2685b5a82ac43222f0c60da8a7849d | 135 | md | Markdown | src/pages/about/index.md | Simon2828/coding-blog | bf961182cc97003f1c267019c91b894994a377f7 | [
"MIT"
] | null | null | null | src/pages/about/index.md | Simon2828/coding-blog | bf961182cc97003f1c267019c91b894994a377f7 | [
"MIT"
] | null | null | null | src/pages/about/index.md | Simon2828/coding-blog | bf961182cc97003f1c267019c91b894994a377f7 | [
"MIT"
] | null | null | null | ---
templateKey: about-page
title: Learning to code
---
I am using this blog to record my progress as I try to become a web developer.
| 22.5 | 78 | 0.740741 | eng_Latn | 0.999125 |
2cacebef64d88792c2820841a56e22010cb22d44 | 3,284 | md | Markdown | _posts/2020-12-03-RISCW-1.md | SadSock/SadSock.github.io | 6ef18bcbd1d7a0da06120763da7024739805ba03 | [
"Apache-2.0"
] | null | null | null | _posts/2020-12-03-RISCW-1.md | SadSock/SadSock.github.io | 6ef18bcbd1d7a0da06120763da7024739805ba03 | [
"Apache-2.0"
] | null | null | null | _posts/2020-12-03-RISCW-1.md | SadSock/SadSock.github.io | 6ef18bcbd1d7a0da06120763da7024739805ba03 | [
"Apache-2.0"
] | 1 | 2020-12-08T02:17:50.000Z | 2020-12-08T02:17:50.000Z | ---
layout: post
title: "Adding a Simple RISC-V Backend to LLVM (Part 1): Getting Started"
toc: true
date: 2020-12-04 19:30:38 +0800
categories: LLVM
keywords: LLVM
description: 无
---
Writing a compiler for a new instruction set is a complex undertaking, even though the advent of LLVM has made the process much simpler than before!
One baffling difficulty is the lack of a simple, step-by-step tutorial[^1][^2].
This blog series therefore attempts to (partially) solve that problem by providing a simple tutorial on writing an LLVM backend from scratch.
## Getting started
Before writing code for a new project, I usually set up the environment and look at existing code, and that is what this section does. In this section I will show how to download and build LLVM along with other tools that are useful for debugging. We will also look at how to use an existing LLVM backend and the GNU toolchain to compile, assemble, link, and run programs.
### Environment
I am using Ubuntu, but you should be able to repeat these steps on other systems with (relatively) few differences. You will need the following tools to build the software.
* Makefile
* C/C++ Compiler – I use GCC 9.2.1
* autotools
* CMake
* Ninja
* Git
* A lot of patience
**Note:** I may have forgotten something, but the build system will tell you with an error.
### Building LLVM
The LLVM maintainers have set up a convenient repo that contains LLVM and other parts of the toolchain, such as Clang.
```
git clone https://github.com/llvm/llvm-project
```
Throughout this series we will use LLVM 10.0.1, and I recommend that you use this version of LLVM as well.
Because LLVM changes very quickly, some of the code shown here may not work in older/newer versions.
The principles, however, should be largely the same.
LLVM uses CMake to generate the files for the build system; the build systems LLVM supports are Ninja, Makefiles, Visual Studio, and Xcode.
I usually use Ninja because I believe it is the fastest on my system (I have no evidence to support that claim!).
You can change the build system with the `-G` argument of the cmake command.
CMake has many options, and I encourage you to study them, since some of them are very helpful for debugging.
You can read about all the build options [here](https://llvm.org/docs/CMake.html).
In this tutorial I will use the following options:
1. `-DLLVM_ENABLE_PROJECTS` builds the rest of the compiler, such as Clang.
2. `-DLLVM_TARGETS_TO_BUILD` specifies the backends to build. Inspecting the output of other backends is helpful for debugging, but adding too many makes the build take a long time.
3. `-DCMAKE_BUILD_TYPE` selects a Debug build.
4. `-DLLVM_ENABLE_ASSERTIONS=On` enables assertions, which are very helpful for debugging.
Here is how to build LLVM after cloning the repo.
```sh
cd llvm-project
git checkout llvmorg-10.0.1
mkdir build
cd build
cmake -G "Ninja" -DLLVM_ENABLE_PROJECTS="clang" -DLLVM_TARGETS_TO_BUILD="ARM;Lanai;RISCV" -DCMAKE_BUILD_TYPE="Debug" -DLLVM_ENABLE_ASSERTIONS=On ../llvm
ninja
```
**Note:** You can find more information about building LLVM [here](https://llvm.org/docs/GettingStarted.html) and [here](https://llvm.org/docs/CMake.html).
**Note:** You can pass the `-j <NUM_JOBS>` option to Ninja to specify the number of parallel jobs.
Setting `<NUM_JOBS>` too high can make the build crash with a `collect2: ld ...` error message.
### Building the GNU toolchain for RISC-V
You may be a little puzzled as to why I suggest building GCC's RISC-V backend.
Aren't we supposed to write a compiler backend ourselves?
We build GCC's RISC-V backend because, in the initial stages, we want to use GCC's assembler and linker to test the code generated by our LLVM backend.
The compilation process consists of many stages; in the initial stage we have the following structure:
* Clang compiles the C code to LLVM IR
* LLVM optimizes the IR
* The LLVM backend compiles the IR to assembly
* GCC assembles and links the executable
Use the following commands to download, build, and install GCC for RISC-V.
```bash
git clone https://github.com/riscv/riscv-gnu-toolchain
cd riscv-gnu-toolchain
mkdir build
cd build
../configure --with-arch=rv32gc --with-abi=ilp32
make
make install
```
**Note:** Make sure to build the GCC toolchain for the correct variant of the instruction set (i.e. RV32), since the build system defaults to RV64!
**Note:** The GNU toolchain supports several ABIs for RISC-V, such as `ilp32`, `ilp32d`, and `ilp32f`, depending on whether you need soft-float or hard-float.
### Compiling a C program
The environment for building and running C code is now set up, even though we don't have our own backend (yet!). Let's start with a simple C program:
```C++
#include <stdio.h>
int main(void)
{
    printf("Hello world!\n");
    return 0;
}
```
First, use Clang to compile the C code to LLVM IR.
Our program uses the function printf from the standard library header stdio.h; if the header file cannot be found, the compiler will report an error.
To use the RISC-V standard C library that ships with GCC, we use the `-isystem` argument.
This adds the directory containing the required header files to the list of directories searched by Clang's preprocessor.
```sh
clang -O2 -emit-llvm -target riscv64 -isystem <PATH_TO_GCC>/riscv64-unknown-elf/include -c test.c -o test.bc
```
The command above compiles the C file test.c to the LLVM IR file test.bc, a format designed for machines that is hard for humans to read directly.
We can disassemble this file with the following command:
```sh
llvm-dis test.bc
```
Now, compile the IR to assembly using the backend that ships with LLVM:
```sh
llc -march=riscv64 -O2 -filetype=asm test.bc -o test.S
```
GCC can produce the program binary directly.
I split this into two steps, but you can use a single command if you prefer.
```sh
riscv64-unknown-elf-gcc -c test.S -o test.o
riscv64-unknown-elf-gcc test.o -o test
```
Finally, we can run the program using an emulator or real hardware.
## Notes
[^1]: To be fair, there are quite a few books and websites about LLVM, but most are general descriptions of the tool; there are also hands-on tutorials about writing new frontends, but tutorials about backends are very scarce.
[^2]: [This tutorial](https://jonathan2251.github.io/lbd/) describes how to develop an LLVM backend, but I found it hard to understand.
| 21.187097 | 152 | 0.774665 | yue_Hant | 0.869248 |
2cad5c506f10cd3bec0630cfe48f411885b09e92 | 1,194 | md | Markdown | p/plupload/readme.md | ScalablyTyped/SlinkyTyped | abb05700fe72d527728a9c735192f4c156bd9be1 | [
"MIT"
] | 14 | 2020-01-09T02:36:33.000Z | 2021-09-05T13:40:52.000Z | p/plupload/readme.md | oyvindberg/SlinkyTyped | abb05700fe72d527728a9c735192f4c156bd9be1 | [
"MIT"
] | 1 | 2021-07-31T20:24:00.000Z | 2021-08-01T07:43:35.000Z | p/plupload/readme.md | oyvindberg/SlinkyTyped | abb05700fe72d527728a9c735192f4c156bd9be1 | [
"MIT"
] | 4 | 2020-03-12T14:08:42.000Z | 2021-08-12T19:08:49.000Z |
# Scala.js typings for plupload
Typings are for version 2.0
## Library description:
Plupload is a JavaScript API for dealing with file uploads it supports features like multiple file selection, file type filtering, request chunking, client side image scaling and it uses different runtimes to achieve this such as HTML 5, Silverlight and F
| | |
| ------------------ | :-------------: |
| Full name | plupload |
| Keywords | fileuploader, upload, chunk, image, resize, crop, orientation, JavaScript, HTML5, Flash, Silverlight, moxie |
| # releases | 0 |
| # dependents | 58 |
| # downloads | 197747 |
| # stars | 1 |
## Links
- [Homepage](http://plupload.com)
- [Bugs](https://github.com/moxiecode/plupload/issues)
- [Repository](https://github.com/moxiecode/plupload)
- [Npm](https://www.npmjs.com/package/plupload)
## Note
This library has been generated from typescript code from [DefinitelyTyped](https://definitelytyped.org).
Provided with :purple_heart: from [ScalablyTyped](https://github.com/oyvindberg/ScalablyTyped)
## Usage
See [the main readme](../../readme.md) for instructions.
| 34.114286 | 255 | 0.656616 | eng_Latn | 0.78168 |
2cad5d4ed99ee002173438a28eb7febe382da2c5 | 2,150 | md | Markdown | README.md | lambdaconcept/lambdaUSB | 2605a56c64b6fea17cd6528ee3ddbf0a8ff47d44 | [
"BSD-2-Clause"
] | 25 | 2019-10-10T07:32:23.000Z | 2022-03-23T23:54:08.000Z | README.md | jfng/lambdaUSB | 2605a56c64b6fea17cd6528ee3ddbf0a8ff47d44 | [
"BSD-2-Clause"
] | 2 | 2019-10-24T09:50:53.000Z | 2020-07-10T11:37:21.000Z | README.md | jfng/lambdaUSB | 2605a56c64b6fea17cd6528ee3ddbf0a8ff47d44 | [
"BSD-2-Clause"
] | 3 | 2019-10-25T05:25:10.000Z | 2020-05-20T07:54:22.000Z | # lambdaUSB
## A configurable USB 2.0 device core using [nMigen](https://github.com/m-labs/nmigen)
**lambdaUSB is still in an experimental stage and is therefore incomplete. The user-facing API may change before reaching stability.**

### Features
* High Speed USB
* up to 32 endpoints (16 inputs, 16 outputs)
* double buffering, per endpoint
### Installation
Download and install lambdaUSB:
git clone https://github.com/lambdaconcept/lambdaUSB
cd lambdaUSB
python3 setup.py develop --user
### Usage
1. Instantiate the USB device:
```python
m.submodules.ulpi_phy = ulpi_phy = ulpi.PHY(pins=platform.request("ulpi", 0))
m.submodules.usb_dev = usb_dev = usb.Device()
m.d.comb += [
ulpi_phy.rx.connect(usb_dev.rx),
usb_dev.tx.connect(ulpi_phy.tx),
]
```
For the moment, only ULPI transceivers such as the USB3300 are supported.
2. Instantiate endpoint interfaces:
```python
ep1_in = usb.InputEndpoint(xfer=usb.Transfer.BULK, max_size=512)
ep1_out = usb.OutputEndpoint(xfer=usb.Transfer.BULK, max_size=512)
```
3. Add endpoint interfaces to the USB device:
```python
usb_dev.add_endpoint(ep1_in, addr=1)
usb_dev.add_endpoint(ep1_out, addr=1)
```
For a full example, have a look at `examples/blinker`.
### Device configuration
For convenience, we provide a `ConfigurationFSM` to manage EP0 in `lambdausb.usb.config`.
It stores the configuration descriptors in a ROM, and responds to host requests.
To use it, you must first generate a config file:
```
cd tools/genconfig
make
```
You will be presented a menuconfig interface from which you can setup your USB device:

The output `config.py` file can be imported and used like so:
```python
from lambdausb.usb.config import ConfigurationFSM
from config import descriptor_map, rom_init
m.submodules.cfg_fsm = cfg_fsm = ConfigurationFSM(descriptor_map, rom_init)
usb_dev.add_endpoint(cfg_fsm.ep_in, addr=0)
usb_dev.add_endpoint(cfg_fsm.ep_out, addr=0)
m.d.comb += usb_dev.addr.eq(cfg_fsm.dev_addr)
```
### License
lambdaUSB is released under the two-clause BSD license.
| 25.595238 | 134 | 0.754419 | eng_Latn | 0.862001 |
2cae0e402667804e6e78054bcf5519f284b061c2 | 76 | md | Markdown | README.md | RogerDurdn/kubernetes-journey | 9b4593c10eac0e7ca566a0a1430a41591471d1ad | [
"Apache-2.0"
] | null | null | null | README.md | RogerDurdn/kubernetes-journey | 9b4593c10eac0e7ca566a0a1430a41591471d1ad | [
"Apache-2.0"
] | null | null | null | README.md | RogerDurdn/kubernetes-journey | 9b4593c10eac0e7ca566a0a1430a41591471d1ad | [
"Apache-2.0"
] | null | null | null | # kubernetes-journey
Exercises from different resources to learn Kubernetes
| 25.333333 | 54 | 0.855263 | eng_Latn | 0.943772 |
2cae258fa4bfbcde73d2a841b40ee573981b3356 | 481 | md | Markdown | showcase/mediaeditsio.md | 000744210/made-with-webassembly | 85c45bde66216cc35312363e9a8a0277d6242266 | [
"MIT"
] | 169 | 2019-11-13T21:28:48.000Z | 2022-03-31T17:25:02.000Z | showcase/mediaeditsio.md | 000744210/made-with-webassembly | 85c45bde66216cc35312363e9a8a0277d6242266 | [
"MIT"
] | 34 | 2019-11-13T22:20:53.000Z | 2022-03-01T21:51:02.000Z | showcase/mediaeditsio.md | 000744210/made-with-webassembly | 85c45bde66216cc35312363e9a8a0277d6242266 | [
"MIT"
] | 48 | 2019-11-20T23:06:20.000Z | 2022-03-31T15:33:18.000Z | ---
name: MediaEdits.io
logo_url: /assets/showcase-assets/mediaedits-logo.png
website: https://mediaedits.io/
description: Remove background noise from audio files online for free.
keywords: Noise reduction media edit wav mp3 m4a aac
---
MediaEdits.io is a software company specializing in creating web applications that edit media files.
Head on over to [MediaEdits.io](https://mediaedits.io/) for a live demo!

| 34.357143 | 101 | 0.785863 | eng_Latn | 0.746084 |
2cae819d3a49eca7502960b51990300c0fc61889 | 2,422 | md | Markdown | README.md | StavromularBeta/Rover | 3030f1521e5a6bc2c6722983ca59a008b3a11400 | [
"MIT"
] | null | null | null | README.md | StavromularBeta/Rover | 3030f1521e5a6bc2c6722983ca59a008b3a11400 | [
"MIT"
] | null | null | null | README.md | StavromularBeta/Rover | 3030f1521e5a6bc2c6722983ca59a008b3a11400 | [
"MIT"
] | null | null | null | **Rover** is an automated report maker that creates Latex reports from XML data generated by TargetLynx.
An analyst exports data produced by a Waters HPLC/UV set-up from TargetLynx as an XML file. This XML file is opened
in Excel, which produces a schema. This file is saved in the xml_data_files directory.
Once this is done, the analyst runs GUI/RoverGUI. They select the data file.
**Pre_Generate** will manipulate the data in the following ways:
1. Organizing data into blanks, standards, samples, and dilutions.
2. Finding the best blank and standard to report with the sample data.
3. Swapping out-of-calibration values for corresponding values from sample dilutions.
4. Finding the correct header file (customer information) to go with the data, parsing it, and attaching it to the data.
5. Rounding raw values to an appropriate number of significant figures.
6. Converting the analytical concentration to a percentage concentration.
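As an illustration of steps 5 and 6, here is a minimal sketch of the two numeric transformations; the function names, the three-significant-figure default, and the mg/g-to-percent conversion factor are assumptions for illustration, not Rover's actual code:

```python
from math import floor, log10

def round_sig(value, sig_figs=3):
    """Round a raw instrument value to a fixed number of significant figures."""
    if value == 0:
        return 0.0
    # Position of the leading digit determines how many decimals to keep.
    exponent = floor(log10(abs(value)))
    return round(value, -exponent + (sig_figs - 1))

def to_percent(conc_mg_per_g):
    """Convert an analytical concentration in mg/g to a percentage (w/w)."""
    # 1 mg/g is one part per thousand, i.e. 0.1 %.
    return conc_mg_per_g / 10.0

print(round_sig(0.0123456))  # → 0.0123
print(to_percent(12.5))      # → 1.25
```

Rover applies transformations like these automatically, so the analyst never edits the measured values by hand.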
Once Pre_Generate is finished, it will present the batch information in the **GUI**. The analyst will have the opportunity
on this page to do the following:
1. Change incorrectly parsed header and sample name information.
2. Review the best blank and the best standard, review the raw data produced by pre_generate.
3. Select unit type, deluxe/basic reporting, single/multiple samples per page. Add unit masses/density where relevant.
Once header information has been verified, the data reviewed, and options are selected, the analyst will hit the
"generate batch" button at the bottom of the GUI batch window screen. **Post_Generate** will assemble the relevant reports.
If all goes well, LaTeX files will be generated in the 'reports' directory. These LaTeX files can be
converted to PDF form by any LaTeX writing/editing program - TeXnicCenter is one example, and is an excellent program.
These LaTeX files can be directly edited and provide one last place to modify information on the reports - this is
the only place where an analyst can manipulate the actual sample data. This means that no human interacts with any piece
of data other than customer information throughout the report generating process. Any error in the data can only come
from the program itself, or have been generated prior to producing the XML file for the Rover program.
Rover comes with **documentation** in the docs folder. Opening index.html in a browser will give you access to these. | 71.235294 | 123 | 0.799752 | eng_Latn | 0.999431 |
2caf1c9f0618c50ef1631cb74fdd77362e9f1b5a | 1,484 | md | Markdown | _posts/2016-1-1-Short-entries.md | bk7312/bk7312.github.io | ca09ebc50453f73979a9fd6a8737498a79eca75b | [
"MIT"
] | null | null | null | _posts/2016-1-1-Short-entries.md | bk7312/bk7312.github.io | ca09ebc50453f73979a9fd6a8737498a79eca75b | [
"MIT"
] | null | null | null | _posts/2016-1-1-Short-entries.md | bk7312/bk7312.github.io | ca09ebc50453f73979a9fd6a8737498a79eca75b | [
"MIT"
] | null | null | null | ---
layout: post
title: Short entries.
---
I realized that a few of my recent posts have been a bit wordy? They're not that long, probably about 700 to 1000 words. In fact, they're considered to be more on the short side by some people. What I mean by 'wordy' is not the actual word count itself; I mean the inefficient use of words to get my point across. Perhaps you don't think that, but I feel that my writing can sometimes get a bit too 'chatty' and not to the point. Either I'm setting the bar too high up, or I need to work on my 'information delivery', as I prefer to call it.
It's not just writing, I really want to learn how to be clearer and more to the point in conversations. Like when explaining or presenting something to someone, it feels like I can never get my thoughts straight. Sometimes after I'm done, I even ask myself what exactly I was trying to say back then. It's all jumbled up and the point I'm trying to make just fades into the background, buried by all those other less important details in the conversation.
My solution to this problem? I'm still working on it. Basically, the plan is to simplify things and make my points short and sweet. And simple, but not any simpler. Expect shorter, easier-to-read posts in the coming future! Perhaps I'll even up the posting frequency a bit. But then again, I'm also quite fond of long detailed posts that have a lot of thought put into them. We'll see, but I'm definitely making an effort to simplify things.
2caf231d470878769107146bb21d020bf738c16b | 109 | md | Markdown | CHANGELOG.md | CORE-Blockchain/wasm3-rs | 7204c225c0c812f62c8469ab3f9e979bc781026f | [
"MIT"
] | 81 | 2020-01-21T22:41:32.000Z | 2021-12-27T07:16:25.000Z | CHANGELOG.md | CORE-Blockchain/wasm3-rs | 7204c225c0c812f62c8469ab3f9e979bc781026f | [
"MIT"
] | 20 | 2020-01-23T19:05:07.000Z | 2021-04-13T07:42:09.000Z | CHANGELOG.md | CORE-Blockchain/wasm3-rs | 7204c225c0c812f62c8469ab3f9e979bc781026f | [
"MIT"
] | 13 | 2020-01-28T05:14:09.000Z | 2021-12-20T21:14:16.000Z | ## Known issues
## Changes
### Version 0.1.1
- Add `build-bindgen` flag
### Version 0.1
- Initial release
| 10.9 | 26 | 0.642202 | eng_Latn | 0.829299 |
2caf8e97fd0e190399e271ad74f6ff661d33cf9a | 995 | md | Markdown | docs/code-quality/c28164.md | Ric-Chang/visualstudio-docs.zh-tw | fab71387c0b4fd9853313d648522b5292ecee128 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2019-11-18T01:15:24.000Z | 2019-11-18T01:15:24.000Z | docs/code-quality/c28164.md | Ric-Chang/visualstudio-docs.zh-tw | fab71387c0b4fd9853313d648522b5292ecee128 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/code-quality/c28164.md | Ric-Chang/visualstudio-docs.zh-tw | fab71387c0b4fd9853313d648522b5292ecee128 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: C28164
ms.date: 11/04/2016
ms.prod: visual-studio-dev15
ms.topic: reference
f1_keywords:
- C28164
helpviewer_keywords:
- C28164
ms.assetid: 13327bf3-3f12-4226-85cf-48e215d01c1d
author: mikeblome
ms.author: mblome
manager: wpickett
ms.workload:
- multiple
ms.openlocfilehash: aebc255cb44de3436e6e67c718ac7a46f2a46e15
ms.sourcegitcommit: 37fb7075b0a65d2add3b137a5230767aa3266c74
ms.translationtype: MT
ms.contentlocale: zh-TW
ms.lasthandoff: 01/02/2019
ms.locfileid: "53900096"
---
# <a name="c28164"></a>C28164
Warning C28164: The argument is being passed to a function that expects a pointer to an object (not a pointer to a pointer)
This warning is reported when a pointer to a pointer is used in a function call that expects a pointer to an object.
The function takes a PVOID in this position. Typically, this indicates that &p*XXX* is being used when p*XXX* is required.
Some *polymorphic functions* (functions that can evaluate and be applied to values of different types) are implemented in C by taking a PVOID argument that accepts any pointer type. However, this allows a programmer to pass a pointer to a pointer without causing a compiler error, even when that type is not appropriate.
## <a name="example"></a>Example
The following code example generates this warning:
```
PFAST_MUTEX pFm;
...
KeWaitForSingleObject(&pFm, UserRequest, UserMode, false, NULL);
```
The following code example avoids this warning:
```
PFAST_MUTEX pFm;
...
KeWaitForSingleObject(pFm, UserRequest, UserMode, false, NULL);
``` | 21.170213 | 101 | 0.772864 | yue_Hant | 0.622049 |
2cb038ed21e5f2960dd713846a8f771e3e49d7b7 | 1,587 | md | Markdown | README.md | tatarko/MongoAR | f90ec319e0cdc257ef76fb2f0e9e346f01a72902 | [
"MIT"
] | 1 | 2015-03-29T11:05:43.000Z | 2015-03-29T11:05:43.000Z | README.md | tatarko/MongoAR | f90ec319e0cdc257ef76fb2f0e9e346f01a72902 | [
"MIT"
] | null | null | null | README.md | tatarko/MongoAR | f90ec319e0cdc257ef76fb2f0e9e346f01a72902 | [
"MIT"
] | null | null | null | # MongoDB Active Record [](https://travis-ci.org/tatarko/MongoAR)
MongoAR is a simple library that allows you to use the active record pattern on [MongoDB](http://www.mongodb.org) databases and their tables. It also provides a simple yet powerful query builder for easily building search criteria for the `MongoCollection::find()` and `MongoCollection::findOne()` methods.
## Requirements
MongoAR requires the following to run correctly:
- `PHP`, version `5.4` or above
- `mongo` pecl library, version `0.9` or above
## Installation
### Composer
Simply add a dependency on `tatarko/mongoar` to your project's `composer.json` file if you use [Composer](http://getcomposer.org) to manage the dependencies of your project. Here is a minimal example of a `composer.json` file that just defines a dependency on MongoAR:
```json
{
"require": {
"tatarko/mongoar": "0.*"
}
}
```
### Straight implementation
In case you don't use `Composer` as your dependency manager, you are still able to use `MongoAR`. There are only two easy steps to get `MongoAR` working.
1. Download [MongoAR.zip](https://github.com/tatarko/MongoAR/archive/master.zip) and put extracted archive into your project's folder.
2. Add following code to your project's root php file (e.g. `index.php`) and remember to change `path/to/` according to relative location of downloaded `MongoAR` folder:
```php
require_once 'path/to/source/__autoloader.php';
```
## Documentation
Please, see [Wiki](https://github.com/tatarko/MongoAR/wiki) for online documentation. | 40.692308 | 301 | 0.748582 | eng_Latn | 0.949765 |
2cb05baa909abce2eb5c79062b45df9caa1e5106 | 3,160 | md | Markdown | data/readme_files/thoughtfulml.examples-in-python.md | DLR-SC/repository-synergy | 115e48c37e659b144b2c3b89695483fd1d6dc788 | [
"MIT"
] | 5 | 2021-05-09T12:51:32.000Z | 2021-11-04T11:02:54.000Z | data/readme_files/thoughtfulml.examples-in-python.md | DLR-SC/repository-synergy | 115e48c37e659b144b2c3b89695483fd1d6dc788 | [
"MIT"
] | null | null | null | data/readme_files/thoughtfulml.examples-in-python.md | DLR-SC/repository-synergy | 115e48c37e659b144b2c3b89695483fd1d6dc788 | [
"MIT"
] | 3 | 2021-05-12T12:14:05.000Z | 2021-10-06T05:19:54.000Z | # Instructions to build Python environment
## Linux, using Python 2.7, system packages, tested on Ubuntu 16.04 Vagrant box
sudo apt-get update
sudo apt-get install python python-nose-parameterized python-numpy python-sklearn python-pip python-bs4 python-pandas
sudo pip install --upgrade pip
sudo pip install theanets
or use provided Vagrantfile to setup VM.
## Linux, using Python 2.7, virtualenv
Install system packages
sudo apt-get install python python-pip python-virtualenv python-tk
Install remaining packages in virtualenv
virtualenv venv27
venv27/bin/pip install -r requirements27.txt
## Linux, using Python 3.5, system packages, tested on Ubuntu 16.04 Vagrant box
sudo apt-get update
sudo apt-get install python3 python3-nose-parameterized python3-numpy python3-sklearn python3-pip python3-bs4 python3-pandas
sudo pip3 install --upgrade pip
sudo pip3 install theanets
or use provided Vagrantfile to setup VM.
## Linux, using Python 3.5, virtualenv
Install system packages
sudo apt-get install python3 python3-pip python3-virtualenv python3-tk
Install remaining packages in virtualenv
virtualenv -p `which python3` venv35
venv35/bin/pip3 install -r requirements35.txt
## MS Windows, using Python 2.7, anaconda
Download from continuum.io and install Anaconda for Python 2.7 (tested for Anaconda 4.4 on Windows 10)
The Anaconda Python installation contains required packages for all chapters except Artificial neuron networks.
For the last one, we need to install Theano and nose-parameterized by Conda and then theanets by pip.
In Anaconda prompt:
conda install nose-parameterized theano
pip install theanets
## MS Windows, using Python 3.6, anaconda
Download from continuum.io and install Anaconda for Python 3.6 (tested for Anaconda 4.4 on Windows 10)
The Anaconda Python installation contains required packages for all chapters except Artificial neural networks.
First try the procedure for Python 2.7; if it does not work (perhaps due to a version incompatibility between pygpu and theano), then do the following.
Install theano with its dependencies and nose-parameterized via conda, uninstall pygpu and theano from conda, then install theano and theanets via pip.
In Anaconda prompt:
conda install nose-parameterized theano
conda uninstall pygpu
pip install theano
pip install theanets
## Run tests in command line
Run from the directory of a chapter (not repository root directory).
python -m unittest discover tests
or
../venv35/bin/python3 -m unittest discover tests
or
../venv27/bin/python -m unittest discover tests
## Run tests in PyCharm
If you PyCharm project is the repository, then mark directory of the chapter as sources root (in Project panel, in the context menu of directory "Mark Directory As" -> "Sources Root").
For the single test in Project panel, in the context menu of file "Create Unittests in test_corpus_parser" and make sure that working directory is "something/hidden_markov_model", but not "something/hidden_markov_model/tests".
For all tests do the same in the context menu of the "tests" directory.
| 33.617021 | 226 | 0.775949 | eng_Latn | 0.96892 |
2cb09a371dc43218f192e420790a2ac72e8687d6 | 2,879 | md | Markdown | README.md | edwiansyah18/ProjectUAS | 8456fb8b19f8eb22aa32e3152a33e507f72c2a72 | [
"MIT"
] | null | null | null | README.md | edwiansyah18/ProjectUAS | 8456fb8b19f8eb22aa32e3152a33e507f72c2a72 | [
"MIT"
] | null | null | null | README.md | edwiansyah18/ProjectUAS | 8456fb8b19f8eb22aa32e3152a33e507f72c2a72 | [
"MIT"
] | null | null | null | # Sneaky Snake The Game
* [Description](#description)
* [Installation](#installation)
* [Usage](#Usage)
* [License](#license)
## Description
This program is a classic 'snake' game in which the player must eat targets by controlling a snake. The more targets the snake eats, the longer it grows. The game ends when the player's lives run out, whether from crashing into the snake's own body or into the border. The game also keeps a record of the high scores of every player who has played it.
## Installation
System Requirements: Windows
<br>
To play Sneaky Snake, do the following:
1. Download the source code first by pressing the download button.
2. Open the source code with a C compiler. If you do not have one, [download it here](https://sourceforge.net/projects/orwelldevcpp/files/latest/download)
3. Then run it by compiling main-program.c or the .dev project file, or play it directly from the .exe file
## Usage
### Menu

The user lands on the menu screen; press any key to start the game.
### Rules and Controls

The rules and controls screen of the game.
### Get Ready...

The preparation screen.
### Let's Play The Game!

Eat as many targets as you can and chase your highest score.
### Game End

The player has used up all of their lives and the game is over.
### Highscore

The player's name and score are saved to a record and added to the game's highscore list.
## License
MIT License
<details>
<summary>Copyright (c) 2020 edwiansyah18</summary>
<p align="justify">Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:</p>
<p align="justify">The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.</p>
<p align="justify">THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.</p>
</details>
# CalculationDto
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**Description** | **string** | Descriptive text for this calculation. | [optional]
**Formula** | **string** | This Calculation's mathematical expression. Please note that thousands separators are not supported. Both comma and point will be treated as decimal separators. | [optional]
**Result** | **float64** | The calculated result from the formula, 0 if invalid. | [readonly]
**Valid** | **bool** | Whether the Formula is a valid expression. | [readonly]
**ErrorPositionInLine** | **int32** | Will be -1 if the Formula is correct, else it will show the position in the formula where an error was encountered. This is a zero based index. | [readonly]
[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
---
title: About Materials
order: 1
---
In mid- and back-office system development, we are used to building pages on top of base component libraries such as Fusion and antd. These base components, however, rarely satisfy business requirements on their own, and real projects contain many repetitive business scenarios. Most of those scenarios are broadly similar: some are components that carry business logic (employee pickers, forms, and so on), some are lists and modules composed from base and business components, and others are page layouts, visual guidelines, and project engineering. How can these business scenarios be reused to lower the cost of building such systems? The ICE team builds a materials ecosystem together with the community, offering a large number of high-quality materials to solve these problems.
Materials are the units that make up a front-end project. By granularity, from small to large, we divide materials into components, blocks, pages, and scaffolds. In materials-based development, we use a project material (scaffold) to initialize the front-end project, providing best practices and solving engineering problems, and then use blocks and components to assemble pages quickly, like stacking building blocks.
## Concepts
Materials come in four types: component, block, page, and scaffold:
- Component: fairly fixed functionality with relatively high complexity, for example a user picker or an address picker. A project only needs to install the corresponding npm package; it neither sees nor can modify the component's internal code, and controls it only through the props the component defines.
- Block: usually a UI module. When a block is used, its code is copied into the project, so the project can change the block's code in any way; later upgrades of the block therefore have no effect on the project. This is the biggest difference between a block and a business component.
- Page: usually a complete page template. To use one, you first configure the template in iceworks, which then generates the page code under the page folder. As with blocks, the project can modify the generated code freely, and later upgrades of the page template do not affect the project.
- Scaffold: the project scaffold, used for project initialization.
Based on these four granularities of materials, developers can start front-end project development quickly.

## Material development tools
We provide material development and management through the iceworks CLI. The tool itself is not coupled to any particular front-end framework or engineering system, which means materials for React/Vue/Angular and other front-end ecosystems can all be developed on top of it. The iceworks CLI has the following features:
- Supports material development for different front-end frameworks and toolchains
- Supports material initialization and management
- Supports generating and validating material data
- Supports hosting materials on fusion.design
- Supports custom material templates
| 36.71875 | 238 | 0.835745 | yue_Hant | 0.536496 |
2cb1d68f34511ee00a499f2e18750e0b3503a6f5 | 1,960 | md | Markdown | controls/radrating/populating-with-data/populating-declaratively.md | kylemurdoch/xaml-docs | 724c2772d5b1bf5c3fe254fdc0653c24d51824fc | [
"MIT",
"Unlicense"
] | null | null | null | controls/radrating/populating-with-data/populating-declaratively.md | kylemurdoch/xaml-docs | 724c2772d5b1bf5c3fe254fdc0653c24d51824fc | [
"MIT",
"Unlicense"
] | null | null | null | controls/radrating/populating-with-data/populating-declaratively.md | kylemurdoch/xaml-docs | 724c2772d5b1bf5c3fe254fdc0653c24d51824fc | [
"MIT",
"Unlicense"
] | null | null | null | ---
title: Declaratively
page_title: Declaratively
description: Check our "Declaratively" documentation article for the RadRating {{ site.framework_name }} control.
slug: populating-declaratively
tags: declaratively
published: True
position: 2
---
# Declaratively
This tutorial will walk you through the common task of populating __RadRating__ with __RadRatingItems__ declaratively.
Example 1 demonstrates a regular __RadRating__ declaration, where __telerik__ points to: __<xmlns:telerik="http://schemas.telerik.com/2008/xaml/presentation" />__
#### __[XAML] Example 1: Declare a RadRating__
{{region xaml-populating-declaratively_0}}
<telerik:RadRating x:Name="radRating" />
{{endregion}}
#### __Figure 1: Result from Example 1__

In order to add rating items you need to use the __RadRating's__ __Items__ property. The __Items__ property is an __ItemCollection__ which contains your __RadRatingItems__. Example 2 shows how to add RadRatingItems to your RadRating.
#### __[XAML] Example 2: Adding RadRatingItems__
{{region xaml-populating-declaratively_1}}
<telerik:RadRating x:Name="radRating">
<telerik:RadRatingItem Content="1" />
<telerik:RadRatingItem Content="2" />
<telerik:RadRatingItem Content="3" />
<telerik:RadRatingItem Content="4" />
<telerik:RadRatingItem Content="5" />
<telerik:RadRatingItem Content="6" />
<telerik:RadRatingItem Content="7" />
<telerik:RadRatingItem Content="8" />
<telerik:RadRatingItem Content="9" />
<telerik:RadRatingItem Content="10" />
</telerik:RadRating>
{{endregion}}
#### __Figure 2: Result from Example 2__

>tip Consider declaring rating items in XAML instead of adding them by code whenever it's possible. This includes situations when you know what items you need at design time.
You, oh you, you chill my heart and sting my eyes, a wound for a lifetime
You, oh you, watching from the far shore, standing aside, a lifetime of gazing across
Do these lyrics sound familiar? The song went viral on Douyin, where many creators use it as background music for their videos. It is called **Ge An (The Other Shore)**, an original song by Yao Liuyi, released on June 18, 2020.
Yao Liuyi is a singer-songwriter who wrote several popular online hits loved by young listeners while still at university; his distinctive melodic ideas and unconventional pairing of lyrics gave him a style of his own by the age of 20.
Below the lyrics is the _Ge An piano sheet music_, which everyone can download and study for free.
### Ge An (The Other Shore) lyrics:
How could I forget that moment
You, on the day we first met
Passers-by crowded round to laugh at my antics
For one smile from you I would gladly play the clown
How many chapters can one life hold
A love that keeps the heart tied in knots
You never needed to doubt or to search
I am willing to spend my life chasing after you
Gazing at you like this, how could I not sorrow
All but you and I, even birds and flowers, drift by two by two
You, oh you, you chill my heart and sting my eyes, a wound for a lifetime
You, oh you, watching from the far shore, standing aside, a lifetime of gazing across
There were spring blossoms and autumn moons
And hopes of staying together, never changing
Rain rattled a few noisy drops on the lotus leaves
But how could that quench the tenderness in my heart
When your name comes up in idle talk
The ache is still so familiar
Where are you drifting now
A heart in pieces is hard to gather up
How did you and I fall to such a state
Gazing at you like this, how could I not sorrow
All but you and I, even birds and flowers, drift by two by two
You, oh you, you chill my heart and sting my eyes, a wound for a lifetime
You, oh you, watching from the far shore, standing aside, a lifetime of gazing across
What is there to fear in searching a whole life long
Perhaps it is only a silhouette seen from behind
At daybreak I will set out
Where will you be standing then
Are you also waiting for me
Looking back, all else is grass and brush; only you are the green mountain
Mocking a world where love cannot be had by both, heaven set us on opposite shores
You, oh you, you chill my heart and sting my eyes, a wound for a lifetime
You, oh you, watching from the far shore, standing aside, a lifetime of gazing across
What is there to fear in searching a whole life long
Perhaps it is only a silhouette seen from behind
At daybreak I will set out
Waiting for you on the opposite shore
# Url-Shortner---shrtco-api---PHP
Uses the shrtco-api to generate short links for YouTube video URLs or for URLs from any other site.
## simple user processes and equivalence checking.
Today you're going to do a wildly useful trick for ruthlessly detecting
subtle operating system mistakes. It will prove invaluable next week
when we do virtual memory --- a topic that has extremely hard-to-track-down
bugs if you just use "normal" testing of running programs and checking
print statements and exit codes.
In order to do so we give you:
1. A trivial operating system (`trivial-os`) that has a single process,
switches to it, and when it is finished, reboots.
2. A trivial user-level mode implementation that has three system calls:
`sys_putc`, `sys_put_hex` and `sys_exit`.
3. The user level code only communicates with the system level using
system calls.
This will start getting you used to user-level vs kernel stuff. Also,
we need code at user level so (1) single-stepping works and (2) the
addresses are the same for everyone.
--------------------------------------------------------------------
### What to do
I will fill in a bunch of background. But to kickstart things, we'll
describe what to do.
The lab has five parts:
1. You control which of the five parts is being run by modifying
the `part` variable in `trivial-kernel/trivial-os.c` and set it to `0`,
`1`, etc. You can also use the given enumeration type.
2. You should do all `make` invocations in `trivial-user-level` since
that is what actually runs things.
3. All your real modifications will be done in `trivial-os/equiv.c` or
in `trivial-os/trivial-os-asm.S`. You should not have to touch
`trivial-user-level`.
What to do for each part:
0. We give you the code for part 0. This part just "loads" the
user program (using `program_get`) and jumps to it directly, running
it in kernel mode. The user program does system calls to call back
into the `trivial-os`. This gives you a simple starting point.
*What to do*: follow the flow of control and make sure you understand what
is going on. This is useful to help you get oriented with what
the code is doing, since it's a bit weird compared to the past labs.
In the previous labs we just linked all the code together. However,
that doesn't make sense with user code. ("Why?") Normally we would
load a user program from a file system. ("Why?") However, we do
not have a file system (yet) and so do the following hack (which
is useful elsewhere, so no worries).
We use the program `trivial-user-level/pitag-linker` to concatenate
the user program to the `trivial-os.bin` and produce a new binary we
can bootload over and jump to. Mechanically this lets us combine
two distinct programs.
However, just as with Linux and macOS, the user cannot call the OS
routines directly since it was not linked with them. (Also, in the
future: it does not have the right permissions). So instead all
"function calls" from user to the trivial-os are system calls. Because
system call numbers do not change (ours are in `trivial-os/syscalls.h`)
the user code can call the `trivial-os` no matter
how its code has dilated or shrunk.
1. You will write `user_mode_run_fn` (in `trivial-os/trivial-os-asm.S`).
This will change to user mode (using the `cps` instruction),
set the stack pointer to the value given, and jump to the code.
This should be a short routine.
Importantly: the code should run identically to part 0, other than
perhaps printing out that it is at user-level. You should rerun
the programs using `part=0` and `part=1` and verify they both do
the same thing.
2. This part demonstrates how to use the single-stepping routines that
we give you. The exception handler used is in
`trivial-os/single-step.c`. When you run with it enabled, it will
print out all the `pc` addresses in order that execute at user level
(you cannot single-step when not in user mode).
*What to do*: you should verify that the addresses being printed
make sense. They should first be after the `cps` instruction in
part 1. Then they will be in the `start` routine and `notmain` and
then in the user-level code needed to call `sys_exit`. Make sure you
understand what is going on or the next parts will be hard to debug.
3. You will trace all the program counter values and hash them.
This code looks very similar to Part 2, except you need to figure
out when the code has finished executing `user_mode_run_fn` (since
its addresses will change). The way you can do this is to look at
the `pc` --- if this is above our kernel heap, it's in user mode.
If not, it's still in the kernel.
*What to do*: Implement both the exception handler in
`equiv.c:prefetch_abort_equiv_pc` and the trampoline that sets
up its state in `trivial-os-asm.S:prefetch_handler_equiv_pc`. You
should look at how single-stepping does both of these. If you
are not getting exceptions, you didn't reset the mismatching in
the handler.
You should check that your hashes match everyone else's. I'll add
more tests. I would start with `0-test-nop.c` since that is a
trivial program. I also checked in some pre-built binaries in
`trivial-user-level-prebuilt` since it appears people's compilers
can give slightly different output.
You should only count instructions you hash.
4. Finally you will hash all the register values and the `spsr`.
This is a very harsh test.
*What to do* you'll build the trampoline for this part in
`trivial-os-asm.S`. You should push all sixteen *user-level*
registers onto the stack and the `spsr` and pass the base of
this 17 entry array to the handler so you can hash it the same
as everyone else. (Easiest way: subtract the amount you want,
then do an `stm`.) The way you do this should allow a single
special load at the end of the handler so you can resume
execution.
You should push the registers from highest to lowest offset. So,
`r0` at offset 0, `r1` at offset 4, r15 (the pc) at offset 60,
and the `spsr` at offset 64. The base of this array should be
passed to the handler. Note: stack goes down and arrays go up so
you'll likely have to subtract before storing on the stack.
Your hashes should match everyone else. I'll add more tests.
NOTE: we do not want the value of any exception shadow register,
we want the actual user value, so you'll have to figure out how
to get that with a store. Some hints are given below.
If you are getting differences: make sure you clear *all* registers
besides `r0` and `r1` and the `pc` --- you should clear the `cpsr`
by writing a zeroed register into it once you make the user mode
switch. You also clear `lr` since we are never returning and it
will almost certainly be different for everyone.
Hints for part 4:
- I used `stm` and `ldm`. It's trickier to use
`push` and `pop` since they modify the `sp` and its illegal to do
so when `sp` is not the highest register pushed.
- As we covered in the interrupt handling lab: the exception
context has its own shadow copies of `sp` and `lr` --- so you cannot
simply push these "raw" --- you have to push the user level copies.
Further, `pc` is *not* shadowed --- so you'll have to use the computed
value in `lr`.
- You cannot store the `spsr` directly. You'll have to put it in a
general purpose register (this is ok: we just saved a bunch so
can use them).
- When you are in the handler, you should check that the register
at the `r15` (pc) offset actually matches the `pc` value you have.
Also, you should check that the value at the `spsr` offset matches
your `spsr`.
-------------------------------------------------------------------
#### Part 3: PC Hashes
Below are some of the hashes I got for the test programs. The main thing
to pay attention to is the final two `TRACE:EQUIV` print statements
that give the final number of instructions and the final pc hash.
The instruction count is just those instructions that got hashed.
I would start with `0-test-nop.c` since it is the simplest:
TRACE:simple_boot: sending 11414 bytes, crc32=2f9501d7
waiting for a start
pi: <addr=0x8000, n=11414, cksum=0x2f9501d7>
putting code
bootloader: Done.
listening on ttyusb=</dev/ttyUSB0>
kernel: stack is roughly at: 0x7ffffe8
user_code=0x400004, prog name=<0-test-nop.bin>
TRACE:inst 0 = pc=0x400004, pc_hash=0xa3d1fb72
TRACE:inst 1 = pc=0x400010, pc_hash=0xb5f459d5
TRACE:inst 2 = pc=0x400008, pc_hash=0x44a6b13a
TRACE:inst 3 = pc=0x40000c, pc_hash=0x48be2e85
TRACE:inst 4 = pc=0x400014, pc_hash=0x7fb1242d
TRACE:inst 5 = pc=0x400018, pc_hash=0x4c6c9ecb
TRACE:inst 6 = pc=0x40001c, pc_hash=0x2900fa70
TRACE:inst 7 = pc=0x400020, pc_hash=0xca1ba4db
TRACE:inst 8 = pc=0x4000e4, pc_hash=0x7c98abf1
TRACE:inst 9 = pc=0x4000e8, pc_hash=0xafd42265
0-test-nop.bin: sys_exit(-1): going to reboot
part=3
equiv values
TRACE:EQUIV: number instructions = 10
TRACE:EQUIV: pc hash = 0xafd42265
DONE!!!
Then `0-test-exit.c`:
TRACE:simple_boot: sending 11423 bytes, crc32=cc87ea74
user_code=0x400004, prog name=<0-test-exit.bin>
TRACE:inst 0 = pc=0x400004, pc_hash=0xa3d1fb72
TRACE:inst 1 = pc=0x400010, pc_hash=0xb5f459d5
TRACE:inst 2 = pc=0x400014, pc_hash=0x625d1830
TRACE:inst 3 = pc=0x400018, pc_hash=0x3db2a8ee
TRACE:inst 4 = pc=0x400020, pc_hash=0x4e403840
TRACE:inst 5 = pc=0x400024, pc_hash=0x31fa5b24
TRACE:inst 6 = pc=0x400028, pc_hash=0xfd7f8720
TRACE:inst 7 = pc=0x40002c, pc_hash=0xd449b7d7
TRACE:inst 8 = pc=0x4000f0, pc_hash=0x3500ce93
TRACE:inst 9 = pc=0x4000f4, pc_hash=0x4696ca4e
0-test-exit.bin: sys_exit(0): going to reboot
part=3
equiv values
TRACE:EQUIV: number instructions = 10
TRACE:EQUIV: pc hash = 0x4696ca4e
For `1-test-hello.c`:
TRACE:simple_boot: sending 11652 bytes, crc32=e3df0073
TRACE:inst 0 = pc=0x400004, pc_hash=0xa3d1fb72
TRACE:inst 1 = pc=0x400010, pc_hash=0xb5f459d5
TRACE:inst 2 = pc=0x400014, pc_hash=0x625d1830
TRACE:inst 3 = pc=0x400018, pc_hash=0x3db2a8ee
TRACE:inst 4 = pc=0x4000ac, pc_hash=0x1e2417b1
TRACE:inst 5 = pc=0x4000b0, pc_hash=0x2467244e
TRACE:inst 6 = pc=0x4000b4, pc_hash=0x77935371
TRACE:inst 7 = pc=0x4000c4, pc_hash=0x88a05aeb
TRACE:inst 8 = pc=0x4000c8, pc_hash=0x59cc4453
TRACE:inst 9 = pc=0x4000cc, pc_hash=0x146f6ad5
hello world
user: stack is roughly at 0x6fffff8
user: cpsr=0x60000190
USER MODE!
1-test-hello.bin: sys_exit(0): going to reboot
part=3
equiv values
TRACE:EQUIV: number instructions = 884
TRACE:EQUIV: pc hash = 0xd5bec853
DONE!!!
For `3-test-vec.c`:
TRACE:simple_boot: sending 11486 bytes, crc32=cb1d156d
waiting for a start
pi: <addr=0x8000, n=11486, cksum=0xcb1d156d>
putting code
bootloader: Done.
listening on ttyusb=</dev/ttyUSB0>
kernel: stack is roughly at: 0x7ffffe8
user_code=0x400004, prog name=<3-test-vec.bin>
TRACE:inst 0 = pc=0x400004, pc_hash=0xa3d1fb72
TRACE:inst 1 = pc=0x400010, pc_hash=0xb5f459d5
TRACE:inst 2 = pc=0x400014, pc_hash=0x625d1830
TRACE:inst 3 = pc=0x400018, pc_hash=0x3db2a8ee
TRACE:inst 4 = pc=0x400028, pc_hash=0xd88ec3d5
TRACE:inst 5 = pc=0x40002c, pc_hash=0xfdf9a1ed
TRACE:inst 6 = pc=0x40001c, pc_hash=0x7fab503d
TRACE:inst 7 = pc=0x400020, pc_hash=0x413ff992
TRACE:inst 8 = pc=0x400024, pc_hash=0xead93cec
TRACE:inst 9 = pc=0x400028, pc_hash=0x4115398d
3-test-vec.bin: sys_exit(0): going to reboot
part=3
equiv values
TRACE:EQUIV: number instructions = 194
TRACE:EQUIV: pc hash = 0x22ab7818
DONE!!!
From `trivial-user-level-prebuilt` (or if your compiler matches
our output: `trivial-user-level`):
TESTS := $(wildcard ./[0-3]*-test-*.c)
# emit all the .outs
% make emitall
# cksum the .outs and canonicalize
% grep EQUIV *.out | sort -n | cksum
799754351 496
---------------------------------------------------------
### Part 4: Full reg hash
Note:
- do not clear `r1`.
- do clear `cpsr` once you're in user mode otherwise the carry
flags can be different.
- definitely don't clear `r13`!
user_code=0x400004, prog name=<0-test-nop.bin>
TRACE: reg hash=0xfac07451
TRACE: spsr=0x190
TRACE: pc = 0x400004, lr = 0x400004
TRACE: regs[0] = 0x400004
TRACE: regs[1] = 0x7000000
TRACE: regs[2] = 0x0
TRACE: regs[3] = 0x0
TRACE: regs[4] = 0x0
TRACE: regs[5] = 0x0
TRACE: regs[6] = 0x0
TRACE: regs[7] = 0x0
TRACE: regs[8] = 0x0
TRACE: regs[9] = 0x0
TRACE: regs[10] = 0x0
TRACE: regs[11] = 0x0
TRACE: regs[12] = 0x0
TRACE: regs[13] = 0x7000000
TRACE: regs[14] = 0x0
TRACE: regs[15] = 0x400004
TRACE:------------------------------------------------------
TRACE: reg hash=0x831b8654
TRACE: spsr=0x190
TRACE: pc = 0x400010, lr = 0x400010
TRACE: regs[0] = 0x400004
TRACE: regs[1] = 0x7000000
TRACE: regs[2] = 0x0
TRACE: regs[3] = 0x0
TRACE: regs[4] = 0x0
TRACE: regs[5] = 0x0
TRACE: regs[6] = 0x0
TRACE: regs[7] = 0x0
TRACE: regs[8] = 0x0
TRACE: regs[9] = 0x0
TRACE: regs[10] = 0x0
TRACE: regs[11] = 0x0
TRACE: regs[12] = 0x0
TRACE: regs[13] = 0x7000000
TRACE: regs[14] = 0x400008
TRACE: regs[15] = 0x400010
TRACE:------------------------------------------------------
TRACE: reg hash=0xdb3c14aa
TRACE: spsr=0x190
TRACE: pc = 0x400008, lr = 0x400008
TRACE: regs[0] = 0x400004
TRACE: regs[1] = 0x7000000
TRACE: regs[2] = 0x0
TRACE: regs[3] = 0x0
TRACE: regs[4] = 0x0
TRACE: regs[5] = 0x0
TRACE: regs[6] = 0x0
TRACE: regs[7] = 0x0
TRACE: regs[8] = 0x0
TRACE: regs[9] = 0x0
TRACE: regs[10] = 0x0
TRACE: regs[11] = 0x0
TRACE: regs[12] = 0x0
TRACE: regs[13] = 0x7000000
TRACE: regs[14] = 0x400008
TRACE: regs[15] = 0x400008
TRACE:------------------------------------------------------
TRACE: reg hash=0x79ec1afc
TRACE: spsr=0x190
TRACE: pc = 0x40000c, lr = 0x40000c
TRACE: regs[0] = 0xffffffff
TRACE: regs[1] = 0x7000000
TRACE: regs[2] = 0x0
TRACE: regs[3] = 0x0
TRACE: regs[4] = 0x0
TRACE: regs[5] = 0x0
TRACE: regs[6] = 0x0
TRACE: regs[7] = 0x0
TRACE: regs[8] = 0x0
TRACE: regs[9] = 0x0
TRACE: regs[10] = 0x0
TRACE: regs[11] = 0x0
TRACE: regs[12] = 0x0
TRACE: regs[13] = 0x7000000
TRACE: regs[14] = 0x400008
TRACE: regs[15] = 0x40000c
TRACE:------------------------------------------------------
0-test-nop.bin: sys_exit(-1): going to reboot
part=4
equiv values
TRACE:EQUIV: number instructions = 10
TRACE:EQUIV: reg hash = 0xc5b473a4
DONE!!!
0-test-exit.bin:
TRACE:EQUIV: number instructions = 10
TRACE:EQUIV: reg hash = 0x1ee2a76e
1-test-hello.bin:
TRACE:EQUIV: number instructions = 884
TRACE:EQUIV: reg hash = 0xe9817e19
3-test-vec.bin:
TRACE:EQUIV: number instructions = 194
TRACE:EQUIV: reg hash = 0x22edf8ea
From `trivial-user-level-prebuilt` (or if your compiler matches
our output: `trivial-user-level`):
# all tests
TESTS := $(wildcard ./[0-3]*-test-*.c)
# generate the .out's
% make emitall
# cksum everything.
% grep EQUIV *.out | sort -n | cksum
4147204067 500
--------------------------------------------------------------------
### Part 5: replace our `breakpoint.h` implementation
The code currently calls our single-step implementation (in
`single-step.o`). For this part, you should modify your debug hardware
code to support mismatching and implement the following routines:
***NOTE: these routines call `cp14_enable` if it hasn't been called
yet.***
// this will mismatch on the first instruction at user level.
void brkpt_mismatch_start(void);
// stop mismatching.
void brkpt_mismatch_stop(void);
// set a mismatch on <addr> --- we'll get a prefetch abort
// on any pc value that is not equal to <addr>
void brkpt_mismatch_set(uint32_t addr);
In a common pattern for equivalence checking: when you drop this in,
and rerun the hashes they should be the same.
--------------------------------------------------------------------
### Part 6: setup your code to use timer interrupts
Whether we have interrupts or not, user mode behavior should not change.
Check this by:
1. Add a part 5 that configures and uses timer interrupts (using code
similar to `6-interrupts`).
2. Rerun the hashes: none should change.
3. As you make the interrupts closer and closer, no hash should change.
--------------------------------------------------------------------
### Context switching: user-level context saving and restoring
This lab found an interesting bug in our old context switching code.
When doing single stepping, we can't simply do:
ldm r0, {r0-r15}^
to reload user state. If you look at the ARMv6 document, you'll see
`ldm` behaves differently depending on if you have the `pc` (`r15`)
as one of the registers.
<table><tr><td>
<img src="images/ldm-pc.png"/>
</td></tr></table>
This bug is a bit interesting in that we missed this nuance for the past
4-5 years but had managed to avoid getting bitten by it despite doing
context switching in different ways because of how it was implemented.
We didn't realize the bug in lab 10 because we had no other process
running.
In any case, first step is to fix your code to use `rfe` and (if you
want) `srs` to restore and save state. If you look at the `prelab-code`
directory you'll see an example that uses them. You'll want to look in
the ARMv6 manual to make sure you understand.
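As a hedged sketch (not the prelab's exact code), an `rfe`-based return consistent with the 17-word layout from part 4: user `r0`-`r14` first, then `pc` and `spsr`, which sit adjacent so `rfe` can load both at once:

```asm
@ assumes r0 = base of saved array: r0..r15 at offsets 0..60, spsr at 64
mov   sp, r0            @ exception-mode sp -> saved state
ldm   sp, {r0-r14}^     @ restore *user* r0-r14 (^ = user bank, no pc listed)
add   sp, sp, #60       @ point at the adjacent {pc, spsr} pair
rfe   sp                @ pc <- [sp], cpsr <- [sp, #4]: atomic return
```

Check the `rfe` and `ldm` (user registers) pages in the ARMv6 manual before trusting any of this; the `^` forms have restrictions on what you may touch immediately afterwards.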
What to do:
1. Copy your `10-process/code` into `15-integration/code` (we never
want to mutate a working system that we've already checked).
2. Make sure your tests run as before.
3. Rewrite the equivalence assembly to use the `rfe` instruction
at the end.
4. Make sure your tests still pass!
### A JSP webshell that wraps the BCEL class loader, which may bypass environments where the `loadClass` method is blocked
Generate the BCEL bytecode:
```
javac EvilMake.java Evil15.java
java EvilMake
```
Usage:
```
1. Put the jsp file in a directory the server can serve, e.g. Tomcat's webapps/ROOT
2. Open 15.jsp in a browser and pass the command to execute remotely via the cmd parameter, e.g. http://127.0.0.1:8080/15.jsp?cmd=whoami
3. The server executes the corresponding shell command and echoes the output
``` | 17.9375 | 76 | 0.783972 | yue_Hant | 0.730916 |
2cb50e8b2fd545ac5ae61c441005da7bdc4eb59c | 2,441 | md | Markdown | _posts/2018-11-15-logistic_regression.md | agdal1125/agdal1125.github.io | c03da19cdf05d0eb4e44645ab0b2b2185341b6d4 | [
"MIT"
] | 1 | 2019-10-29T05:30:17.000Z | 2019-10-29T05:30:17.000Z | _posts/2018-11-15-logistic_regression.md | agdal1125/agdal1125.github.io | c03da19cdf05d0eb4e44645ab0b2b2185341b6d4 | [
"MIT"
] | 2 | 2021-05-19T18:36:10.000Z | 2022-02-26T04:27:20.000Z | _posts/2018-11-15-logistic_regression.md | agdal1125/agdal1125.github.io | c03da19cdf05d0eb4e44645ab0b2b2185341b6d4 | [
"MIT"
] | null | null | null | ---
layout: post
title: Logistic Regression
tags:
- statistics
- classification
- supervised learning
- regression
mathjax: true
---
- Logistic Model is a statistical model that uses a logistic function to classify dependent variable.
- In general, (Binary) Logistic Regression is used for binary classification. (Pass/Fail, Alive/Dead, Win/Lose, etc...) If there are more numbers of dependent variables, you should look into **multinomial logistic regression**
- As its purpose is mainly in binary classification, the model is often used for supervised machine learning.
- The dependent variables are labeled as 0 or 1.
- If the probability of predicting dependent variable to 0 is above 50%, the dependent variable will be classified as 0. Else, it would be classified as 1.
The Logistic Model derived from linear regression. If we set y as the probability of predicting one of the two dependent variables, classification becomes easy.
If y > 0.5, it will be labeled as dependent variable A. If y < 0.5, it will be labeled as dependent variable B.
$$y = ax+b$$
However, as we can see from the linear regression model above, y and x can have infinite values. As the name suggests, the logistic model is derived from the linear equation by transforming it with __logit function__.
<br>
### Logit Function
Logit function, or log-odds is __the logarithm__ of the __odds__(relative probability the event will happen)
$$ odds : \frac{p}{1-p}$$
p is the probability that the event will happen. Thus, the logit function and inverse logit function is:
$$ logit(p)= ln\frac{p}{1-p}$$
### Logistic Function
Using the logit function, the linear model can be transformed as following equation:
$$ logit(p)= ln\frac{p}{1-p}= ax + b $$
<br>
$$ \frac{1-p}{p}= \frac{1}{e^{ax+b}} $$
<br>
$$ p= \frac{e^{ax+b}}{e^{ax+b}+1} $$
<br>
### How to Use it on Python
<br>
Let's get into some practical stuff now. The code for logistic regression is pretty short and simple. You just need training dataset and testing dataset to build a model.
```python
from sklearn.linear_model import LogisticRegression
# Calling the function and training data
lg = LogisticRegression() #make an instance of the Model
lg.fit(x_train, y_train) #fit your training dataset
# Making predictions and calculating accuracy
predictions = lg.predict(x_test) # Predictions
accr = lg.score(x_test, y_test) # Accuracy
print(predictions)
print(accr)
```
| 31.294872 | 226 | 0.744367 | eng_Latn | 0.995997 |
---
title: Research
permalink: wiki/Research/
layout: wiki
---
- [Research Organizations](/wiki/Research_Organizations "wikilink")
- [Research Parks](/wiki/Research_Parks "wikilink")
Title: HTML Form
Date: 2016-10-31 04:00:49
Tags: HTML, Form
# HTML forms
HTML forms are used to collect different kinds of user input.
***
# **form**
The form element defines an HTML form. It supports the global attributes and the event attributes.
<form action="action_page.py">
<fieldset>
<legend>Form information:</legend>
First name:<br>
<input type="text" name="firstname">
<br>
Last name:<br>
<input type="text" name="lastname">
</fieldset>
</form>
# The accept-charset attribute specifies the character encodings the server can handle for the form data.
# The action attribute specifies where to send the form data when the form is submitted.
<form action="action_page.py">
# The autocomplete attribute specifies whether autocomplete should be enabled for the form
    on/off
# The enctype attribute specifies how the form data should be encoded before it is sent
    application/x-www-form-urlencoded, the default; all values are encoded before sending.
    multipart/form-data, no encoding; required for forms that contain a file-upload control.
    text/plain, spaces are converted to "+" signs; special characters are not encoded.
# The method attribute specifies the HTTP method used to send the form data
    get/post
# The name attribute specifies the name of the form
# The novalidate attribute specifies that the form should not be validated when submitted
# The target attribute specifies where to open the URL.
    _blank/_self/_parent/_top
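As a side note (not part of the original article), the default `application/x-www-form-urlencoded` encoding described above can be reproduced with Python's standard library; the field values here are made up:

```python
from urllib.parse import urlencode

# A browser encodes form fields the same way under the default enctype:
payload = urlencode({"firstname": "Mickey", "lastname": "Mouse & Co"})
print(payload)  # firstname=Mickey&lastname=Mouse+%26+Co
```

Note how the space becomes `+` and `&` is percent-escaped, matching the behavior described for this enctype.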
***
# **input**
The input element is the most important form element. It supports the global attributes and the event attributes.
# The type attribute specifies the type of the input element
button
checkbox
file
hidden
image
password
radio
reset
submit
text
# The name attribute defines the name of the input element
# The value attribute defines the default value of the input element
readonly
disabled
size
maxlength
alt
accept
checked
src
autocomplete
autofocus
form
formaction
formenctype
formmethod
formnovalidate
formtarget
height
width
list
max
min
multiple
pattern
placeholder
required
step
***
# **fieldset**
The fieldset element groups related data in a form. It supports the global attributes and the event attributes
# The disabled attribute specifies that the fieldset should be disabled
# The form attribute specifies one or more forms that the fieldset belongs to.
# The name attribute specifies the name of the fieldset.
***
# **legend**
The legend element defines a caption for the fieldset element. It supports the global attributes, the event attributes, and styling.
***
# **select**
Defines a drop-down list. It supports the global attributes and the event attributes.
<form action="action_page.py">
<select name="cars">
<option value="volvo">volvo</option>
<option value="audi">audi</option>
</select>
</form>
# The autofocus attribute specifies that the drop-down list should automatically get focus when the page loads
# disabled
# form
# multiple
# name
# required
# size
***
# **optgroup**
***
# **option**
Defines an option. It supports the global attributes and the event attributes.
# disabled
# label
# selected
# value
***
# **label**
***
# **button**
Defines a clickable button. It supports the global attributes and the event attributes.
    <button type="button" onclick="alert('hello world')">Click Me</button>
# The name attribute specifies the name of the button
# The type attribute specifies the type of the button
# The value attribute specifies the initial value of the button
# autofocus
# disabled
# form
# formaction
# formenctype
# formmethod
# formnovalidate
# formtarget
***
# **textarea**
***
# **datalist**
***
# **keygen**
***
# **output**
| 13.089947 | 74 | 0.619644 | eng_Latn | 0.37294 |
2cb5bae566a392f9668ebb35b577d4a41c6a7303 | 7,964 | md | Markdown | articles/azure-resource-manager/resource-manager-request-limits.md | brentnewbury/azure-docs | 52da5a910db122fc92c877a6f62c54c32c7f3b31 | ["CC-BY-4.0", "MIT"] | 1 | 2019-06-06T00:12:00.000Z | 2019-06-06T00:12:00.000Z | articles/azure-resource-manager/resource-manager-request-limits.md | brentnewbury/azure-docs | 52da5a910db122fc92c877a6f62c54c32c7f3b31 | ["CC-BY-4.0", "MIT"] | null | null | null | articles/azure-resource-manager/resource-manager-request-limits.md | brentnewbury/azure-docs | 52da5a910db122fc92c877a6f62c54c32c7f3b31 | ["CC-BY-4.0", "MIT"] | null | null | null | ---
title: Request limits and throttling - Azure Resource Manager
description: Describes how to use throttling with Azure Resource Manager requests when subscription limits have been reached.
author: tfitzmac
ms.service: azure-resource-manager
ms.topic: conceptual
ms.date: 05/14/2019
ms.author: tomfitz
ms.custom: seodec18
---
# Throttling Resource Manager requests
For each Azure subscription and tenant, Resource Manager allows up to 12,000 read requests per hour and 1,200 write requests per hour. These limits are scoped to the principal ID making the requests and the subscription ID or tenant ID. If your requests come from more than one principal ID, your limit across the subscription or tenant is greater than 12,000 and 1,200 per hour.
Requests are applied to either your subscription or your tenant. Subscription requests are ones that involve passing your subscription ID, such as retrieving the resource groups in your subscription. Tenant requests don't include your subscription ID, such as retrieving valid Azure locations.
These limits apply to each Azure Resource Manager instance. There are multiple instances in every Azure region, and Azure Resource Manager is deployed to all Azure regions. So, in practice, limits are effectively much higher than these limits, as user requests are usually serviced by many different instances.
If your application or script reaches these limits, you need to throttle your requests. This article shows you how to determine the remaining requests you have before reaching the limit, and how to respond when you've reached the limit.
When you reach the limit, you receive the HTTP status code **429 Too many requests**.
Azure Resource Graph limits the number of requests to its operations. The steps in this article to determine the remaining requests and how to respond when the limit is reached also apply to Resource Graph. However, Resource Graph sets its own limit and reset rate. For more information, see [Throttle in Azure Resource Graph](../governance/resource-graph/overview.md#throttling).
## Remaining requests
You can determine the number of remaining requests by examining response headers. Read requests return a value in the header for the number of remaining read requests. Write requests include a value for the number of remaining write requests. The following table describes the response headers you can examine for those values:
| Response header | Description |
| --- | --- |
| x-ms-ratelimit-remaining-subscription-reads |Subscription scoped reads remaining. This value is returned on read operations. |
| x-ms-ratelimit-remaining-subscription-writes |Subscription scoped writes remaining. This value is returned on write operations. |
| x-ms-ratelimit-remaining-tenant-reads |Tenant scoped reads remaining |
| x-ms-ratelimit-remaining-tenant-writes |Tenant scoped writes remaining |
| x-ms-ratelimit-remaining-subscription-resource-requests |Subscription scoped resource type requests remaining.<br /><br />This header value is only returned if a service has overridden the default limit. Resource Manager adds this value instead of the subscription reads or writes. |
| x-ms-ratelimit-remaining-subscription-resource-entities-read |Subscription scoped resource type collection requests remaining.<br /><br />This header value is only returned if a service has overridden the default limit. This value provides the number of remaining collection requests (list resources). |
| x-ms-ratelimit-remaining-tenant-resource-requests |Tenant scoped resource type requests remaining.<br /><br />This header is only added for requests at tenant level, and only if a service has overridden the default limit. Resource Manager adds this value instead of the tenant reads or writes. |
| x-ms-ratelimit-remaining-tenant-resource-entities-read |Tenant scoped resource type collection requests remaining.<br /><br />This header is only added for requests at tenant level, and only if a service has overridden the default limit. |
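As an illustrative sketch (not from the original article), the counters in the table above can be collected from any response's header mapping, for example in Python:

```python
def remaining_requests(headers):
    """Collect the Resource Manager rate-limit counters present in a response."""
    keys = (
        "x-ms-ratelimit-remaining-subscription-reads",
        "x-ms-ratelimit-remaining-subscription-writes",
        "x-ms-ratelimit-remaining-tenant-reads",
        "x-ms-ratelimit-remaining-tenant-writes",
    )
    return {k: int(headers[k]) for k in keys if k in headers}

sample = {"x-ms-ratelimit-remaining-subscription-reads": "11999"}
print(remaining_requests(sample))  # {'x-ms-ratelimit-remaining-subscription-reads': 11999}
```

The same dictionary-style lookup works with any HTTP client that exposes response headers as a mapping.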
## Retrieving the header values
Retrieving these header values in your code or script is no different than retrieving any header value.
For example, in **C#**, you retrieve the header value from an **HttpWebResponse** object named **response** with the following code:
```cs
response.Headers.GetValues("x-ms-ratelimit-remaining-subscription-reads").GetValue(0)
```
In **PowerShell**, you retrieve the header value from an Invoke-WebRequest operation.
```powershell
$r = Invoke-WebRequest -Uri https://management.azure.com/subscriptions/{guid}/resourcegroups?api-version=2016-09-01 -Method GET -Headers $authHeaders
$r.Headers["x-ms-ratelimit-remaining-subscription-reads"]
```
For a complete PowerShell example, see [Check Resource Manager Limits for a Subscription](https://github.com/Microsoft/csa-misc-utils/tree/master/psh-GetArmLimitsViaAPI).
If you want to see the remaining requests for debugging, you can provide the **-Debug** parameter on your **PowerShell** cmdlet.
```powershell
Get-AzResourceGroup -Debug
```
Which returns many values, including the following response value:
```powershell
DEBUG: ============================ HTTP RESPONSE ============================
Status Code:
OK
Headers:
Pragma : no-cache
x-ms-ratelimit-remaining-subscription-reads: 11999
```
To get write limits, use a write operation:
```powershell
New-AzResourceGroup -Name myresourcegroup -Location westus -Debug
```
Which returns many values, including the following values:
```powershell
DEBUG: ============================ HTTP RESPONSE ============================
Status Code:
Created
Headers:
Pragma : no-cache
x-ms-ratelimit-remaining-subscription-writes: 1199
```
In **Azure CLI**, you retrieve the header value by using the more verbose option.
```azurecli
az group list --verbose --debug
```
Which returns many values, including the following values:
```azurecli
msrest.http_logger : Response status: 200
msrest.http_logger : Response headers:
msrest.http_logger : 'Cache-Control': 'no-cache'
msrest.http_logger : 'Pragma': 'no-cache'
msrest.http_logger : 'Content-Type': 'application/json; charset=utf-8'
msrest.http_logger : 'Content-Encoding': 'gzip'
msrest.http_logger : 'Expires': '-1'
msrest.http_logger : 'Vary': 'Accept-Encoding'
msrest.http_logger : 'x-ms-ratelimit-remaining-subscription-reads': '11998'
```
To get write limits, use a write operation:
```azurecli
az group create -n myresourcegroup --location westus --verbose --debug
```
Which returns many values, including the following values:
```azurecli
msrest.http_logger : Response status: 201
msrest.http_logger : Response headers:
msrest.http_logger : 'Cache-Control': 'no-cache'
msrest.http_logger : 'Pragma': 'no-cache'
msrest.http_logger : 'Content-Length': '163'
msrest.http_logger : 'Content-Type': 'application/json; charset=utf-8'
msrest.http_logger : 'Expires': '-1'
msrest.http_logger : 'x-ms-ratelimit-remaining-subscription-writes': '1199'
```
## Waiting before sending next request
When you reach the request limit, Resource Manager returns the **429** HTTP status code and a **Retry-After** value in the header. The **Retry-After** value specifies the number of seconds your application should wait (or sleep) before sending the next request. If you send a request before the retry value has elapsed, your request isn't processed and a new retry value is returned.
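A minimal client-side sketch of that behavior (illustrative Python, not part of the article; the helper name is ours):

```python
import time

def send_with_retry(send, max_attempts=5):
    """Call send() until it succeeds, sleeping Retry-After seconds on 429."""
    for _ in range(max_attempts):
        status, headers, body = send()
        if status != 429:
            return status, body
        time.sleep(float(headers.get("Retry-After", 1)))
    raise RuntimeError("still throttled after %d attempts" % max_attempts)

# Fake sender: throttled once (Retry-After: 0), then succeeds.
responses = iter([(429, {"Retry-After": "0"}, None), (200, {}, "ok")])
print(send_with_retry(lambda: next(responses)))  # (200, 'ok')
```

Sleeping for the full `Retry-After` value before retrying avoids the case described above, where an early retry is rejected and a new retry value is returned.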
## Next steps
* For a complete PowerShell example, see [Check Resource Manager Limits for a Subscription](https://github.com/Microsoft/csa-misc-utils/tree/master/psh-GetArmLimitsViaAPI).
* For more information about limits and quotas, see [Azure subscription and service limits, quotas, and constraints](../azure-subscription-service-limits.md).
* To learn about handling asynchronous REST requests, see [Track asynchronous Azure operations](resource-manager-async-operations.md).
| 56.084507 | 383 | 0.766323 | eng_Latn | 0.970334 |
2cb5e376ea464e6227cf22df8148d85355cd8e4f | 1,643 | md | Markdown | radicals-font/README.md | jjesus-dev/kanji-data-media | 9dcf86cc6f45a50209424ff02f3bb9deb45f72d4 | ["Apache-2.0", "CC-BY-4.0"] | null | null | null | radicals-font/README.md | jjesus-dev/kanji-data-media | 9dcf86cc6f45a50209424ff02f3bb9deb45f72d4 | ["Apache-2.0", "CC-BY-4.0"] | null | null | null | radicals-font/README.md | jjesus-dev/kanji-data-media | 9dcf86cc6f45a50209424ff02f3bb9deb45f72d4 | ["Apache-2.0", "CC-BY-4.0"] | null | null | null | Japanese Radicals font
===========
Derived from [Source Han Sans][1], _Japanese Radicals_ is a small (54KB), custom font (_JapaneseRadicals-Regular.otf_) with full support all Japanese radicals and their variants (321 characters). EOT, TTF, SVG and WOFF versions of the font can be found in the webfonts sub-directory.
To improve compatibility with legacy encodings in existing files, _Japanese Radicals_ includes all the glyphs in the Unicode Kangxi Radicals (U+2F00..U+2FDF) and CJK Radicals Supplement (U+2E80..U+2EFF) character ranges, as well as a small number of Han ideographs. The remaining 60 radical glyphs not available in Unicode were encoded in the Private Use Area range U+E700..U+E759.
Also included is a PDF file which lists the radicals present in the font, their encodings, stroke numbers, meanings, readings and positions (a CSV version is included in the language-data directory on this repository). A PNG image illustrates the 60 custom glyphs added by us. The _Japanese Radicals_ font can be [viewed in use on the _Kanji alive_ website][2].
Unlike the other media files in this repository which are licensed under a [Creative Commons CC-BY 4.0][3] license, _Japanese Radicals_ is freely available for private or commercial use under an [Apache 2.0][4] license granted by Adobe Systems Inc. Please ensure that the original license file is always included with the font should you wish to redistribute it.
[1]: https://github.com/adobe-fonts/source-han-sans
[2]: http://kanjialive.com/214-traditional-kanji-radicals/
[3]: http://creativecommons.org/licenses/by/4.0/
[4]: http://www.apache.org/licenses/LICENSE-2.0.html
| 109.533333 | 380 | 0.783932 | eng_Latn | 0.995123 |
2cb67502c12a8dc06857d07e17c799a967c3340c | 1,599 | md | Markdown | docs/code-quality/ca1722-identifiers-should-not-have-incorrect-prefix.md | Kaunaj/visualstudio-docs | 47ed61d95acbb33fbdfa8ed43934cbdb451ad97c | ["CC-BY-4.0", "MIT"] | null | null | null | docs/code-quality/ca1722-identifiers-should-not-have-incorrect-prefix.md | Kaunaj/visualstudio-docs | 47ed61d95acbb33fbdfa8ed43934cbdb451ad97c | ["CC-BY-4.0", "MIT"] | null | null | null | docs/code-quality/ca1722-identifiers-should-not-have-incorrect-prefix.md | Kaunaj/visualstudio-docs | 47ed61d95acbb33fbdfa8ed43934cbdb451ad97c | ["CC-BY-4.0", "MIT"] | null | null | null | ---
title: "CA1722: Identifiers should not have incorrect prefix"
ms.date: 11/04/2016
ms.prod: visual-studio-dev15
ms.technology: vs-ide-code-analysis
ms.topic: reference
f1_keywords:
- "IdentifiersShouldNotHaveIncorrectPrefix"
- "CA1722"
helpviewer_keywords:
- "CA1722"
- "IdentifiersShouldNotHaveIncorrectPrefix"
ms.assetid: c3313c51-d004-4f9a-a0d1-6c4c4a1fb1e6
author: gewarren
ms.author: gewarren
manager: douge
ms.workload:
- "multiple"
---
# CA1722: Identifiers should not have incorrect prefix
|||
|-|-|
|TypeName|IdentifiersShouldNotHaveIncorrectPrefix|
|CheckId|CA1722|
|Category|Microsoft.Naming|
|Breaking Change|Breaking|
## Cause
An identifier has an incorrect prefix.
## Rule description
By convention, only certain programming elements have names that begin with a specific prefix.
Type names do not have a specific prefix and should not be prefixed with a 'C'. This rule reports violations for type names such as 'CMyClass' and does not report violations for type names such as 'Cache'.
Naming conventions provide a common look for libraries that target the common language runtime. This consistency reduces the learning curve that's required for new software libraries and increases customer confidence that the library was developed by someone who has expertise in developing managed code.
## How to fix violations
Remove the prefix from the identifier.
## When to suppress warnings
Do not suppress a warning from this rule.
## Related rules
[CA1715: Identifiers should have correct prefix](../code-quality/ca1715-identifiers-should-have-correct-prefix.md) | 34.76087 | 305 | 0.789869 | eng_Latn | 0.9899 |
2cb6a23a5c074c3a5756551fca4362fff9aea945 | 900 | md | Markdown | docs/themes/README.md | 54dxs/gbook | ddfc7436909fcc3cbf8bf0da60c207f8cf823c0d | [
"Apache-2.0"
] | null | null | null | docs/themes/README.md | 54dxs/gbook | ddfc7436909fcc3cbf8bf0da60c207f8cf823c0d | [
"Apache-2.0"
] | 1 | 2020-07-07T20:56:31.000Z | 2020-07-07T20:56:31.000Z | docs/themes/README.md | 54dxs/gbook | ddfc7436909fcc3cbf8bf0da60c207f8cf823c0d | [
"Apache-2.0"
] | null | null | null | # 主题
从3.0.0版本开始,GBook可以很容易地设置主题。默认情况下,书籍使用[主题默认值](https://github.com/54dxs/gbook-plugin-theme-default)主题。
> **注意**:自定义主题可能会阻止某些插件正常工作。
### 主题的结构
主题是包含模板和资源的插件。重写任何单个模板都是可选的,因为主题总是扩展默认主题。
| 文件夹 | 说明 |
| -------- | ----------- |
| `_layouts` | 包含所有模板的主文件夹 |
| `_layouts/website/page.html` | 普通页面模板 |
| `_layouts/ebook/page.html` | 生成电子书期间正常页面的模板 (PDF< ePub, Mobi) |
### 在书中扩展/自定义主题
作者可以直接从书的源代码扩展主题的模板(无需创建外部主题)。模板将首先在书的`_layouts`文件夹中解析,然后在已安装的插件/主题中解析。
### Extend instead of Forking
如果要使主题更改对多本书可用,而不是派生默认主题,则可以使用[模板语法](../templating/README.md)对其进行扩展:
```html
{% extends template.self %}
{% block body %}
{{ super() }}
... This will be added to the "body" block
{% endblock %}
```
查看[API](https://github.com/54dxs/theme-api)主题以获得更完整的示例。
### 发布主题
主题以带有 `theme-` 前缀的插件([参见相关文档](../plugins/README.md))形式发布。例如,主题`awesome`将从`theme-awesome`插件加载,然后从`gbook-plugin-theme-awesome`NPM包加载。
| 22.5 | 131 | 0.686667 | yue_Hant | 0.876692 |
2cb817e80922a46bc94e618fa642c8867e189928 | 38 | md | Markdown | README.md | goatzin/web-sever | f349dc5e5d727d193826bd2f35a27f2fe0ccb0b2 | ["MIT"] | null | null | null | README.md | goatzin/web-sever | f349dc5e5d727d193826bd2f35a27f2fe0ccb0b2 | ["MIT"] | null | null | null | README.md | goatzin/web-sever | f349dc5e5d727d193826bd2f35a27f2fe0ccb0b2 | ["MIT"] | null | null | null | # web-sever
A web server in Rust O_o
| 12.666667 | 25 | 0.710526 | eng_Latn | 0.922966 |
2cb81e5272a7cf678512ac805b939ada9eabeca0 | 2,811 | md | Markdown | windows-driver-docs-pr/display/displayconfiggetdeviceinfo-summary-and-scenarios.md | scottnoone/windows-driver-docs | 0d67834ab63cf2a8993bccdea23d1b0186a4aec6 | ["CC-BY-4.0", "MIT"] | 1 | 2021-11-30T20:31:06.000Z | 2021-11-30T20:31:06.000Z | windows-driver-docs-pr/display/displayconfiggetdeviceinfo-summary-and-scenarios.md | scottnoone/windows-driver-docs | 0d67834ab63cf2a8993bccdea23d1b0186a4aec6 | ["CC-BY-4.0", "MIT"] | null | null | null | windows-driver-docs-pr/display/displayconfiggetdeviceinfo-summary-and-scenarios.md | scottnoone/windows-driver-docs | 0d67834ab63cf2a8993bccdea23d1b0186a4aec6 | ["CC-BY-4.0", "MIT"] | null | null | null | ---
title: DisplayConfigGetDeviceInfo Summary and Scenarios
description: DisplayConfigGetDeviceInfo Summary and Scenarios
ms.assetid: 19d9a77c-252e-4623-b4bc-f0b990ed31e2
keywords:
- connecting displays WDK Windows 7 display , CCD APIs, DisplayConfigGetDeviceInfo
- connecting displays WDK Windows Server 2008 R2 display , CCD APIs, DisplayConfigGetDeviceInfo
- configuring displays WDK Windows 7 display , CCD APIs, DisplayConfigGetDeviceInfo
- configuring displays WDK Windows Server 2008 R2 display , CCD APIs, DisplayConfigGetDeviceInfo
- CCD concepts WDK Windows 7 display , DisplayConfigGetDeviceInfo
- CCD concepts WDK Windows Server 2008 R2 display , DisplayConfigGetDeviceInfo
- DisplayConfigGetDeviceInfo WDK Windows 7 display
- DisplayConfigGetDeviceInfo WDK Windows Server 2008 R2 display
ms.date: 04/20/2017
ms.localizationpriority: medium
---
# DisplayConfigGetDeviceInfo Summary and Scenarios
This section applies only to Windows 7 and later, and Windows Server 2008 R2 and later versions of the Windows operating system.
The following sections summarize how a caller uses the [**DisplayConfigGetDeviceInfo**](/windows/desktop/api/winuser/nf-winuser-displayconfiggetdeviceinfo) CCD function and provide scenarios for using **DisplayConfigGetDeviceInfo**.
### <span id="displayconfiggetdeviceinfo_summary"></span><span id="DISPLAYCONFIGGETDEVICEINFO_SUMMARY"></span>DisplayConfigGetDeviceInfo Summary
The caller can use [**DisplayConfigGetDeviceInfo**](/windows/desktop/api/winuser/nf-winuser-displayconfiggetdeviceinfo) to obtain more friendly names to display in the user interface. The caller can obtain names for the adapter, the source, and the target. The caller can also use **DisplayConfigGetDeviceInfo** to obtain the native resolution of the connected display device.
### <span id="displayconfiggetdeviceinfo_scenarios"></span><span id="DISPLAYCONFIGGETDEVICEINFO_SCENARIOS"></span>DisplayConfigGetDeviceInfo Scenarios
[**DisplayConfigGetDeviceInfo**](/windows/desktop/api/winuser/nf-winuser-displayconfiggetdeviceinfo) is called in the following scenarios:
- The display control panel applet calls [**DisplayConfigGetDeviceInfo**](/windows/desktop/api/winuser/nf-winuser-displayconfiggetdeviceinfo) to obtain the monitor name to display in the drop-down menu that lists all the connected monitors.
- The display control panel applet calls [**DisplayConfigGetDeviceInfo**](/windows/desktop/api/winuser/nf-winuser-displayconfiggetdeviceinfo) to obtain the name of the adapters that are connected to the system.
- The display control panel applet calls [**DisplayConfigGetDeviceInfo**](/windows/desktop/api/winuser/nf-winuser-displayconfiggetdeviceinfo) to obtain the native resolution of each connected monitor so the resolution can be highlighted in the user interface.
| 68.560976 | 376 | 0.824618 | eng_Latn | 0.470549 |
2cb9b26803d6572737df1406f5d6ab6bd0ce9b4c | 31 | md | Markdown | README.md | voky/OWL.Kultur-Test | 9e80c2918e6425a848db2ed6dabbd85a38a84f47 | ["MIT"] | null | null | null | README.md | voky/OWL.Kultur-Test | 9e80c2918e6425a848db2ed6dabbd85a38a84f47 | ["MIT"] | null | null | null | README.md | voky/OWL.Kultur-Test | 9e80c2918e6425a848db2ed6dabbd85a38a84f47 | ["MIT"] | null | null | null | # OWL.Kultur-Test
Test Project
| 10.333333 | 17 | 0.774194 | ron_Latn | 0.17813 |
2cb9ebb03911e7f32fe16c8a223ef90558f3cc54 | 3,912 | md | Markdown | resources/doc/7-multiple-boards.md | romgere/romgere_cockpit | 40ea163ebe1d82569b2f967ff54c7099b378b8c0 | ["MIT"] | 1 | 2021-09-21T19:01:42.000Z | 2021-09-21T19:01:42.000Z | resources/doc/7-multiple-boards.md | romgere/romgere_cockpit | 40ea163ebe1d82569b2f967ff54c7099b378b8c0 | ["MIT"] | 11 | 2019-05-06T02:44:49.000Z | 2022-02-26T20:36:15.000Z | resources/doc/7-multiple-boards.md | romgere/romgere_cockpit | 40ea163ebe1d82569b2f967ff54c7099b378b8c0 | ["MIT"] | null | null | null | # Multiple Arduino board
The library can run on multiple Arduino boards with only one Ethernet shield; the other boards are connected over the I2C protocol. The main board (with the Ethernet shield), aka the "master", receives data from and sends commands to X-Plane; the other boards, aka "slaves", are used only to read/write on input/output pins.
All control/command registration is done on the master board. There is no input/output configuration on a slave board; you only need to run the `SlaveBoardApplication` application there.
For more information about how master/slave communication works, you can read the Arduino "[Master Writer/Slave Receiver](https://www.arduino.cc/en/Tutorial/MasterWriter)" page.
## Active multiple board mode
To activate multiple-board support, you need to enable the [ACTIVE_MULTI_ARDUINO_BOARD_MODE](./1-configuration-reference.md#ACTIVE_MULTI_ARDUINO_BOARD_MODE) parameter when you compile the library.
Please see the [configuration reference page](./1-configuration-reference.md) for more information.
## Declare controls on slave board
To declare a control on a slave board, you have to pass an additional parameter as the last parameter of the control's constructor. This parameter is the address of the slave Arduino board on the I2C bus.
For example, to declare an LED control on PIN n°8 of slave board n°1, just add the board address as the last parameter, like this: `new ArduinoLEDControl(8, 1)`.
# Arduino connection
Board view | Sketch view
---------- | -----------
 | 
# Code sample "Master Board"
```cpp
#include <Arduino.h>
#include "src/RomgereCockpit/Application/CockpitMainApplication.h"
#include "src/RomgereCockpit/CommunicationInterface/EthernetInterface.h"
#include "src/RomgereCockpit/ArduinoControl/ArduinoToggleSwitchControl.h"
#include "src/RomgereCockpit/ArduinoControl/ArduinoLEDControl.h"
#include "src/RomgereCockpit/ArduinoControl/ArduinoPushButtonControl.h"
CockpitMainApplication *cockpitApp;
EthernetInterface *ethernetInterface;
void setup()
{
//Create & start Ethernet interface + Create app with our Ethernet interface
byte arduinoMAC[6] = { 0xDE, 0xAD, 0xBE, 0xEF, 0xEA, 0xED };
ethernetInterface = new EthernetInterface( 49001, 49000, { 192, 168, 1, 97 }, arduinoMAC, { 192, 168, 1, 21 });
cockpitApp = new CockpitMainApplication( ethernetInterface);
//Declare and bind control - MASTER BOARD
cockpitApp->RegisterInputControl(
new ArduinoPushButtonControl(8), //Create push button on PIN 8 on Master board
new XPlaneSimpleCommand("sim/annunciator/test_all_annunciators") //Send "Test all annunciators" command to X-Plane
);
//Declare and bind control - SLAVE BOARD
cockpitApp->RegisterOutputControl(
new ArduinoLEDControl(8, 1), //Create LED Control on PIN n°8 on Slave board n°1
new XPlaneInputData(67, 0)
);
//Declare and bind control - SLAVE BOARD
cockpitApp->RegisterInputControl(
new ArduinoToggleSwitchControl(9, 1), //Create toggle switch Control on PIN n°9 on Slave board n°1
new XPlaneSimpleCommand("sim/systems/avionics_on"),
new XPlaneSimpleCommand("sim/systems/avionics_off")
);
}
void loop()
{
cockpitApp->Loop();
}
```
*This sample comes from the [Master.ino](https://github.com/romgere/romgere_cockpit/blob/master/example/MultipleBoard/Master/Master.ino) file*
# Code sample "Slave Board"
```cpp
#include <Arduino.h>
#include "src/RomgereCockpit/Application/SlaveBoardApplication.h"
SlaveBoardApplication* slaveCockpit;
void setup()
{
slaveCockpit = new SlaveBoardApplication(1); //Declare our application on address 1
slaveCockpit->RegisterI2C(); //Register board on I2C bus
}
void loop()
{
delay(100);
slaveCockpit->loop();
}
```
*This sample comes from the [Slave.ino](https://github.com/romgere/romgere_cockpit/blob/master/example/MultipleBoard/Master/Slave.ino) file*
| 40.75 | 290 | 0.766871 | eng_Latn | 0.587765 |
2cba6b3984173169aec6eb5546f1bd21285c9ae7 | 722 | md | Markdown | README.md | julien-amar/space-shooter | 21105d4484f49d5948660b1a720abd9f2aa9671d | ["Apache-2.0"] | 5 | 2016-06-02T13:31:28.000Z | 2020-09-10T08:13:46.000Z | README.md | julien-amar/space-shooter | 21105d4484f49d5948660b1a720abd9f2aa9671d | ["Apache-2.0"] | null | null | null | README.md | julien-amar/space-shooter | 21105d4484f49d5948660b1a720abd9f2aa9671d | ["Apache-2.0"] | 2 | 2020-07-05T02:47:35.000Z | 2022-01-04T03:40:00.000Z | # space-shooter
Unreal Engine 4 Space Shooter via the Blueprint system

Following [this YouTube tutorial](https://www.youtube.com/playlist?list=PLwmGmCVti_dBUu-57WkLips2kq2bT_4wO) made by **Strigifo**.
# Features
* Controllers (using keyboard or other ones)
* Two types of enemies (with specific movement patterns)
* Enemies spawning at random positions
* Spawn animation
* Primary & alternative projectile
* Collision (with environment & enemies)
* Texturing
* Sound
* Infinite scrolling background
* Camera shaking & player bouncing on hit
* Scoring (with persistence)
* Particle system
| 32.818182 | 129 | 0.777008 | eng_Latn | 0.807414 |
2cbac9b15e2bf439125c339dc04e74eed2c39ef9 | 1,872 | md | Markdown | Documentation/Release-Notes/MaxScale-2.4.14-Release-Notes.md | mariadb-ThienLy/MaxScale | 0ba6fb79b930ba90c544594e3580fc46054f6666 | ["MIT"] | null | null | null | Documentation/Release-Notes/MaxScale-2.4.14-Release-Notes.md | mariadb-ThienLy/MaxScale | 0ba6fb79b930ba90c544594e3580fc46054f6666 | ["MIT"] | null | null | null | Documentation/Release-Notes/MaxScale-2.4.14-Release-Notes.md | mariadb-ThienLy/MaxScale | 0ba6fb79b930ba90c544594e3580fc46054f6666 | ["MIT"] | null | null | null | # MariaDB MaxScale 2.4.14 Release Notes -- 2020-11-25
Release 2.4.14 is a GA release.
This document describes the changes in release 2.4.14, when compared to the
previous release in the same series.
For any problems you encounter, please consider submitting a bug
report on [our Jira](https://jira.mariadb.org/projects/MXS).
**NOTE** 2.4.14 is the last release that is made available for RHEL6, which reaches its EOL at the end of November.
## Bug fixes
* [MXS-3297](https://jira.mariadb.org/browse/MXS-3297) Extended MariaDB capabilities are not read correctly
* [MXS-3295](https://jira.mariadb.org/browse/MXS-3295) Layout of classify REST API endpoint stores non-parameter data in parameters object
* [MXS-3293](https://jira.mariadb.org/browse/MXS-3293) Backticks not stripped in USE statements.
* [MXS-3273](https://jira.mariadb.org/browse/MXS-3273) Connection lost when unrelated server loses Slave status
* [MXS-3272](https://jira.mariadb.org/browse/MXS-3272) maxctrl not prompt directy for the password
* [MXS-3240](https://jira.mariadb.org/browse/MXS-3240) Uom variable from maxscale api /maxscale/threads
## Known Issues and Limitations
There are some limitations and known issues within this version of MaxScale.
For more information, please refer to the [Limitations](../About/Limitations.md) document.
## Packaging
RPM and Debian packages are provided for the supported Linux distributions.
Packages can be downloaded [here](https://mariadb.com/downloads/#mariadb_platform-mariadb_maxscale).
## Source Code
The source code of MaxScale is tagged at GitHub with a tag, which is identical
with the version of MaxScale. For instance, the tag of version X.Y.Z of MaxScale
is `maxscale-X.Y.Z`. Further, the default branch is always the latest GA version
of MaxScale.
The source code is available [here](https://github.com/mariadb-corporation/MaxScale).
| 45.658537 | 138 | 0.775641 | eng_Latn | 0.970166 |
2cbc235150698b92f286fd46bdbfefc54d29d0f6 | 81 | md | Markdown | README.md | timretout/bedford-bins | f4a187f3a1cc07c0eeb52e83de01ce80be635a43 | ["MIT"] | 1 | 2020-12-28T09:31:40.000Z | 2020-12-28T09:31:40.000Z | README.md | timretout/bedford-bins | f4a187f3a1cc07c0eeb52e83de01ce80be635a43 | ["MIT"] | null | null | null | README.md | timretout/bedford-bins | f4a187f3a1cc07c0eeb52e83de01ce80be635a43 | ["MIT"] | null | null | null | # bedford-bins
Golang client for Bedford Borough Council bin collections service
| 27 | 65 | 0.839506 | eng_Latn | 0.850765 |
2cbc893adb2bc28fdadf286f6c44ba11000c46e1 | 3,502 | md | Markdown | README.md | lalagreen/btc-payments-master | ab63bf14033641a4616af603125a902d90350780 | ["MIT"] | 5 | 2015-11-17T13:07:28.000Z | 2020-04-26T23:34:59.000Z | README.md | lalagreen/btc-payments-master | ab63bf14033641a4616af603125a902d90350780 | ["MIT"] | null | null | null | README.md | lalagreen/btc-payments-master | ab63bf14033641a4616af603125a902d90350780 | ["MIT"] | 2 | 2017-02-02T09:54:43.000Z | 2020-04-26T23:35:03.000Z | # Btc-Payments
An NPM module to easily configure and integrate a BTC payments processor into Node.js. Because it uses hierarchical deterministic addresses, you will receive all your payments on a single address. You also don't need the Bitcoin blockchain to push transactions, so it's very lightweight, and you can use the testnet network for testing.
## Install
1. Run:
```
npm install btc-payments
```
2. Create a config.js to run the processor with this format:
```
{
logLevel : 'debug', // none, normal, debug
dbURI : 'mongodb://USER:PASS@IP:PORT/DBNAME', //URI to use to connect to db
network : 'testnet', // testnet or livenet
seedBytes : "your secret string to recover all your balances", // String of the seed master key
btcMainAddress : "YOUR_BTC_MAIN_ADDRESS", // Address to receive the payments
paymentTimeout : 120, // The amount of time in minutes that the user have to make the payment
limitBalance : 0.005, //The max balance that your waiting addresses can have
txFee : 0.0001, // The fee amount to use in your transactions to teh BTC main address
    functionTimeout : 10, // The amount of time in seconds that you want to wait between processor updates
    warningTimeout : 10 // When a waiting payment has this amount of minutes left, a function gets executed
}
```
3. Create the processor object:
```
BTCPayments = new require('btc-payments')(btcPaymentsConfig,[],[],[],[]);
```
4. Add the onCreate, onComplete, onWarning and onCancel payment functions:
```
BTCPayments.addOnCreate('Test',function(payment,callback){
console.log('Test payment type created');
console.log(payment.toString());
callback(null,'Success');
});
BTCPayments.addOnComplete('Test',function(payment,callback){
console.log('Test payment type completed');
console.log(payment.toString());
callback(null,'Success');
});
BTCPayments.addOnWarning('Test',function(payment,callback){
console.log('Test payment type warned');
console.log(payment.toString());
callback(null,'Success');
});
BTCPayments.addOnCancel('Test',function(payment,callback){
console.log('Test payment type canceled');
console.log(payment.toString());
callback(null,'Success');
});
```
5. Start the processor:
```
BTCPayments.start();
```
## Update Steps
1. Get all the addresses in the addressesPool that are waiting to receive a payment.
2. Get the paymentWaiting that the address is waiting for.
3. Get all the utxos (unspent transaction outputs) of the address.
4. Check all the utxos and compute the total balance of the address.
5. Four possible cases:
    * The address balance is the same and it didn't receive any BTC, or not enough BTC to complete the payment: finish.
    * The address reaches the warning timeout and the payment executes the warning function.
    * The address reaches the waiting timeout and the payment gets canceled: finish.
    * The address balance is enough to complete the payment: go to step 6.
6. If the address balance is higher than the minimum balance that every address can have, send the BTC to the main address using all the utxos; if not, the address finishes waiting and is free to be used for another payment.
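The decision in steps 5 and 6 can be sketched as a small pure function. This is illustrative only; the function name and the shape of the `state` object are assumptions, not the module's actual internals:

```javascript
// Illustrative sketch of the per-address decision on each processor update.
// All names here are assumptions made for the example.
function decideAction(state) {
  const { balance, amountDue, minutesLeft, warningTimeout, limitBalance } = state;
  if (balance >= amountDue) {
    // Payment complete: forward funds only if above the pool's minimum balance.
    return balance > limitBalance ? 'complete-and-forward' : 'complete';
  }
  if (minutesLeft <= 0) return 'cancel';            // waiting timeout reached
  if (minutesLeft <= warningTimeout) return 'warn'; // warning window reached
  return 'wait';                                    // keep waiting
}
```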
## TO DO
- [x] Add onPaymentCanceled functions.
- [x] Add onPaymentCreated functions.
- [x] Add editOnComplete and editOnCancel functions.
- [x] Write basic tests.
- [x] Stop gracefully.
- [x] Added from address.
- [x] Add warningTimeout functions.
- [ ] Tests with real data.
- [ ] Pause and start processor.
- [ ] Better error handling.
- [ ] Better documentation.
- [ ] improve performance.
| 42.707317 | 322 | 0.737579 | eng_Latn | 0.977444 |
2cbcc35bfcec87a85442b2d07e0f25668630139c | 5,976 | md | Markdown | Contents/07.Tree/01.Binary-Tree/01.Binary-Tree-Basic.md | AlgorithmAndLeetCode/itcharge-LeetCode-Py | 2266b6e9add5bd306a1e1eb59d54e9447e641fd3 | [
"MIT"
] | 1 | 2022-03-31T01:51:18.000Z | 2022-03-31T01:51:18.000Z | Contents/07.Tree/01.Binary-Tree/01.Binary-Tree-Basic.md | AlgorithmAndLeetCode/itcharge-LeetCode-Py | 2266b6e9add5bd306a1e1eb59d54e9447e641fd3 | [
"MIT"
] | null | null | null | Contents/07.Tree/01.Binary-Tree/01.Binary-Tree-Basic.md | AlgorithmAndLeetCode/itcharge-LeetCode-Py | 2266b6e9add5bd306a1e1eb59d54e9447e641fd3 | [
"MIT"
] | null | null | null | ## 1. Introduction to Trees
### 1.1 Definition of a Tree
> **Tree**: a finite set made up of $n \ge 0$ nodes and the relationships between them. When $n = 0$ it is called an empty tree; when $n > 0$ it is a non-empty tree.
This data structure is called a "tree" because it looks like an upside-down tree. In other words, the "tree" in data structures has its root at the top and its leaves at the bottom, as shown below.

A tree has the following properties:
- There is exactly one node with no predecessor; it is called the **root** of the tree.
- Every node other than the root has exactly one direct predecessor.
- Every node, including the root, may have several successors.
- When $n > 1$, the nodes other than the root can be partitioned into $m (m > 0)$ disjoint finite sets $T_1, T_2, ..., T_m$, where each set is itself a tree, called a **subtree** of the root.
As shown below, the red node `A` is the root. Besides the root, there are `3` disjoint subtrees $T_1(B, E, H, I, G)$, $T_2(C)$, $T_3(D, F, G, K)$.

### 1.2 Tree Terminology
Let us introduce some basic terminology for tree structures.
#### 1.2.1 Classifying Nodes
A **tree node** consists of a data element and a number of branches pointing to its subtrees. The number of subtrees a node has is called the **degree of the node**. A node of degree `0` is called a **leaf node** (or **terminal node**); a node of nonzero degree is called a **branch node** (or **non-terminal node**). The maximum node degree in a tree is called the **degree of the tree**.

- **Tree node**: consists of a data element and a number of branches pointing to its subtrees.
- **Degree of a node**: the number of subtrees the node has.
- **Leaf node (terminal node)**: a node of degree 0. In the figure, the leaf nodes are `C`, `H`, `I`, `G`, `F`, `K`.
- **Branch node (non-terminal node)**: a node of nonzero degree. In the figure, the branch nodes are `A`, `B`, `D`, `E`, `G`.
- **Degree of the tree**: the maximum node degree in the tree; `3` in the figure.
#### 1.2.2 Relationships Between Nodes
The root of a node's subtree is called a **child** of that node, and correspondingly that node is the child's **parent**. Children of the same parent are **siblings** of one another.

- **Child node**: the root of a subtree of a node. For example, `B` is a child of `A`.
- **Parent node**: if a node has children, it is the parent of those children. For example, `B` is the parent of `E`.
- **Sibling nodes**: nodes that share the same parent. For example, `F` and `G` are siblings.
#### 1.2.3 Other Terminology
The **level of a node** is defined starting from the root: the root is level 1, the root's children are level 2, and so on; if a node is on level `i`, its children are on level `i + 1`. Nodes whose parents are on the same level are **cousin nodes** of one another. The maximum level among all nodes is called the **depth** or **height** of the tree. The sequence of nodes passed between two nodes is called a **path**, and the number of edges along that path is the **path length**.

- **Level of a node**: defined from the root; the root is level 1, its children are level 2, and so on.
- **Depth (height) of the tree**: the maximum level among all nodes; `4` in the figure.
- **Cousin nodes**: nodes whose parents are on the same level. For example, `G` and `K` are cousins.
- **Path**: the sequence of nodes passed between two nodes. For example, the path from `E` to `G` is `E - B - A - D - G`.
- **Path length**: the number of edges along the path between two nodes. For example, the path length from `E` to `G` is `4`.
- **Ancestors of a node**: all nodes on the path from that node to the root. For example, the ancestors of `H` are `E`, `B`, `A`.
- **Descendants of a node**: all nodes in that node's subtrees. For example, the descendants of `D` are `F`, `G`, `K`.
### 1.3 Classifying Trees
Depending on whether a node's subtrees may swap positions, trees can be divided into two types: **ordered trees** and **unordered trees**.
If the subtrees of every node are regarded as ordered from left to right (i.e. they cannot be swapped), the tree is called an **ordered tree**. Conversely, if a node's subtrees may swap positions, the tree is called an **unordered tree**.
- **Ordered tree**: each node's subtrees are ordered from left to right and cannot swap positions.
- **Unordered tree**: a node's subtrees may swap positions.
## 2. Introduction to Binary Trees
### 2.1 Definition of a Binary Tree
> **Binary Tree**: an ordered tree in which every node has degree at most `2`. The branches of a node are usually called its **left subtree** and **right subtree**. The branches of a binary tree have left/right order and may not be swapped arbitrarily.
The figure below shows a binary tree.

A binary tree can also be defined recursively; it satisfies one of the following two conditions:
- **Empty tree**: the binary tree is an empty tree.
- **Non-empty tree**: the binary tree is a non-empty tree made up of a root node and two disjoint subtrees $T_1$, $T_2$, called the root's left and right subtrees; and $T_1$, $T_2$ are themselves binary trees.
A binary tree is a special kind of tree: it has at most two subtrees, the left subtree and the right subtree, and the two subtrees are ordered and may not be swapped. In other words, a binary tree contains no node of degree greater than `2`.
Logically, a binary tree can take `5` basic forms, as shown below.

### 2.2 Special Binary Trees
Next we introduce some special binary trees.
#### 2.2.1 Full Binary Tree
> **Full Binary Tree**: a binary tree in which every branch node has both a left and a right subtree, and all leaf nodes are on the same level.
A full binary tree has the following properties:
- Leaf nodes appear only on the bottom level.
- Every non-leaf node has degree exactly `2`.
- Among binary trees of the same depth, the full binary tree has the largest number of nodes and the largest number of leaves.
If we number the nodes of a full binary tree (root numbered `1`, then level by level downward, left to right within each level), the last node of a full binary tree of depth $k$ has number $2^k - 1$.
Let us look at a few examples.

#### 2.2.2 Complete Binary Tree
> **Complete Binary Tree**: a binary tree in which leaf nodes may appear only on the bottom two levels, and the leaves on the bottom level are packed consecutively into the leftmost positions of that level.
A complete binary tree has the following properties:
- Leaf nodes may appear only on the bottom two levels.
- The leaves on the bottom level are packed into the leftmost positions of that level.
- If the second-to-bottom level has leaf nodes, they are packed into the rightmost positions of that level.
- If a node has degree `1`, it has only a left child; no node has only a right subtree.
- Among binary trees with the same number of nodes, the complete binary tree has the smallest depth.
A complete binary tree can also be defined with the numbering scheme used for full binary trees: starting from the root numbered `1`, number level by level from top to bottom, left to right within each level. A binary tree of depth `k` with `n` nodes is complete if and only if each of its nodes corresponds one-to-one with the nodes numbered `1` through `n` of the full binary tree of depth `k`.
Let us look at a few examples.

#### 2.2.3 Binary Search Tree
> **Binary Search Tree**: also called an ordered binary tree or a sorted binary tree. It is either an empty tree or a binary tree with the following properties:
>
> - If a node's left subtree is non-empty, the values of all nodes in the left subtree are less than the value of that node.
> - If a node's right subtree is non-empty, the values of all nodes in the right subtree are greater than the value of that node.
> - The left and right subtrees of every node are themselves binary search trees.
As shown in the figure, all `3` of these trees are binary search trees.

#### 2.2.4 Balanced Binary Search Tree
> **Balanced Binary Search Tree**: a structurally balanced binary search tree, i.e. one in which the absolute difference between the heights of a node's two subtrees is at most `1`, and both subtrees are themselves balanced binary search trees. A balanced binary tree supports insertion, lookup and deletion in $O(\log n)$. The earliest balanced binary search tree to be invented was the **AVL tree** (Adelson-Velsky and Landis tree).
>
> An AVL tree satisfies the following properties:
>
> - The empty binary tree is an AVL tree.
> - If T is an AVL tree, then its left and right subtrees are also AVL trees, and $|h(ls) - h(rs)| \le 1$, where $h(ls)$ is the height of the left subtree and $h(rs)$ is the height of the right subtree.
> - The height of an AVL tree is $O(\log n)$.
As shown in the figure, the first `2` trees are balanced binary search trees; the last one is not, because the absolute difference between the heights of its left and right subtrees exceeds `1`.

### 2.3 Storage Structures for Binary Trees
Binary trees can be stored in two ways, with a "sequential storage structure" or a "linked storage structure"; we explain each in turn below.
#### 2.3.1 Sequential Storage of a Binary Tree
In fact, the binary heap used in heapsort and priority queues is exactly the sequential storage structure of a binary tree.
The sequential storage structure uses a one-dimensional array to store the nodes of the binary tree. Each node is stored at the position given by the complete-binary-tree level numbering: level by level from top to bottom, left to right within each level. If the corresponding node of the binary tree does not exist, the slot is set to an "empty node".
The figure below shows the sequential storage structure of a binary tree.

From the figure we can also read off the logical relationships between nodes.
- If a (non-leaf) node has index `i`, its left child has index `2 * i + 1` and its right child has index `2 * i + 2`.
- If a (non-root) node has index `i`, its parent has index `(i - 1) // 2`, where `//` denotes integer division.
For complete binary trees (and especially full binary trees) the sequential storage structure is a good fit, since it makes full use of the storage space. For a general binary tree, however, many "empty nodes" may be needed, wasting a lot of storage. In addition, the inherent drawbacks of sequential storage make insertion, deletion and similar operations inconvenient and inefficient. When the shape and size of a tree change dynamically, the linked storage structure is the better choice.
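The two index relations above can be written directly as small helper functions (an illustrative sketch):

```python
def left_child(i):
    """Array index of the left child of the node stored at index i."""
    return 2 * i + 1

def right_child(i):
    """Array index of the right child of the node stored at index i."""
    return 2 * i + 2

def parent(i):
    """Array index of the parent of the node stored at index i (i > 0)."""
    return (i - 1) // 2
```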
#### 2.3.2 Linked Storage of a Binary Tree
With the linked storage structure, each node contains a data field `val` that stores the node's information, plus two pointer fields `left` and `right` that point to the left and right children; when the left or right child does not exist, the corresponding pointer field is null. The binary linked node structure is shown below.

The corresponding code for the binary linked node structure is:
```Python
class TreeNode:
def __init__(self, val=0, left=None, right=None):
self.val = val
self.left = left
self.right = right
```
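For example, the complete binary tree holding the values `1` through `7` can be assembled from these nodes level by level (the class is repeated so that the snippet is self-contained):

```python
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

# Build a complete binary tree with values 1..7, wiring children by the
# array-index relations: node i has children at 2*i + 1 and 2*i + 2.
nodes = [TreeNode(v) for v in range(1, 8)]
for i, node in enumerate(nodes):
    left, right = 2 * i + 1, 2 * i + 2
    if left < len(nodes):
        node.left = nodes[left]
    if right < len(nodes):
        node.right = nodes[right]
root = nodes[0]
```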
Below, we store a binary tree with the values `1, 2, 3, 4, 5, 6, 7` using the linked storage structure, as shown in the figure.

The linked storage structure of a binary tree is flexible and convenient. The maximum number of nodes is limited only by the largest amount of storage the system can provide. In general, the linked storage structure uses less space than the sequential storage structure (the space overhead for the pointer fields is only a linear function of the number of nodes), and it is also convenient for carrying out operations on the tree. For these reasons, binary trees are usually stored with the linked storage structure.
## References
- [Book] Data Structures Tutorial, 3rd edition, by Tang Fagen
- [Book] Big Talk Data Structures, by Cheng Jie
- [Book] Algorithm Training Camp, by Chen Xiaoyu
- [Post] [Binary tree theory basics - 代码随想录](https://programmercarl.com/二叉树理论基础.html)
- [Post] [Binary tree basics - 袁厨的算法小屋](https://github.com/chefyuan/algorithm-base/blob/main/animation-simulation/二叉树/二叉树基础.md)
| 28.056338 | 208 | 0.700134 | yue_Hant | 0.642361 |
2cbd351520cfb1109cbbcefa4cf8c8160bdb8f82 | 1,528 | md | Markdown | README.md | manwar/starfish | 388d71509e8d2fcf027073235f038325cca501ab | [
"Artistic-1.0"
] | 1 | 2021-08-22T08:55:32.000Z | 2021-08-22T08:55:32.000Z | README.md | manwar/starfish | 388d71509e8d2fcf027073235f038325cca501ab | [
"Artistic-1.0"
] | null | null | null | README.md | manwar/starfish | 388d71509e8d2fcf027073235f038325cca501ab | [
"Artistic-1.0"
] | null | null | null | # starfish
## Text-Starfish — A Perl-based System for Text-Embedded Programming and Preprocessing
The Text::Starfish module and starfish command-line utility support
processing of embedded Perl code in an arbitrary text and provide for
flexible text patterns that identify such code or activate more
general text replacement.
The main Starfish web site is
http://web.cs.dal.ca/~vlado/srcperl/starfish/starfish.html.
Documentation is part of the file Starfish.pm, in Pod format. To
convert it into HTML, use the following command
('$' is just a shell prompt, do not type it!):
```
$ pod2html Starfish.pm
```
## INSTALLATION
To install this module type the following:
```
$ perl Makefile.PL
$ make
$ make test
$ make install
```
If you do not have permissions to install the module in the
system-wide module repository, you can install it locally; e.g.,
```
$ perl Makefile.PL PREFIX=/home/mydir
```
## DEPENDENCIES
No significant dependencies, as far as I know. All used Perl modules
should be a part of Perl core. Let me know if you find something
important to add.
## AUTHORS
2001-2020 Vlado Keselj http://web.cs.dal.ca/~vlado
and contributing authors:
2007 Charles Ikeson (overhaul of test.pl)
## LICENSE
This script is provided "as is" without express or implied warranty.
This is free software; you can redistribute it and/or modify it under
the same terms as Perl itself, or more precisely, it is provided under
The Artistic License 1.0 (see the file LICENSE).
| 28.296296 | 92 | 0.754581 | eng_Latn | 0.992421 |
2cbdfd8b95871742c380f3f34cb2e55562783c5f | 186 | md | Markdown | README.md | talesbee/Lavoro | 1567f520fb72f516f7cfc9b7abef0fc92de68663 | [
"Unlicense"
] | null | null | null | README.md | talesbee/Lavoro | 1567f520fb72f516f7cfc9b7abef0fc92de68663 | [
"Unlicense"
] | null | null | null | README.md | talesbee/Lavoro | 1567f520fb72f516f7cfc9b7abef0fc92de68663 | [
"Unlicense"
] | null | null | null | # Lavoro
A programming language created by Tales Iago Batista for the Compilers course (Computer Engineering, 6th semester, IFMT-Cuiabá).
Lavoro compiler -> Atmega328p assembly
| 37.2 | 134 | 0.801075 | por_Latn | 0.982556 |
2cbe44aa52a0a73fe6b9f35450a5196ceeccbbf5 | 10,387 | md | Markdown | content/blog/end-to-end-encrypted-quadratic-voting-app.md | yehjxraymond/geeksg-blog | bbe039cbf12be58cefc63df4b2553bc39a4b7217 | [
"MIT"
] | null | null | null | content/blog/end-to-end-encrypted-quadratic-voting-app.md | yehjxraymond/geeksg-blog | bbe039cbf12be58cefc63df4b2553bc39a4b7217 | [
"MIT"
] | null | null | null | content/blog/end-to-end-encrypted-quadratic-voting-app.md | yehjxraymond/geeksg-blog | bbe039cbf12be58cefc63df4b2553bc39a4b7217 | [
"MIT"
] | null | null | null | ---
template: blog-post
title: Implementing End-to-End Encryption on a Quadratic Voting (QV) Application
publishedDate: 2019-12-20T12:00:28.345Z
description: Recently, my tribe held our promotion nomination exercise using my quadratic voting app. The exercise allowed all members of the tribe to vote for one another for the upcoming promotion. One of the concerns with using the quadratic voting application was that I could potentially read and change the votes, since I have database access. How would I convince my colleagues to trust me?
featured: false
img: ../../static/images/lock-on-chain.png
imgAlt: Don't Ask, Don't Tell
tags:
- encryption
- quadratic-voting
---
Recently, my tribe held our promotion nomination exercise using my [quadratic voting app](https://qv.geek.sg/). The exercise allowed all members of the tribe to vote for one another for the upcoming promotion. One of the concerns with using the quadratic voting application was that I could potentially read and change the votes, since I have database access.
As a result, I decided to lock myself out of my own AWS account by having a colleague set the password to my account while I kept access to my MFA during the election period. At the end of the election period, we would log into the account together and remove all the entries in the database. This meant no one could read or write the database while the election was ongoing. If I had attempted to reset my account password, my colleague would discover my attempt to peek or cheat when he found himself unable to log into the account at the end of the election.
In theory, it was a great idea... But looking back, wouldn't it be better if the election results can be trusted even if I've access to the database directly? That's where I went back to the application and implemented E2E encryption on the election where it can be verified that:
1. The database owner is not privy to the individual votes
2. The election creator and voters can check that a specific vote has been accounted for
## Asymmetric Encryption to the Rescue
To achieve that, an asymmetric key pair would be used. The public key will be saved onto the database and used by voters to encrypt their votes on the client side. The voters will only transmit the encrypted vote to the database to be stored. The election creator, with his private key stored offline, would be able to decrypt individual votes on the client side.
Looking around, I found a suitable npm package for this purpose. [eccrypto](https://www.npmjs.com/package/eccrypto) provides a simple api to use ECIES encryption scheme for asymmetric encryption. I've decided to dumb the api down even further by wrapping the api to allow keys to be passes around in strings rather than buffers:
```js
const eccrypto = require("eccrypto");
const toBuffer = (txt) => Buffer.from(txt, "hex");
const toString = (buf) => buf.toString("hex");
const randomPrivateKey = () => {
const key = eccrypto.generatePrivate();
return toString(key);
};
const publicKeyFromPrivateKey = (privateKeyStr) => {
const privateKey = toBuffer(privateKeyStr);
return toString(eccrypto.getPublic(privateKey));
};
const encryptStringWithPublicKey = async (cleartext, publicKeyStr) => {
const publicKey = toBuffer(publicKeyStr);
const res = await eccrypto.encrypt(publicKey, Buffer.from(cleartext));
const { iv, ciphertext, mac, ephemPublicKey } = res;
return {
iv: toString(iv),
ciphertext: toString(ciphertext),
mac: toString(mac),
ephemPublicKey: toString(ephemPublicKey),
};
};
const decryptStringWithPrivateKey = async (cipher, privateKeyStr) => {
const privateKey = toBuffer(privateKeyStr);
const { iv, ciphertext, mac, ephemPublicKey } = cipher;
const encrypted = {
iv: toBuffer(iv),
ephemPublicKey: toBuffer(ephemPublicKey),
ciphertext: toBuffer(ciphertext),
mac: toBuffer(mac),
};
const cleartext = await eccrypto.decrypt(privateKey, encrypted);
return cleartext.toString();
};
```
[source code](https://github.com/yehjxraymond/qv-api/blob/master/src/encryption/index.js)
With the dumbed-down API for cryptography, the web app generates its own private key on the client side when the "E2E Encrypted Votes" option is checked.

On election creation, the payload to the endpoint looks like:
```json
{
"candidates": [
{ "title": "Candidate 1", "description": null },
{ "title": "Candidate 2", "description": null }
],
"id": "52512c56-3e5f-44cf-916d-3053b8864c3f",
"ttl": 1577372246,
"config": {
"name": "Demo Election",
"private": true,
"notifyInvites": false,
"invite": [
{
"name": "Person 1",
"voterId": "452c559d-219c-4fde-8be5-b03dc863622e",
"email": "person1@example.com"
},
{
"name": "Person 2",
"voterId": "32840834-fd23-4a38-9f48-12eba1c9786b",
"email": "person2@example.com"
}
],
"encryptionKey": "04d97a57c595835fa00d608345947bbbf9c42899df693a78535d9eb24d301574e0babfed36fea560cf56ca14fc89329d0660aa6976f10c8c10af3b7b7f67a3ef4b",
"budget": 99
},
"votes": []
}
```
Upon creating the election, the election creator will be redirected to a url where he is able to access the results of the private election with the following link:
```txt
https://qv.geek.sg/share-private?
election=52512c56-3e5f-44cf-916d-3053b8864c3f&
userId=f9367230-6141-46e1-89a6-3d09ea951466&
privateKey=3fe8dd4abe91f7312696ffdb1f06c818cc76464f24b561ccd08ad099e135ecaf
```
In the url:
- `election` is used to uniquely identity the election to view
- `userId` is used to "authenticate" the user to download the election results
- `privateKey` is used to decrypt the downloaded results

On this page, the election creator has access to the various private voting links as well as the link to view the results. Each of these links can be used once to cast a vote in the election.

Visiting one of these links, the voter can then cast his vote. When submitting the vote, the web app encrypts the vote on the client side and sends only the encrypted vote to the endpoint.
Below are two sample requests of votes being casted:
```json
{
"voter": "452c559d-219c-4fde-8be5-b03dc863622e",
"election": "52512c56-3e5f-44cf-916d-3053b8864c3f",
"encryptedVote": {
"iv": "2f7a2f053fe900eb44d2b4c14f40e074",
"ciphertext": "a8dc4a9f834bb1aeadc8f5d7fd4795b063cae8b9b3c92d8c8c21525b7da7104c0c27d3819e3176638840f490347c7ed1d29eb8a1a5608f758dd712419070b310",
"mac": "b8303e08cbf2128d7e72f1fa993b9f9e1d1c5047e1e99d768f589d3bcb515a05",
"ephemPublicKey": "04366010729df6b803f94c8aa1df18c245a2de3018a142339b587db953f4f283ac8e610ccc374cce7845fa585a068d77b0166a9da75cedf78975a96100e91b003d"
}
}
```
```json
{
"voter": "32840834-fd23-4a38-9f48-12eba1c9786b",
"election": "52512c56-3e5f-44cf-916d-3053b8864c3f",
"encryptedVote": {
"iv": "87572848a64ae46cbf4ff5881a81e2fd",
"ciphertext": "af26e53536b19b1497860a4a61657854d2b5e267c48df49b853339ac491a6101a3ea605f0006292ac002d2de7880febc76cb75612537890a1ced1e7ba93130b1",
"mac": "42190e7e33e9d198d58c3114e84e07c4ee562989dbad5b3114b193a90553355c",
"ephemPublicKey": "042e11a0fccbfffdf96e8d5e0b44093f204934c5edf0b6700e67eedeb2de8a41d5d24a3cc41474d9852dbd935cdc56dda7ac1e4e0f57cf292621f326806df5dc92"
}
}
```
Once votes have been casted, the election creator may view the election results with his link.

Snippet of endpoint response to election creator's web app:
```json
{
...,
"votes": [
{
"voter": "32840834-fd23-4a38-9f48-12eba1c9786b",
"encryptedVote": {
"ciphertext": "af26e53536b19b1497860a4a61657854d2b5e267c48df49b853339ac491a6101a3ea605f0006292ac002d2de7880febc76cb75612537890a1ced1e7ba93130b1",
"iv": "87572848a64ae46cbf4ff5881a81e2fd",
"mac": "42190e7e33e9d198d58c3114e84e07c4ee562989dbad5b3114b193a90553355c",
"ephemPublicKey": "042e11a0fccbfffdf96e8d5e0b44093f204934c5edf0b6700e67eedeb2de8a41d5d24a3cc41474d9852dbd935cdc56dda7ac1e4e0f57cf292621f326806df5dc92"
},
"ttl": 1577372246,
"id": "9145b753-baac-4a6f-a173-46427b1c1366",
"election": "52512c56-3e5f-44cf-916d-3053b8864c3f"
},
{
"voter": "452c559d-219c-4fde-8be5-b03dc863622e",
"encryptedVote": {
"ciphertext": "a8dc4a9f834bb1aeadc8f5d7fd4795b063cae8b9b3c92d8c8c21525b7da7104c0c27d3819e3176638840f490347c7ed1d29eb8a1a5608f758dd712419070b310",
"iv": "2f7a2f053fe900eb44d2b4c14f40e074",
"mac": "b8303e08cbf2128d7e72f1fa993b9f9e1d1c5047e1e99d768f589d3bcb515a05",
"ephemPublicKey": "04366010729df6b803f94c8aa1df18c245a2de3018a142339b587db953f4f283ac8e610ccc374cce7845fa585a068d77b0166a9da75cedf78975a96100e91b003d"
},
"ttl": 1577372246,
"id": "c8717d90-7eb8-43db-b911-4ea3e2cc8cb0",
"election": "52512c56-3e5f-44cf-916d-3053b8864c3f"
}
]
}
```
## Vote Integrity
Now that we have solved the problem of the database administrator having access to the information, we still have to ensure that valid votes are not deleted or replaced.
For that, we are able to verify the integrity of the vote by allowing voters to keep the `iv` of their votes as a receipt. They can check with the election creator that their votes have been accounted for.
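As a sketch, a check on the election creator's side could look like this (the helper name is an assumption and not part of the QV codebase; the vote shape follows the sample payloads above):

```javascript
// Hypothetical helper: given the election's vote list and a voter's receipt
// (the `iv` of their encrypted vote), check that the vote is accounted for.
function hasVoteWithReceipt(votes, ivReceipt) {
  return votes.some((v) => v.encryptedVote && v.encryptedVote.iv === ivReceipt);
}
```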
## Ending Notes
Now that E2E encryption has been enabled on the QV app, can we use this app to run the next Singapore General Election?
Nope.
If you are searching for more kickass implementations of QV, check out [this paper on "End-to-End Verifiable Quadratic Voting with Everlasting Privacy"](https://fc19.ifca.ai/voting/papers/PR19.pdf).
Most of the design around the application trades off security for usability to allow anyone to create a QV election. While its not fit for a general election, it is definitely sufficient for most application. If you have any cool ideas to use QV for, feel free to [drop me a note](/contact)!
Again, the QV application code is open source, if you like to contribute to the code or run a fork, feel free to visit:
- https://github.com/yehjxraymond/qv-api
- https://github.com/yehjxraymond/qv-app
| 47.429224 | 550 | 0.761047 | eng_Latn | 0.920677 |
2cc19c752c9e124dfb6c97a3750c6619b4e13496 | 834 | md | Markdown | _posts/2018-6-11-GitHub-Clone-Set-Remote-URL-Push.md | lymenlee/lymenlee.github.io | 22787a7f67408d93c2cb3a8df69f8f76bd631d67 | [
"CC-BY-4.0"
] | null | null | null | _posts/2018-6-11-GitHub-Clone-Set-Remote-URL-Push.md | lymenlee/lymenlee.github.io | 22787a7f67408d93c2cb3a8df69f8f76bd631d67 | [
"CC-BY-4.0"
] | null | null | null | _posts/2018-6-11-GitHub-Clone-Set-Remote-URL-Push.md | lymenlee/lymenlee.github.io | 22787a7f67408d93c2cb3a8df69f8f76bd631d67 | [
"CC-BY-4.0"
] | null | null | null | ---
published: true
layout: post
comments: true
title: "GitHub: Clone, Set-Url, Push"
date: "2018-06-11 22:59:56 -0400"
categories: GitHub
tag: Github, Git, DevOps
---
Sometimes you clone your own repo from GitHub on another machine, try to do quick updates on it, and then want to push back the changes. But when you try to run `git push`, you get a 403 error. How come? Well, it's because when you cloned your repo, you used the read-only URL provided on the repo page, so your remote URL is set to this read-only URL. Of course it won't let you log in and do a push operation! Fortunately it's simple to fix; just run the command below:
```shell
git remote set-url origin https://yourusername@github.com/user/repo.git
```
Next time you try `git push`, you'll see the lovely password prompt. Enjoy, and push more code!
| 39.714286 | 480 | 0.745803 | eng_Latn | 0.997231 |
2cc24150ee032409a386108bbce79d19adde4c08 | 7,600 | md | Markdown | AndrewDavidoffDocset0426213909/articles/MDSL/bf861e4e-a22b-40c1-8210-1c4cc761aeac.md | AndrewDavidoff/AndrewDavidoffRepo0426213909 | 4ee7b3dbc81782000a5158731fcfdf6aeef7d397 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | AndrewDavidoffDocset0426213909/articles/MDSL/bf861e4e-a22b-40c1-8210-1c4cc761aeac.md | AndrewDavidoff/AndrewDavidoffRepo0426213909 | 4ee7b3dbc81782000a5158731fcfdf6aeef7d397 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | AndrewDavidoffDocset0426213909/articles/MDSL/bf861e4e-a22b-40c1-8210-1c4cc761aeac.md | AndrewDavidoff/AndrewDavidoffRepo0426213909 | 4ee7b3dbc81782000a5158731fcfdf6aeef7d397 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | <html dir="LTR" xmlns:mshelp="http://msdn.microsoft.com/mshelp" xmlns:ddue="http://ddue.schemas.microsoft.com/authoring/2003/5" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:tool="http://www.microsoft.com/tooltip">
<body>
<input type="hidden" id="userDataCache" class="userDataStyle">
<input type="hidden" id="hiddenScrollOffset">
<img id="dropDownImage" style="display:none; height:0; width:0;" src="../local/drpdown.gif">
<img id="dropDownHoverImage" style="display:none; height:0; width:0;" src="../local/drpdown_orange.gif">
<img id="collapseImage" style="display:none; height:0; width:0;" src="../local/collapse.gif">
<img id="expandImage" style="display:none; height:0; width:0;" src="../local/exp.gif">
<img id="collapseAllImage" style="display:none; height:0; width:0;" src="../local/collall.gif">
<img id="expandAllImage" style="display:none; height:0; width:0;" src="../local/expall.gif">
<img id="copyImage" style="display:none; height:0; width:0;" src="../local/copycode.gif">
<img id="copyHoverImage" style="display:none; height:0; width:0;" src="../local/copycodeHighlight.gif">
<div id="header"><h1 class="heading">4.169 DataGridRow</h1></div>
<div id="mainSection">
<div id="mainBody">
<div id="allHistory" class="saveHistory" onsave="saveAll()" onload="loadAll()"></div>
<p xmlns:wsd="http://wsdev.schemas.microsoft.com/authoring/2008/2" xmlns:msxsl="urn:schemas-microsoft-com:xslt" xmlns:script="urn:script" xmlns:build="urn:build">
</p>
<div id="sectionSection0" class="section" name="collapseableSection">
<content xmlns="http://ddue.schemas.microsoft.com/authoring/2003/5" xmlns:wsd="http://wsdev.schemas.microsoft.com/authoring/2008/2" xmlns:msxsl="urn:schemas-microsoft-com:xslt" xmlns:script="urn:script" xmlns:build="urn:build">
</content>
</div>
<div id="sectionSection1" class="section" name="collapseableSection">
<content xmlns="http://ddue.schemas.microsoft.com/authoring/2003/5" xmlns:wsd="http://wsdev.schemas.microsoft.com/authoring/2008/2" xmlns:msxsl="urn:schemas-microsoft-com:xslt" xmlns:script="urn:script" xmlns:build="urn:build">
<table class="ProtocolAuthoredTable" xmlns="">
<tr><td colspan="2">
<mshelp:link keywords="86913f34-aa06-4c94-9f09-83936a822fd8" tabindex="0">x:Object</mshelp:link> > <mshelp:link keywords="22a604a1-b593-4464-91e4-488285506428" tabindex="0">DependencyObject</mshelp:link> > <mshelp:link keywords="d3c6fb79-d082-4257-aa16-84c18cbf6051" tabindex="0">Visual</mshelp:link> > <mshelp:link keywords="ce2d5941-a755-4517-b5ac-e99658cd1dd1" tabindex="0">UIElement</mshelp:link> > <mshelp:link keywords="07f9afc2-9f13-4a2a-871b-ac7caef0660d" tabindex="0">FrameworkElement</mshelp:link> > <mshelp:link keywords="f9528c9b-edc4-4e4e-8947-e16edb07c1d6" tabindex="0">Control</mshelp:link> > <mshelp:link keywords="bf861e4e-a22b-40c1-8210-1c4cc761aeac" tabindex="0">DataGridRow</mshelp:link>, <mshelp:link keywords="fb286ef6-72e1-445b-8b74-effc6b5e1777" tabindex="0">IInputElement</mshelp:link> </td>
</tr>
<tr><td colspan="2">
<b>
DataGridRow </b>
</td>
</tr>
<tr><td><div class="indent0">(usage)</div></td>
<td><DataGridRow /> </td>
</tr>
<tr><td><div class="indent0">(description)</div></td>
<td>Represents a DataGrid row. </td>
</tr>
<tr><td><div class="indent0">[name property]</div></td>
<td><mshelp:link keywords="07f9afc2-9f13-4a2a-871b-ac7caef0660d" tabindex="0">Name</mshelp:link> </td>
</tr>
<tr><td><div class="indent0">[xml lang property]</div></td>
<td><mshelp:link keywords="07f9afc2-9f13-4a2a-871b-ac7caef0660d" tabindex="0">Language</mshelp:link> </td>
</tr>
<tr><td><div class="indent0">(properties)</div></td>
<td> </td>
</tr>
<tr><td><div class="indent2">DetailsTemplate</div></td>
<td><mshelp:link keywords="2ff20c66-01b1-4315-bbc2-f2c27c537e3b" tabindex="0">DataTemplate</mshelp:link> </td>
</tr>
<tr><td><div class="indent4">(description)</div></td>
<td>The template that is used to display the details section of the row. </td>
</tr>
<tr><td><div class="indent2">DetailsTemplateSelector</div></td>
<td><mshelp:link keywords="0e26fec0-45aa-4551-a552-94bfa5fe3299" tabindex="0">DataTemplateSelector</mshelp:link> </td>
</tr>
<tr><td><div class="indent4">(description)</div></td>
<td>A template selector that provides custom logic for choosing a row details template. </td>
</tr>
<tr><td><div class="indent2">DetailsVisibility</div></td>
<td><mshelp:link keywords="4c86d0bf-4c88-4bef-b4ee-9ee3f0fd521b" tabindex="0">Visibility</mshelp:link> </td>
</tr>
<tr><td><div class="indent4">(description)</div></td>
<td>A value that indicates when the details section of the row is displayed. </td>
</tr>
<tr><td><div class="indent2">Header</div></td>
<td><mshelp:link keywords="86913f34-aa06-4c94-9f09-83936a822fd8" tabindex="0">x:Object</mshelp:link> </td>
</tr>
<tr><td><div class="indent4">(description)</div></td>
<td>An object that represents the row header contents. </td>
</tr>
<tr><td><div class="indent2">HeaderStyle</div></td>
<td><mshelp:link keywords="474ac96a-e49a-4316-9ea8-7c05ffc4bf9e" tabindex="0">Style</mshelp:link> </td>
</tr>
<tr><td><div class="indent4">(description)</div></td>
<td>The style that is used when rendering the row header. </td>
</tr>
<tr><td><div class="indent2">HeaderTemplate</div></td>
<td><mshelp:link keywords="2ff20c66-01b1-4315-bbc2-f2c27c537e3b" tabindex="0">DataTemplate</mshelp:link> </td>
</tr>
<tr><td><div class="indent4">(description)</div></td>
<td>The template that is used to display the row header. </td>
</tr>
<tr><td><div class="indent2">HeaderTemplateSelector</div></td>
<td><mshelp:link keywords="0e26fec0-45aa-4551-a552-94bfa5fe3299" tabindex="0">DataTemplateSelector</mshelp:link> </td>
</tr>
<tr><td><div class="indent4">(description)</div></td>
<td>A template selector that provides custom logic for choosing a row header template. </td>
</tr>
<tr><td><div class="indent2">IsSelected</div></td>
<td><mshelp:link keywords="c179f5e8-f1d2-4665-a360-ea494307b744" tabindex="0">x:Boolean</mshelp:link> </td>
</tr>
<tr><td><div class="indent4">(description)</div></td>
<td>A value that indicates whether the row is selected. </td>
</tr>
<tr><td><div class="indent2">Item</div></td>
<td><mshelp:link keywords="86913f34-aa06-4c94-9f09-83936a822fd8" tabindex="0">x:Object</mshelp:link> </td>
</tr>
<tr><td><div class="indent4">(description)</div></td>
<td>The data item that the row represents. </td>
</tr>
<tr><td><div class="indent2">ItemsPanel</div></td>
<td><mshelp:link keywords="e25585f2-fbb1-4e59-87eb-69bdc45aa76a" tabindex="0">ItemsPanelTemplate</mshelp:link> </td>
</tr>
<tr><td><div class="indent4">(description)</div></td>
<td>The template that defines the panel that controls the layout of cells in the row. </td>
</tr>
<tr><td><div class="indent2">ValidationErrorTemplate</div></td>
<td><mshelp:link keywords="0468cec1-4e1d-478d-8f64-a88feb3a1236" tabindex="0">ControlTemplate</mshelp:link> </td>
</tr>
<tr><td><div class="indent4">(description)</div></td>
<td>The template that is used to visually indicate an error in row validation. </td>
</tr>
<tr><td><div class="indent0">(events)</div></td>
<td> </td>
</tr>
<tr><td><div class="indent2">Selected</div></td>
<td>Occurs when the row is selected. </td>
</tr>
<tr><td><div class="indent2">Unselected</div></td>
<td>Occurs when the row selection is cleared. </td>
</tr>
</table>
</content>
</div>
<!--[if gte IE 5]>
<tool:tip element="languageFilterToolTip" avoidmouse="false"/>
<![endif]-->
</div>
<a name="feedback"></a><span></span>
</div>
</body></html>
| 55.882353 | 834 | 0.7025 | yue_Hant | 0.266937 |
2cc2a9c212aada2cbcbaac35d9120bb496f817cb | 2,460 | md | Markdown | README.md | JaewonAC/mpu6050test | 9ebaacaddc15386c49bf810cf1d02eccdf716e5d | [
"Apache-2.0"
] | null | null | null | README.md | JaewonAC/mpu6050test | 9ebaacaddc15386c49bf810cf1d02eccdf716e5d | [
"Apache-2.0"
] | null | null | null | README.md | JaewonAC/mpu6050test | 9ebaacaddc15386c49bf810cf1d02eccdf716e5d | [
"Apache-2.0"
] | null | null | null | # MPU6050 driver for Android Things
This driver supports Invensense <a href='https://www.invensense.com/products/motion-tracking/6-axis/mpu-6050/'>MPU6050</a> 6 axis IMU sensor.
<a href='https://store.invensense.com/datasheets/invensense/MPU-6050_DataSheet_V3%204.pdf'>Datasheet</a>\
<a href='https://www.invensense.com/wp-content/uploads/2015/02/MPU-6000-Register-Map1.pdf'>I2C register map</a>
If you're Korean, please visit <a href='www.mechasolution'>our website</a> and check out our <a href='http://mechasolution.com/shop/goods/goods_view.php?goodsno=543077&category=048'>Android Things Kit</a>. We provide a <a href='https://github.com/mechasolution/AndroidThingsTextBook'>textbook</a> and project code on GitHub.
NOTE: these drivers are not production-ready. They are offered as sample implementations of Android Things user space drivers for common peripherals as part of the Developer Preview release. There is no guarantee of correctness, completeness or robustness.
## How to use the driver
### Gradle dependency
To use the mpu6050 driver, simply add the line below to your project's build.gradle, where <version> matches the latest version of the driver available on <a href='https://bintray.com/mechasolution/androidthings/mpu6050/_latestVersion'>JCenter</a>.
```
dependencies {
compile 'com.mechasolution:mpu6050:<version>'
}
```
### Sample usage
```
import mechasolution.mpu6050.mpu6050;

mpu6050 mMpu = new mpu6050();

// Open the I2C connection to the sensor before reading
try {
    mMpu.open();
} catch ( IOException e ) { e.printStackTrace(); }

// Read accelerometer, temperature and gyroscope values
try {
    Log.i("Accel", String.format("%f \t %f \t %f", mMpu.getAccelX(), mMpu.getAccelY(), mMpu.getAccelZ()));
    Log.i("Temp", String.format("%f", mMpu.getTemp()));
    Log.i("Gyro", String.format("%f \t %f \t %f", mMpu.getGyroX(), mMpu.getGyroY(), mMpu.getGyroZ()));
} catch ( IOException e ) { e.printStackTrace(); }
```
## License
Copyright 2017 Mechasolution
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
<a href='http://www.apache.org/licenses/LICENSE-2.0'>http://www.apache.org/licenses/LICENSE-2.0</a>
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
## Week 09: Data Fitting & Interpolation
### Readings
From _Computational Physics_:
1. Sections 7.5 & 7.7 (by Wednesday)
2. Section 8.1 – 8.3 (by Friday)
### Exercise 17: Fitting an Energy Spectrum w/ Lagrange Interpolation
_Problem 3 in Chapter 7 of Computational Physics_
The cross sections measured for the resonant scattering of neutrons from
a nucleus are given in Table 7.1 (pg. 151). As shown in the table, the data
was taken in energy steps of 25 MeV. However, to test theoretical predictions,
we need the data at a higher resolution.
There are a couple common ways to solve this problem: interpolation or
least-squares fitting. Today, we'll explore interpolation, or the process
of using known data to estimate values of intermediate data points (data
on a sub-grid scale).
Your tasks are to:
1. Write a subroutine to perform an _n_-point Lagrange interpolation using
the above equation (7.23 in your book). Treat _n_ as an arbitrary
input parameter. (actually, use routines in `scipy.interpolate`)
2. Use your Lagrange interpolation procedure to fit the entire experimental
spectrum given in Table 7.1 with one polynomial (i.e., fit all nine
data points with an eighth-order polynomial). Use your polynomial
to plot the cross section in steps of 5 MeV.
3. Use your graph to deduce the resonance energy, E<sub>r</sub> (the
peak position) and Γ (the full-width at half-maximum). Compare
your results with those predicted by a theorist
(E<sub>r</sub>, Γ) = (78, 55) MeV.
4. A more realistic use of Lagrange interpolation is to perform local
interpolation over a small domain of the data, e.g., over three data
points. Interpolate the preceding cross-sectional data in 5 MeV steps
using 3-point Lagrange interpolation over each interval (Note that
the ends may be special cases).
5. Now, try fitting the data using a few different 1D interpolation
algorithms. Try performing 1D linear interpolation between data points
and try a 1D cubic spline interpolation. Plot the cross section in
steps of 5 MeV.
6. Discuss and compare your results.
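The tasks above can be sketched with the `scipy.interpolate` routines as follows. Note that the `(E, sigma)` values below are illustrative placeholders standing in for Table 7.1 — substitute the nine measured cross sections from the book before drawing any physical conclusions.
```python
import numpy as np
from scipy.interpolate import CubicSpline, interp1d, lagrange

# Placeholder data standing in for Table 7.1: energies in 25 MeV steps
# and cross sections in mb. Replace with the book's actual values.
E = np.arange(0, 225, 25)                       # 0, 25, ..., 200 MeV
sigma = np.array([10.6, 16.0, 45.0, 83.5, 52.8,
                  19.9, 10.8, 8.25, 4.7])

E_fine = np.arange(0, 201, 5)                   # evaluate in 5 MeV steps

# Task 2: one global eighth-order Lagrange polynomial through all 9 points
poly = lagrange(E, sigma)
sigma_lagrange = poly(E_fine)

# Task 5: piecewise alternatives -- linear and cubic-spline interpolation
sigma_linear = interp1d(E, sigma, kind="linear")(E_fine)
sigma_spline = CubicSpline(E, sigma)(E_fine)

# Task 3: read the resonance energy off the peak position
i_peak = np.argmax(sigma_spline)
print(f"E_r ~ {E_fine[i_peak]} MeV")
```
A single global `lagrange` fit is numerically fragile as the point count grows, which is part of why task 4's local 3-point interpolation is the more realistic approach.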
### Exercise 18: Pi Meson Lifetime
Figure 7.6 in your book shows experimental data on the number of decays,
N_decay, of the π meson as a function of time (Stetz et al. 1993).
The data has been binned into intervals of Δt = 10 ns. Your problem
is to evaluate how well a typical radioactive decay model describes the
data and, if it provides a reasonable description of the data, to determine
the π meson's lifetime, τ.
To perform these tasks, you are going to perform linear and non-linear
least-squares regressions using the pre-packaged SciPy routine `curve_fit`
from the `scipy.optimize` library.
Your tasks are to:
1. Read in the data from `pi_meson_decays.dat`. Compare the times and
measured decays to Figure 7.6 and assess whether you think it looks
reasonable.
2. Estimate the error (or uncertainty) for each bin and construct a new
array of those uncertainties. (hint: we are essentially counting the
number of decays)
3. You can linearize the exponential decay law (Taylor expand), which
should be valid over long time baselines. That is

which is linear in Δt and is therefore amenable to a linear
regression analysis.
4. Perform a least-squares regression for a function that fits a straight
line (of the form given above) to the data. (hint: you'll need to
manipulate your data to put it in the correct form) Compare your inferred
π meson lifetime to the tabulated lifetime of 2.6 x 10<sup>−8</sup> s
and comment on the difference.
5. Plot the data and your best fit straight line on the same graph and
comment on the agreement.
6. Perform a non-linear least-squares regression on the data using
the formula for exponential decay. How does your inferred π meson
lifetime compare to the value inferred from linear regression?
7. Plot your best fit exponential decay curve and the data on the same
graph and comment on the agreement.
8. For both cases, deduce the goodness of fit of the fitted curve and
estimate the approximate error on your inferred lifetime. How does
this look to your eye?
9. Discuss ways to improve the quality of your fit. Try one or two!
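A minimal sketch of both regressions with `curve_fit` follows. The counts below are synthetic stand-ins generated from the tabulated lifetime so the example is self-contained — replace `t` and `N` with the binned values read from `pi_meson_decays.dat`.
```python
import numpy as np
from scipy.optimize import curve_fit

tau_true = 2.6e-8                         # s, tabulated pi-meson lifetime

# Synthetic stand-in for the binned data (10 ns bins); replace t and N
# with the values read from pi_meson_decays.dat.
t = (np.arange(10) + 0.5) * 1.0e-8        # bin centers in seconds
N = np.random.default_rng(1).poisson(1.0e4 * np.exp(-t / tau_true)).astype(float)
dN = np.sqrt(N)                           # task 2: counting (Poisson) errors

# Task 4: linearized fit -- ln N = ln N0 - t/tau is a straight line in t
line = lambda t, a, b: a + b * t
(a, b), _ = curve_fit(line, t, np.log(N), sigma=dN / N)
tau_linear = -1.0 / b

# Task 6: direct non-linear fit of the exponential decay law
decay = lambda t, N0, tau: N0 * np.exp(-t / tau)
(N0, tau_nonlinear), pcov = curve_fit(decay, t, N, p0=(N.max(), 1e-8), sigma=dN)

# Task 8: approximate parameter uncertainty from the covariance matrix
tau_err = np.sqrt(pcov[1, 1])
print(f"tau (linear)     = {tau_linear:.3e} s")
print(f"tau (non-linear) = {tau_nonlinear:.3e} s +/- {tau_err:.1e} s")
```
Passing `sigma=dN / N` in the linear case reflects that the uncertainty of ln N is dN/N; the non-linear fit weights the raw counts directly with `sigma=dN`.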
---
In the I.T. lesson we changed the background colour and the title colour. We did this by going into our editing page in GitHub. Once we had done this we went into the code and changed the colour to whatever we wanted, but we did have to do it in a code language. That was what we did in the I.T. lesson.
## discrete contours
[Original Demo](http://gnuplot.sourceforge.net/demo_4.6/discrete.html)
### 1
```ruby
# set contour
# set title "Demo of specifying discrete contour levels - default contours"
# splot x*y
Numo.gnuplot do
set :contour
set title:"Demo of specifying discrete contour levels - default contours"
splot "x*y"
end
```

### 2
```ruby
# #set discrete levels
# set cntrparam levels discrete 0, 15, 75
# set title "3 discrete contours at 0 15 & 75"
# replot
Numo.gnuplot do
set :cntrparam, :levels, discrete:[0,15,75]
set title:"3 discrete contours at 0 15 & 75"
replot
end
```

### 3
```ruby
# #set incremental levels
# set cntrp level incr -20, 5, 9
# set title "9 incremental contours starting at -20, stepping by 5"
# replot
Numo.gnuplot do
set :cntrp, :level, incr:[-20,5,9]
set title:"9 incremental contours starting at -20, stepping by 5"
replot
end
```

# es6-unit-testing
```bash
npm install
npm run test
```
# html-webpack-plugin-remove
[](https://www.npmjs.com/package/html-webpack-plugin-remove)
Remove parts of html emitted by the html-webpack-plugin using a regular expression.
The plugin hooks into events emitted by the html-webpack-plugin and simply replaces the parts that match a passed-in regular expression.
### Install
```bash
npm i html-webpack-plugin-remove --save-dev
```
### webpack.config.js
```js
const HtmlPlugin = require('html-webpack-plugin')
const HtmlPluginRemove = require('html-webpack-plugin-remove')
module.exports = {
/* ... */
plugins: [
new HtmlPlugin(/* ... */),
new HtmlPluginRemove(/<script.*?src="style\..*?\.js".*?<\/script>/)
]
}
```
Question: why do the outer hair cells get all this negative feedback through efferent fibers?
[MEDS 5377](https://health.uconn.edu/meds5377/content-library/)
Slide 20 [here](https://health.uconn.edu/meds5377/wp-content/uploads/sites/151/2017/07/Fuchs-Salamanca-3-2014-handout-final.pdf):

Could this be linked to attention modulation?
Interaural attention modulates outer hair cell function [PMC4287465](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4287465/)
# PLDroidCameraStreaming
PLDroidCameraStreaming is an RTMP live-publishing SDK for Android that is highly customizable and well suited to secondary development. Its distinguishing feature is support for both software and hardware encoding of H.264 and AAC. It captures the Android camera picture and encodes it to H.264, samples audio from the Android microphone and encodes it to AAC, and implements a set of selectable encoding presets so developers can flexibly tune resolution and bitrate. The SDK also exposes data-source callback interfaces so users can apply their own filter processing. With PLDroidCameraStreaming, developers can quickly build an Android live-streaming app similar to [Meerkat](https://meerkatapp.co/) or [Periscope](https://www.periscope.tv/).
## Features
- [x] H.264 and AAC software encoding (recommended)
- [x] H.264 and AAC hardware encoding
- [x] Software encoding supports Android Min API 15 (Android 4.0.3) and above
- [x] Hardware encoding supports Android Min API 18 (Android 4.3) and above
- [x] Building RTMP publishing URLs with security/authorization credentials
- [x] RTMP packaging and publishing
- [x] Adaptive bitrate switching based on network quality during RTMP publishing, or custom strategies
- [x] Built-in beautification (beauty filter) with dynamically adjustable strength
- [x] Data-source callback interfaces for custom filter effects
- [x] Front and rear cameras, with dynamic switching
- [x] Auto focus
- [x] Manual focus
- [x] Encoding mirror setting
- [x] Zoom
- [x] Mute/Unmute
- [x] Flashlight control
- [x] Audio-only publishing, including while running in the background
- [x] Frame capture (snapshots)
- [x] Dynamically changing the encoding orientation
- [x] Dynamic switching between portrait and landscape
- [x] Dynamic watermarks
- [x] Dynamic text and image overlays
- [x] Bluetooth microphones
- [x] Background publishing
- [x] Dual-channel stereo
- [x] QUIC publishing
- [x] Mainstream chip architectures: ARM, ARMv7a, ARM64v8a, X86
- [x] Sending SEI messages
## PLDroidCameraStreaming Documentation
For a detailed development guide, see the [official documentation](https://developer.qiniu.com/pili/sdk/3715/PLDroidMediaStreaming-overview)
## Device and System Requirements
- Device requirement: a device running the Android system
- System requirement: Android 4.0.3 (API 15) and above
## Version Upgrade Notes
### v3.0.0
- **Starting from v3.0.0, the Qiniu live-publishing SDK requires a license before it can be used. Licenses come in trial and official versions; for details call Qiniu sales at 400-808-9176 (line 2), or contact Qiniu technical support [via a ticket](https://support.qiniu.com/?ref=developer.qiniu.com).**
- **Versions prior to v3.0.0 are not affected; please continue to use them with confidence.**
- **Existing customers should contact Qiniu for the corresponding license before upgrading to v3.0.0, to avoid authorization failures.**
- Because 114 DNS resolution is unpredictable, using it may return IP addresses for which publishing cannot be fully optimized, resulting in poor stream quality; a non-114 DNS resolver is therefore recommended
### v2.4.1
- Starting from v2.4.1, the H.264 format parameter of VideoProfile changed from Annex-B to AVCC; customers who previously set it to false need to change the setting to true.
For example, a customer with the following current configuration:
```java
StreamingProfile.VideoProfile vProfile =
new StreamingProfile.VideoProfile(20, 1000 * 1024, 60, false);
```
The parameter should be adjusted to:
```java
StreamingProfile.VideoProfile vProfile =
new StreamingProfile.VideoProfile(20, 1000 * 1024, 60, true);
```
### v2.3.0
- Starting from v2.3.0, the libpldroid_streaming_puic.so library was added
- libpldroid_streaming_core.so depends on libpldroid_streaming_puic.so, so the libpldroid_streaming_puic.so library must be included whether or not QUIC publishing is enabled
### v2.2.0
- Starting from v2.2.0, the QoS dependency must be removed from build.gradle
```
dependencies {
...
compile 'com.qiniu.pili:pili-android-qos:0.8.+'
...
}
```
### v2.1.0
- Before using the screen-recording feature, the SDK's built-in Activity must be registered in AndroidManifest.xml:
```
<activity
android:name="com.qiniu.pili.droid.streaming.screen.ScreenCaptureRequestActivity"
android:theme="@android:style/Theme.Translucent.NoTitleBar" >
</activity>
```
- The latest version of pili-android-qos is 0.8.13
- `StreamingPreviewCallback#onPreviewFrame` was updated
```
StreamingPreviewCallback#onPreviewFrame(byte[] data, int width, int height)
changed to
/**
* Called if the {@link StreamingPreviewCallback} registered.
*
* @param data the contents of the preview frame in fmt format
* @param width the width of the frame
* @param height the height of the frame
* @param rotation set the clockwise rotation of frame in degrees to achieve the same effect of preview display.
* @param fmt the format of the frame. See also {@link com.qiniu.pili.droid.streaming.av.common.PLFourCC}
* @param tsInNanoTime the timestamp of the frame
*
* */
boolean StreamingPreviewCallback#onPreviewFrame(byte[] data, int width, int height, int rotation, int fmt, long tsInNanoTime);
```
### v2.0.1
Starting from v2.0.1:
- The deprecated `CameraStreamingManager` was removed; use `MediaStreamingManager` instead
- The following must be added to the host project's build.gradle:
```
dependencies {
...
compile 'com.qiniu:happy-dns:0.2.+'
compile 'com.qiniu.pili:pili-android-qos:0.8.+'
...
}
```
- The deprecated `StreamingPreviewCallback#onPreviewFrame(byte[] bytes, Camera camera)` was removed; use `StreamingPreviewCallback#onPreviewFrame(byte[] bytes, int width, int height)` instead
- Timestamp information was added to the `AudioSourceCallback#onAudioSourceAvailable(ByteBuffer byteBuffer, int size, boolean eof)` callback, which changed to `AudioSourceCallback#onAudioSourceAvailable(ByteBuffer byteBuffer, int size, long tsInNanoTime, boolean eof)`
### v2.0.0 Beta
Starting from [v2.0.0 Beta](https://github.com/pili-engineering/PLDroidMediaStreaming/releases/tag/v2.0.0-beta), the SDK was renamed from PLDroidCameraStreaming to PLDroidMediaStreaming and provides a richer set of interfaces. Major updates:
- Added `MediaStreamingManager`; `CameraStreamingManager` is deprecated and no longer maintained
- Added several helper classes and deprecated the corresponding old ones
- Added `StreamingStateChangedListener`, deprecating `CameraStreamingManager#StreamingStateListener`
- Added `StreamingState`, deprecating `CameraStreamingManager#STATE`
- Added `StreamingSessionListener`, deprecating `CameraStreamingManager#StreamingSessionListener`
- Added `AVCodecType`, deprecating `CameraStreamingManager#EncodingType`
- The package name changed to `com.qiniu.pili.droid.streaming.*;`, so ProGuard (obfuscation) rules must be updated accordingly
### v1.6.1
Starting from [v1.6.1](https://github.com/pili-engineering/PLDroidMediaStreaming/releases/tag/v1.6.1), TransformMatrix information was added to `SurfaceTextureCallback#onDrawFrame` to make customization easier. After updating to v1.6.1, if you implement the `SurfaceTextureCallback` interface, you need to change
``` java
int onDrawFrame(int texId, int texWidth, int texHeight);
```
to:
``` java
int onDrawFrame(int texId, int texWidth, int texHeight, float[] transformMatrix);
```
### v1.6.0
Starting from [v1.6.0](https://github.com/pili-engineering/PLDroidMediaStreaming/releases/tag/v1.6.0), `StreamingEnv` must be correctly initialized before the SDK is used, otherwise an exception is thrown while constructing the core class `CameraStreamingManager`. See the [Demo](https://github.com/pili-engineering/PLDroidMediaStreaming/blob/master/PLDroidMediaStreamingDemo/app/src/main/java/com/qiniu/pili/droid/streaming/demo/StreamingApplication.java) for details.
``` java
StreamingEnv.init(getApplicationContext());
```
### v1.4.6
Starting from v1.4.6, the following must be added to the host project's build.gradle:
```
dependencies {
...
compile 'com.qiniu:happy-dns:0.2.7'
...
}
```
Otherwise, class-not-found errors for happydns-related classes will occur at runtime.
### v1.6.0
Starting from [v1.6.0](https://github.com/pili-engineering/PLDroidCameraStreaming/releases/tag/v1.6.0), `StreamingEnv` must be correctly initialized before the SDK is used, otherwise an exception is thrown while constructing the core class `CameraStreamingManager`. See the [Demo](https://github.com/pili-engineering/PLDroidCameraStreaming/blob/master/PLDroidCameraStreamingDemo/app/src/main/java/com/pili/pldroid/streaming/camera/demo/StreamingApplication.java) for details.
``` java
StreamingEnv.init(getApplicationContext());
```
### v1.6.1
Starting from [v1.6.1](https://github.com/pili-engineering/PLDroidCameraStreaming/releases/tag/v1.6.1), TransformMatrix information was added to `SurfaceTextureCallback#onDrawFrame` to make customization easier. After updating to v1.6.1, if you implement the `SurfaceTextureCallback` interface, you need to change
``` java
int onDrawFrame(int texId, int texWidth, int texHeight);
```
to:
``` java
int onDrawFrame(int texId, int texWidth, int texHeight, float[] transformMatrix);
```
### Feedback and Suggestions
If you run into any problems, you can report them by submitting issues on the GitHub repo. Please describe the problem as clearly as possible, attach any error messages, and set the type to bug (or another appropriate label) in Labels.
[View existing issues and submit bugs here.](https://github.com/pili-engineering/PLDroidCameraStreaming/issues)
| 33.108911 | 371 | 0.750748 | yue_Hant | 0.799501 |
2cc50e38e574c42ed2048ce0b65467e07d234a29 | 294 | md | Markdown | site/jbase/jql/jql-keyword-cross-reference/no/README.md | taful/docs | b63111a831566f262ea9d57ce56c10d209804eeb | [
"MIT"
] | 7 | 2019-12-06T23:39:36.000Z | 2020-12-13T13:26:23.000Z | site/jbase/jql/jql-keyword-cross-reference/no/README.md | taful/docs | b63111a831566f262ea9d57ce56c10d209804eeb | [
"MIT"
] | 36 | 2020-01-21T00:17:12.000Z | 2022-02-28T03:24:29.000Z | site/jbase/jql/jql-keyword-cross-reference/no/README.md | taful/docs | b63111a831566f262ea9d57ce56c10d209804eeb | [
"MIT"
] | 33 | 2020-02-07T12:24:42.000Z | 2022-03-24T15:38:31.000Z | # NO
<PageHeader />
The **NO** operator compares a field or EVAL expression to null.
## Syntax
```
WITH NO field
```
where:
**field** is the field or EVAL expression to be compared.
## Example
```
LIST PARTS WITH NO SIZE
```
Back to [Cross Reference](./../README.md)
<PageFooter />
---
layout: pages
route: /2013/07/filter-pattern-for-linq-query-filter.html
title: Custom LINQ Filter Operators
tags:
- coding
- csharp
systemTags:
- page:tech
- page:csharp
category: tech
---
<div class="NoSpacing">
Generics, expressions, and extension methods are amazing
features that open the doors to incredible new features and abilities in C#
(and .NET in general).<br />
<br />
<h4>
Quick Summary</h4>
</div>
<div class="NoSpacing">
Using C# extension methods, we will rewrite commonly used Linq "where" clauses into filter methods that we can re-use in our code. We will turn<br />
"DbSet.Where(v => v.EffectiveFrom &lt; DateTime.Now && v.EffectiveTo > DateTime.Now)"<br />
into "DbSet.WhereActive()" and better satisfy the Don't Repeat Yourself (DRY) principle.<br />
<br />
<h4>
Introduction</h4>
Using generics, expressions, and extension methods we can build reusable and testable Linq
filters. In this post, we are going to demonstrate how to create a Linq filter using C# extension methods.</div>
<div class="NoSpacing">
<br /></div>
<div class="NoSpacing">
We will be using the method syntax as opposed to the query syntax as that makes it easier to visualize the queryable as a pipeline that resembles "from -> where -> orderby -> select".<br />
<br />
A simple Linq query consists of those 4 parts; from, where, orderby, and select. The "from" clause
specifies the data source, the "where" clause can optionally limit the results,
the "orderby" clause optionally sorts the results, and the "select"
clause optionally projects the results. Our projection matches the default projection and just outputs the same type of object as DbSet is. Since it matches the default, this clause is optional in this case.</div>
<div class="NoSpacing">
<br />
Our filters are specifically about replacing "where" clauses with our filter methods. There are some interesting things we can do with "orderby" and "select" but we will leave that as a subject for another day.<br />
<br /></div>
<div class="NoSpacing">
In projects, we commonly see certain clauses used
repeatedly. One such clause checks if an
object is active. In several of our
tables, "active" is defined as "the datetime value in the
EffectiveFrom column is less than the current time and the datetime value in
the EffectiveTo column is greater than the current time." Rather than repeating this clause over and
over, we have created a WhereActive() extension method that we can drop into any
query against this object type.</div>
<div class="NoSpacing">
<br />
<h4>
Initial Query</h4>
<div>
Our example DbSet is a collection of "StatusCode" objects and the simple query we are converting to use our filter looks like this</div>
<br />
<pre>DbSet
  .Where(v => v.EffectiveFrom &lt; DateTime.Now &&
v.EffectiveTo > DateTime.Now)
.OrderBy(v => v.Id)
.Select(v => v)
</pre>
<div>
<br /></div>
<h4>
About Extension Methods</h4>
</div>
<div class="NoSpacing">
In order to implement that clause as an extension method, we
first need an extension methods class. (Additional reading on extension methods at <a href="http://www.hanselman.com/blog/HowDoExtensionMethodsWorkAndWhyWasANewCLRNotRequired.aspx">Scott Hanselman's blog</a>) Like any extension methods class, this class must be a static
class. It doesn't matter what the class is called, but when you want to use that extension method anywhere in your project, that class must be in the same project or in the references, and the namespace of your extension methods class must be in a using statement in the file you use the extensions in.<br />
<br />
<h4>
Create an Extension Method to Implement our WhereActive filter</h4>
Now we will implement an
extension method that extends IQueryable&lt;StatusCode&gt; and returns IQueryable&lt;StatusCode&gt;. This gives us a method that we can put next
to any existing query to produce a result that has only active values. For example, where “DbSet” would return all
values from a table, “DbSet.WhereActive()” would return only the active values.</div>
<div class="NoSpacing">
<br /></div>
<pre class="brush: csharp" name="code">namespace Project.Extensions
{
public static class FilterExtensions
{
public static IQueryable<StatusCode> WhereActive(this IQueryable<StatusCode> query)
{
            return query.Where(v => v.EffectiveFrom &lt; DateTime.Now &&
v.EffectiveTo > DateTime.Now);
}
}
}
</pre>
<div class="NoSpacing">
<br />
<h4>
Rewrite our query</h4>
At this point, we can re-write the above query using our filter.<br />
<br />
<pre>DbSet
.WhereActive()
.OrderBy(v => v.Id)
.Select(v => v)</pre>
<br />
This is the same exact query as our original one but it is easier to read and debug and it allows us to put the definition for what it means to be "Active" in a single place in the code, satisfying our need to avoid repeating ourselves.<br />
<br />
It also makes it easy to change our definition of what "Active" means. If we later decide that “active” is defined as “has the active flag set,” we only have to rewrite the “WhereActive” filter to implement this change.<br />
<br />
<h4>
Add Another Filter</h4>
Now we can add another filter to implement another commonly used clause. We have defined “Approved” for this entity as “ApprovalDate column is set to a non-null value.” As you can see, this is not something that a later developer can easily see by looking at the code or the data model. Our filter, however, makes that definition much more obvious.</div>
<div class="NoSpacing">
<br /></div>
<pre class="brush: csharp" name="code"> public static IQueryable<StatusCode> WhereApproved(this IQueryable<StatusCode> query)
{
return query.Where(v => v.ApprovalDate != null);
}
</pre>
<br />
<div class="NoSpacing">
Now our code can execute a query consisting of<br />
<br />
<pre>DbSet
.WhereActive()
.WhereApproved()
.OrderBy(v => v.Id)
.Select(v => v)</pre>
<br />
A later developer looking at this code can more easily understand what our result set should include and why. It should be apparent looking at this code that we want all StatusCode objects that are currently active and approved.</div>
<div class="NoSpacing">
<br />
<h4>
Summary</h4>
We looked at creating filter methods that turns commonly used where clauses into filter functions that we can re-use within our code. We demonstrated how to build them and showed what the Linq query looks like before and after switching to filter functions.</div>
<div class="NoSpacing">
<br /></div>
<div class="NoSpacing">
Next time we will look at how we can write simple unit tests to verify that our filter methods are operating as intended.</div>
<div class="NoSpacing">
<br /></div>
<div class="NoSpacing">
<br /></div>
<div class="NoSpacing">
<br /></div>
# UpSampling3D
An upsampling layer for 3-D inputs.
``` swift
@frozen public struct UpSampling3D<Scalar: TensorFlowFloatingPoint>: ParameterlessLayer
```
## Inheritance
[`ParameterlessLayer`](/ParameterlessLayer)
## Initializers
### `init(size:)`
Creates an upsampling layer.
``` swift
public init(size: Int)
```
#### Parameters
- size: The upsampling factor for rows and columns.
## Properties
### `size`
``` swift
let size: Int
```
## Methods
### `repeatingElements(_:alongAxis:count:)`
Repeats the elements of a tensor along an axis, like `np.repeat`.
Function adapted from `def repeat_elements`:
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/keras/backend.py
``` swift
@differentiable private func repeatingElements(_ input: Tensor<Scalar>, alongAxis axis: Int, count: Int) -> Tensor<Scalar>
```
### `_vjpRepeatingElements(_:alongAxis:count:)`
``` swift
private func _vjpRepeatingElements(_ input: Tensor<Scalar>, alongAxis axis: Int, count: Int) -> (value: Tensor<Scalar>, pullback: (Tensor<Scalar>) -> (TangentVector, Tensor<Scalar>))
```
### `callAsFunction(_:)`
Returns the output obtained from applying the layer to the given input.
``` swift
@differentiable public func callAsFunction(_ input: Tensor<Scalar>) -> Tensor<Scalar>
```
#### Parameters
- input: The input to the layer.
#### Returns
The output.
---
title: Update Field in All Child Nodes in AngularJS
page_title: Update Field in All Child Nodes in AngularJS | Kendo UI TreeList
description: "Learn how to update the checked state on all child nodes of a Kendo UI TreeList widget in AngularJS."
previous_url: /controls/data-management/treelist/how-to/AngularJS/update-a-field-in-all-child-nodes
slug: howto_updatefieldinallchildnodes_angularjs_treelist
position: 1
---
# Update Fields in All Child Nodes in AngularJS
The following example demonstrates how to update the checked state on all child nodes of the TreeList in AngularJS.
```dojo
<div id="example" ng-app="KendoDemos">
<div ng-controller="MyCtrl">
<kendo-treelist options="treelistOptions"></kendo-treelist>
</div>
</div>
<script>
angular.module("KendoDemos", ["kendo.directives"]).controller("MyCtrl", function ($scope) {
var dataSource = new kendo.data.TreeListDataSource({
transport: {
read: function(options) {
setTimeout(function() {
if (!options.data.id) {
options.success([
{ id: 1, Name: "Daryl Sweeney", Position: "CEO", Phone: "(555) 924-9726", parentId: null, hasChildren: true },
{ id: 2, Name: "Guy Wooten", Position: "Chief Technical Officer", Phone: "(438) 738-4935", parentId: null, hasChildren: true },
{ id: 32, Name: "Buffy Weber", Position: "VP, Engineering", Phone: "(699) 838-6121", parentId: 2 },
{ id: 11, Name: "Hyacinth Hood", Position: "Team Lead", Phone: "(889) 345-2438", parentId: 32 },
{ id: 60, Name: "Akeem Carr", Position: "Junior Software Developer", Phone: "(738) 136-2814", parentId: 11 },
{ id: 78, Name: "Rinah Simon", Position: "Software Developer", Phone: "(285) 912-5271", parentId: 11 },
{ id: 42, Name: "Gage Daniels", Position: "Software Architect", Phone: "(107) 290-6260", parentId: 32 },
{ id: 43, Name: "Constance Vazquez", Position: "Director, Engineering", Phone: "(800) 301-1978", parentId: 32 },
{ id: 46, Name: "Darrel Solis", Position: "Team Lead", Phone: "(327) 977-0216", parentId: 43 },
{ id: 47, Name: "Brian Yang", Position: "Senior Software Developer", Phone: "(565) 146-5435", parentId: 46 },
{ id: 50, Name: "Lillian Bradshaw", Position: "Software Developer", Phone: "(323) 509-3479", parentId: 46 },
{ id: 3, Name: "Priscilla Frank", Position: "Chief Product Officer", Phone: "(217) 280-5300", parentId: 1 },
{ id: 4, Name: "Ursula Holmes", Position: "EVP, Product Strategy", Phone: "(370) 983-8796", parentId: 3 },
{ id: 24, Name: "Melvin Carrillo", Position: "Director, Developer Relations", Phone: "(344) 496-9555", parentId: 3 },
{ id: 29, Name: "Martha Chavez", Position: "Developer Advocate", Phone: "(140) 772-7509", parentId: 24 },
{ id: 30, Name: "Oren Fox", Position: "Developer Advocate", Phone: "(714) 284-2408", parentId: 24 },
{ id: 41, Name: "Amos Barr", Position: "Developer Advocate", Phone: "(996) 587-8405", parentId: 24 }
]);
}
}, 1000);
}
},
schema: {
model: {
id: "id"
}
}
});
$scope.checkChildren = function(node) {
function check(nodes, state) {
for (var i = 0; i < nodes.length; i++) {
nodes[i].set("checked", state);
check(dataSource.childNodes(nodes[i]), state);
}
}
check(dataSource.childNodes(node), node.checked);
};
$scope.treelistOptions =
{
height: 540,
dataSource: dataSource,
columns:
[
{ template: "<input type='checkbox' ng-model='dataItem.checked' ng-change='checkChildren(dataItem)' />", width: 32 },
{ field: "Position", expandable: true },
{ field: "Name" },
{ field: "Phone" }
],
};
});
</script>
```
## See Also
* [Basic Usage of the TreeList (Demo)](http://demos.telerik.com/kendo-ui/treelist/index)
* [Using the API of the TreeList (Demo)](https://demos.telerik.com/kendo-ui/treelist/api)
* [TreeList JavaScript API Reference](/api/javascript/ui/treelist)
| 52.88172 | 159 | 0.517283 | yue_Hant | 0.303608 |
2cc6def929fbf23f60168b103c96864cc35ff7e2 | 4,387 | md | Markdown | README.md | Amerr/flutter_stories | 226756da8fd3c0ab09978a471933910270ab9d16 | [
"MIT"
] | null | null | null | README.md | Amerr/flutter_stories | 226756da8fd3c0ab09978a471933910270ab9d16 | [
"MIT"
] | null | null | null | README.md | Amerr/flutter_stories | 226756da8fd3c0ab09978a471933910270ab9d16 | [
"MIT"
] | null | null | null | # flutter_stories

A widget that brings the stories mechanism to your apps
## Advantages:
- Simple to use and intuitive API
- Lightweight (~200 lines of code)
- Feels familiar if you've used Instagram or Snapchat stories before
- Can be used with Cupertino and Material packages independently
## Usage
Add `flutter_stories` to your `pubspec.yaml`
## Example
The full version can be found in the [example](https://github.com/vanelizarov/flutter_stories/tree/master/example) dir

## Supported gestures
- Tap the right portion of the screen to switch to the next moment. You can specify `onFlashForward` callback to control app behavior in this case or when story finishes
- Tap the left portion of the screen to switch to the previous moment. Similar to right tap, but uses `onFlashBack`
- Long press (hold) the screen to hide the progress segments and pause story, release to show controls and unpause
## API
| property | type | required | description |
| ------------------------- | --------------------------------------------------------------- | -------- | ----------------------------------------------------------------------------------------------------------------------------------------------- |
| `momentCount` | `int` | true | sets the number of moments in story |
| `momentDurationGetter` | `(int index) => Duration` | true | function that must return Duration for each moment |
| `momentBuilder` | `(BuildContext context, int index) => Widget` | true | builder that gets executed for each moment |
| `onFlashForward` | `() => void` | false | gets executed when user taps the right portion of the screen on the last moment in story or when story finishes playing |
| `onFlashBack` | `() => void` | false | gets executed when user taps the left portion of the screen on the first moment in story |
| `onClose` | `() => void` | false | Shows a close button at the right of screen below progress bar and executes the callback when user taps on the close icon. |
| `startAt` | `int` | false | sets the index of the first moment that will be displayed. defaults to `0` |
| `momentSwitcherFraction` | `double` | false | defaults to `0.33`. sets the ratio of left and right tappable portions of the screen: left for switching back, right for switching forward |
| `progressSegmentBuilder` | `(BuildContext context, double progress, double gap) => Widget` | false | defaults to `Story.instagramProgressSegmentBuilder`. builder for each progress segment. defaults to Instagram-like minimalistic segment builder |
| `progressSegmentGap` | `double` | false | defaults to `2.0`. sets the gap between each progress segment |
| `progressOpacityDuration` | `Duration` | false | defaults to `Duration(milliseconds: 300)`. sets the duration for the progress bar show/hide animation |
| 97.488889 | 252 | 0.46501 | eng_Latn | 0.9787 |
2cc72f8e3571f899c5dd465f5fab908260b1d35c | 1,238 | md | Markdown | tools/wol-e.md | ParthaDhar/kaliwiki | 3f29708c7b82ef8ac22b639bbc6e7827be49350b | [
"MIT"
] | 116 | 2015-01-09T15:47:47.000Z | 2021-11-18T19:20:54.000Z | tools/wol-e.md | ParthaDhar/kaliwiki | 3f29708c7b82ef8ac22b639bbc6e7827be49350b | [
"MIT"
] | 8 | 2015-01-22T15:16:57.000Z | 2015-02-24T06:06:08.000Z | tools/wol-e.md | ParthaDhar/kaliwiki | 3f29708c7b82ef8ac22b639bbc6e7827be49350b | [
"MIT"
] | 49 | 2015-01-10T15:35:53.000Z | 2021-11-18T19:20:57.000Z | # wol-e
Notes
-------
Help Text
-------
```
[*] WOL-E 1.0
[*] Wake on LAN Explorer - A collection of WOL tools.
[*] by Nathaniel Carew
-m
Waking up single computers.
If a password is required use the -k 00:12:34:56:78:90 at the end of the above command.
wol-e.py -m 00:12:34:56:78:90 -b 192.168.1.255 -p <port> -k <pass>
Defaults:
Port: 9
Broadcast: 255.255.255.255
Pass: empty
-s
Sniffing the network for WOL requests and passwords.
All captured WOL requests will be displayed on screen and written to /usr/share/wol-e/WOLClients.txt.
wol-e.py -s -i eth0
-a
Bruteforce powering on WOL clients.
wol-e.py -a -p <port>
Place the address ranges into the bfmac.lst that you wish to bruteforce.
They should be in the following format:
00:12:34:56
Default port: 9
-f
Detecting Apple devices on the network for WOL enabling.
This will output to the screen and write to /usr/share/wol-e/AppleTargets.txt for detected Apple MAC's.
wol-e.py -f
-fa
Attempt to wake all detected Apple targets in /usr/share/wol-e/AppleTargets.txt.
This will send a single WOL packet to each client in the list and tell you how many clients were attempted.
wol-e.py -fa
```
Example Usage
-------
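The help text above gives the command forms; the mechanism they all rely on is the Wake-on-LAN "magic packet". Below is a hedged Node.js sketch of that packet format for illustration — wol-e itself is a Python tool, and this is not its code:

```javascript
// A WOL "magic packet" is 6 bytes of 0xFF followed by the target MAC address
// repeated 16 times (102 bytes total), usually broadcast over UDP to port 9.
function buildMagicPacket(mac) {
  const bytes = mac.split(':').map((h) => parseInt(h, 16));
  if (bytes.length !== 6 || bytes.some(Number.isNaN)) {
    throw new Error('expected a MAC like 00:12:34:56:78:90');
  }
  const packet = Buffer.alloc(102, 0xff); // the first 6 bytes stay 0xFF
  for (let rep = 0; rep < 16; rep++) {
    Buffer.from(bytes).copy(packet, 6 + rep * 6); // MAC repeated 16 times
  }
  return packet;
}

// To actually wake a machine you would broadcast the packet, e.g.:
//   const dgram = require('dgram');
//   const sock = dgram.createSocket('udp4');
//   sock.bind(() => {
//     sock.setBroadcast(true);
//     sock.send(buildMagicPacket('00:12:34:56:78:90'), 9, '192.168.1.255');
//   });
console.log(buildMagicPacket('00:12:34:56:78:90').length); // 102
```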
Links
-------
| 24.27451 | 109 | 0.691438 | eng_Latn | 0.980804 |
2cc85588457040461185107afac30ebd6c620bfb | 1,024 | markdown | Markdown | notes/2.0.0.M4.markdown | glorat/scalatra | e412d9155f259ea3a27f0a4e0679c1f9718ba80d | [
"BSD-2-Clause"
] | 1 | 2015-04-16T08:55:27.000Z | 2015-04-16T08:55:27.000Z | notes/2.0.0.M4.markdown | glorat/scalatra | e412d9155f259ea3a27f0a4e0679c1f9718ba80d | [
"BSD-2-Clause"
] | 2 | 2017-09-27T18:05:41.000Z | 2018-03-14T08:39:06.000Z | notes/2.0.0.M4.markdown | glorat/scalatra | e412d9155f259ea3a27f0a4e0679c1f9718ba80d | [
"BSD-2-Clause"
] | 1 | 2022-03-13T09:33:04.000Z | 2022-03-13T09:33:04.000Z | * Built for Scala 2.9.0-1.
* Dropped support for Scala 2.8.0.
* Specs2 integration for test framework
* JsonSupport trait for actions that return lift-json objects.
* Support route matchers in before and after filters.
* Zero-copy file rendering.
* New SslRequirement handler redirects non-SSL requests to SSL.
* New GetResponseStatus handler stores and retrieves the HTTP status code.
* Make FlashMap entries available to current request, like Rack::Flash.
* Allow CSRFTokenSupport-derived traits to redefine the forgery test.
* multiparams now also understands Ruby-style multiparams (suffixed with [])
* [GH-46](http://github.com/scalatra/scalatra/issues/46): Scentry says an invalid request is authenticated but fails with 500 later
* [GH-41](http://github.com/scalatra/scalatra/issues/41), [GH-57](http://github.com/scalatra/scalatra/issues/57): FlashMap misbehavior with nested filters or multiple servlets
* [GH-64](http://github.com/scalatra/scalatra/issues/64): fix thread-safety issue in route parsers.
| 68.266667 | 175 | 0.78125 | eng_Latn | 0.877924 |
2cc85c2fd9c8f0d642fb1da3c357af4eed53f876 | 1,615 | md | Markdown | content/posts/2020-08-13-בית-ספר.md | rudymalhi/allergy-israel | 6b245ad3b59da98715ffa7a93344432b6f2aced8 | [
"MIT"
] | null | null | null | content/posts/2020-08-13-בית-ספר.md | rudymalhi/allergy-israel | 6b245ad3b59da98715ffa7a93344432b6f2aced8 | [
"MIT"
] | null | null | null | content/posts/2020-08-13-בית-ספר.md | rudymalhi/allergy-israel | 6b245ad3b59da98715ffa7a93344432b6f2aced8 | [
"MIT"
] | null | null | null | ---
template: SinglePost
title: Elementary school
status: Published
date: '2020-08-13'
featuredImage: 'https://ucarecdn.com/fc6a52b7-1810-42c0-bfc2-52a20ee10d1a/'
excerpt: >-
  Registration — report the allergy; if you want an aide, bring all the
  documents together with the registration. Think ahead about what can and
  should be requested and prepare a list of requests — don't flood the staff.
  For example, you can require that peanuts, nuts, and sesame not be present
  in the classroom. (The case of hummus is not the case of halva; the case of
  a cheese snack is not the case of chocolate milk.)
  For multi-allergic children it is recommended to prioritize removing
  allergens by the severity of the reactions.
categories:
  - category: Educational institutions
meta:
  description: How to manage food allergies with schools
  title: Elementary school
---
##
* Registration — report the allergy; if you want an aide, bring all the documents together with the registration.
* Think ahead about what can and should be requested and prepare a list of requests — don't flood the staff. For example, you can require that peanuts, nuts, and sesame not be present in the classroom. (The case of hummus is not the case of halva; the case of a cheese snack is not the case of chocolate milk.) \
  For multi-allergic children it is recommended to prioritize removing allergens by the severity of the reactions.
* Bring medical documents for the staff as well, plus an authorization to administer medication and an emergency form
* Provide a medication kit that stays in a known place, accessible to the staff at all hours of the day but not to the children
* Talk with the child about the procedures
* Ask to arrange training for the staff (association members can also book it through us)
* Define allergen-free zones
* Ask to put up signs at the gate and in other places (the office, the teachers' room)
* Ask to speak at parents' evening to explain the importance of the issue, and hand out sheets listing permitted items
* Set procedures for birthdays, trips, etc. (prepare a list together with the staff of what may be brought)
* Prepare an activity on the topic (worksheets, Arthur, a book, etc.)
* Prepare a box of surprises in case it is needed
* Talk about afternoon activities, social aspects, etc. Don't exclude. Don't separate. Don't run activities involving food, etc.
##
| 42.5 | 166 | 0.749226 | heb_Hebr | 1.000004 |
2cc86c2a67718e76600f563338c24d71ea614111 | 149 | md | Markdown | README.md | moeing-chain/MoeingADS | bb5d53c748160c50b6205c881457db65d61ca26e | [
"Apache-2.0"
] | 4 | 2021-03-18T18:01:38.000Z | 2021-09-23T01:32:37.000Z | README.md | moeing-chain/MoeingADS | bb5d53c748160c50b6205c881457db65d61ca26e | [
"Apache-2.0"
] | 1 | 2022-03-21T22:33:53.000Z | 2022-03-21T22:33:53.000Z | README.md | moeing-chain/MoeingADS | bb5d53c748160c50b6205c881457db65d61ca26e | [
"Apache-2.0"
] | 2 | 2021-11-10T05:03:56.000Z | 2021-12-25T02:08:00.000Z | # MoeingADS
Authenticated Data Structure from Moeing

| 29.8 | 93 | 0.805369 | kor_Hang | 0.410653 |
2cc882da5db5c484267ccbb1e16efae8cc6db423 | 1,264 | md | Markdown | templates/pelican/files/README.md | Jamesap/template-builder | d8ecc452f871ef3b763cbaa17b257d261438aa84 | [
"MIT"
] | null | null | null | templates/pelican/files/README.md | Jamesap/template-builder | d8ecc452f871ef3b763cbaa17b257d261438aa84 | [
"MIT"
] | null | null | null | templates/pelican/files/README.md | Jamesap/template-builder | d8ecc452f871ef3b763cbaa17b257d261438aa84 | [
"MIT"
] | null | null | null | # Pelican for Platform.sh
<p align="center">
<a href="https://console.platform.sh/projects/create-project?template=https://raw.githubusercontent.com/platformsh/template-builder/master/templates/pelican/.platform.template.yaml&utm_content=pelican&utm_source=github&utm_medium=button&utm_campaign=deploy_on_platform">
<img src="https://platform.sh/images/deploy/lg-blue.svg" alt="Deploy on Platform.sh" width="180px" />
</a>
</p>
This template provides a basic Pelican skeleton. All files are generated at build time, so at runtime only static files need to be served.
Pelican is a static site generator written in Python and using Jinja for templating.
## Services
* Python 3.7
## Customizations
The following changes have been made relative to a plain Pelican project. If using this project as a reference for your own existing project, replicate the changes below to your project.
* The `.platform.app.yaml`, `.platform/services.yaml`, and `.platform/routes.yaml` files have been added. These provide Platform.sh-specific configuration and are present in all projects on Platform.sh. You may customize them as you see fit.
## References
* [Pelican](https://getpelican.com/)
* [Python on Platform.sh](https://docs.platform.sh/languages/python.html)
| 46.814815 | 270 | 0.773734 | eng_Latn | 0.962115 |
2cc8d7d1e06186ad940ba746915e97b9496ae8d3 | 541 | md | Markdown | README.md | itMcdull/auto_size | 490eb5c7cd2e282e51277f73e672bf316fc8a87d | [
"BSD-3-Clause"
] | 125 | 2019-06-03T07:15:13.000Z | 2021-04-18T02:16:31.000Z | README.md | itMcdull/auto_size | 490eb5c7cd2e282e51277f73e672bf316fc8a87d | [
"BSD-3-Clause"
] | 12 | 2019-06-13T10:06:42.000Z | 2019-12-13T11:55:22.000Z | README.md | itMcdull/auto_size | 490eb5c7cd2e282e51277f73e672bf316fc8a87d | [
"BSD-3-Clause"
] | 11 | 2019-06-19T02:58:21.000Z | 2021-04-09T08:09:27.000Z | # auto_size
Flutter screen adaptation — a Flutter AutoSize plugin.
### Note
This library has not been officially released yet — please help test it. If you run into any problems, please report them. Thanks, everyone!
### Usage
```dart
Step 1:
dependencies:
auto_size:
git:
url: git://github.com/flutterchina/auto_size.git
Step 2:
flutter packages get
Step 3:
import 'package:auto_size/auto_size.dart';
/// The default design size is 360*640, in dp.
void main() => runAutoSizeApp(MyApp());
void main() => runAutoSizeApp(MyApp(), width: designWidth, height: designHeight);
```
### About
The screen-adaptation idea behind this library comes from [genius158](https://github.com/genius158/FlutterTest).
| 16.393939 | 80 | 0.700555 | yue_Hant | 0.223747 |
2cc9fc173abba8e72cff4d3a67a43450d1267269 | 872 | md | Markdown | README.md | legend80s/is-uglified | 86d2c3964936d4e9a991433abf398de4182d6810 | [
"MIT"
] | 1 | 2021-11-11T14:50:52.000Z | 2021-11-11T14:50:52.000Z | README.md | legend80s/is-uglified | 86d2c3964936d4e9a991433abf398de4182d6810 | [
"MIT"
] | 1 | 2021-05-10T20:38:01.000Z | 2021-05-10T20:38:01.000Z | README.md | legend80s/is-uglified | 86d2c3964936d4e9a991433abf398de4182d6810 | [
"MIT"
] | 1 | 2021-11-11T14:46:30.000Z | 2021-11-11T14:46:30.000Z | # is-uglified
Detect if a javascript file is uglified

## How?
We use the `Mean Identifier Length` measure. For a handwritten JavaScript file, the average length of identifiers `MUST` be bigger than that of an uglified one.
E.g.,
* The `mean identifier length` of [react.development.js](https://unpkg.com/react@16.7.0/umd/react.development.js) is `10.8`
* The `mean identifier length` of the minimized version [react.production.js](https://unpkg.com/react@16.7.0/umd/react.production.min.js) is `1.7`, which is much smaller than before.
We set the default threshold value to `3` for detecting whether a JavaScript file is uglified.
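To make the measure concrete, here is a rough sketch of the heuristic in plain JavaScript. This is not the package's actual implementation — the regex tokenizer, keyword handling, and threshold logic below are simplifying assumptions:

```javascript
// Rough sketch of the Mean Identifier Length heuristic (NOT the real
// is-uglified implementation; tokenization here is deliberately naive).
const THRESHOLD = 3;

function meanIdentifierLength(source) {
  // Naive tokenizer: anything shaped like an identifier, keywords included.
  const identifiers = source.match(/[A-Za-z_$][A-Za-z0-9_$]*/g) || [];
  if (identifiers.length === 0) return 0;
  const total = identifiers.reduce((sum, id) => sum + id.length, 0);
  return total / identifiers.length;
}

function looksUglified(source) {
  return meanIdentifierLength(source) < THRESHOLD;
}

const handwritten = 'function computeTotalPrice(items) { return items.length; }';
const minified = 'function a(b){return b.c+d(e,f)}';
console.log(looksUglified(handwritten)); // false
console.log(looksUglified(minified)); // true
```

Long, descriptive names push the mean well above the threshold, while single-letter minified names pull it below — which is the whole idea behind the measure.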
## Installing
```
npm install is-uglified
```
## Usage
```javascript
import isUglified from 'is-uglified';
isUglified('local_file_to_detect.js') // get result;
```
| 28.129032 | 185 | 0.738532 | eng_Latn | 0.96669 |
2cca190c87b05ee96aef8bbf4089c6ef646d0b4d | 1,557 | md | Markdown | node_modules/loadjs/CHANGELOG.md | funduval/spin-spin | f07e0b9214bb1a6da777057e8f4b10491b3c2de3 | [
"MIT"
] | null | null | null | node_modules/loadjs/CHANGELOG.md | funduval/spin-spin | f07e0b9214bb1a6da777057e8f4b10491b3c2de3 | [
"MIT"
] | null | null | null | node_modules/loadjs/CHANGELOG.md | funduval/spin-spin | f07e0b9214bb1a6da777057e8f4b10491b3c2de3 | [
"MIT"
] | 1 | 2018-05-06T23:38:23.000Z | 2018-05-06T23:38:23.000Z | # LoadJS Changelog
## 3.5.1 - August 9, 2017
* Upgraded devDependencies and re-built payload
## 3.5.0 - March 28, 2017
* Added support for "css!" prefix to force treating file as stylesheet
* Added support for DOM insertion bypass if `before` callback returns `false`
## 3.4.0 - February 23, 2017
* Added isDefined() method to check if a bundle is already defined
## 3.3.1 - January 11, 2017
* Minor code cleanup
## 3.3.0 - January 9, 2017
* Added reset() method to reset dependency trackers
## 3.2.1 - December 18, 2016
* Minor code cleanup
## 3.2.0 - December 11, 2016
* Added `before` callback hook to modify script/link elements before adding
them to the DOM
## 3.1.0 - December 9, 2016
* Added numRetries option
## 3.0.0 - August 25, 2016
* Changed 'fail' callback name to 'error'
* Fixed bug in main attribute of bower.json
## 2.1.2 - August 22, 2016
* Upgraded devDependencies, rebuilt package, saved a byte
## 2.1.1 - July 25, 2016
* Fixed bug causing issues with external css files
## 2.1.0 - June 19, 2016
* Added support for loading CSS files
## 2.0.0 - June 15, 2016
* Changed API to accept object with success/fail functions
* Added support for async: false
## 1.0.4 - May 25, 2016
* Added support for ad blocked script failures
## 1.0.3 - May 18, 2016
* Shaved off 3 more bytes (minified + gzipped)
## 1.0.2 - May 18, 2016
* Added bower.json
* Removed onload script deletion
## 1.0.1 - March 22, 2016
* Small improvement to internal code to save a few bytes
## 1.0.0 - March 21, 2016
* Added UMD support
| 19.708861 | 77 | 0.692357 | eng_Latn | 0.964368 |
2cca4ee068fbe98028ac5019d0909836ac322976 | 3,756 | md | Markdown | note/ToolBox/template.md | nd-yi/blog | 35ccfaa326e0baf3b09b5efab278f8825ee54c23 | [
"Apache-2.0"
] | 5 | 2021-01-15T10:07:51.000Z | 2021-12-14T11:13:29.000Z | note/ToolBox/template.md | nd-yi/blog | 35ccfaa326e0baf3b09b5efab278f8825ee54c23 | [
"Apache-2.0"
] | null | null | null | note/ToolBox/template.md | nd-yi/blog | 35ccfaa326e0baf3b09b5efab278f8825ee54c23 | [
"Apache-2.0"
] | null | null | null | ## Toast
=========================================================
```js
// constants
TOAST_DISTRICT_LOOKUP : 'districtLookup',
TOAST_DISTRICT_LOOKUP_FAILURE_TYPE : 'districtLookupFailure',
// actionCreator
const toastText = 'success!';
dispatch(ToastActions.updateToastInfo({
id : ToastConstants.TOAST_DISTRICT_LOOKUP,
options : {
toastType: ToastConstants.TOAST_DISTRICT_LOOKUP_FAILURE_TYPE,
toastText,
},
}));
setTimeout(() => {
dispatch(ToastActions.resetToast(
ToastConstants.TOAST_DISTRICT_LOOKUP_FAILURE_TYPE,
ToastConstants.TOAST_DISTRICT_LOOKUP,
));
}, ToastConstants.TOAST_TIMEOUT_DURATION);
// view
import ToastNotification from '../../../users/common/ToastNotification';
import ToastConstants from '../../../../constants/ToastConstants';
import { selectSpecificToastInfo } from '../../../../selectors/ToastSelectors';
import ToastHelper from '../../../../model_helpers/ToastHelper';
// mapStateToProps
toastInfo : selectSpecificToastInfo(state, ToastConstants.TOAST_DISTRICT_LOOKUP),
// render
const toastText = ToastHelper.getToastText(toastInfo);
const toastType = ToastHelper.getToastType(toastInfo);
<ToastNotification toastText={toastText} isOpen={!!toastText} toastType={toastType} />
```
## loading
===============================================================
```js
// constants
BEGIN_NEW_FEATURE_ANNOUNCEMENT_FETCH_AJAX_CALL : 'BEGIN_NEW_FEATURE_ANNOUNCEMENT_FETCH_AJAX_CALL',
BEGIN_NEW_FEATURE_ANNOUNCEMENT_FETCH_AJAX_CALL_COMPLETED : 'BEGIN_NEW_FEATURE_ANNOUNCEMENT_FETCH_AJAX_CALL_COMPLETED',
// actionCreator
dispatch(beginAjaxCallAction(
AjaxCallStatusConstants.BEGIN_NEW_FEATURE_ANNOUNCEMENT_FETCH_AJAX_CALL,
StoreStateConstants.VIEW_ALL_ANNOUNCEMENT_FETCH_STATUS,
));
dispatch(beginAjaxCallAction(
AjaxCallStatusConstants.BEGIN_NEW_FEATURE_ANNOUNCEMENT_FETCH_AJAX_CALL_COMPLETED,
StoreStateConstants.VIEW_ALL_ANNOUNCEMENT_FETCH_STATUS,
));
// view
function mapStateToProps(state, ownProps) {
return {
activeAnnouncementsStatus : selectSpecificStatus(state, StoreStateConstants.VIEW_ALL_ANNOUNCEMENT_FETCH_STATUS),
};
}
```
## tooltip
=================================================================
```js
const [tooltipOpen, setTooltipOpen] = useState(false);
const toggleTooltip = () => { setTooltipOpen(!tooltipOpen); };
<i id="Audience" className="fas fa-exclamation-circle cursor-pointer" />
<Tooltip placement="top" isOpen={tooltipOpen} target="Audience" toggle={toggleTooltip}>
{
`
A/T/S/P =
Administrator/Teacher/Student/Parent
`
}
</Tooltip>
```
## Pagination
==============================================================
1. view
```js
import { useLocation } from 'react-router-dom';
import queryString from 'query-string';
const location = useLocation();
const [page, setPage] = useState(1);
useEffect(() => {
const { page: currPage = 1 } = queryString.parse(location.search) || {};
if (Number(currPage) !== page) {
setPage(Number(currPage));
}
}, [location]);
<PaginationNav totalPages={totalPages} currentPage={page} path={location.pathname} />
function mapStateToProps(state, ownProps) {
// Note the detail here: ownProps must carry the page param, which means page can only be passed in from the parent component
return {
totalPages : selectTotalPages(state, ownProps),
};
}
```
## Overriding class style names
Because the Ruby webpack bundle order is problematic, use this namespace-plus-class-name pattern to raise the selector's specificity
```css
.CountryBulkActionTab {
.bulk-btn-selected {
color: #fff;
background-color: #2971bb;
border-color: #276bb0;
box-shadow: 0 0 0 0.2rem rgba(85, 150, 218, 50%);
}
}
```
##
bootstrap_overrides imports the Bootstrap styles
### Adding a new routed page
1. Register the route in App.js
2. Add the sidebar option in LeftColumn
3. Both pages above use various constants
4. Configure the route permissions in routes.rb
| 26.265734 | 122 | 0.678115 | yue_Hant | 0.663732 |
2ccb371c92b641366459a478e4548dd57702df28 | 2,307 | md | Markdown | _posts/2021-10-30-Photos of Model.md | yawenzh/YZmar | 38f83fef3b04f0a5f12cbc69ec244d6b719050e5 | [
"MIT"
] | null | null | null | _posts/2021-10-30-Photos of Model.md | yawenzh/YZmar | 38f83fef3b04f0a5f12cbc69ec244d6b719050e5 | [
"MIT"
] | null | null | null | _posts/2021-10-30-Photos of Model.md | yawenzh/YZmar | 38f83fef3b04f0a5f12cbc69ec244d6b719050e5 | [
"MIT"
] | null | null | null | ---
layout: post
title: "Photos of Model"
categories: Studio Process
author:
- Martha Zheng
---



















| 85.444444 | 116 | 0.824447 | yue_Hant | 0.08748 |
2ccb6a7bb2731ca49ab115c3f0256e4076a05a35 | 359 | md | Markdown | CHANGELOG.md | raandree/DpDscWorkshop | 6a17efe0316c79278765865d30d4d5db98ec72b5 | [
"MIT"
] | null | null | null | CHANGELOG.md | raandree/DpDscWorkshop | 6a17efe0316c79278765865d30d4d5db98ec72b5 | [
"MIT"
] | null | null | null | CHANGELOG.md | raandree/DpDscWorkshop | 6a17efe0316c79278765865d30d4d5db98ec72b5 | [
"MIT"
] | null | null | null | # Changelog for DscPipeline
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/) and uses its types of changes,
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [Unreleased]
### Added
### Changed
- Migration to 'Sampler' and 'Sampler.DscPipeline'
- Migration to Pester 5+
| 25.642857 | 123 | 0.732591 | eng_Latn | 0.959845 |
2ccbcfa8469eada2a2d4d180c877187dedbb5340 | 901 | md | Markdown | README.md | evoluteur/isomorphic-table-cards | 9ce82578d6e16b534e91eeb610113f9b0d797708 | [
"MIT"
] | 4 | 2020-12-10T06:41:18.000Z | 2021-11-25T12:29:13.000Z | README.md | evoluteur/isomorphic-table-cards | 9ce82578d6e16b534e91eeb610113f9b0d797708 | [
"MIT"
] | null | null | null | README.md | evoluteur/isomorphic-table-cards | 9ce82578d6e16b534e91eeb610113f9b0d797708 | [
"MIT"
] | null | null | null | # Isomorphic-Table-Cards · [](https://github.com/evoluteur/isomorphic-table-cards/blob/master/LICENSE)
Isomorphic Table and Cards views with animated transitions.
Check out [the demo](https://evoluteur.github.io/isomorphic-table-cards/index.html).
[](https://evoluteur.github.io/isomorphic-table-cards/index.html)
This [code](https://github.com/evoluteur/isomorphic-table-cards) has no dependencies. It's just vanilla JavaScript, CSS, and HTML under the [MIT license](https://github.com/evoluteur/isomorphic-table-cards/blob/master/LICENSE).
Note: Of course these animated transitions can also be [done using D3](https://evoluteur.github.io/d3-table-cards/).
(c) 2020 [Olivier Giulieri](https://evoluteur.github.io/).
| 60.066667 | 224 | 0.778024 | yue_Hant | 0.23049 |
2ccc376d3199dc9d06f2dfefd0b7b114ddeebe0b | 301 | md | Markdown | docs/proceso_Galton_Watson.md | DrakeWhu/foam | 18580711bb8b7e1750cecd3c197ccdeabc3c2f28 | [
"MIT"
] | null | null | null | docs/proceso_Galton_Watson.md | DrakeWhu/foam | 18580711bb8b7e1750cecd3c197ccdeabc3c2f28 | [
"MIT"
] | null | null | null | docs/proceso_Galton_Watson.md | DrakeWhu/foam | 18580711bb8b7e1750cecd3c197ccdeabc3c2f28 | [
"MIT"
] | null | null | null | ---
tags:
type: matematicas
---
# Galton–Watson process
A Galton–Watson process is a branching [stochastic process](proceso_estocastico.md) that arose from Galton's investigation of the extinction of family names. It also models the transmission of the Y chromosome.
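A standard formal sketch (textbook material added for reference, not from the original note): the process starts from one individual, and each individual independently leaves a random number of offspring.

```latex
% Z_0 = 1, and generation n+1 is the total offspring of generation n:
Z_{n+1} = \sum_{i=1}^{Z_n} X^{(n)}_{i},
\qquad X^{(n)}_{i} \ \text{i.i.d. with } P\big(X^{(n)}_{i} = k\big) = p_k .
% The extinction probability q is the smallest fixed point in [0,1] of the
% offspring generating function:
f(s) = \sum_{k \ge 0} p_k \, s^k, \qquad q = f(q).
```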
| 37.625 | 239 | 0.787375 | spa_Latn | 0.989253 |
2ccca5fb16afcf9029900bca58e7fc2348dd2185 | 869 | md | Markdown | dev-docs/bidders/fidelity.md | yieldmo/prebid.github.io | 46913c1d78db0ad37bfbfbf054a1c32518133fb2 | [
"Apache-2.0"
] | 58 | 2015-11-02T17:02:25.000Z | 2022-03-28T08:27:42.000Z | dev-docs/bidders/fidelity.md | yieldmo/prebid.github.io | 46913c1d78db0ad37bfbfbf054a1c32518133fb2 | [
"Apache-2.0"
] | 1,732 | 2015-08-24T15:17:43.000Z | 2022-03-31T16:27:38.000Z | dev-docs/bidders/fidelity.md | yieldmo/prebid.github.io | 46913c1d78db0ad37bfbfbf054a1c32518133fb2 | [
"Apache-2.0"
] | 994 | 2015-07-22T22:30:03.000Z | 2022-03-31T09:46:59.000Z | ---
layout: bidder
title: Fidelity Media
description: Prebid Fidelity Media Bidder Adapter
pbjs: true
schain_supported: true
biddercode: fidelity
media_types: banner
gdpr_supported: true
usp_supported: true
gvl_id: 408
pbjs_version_notes: not in 5.x
---
### Bid Params
{: .table .table-bordered .table-striped }
| Name | Scope | Description | Example | Type |
|--------|----------|--------------------------------------------------|--------------------------|----------|
| zoneid | required | The ad zone or tag specific ID | `'27248'` | `string` |
| floor | optional | The floor CPM price for the request | `0.1234` | `float` |
| server | optional | Bidder domain (default `'x.fidelity-media.com'`) | `'x.fidelity-media.com'` | `string` |
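A minimal ad unit sketch wiring up these params in the standard Prebid.js ad-unit shape — note the slot code and sizes below are placeholder assumptions, not taken from the docs:

```javascript
// Hypothetical Prebid.js ad unit using the fidelity bid params documented
// above. The ad unit code and banner sizes are placeholders.
var adUnits = [{
  code: 'div-gpt-ad-leaderboard',      // placeholder slot id
  mediaTypes: { banner: { sizes: [[300, 250]] } },
  bids: [{
    bidder: 'fidelity',
    params: {
      zoneid: '27248',                 // required: ad zone or tag ID
      floor: 0.1234,                   // optional: floor CPM
      server: 'x.fidelity-media.com'   // optional: bidder domain
    }
  }]
}];
console.log(adUnits[0].bids[0].params.zoneid); // 27248
```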
| 37.782609 | 110 | 0.516686 | eng_Latn | 0.601805 |
2ccce5243338d1b5d3d814b0dc1362404f62feae | 100 | md | Markdown | README.md | Harish-Israel/Hello-World | da8cc1609009e8fdf69153ca40c7e188fa3142aa | [
"MIT"
] | null | null | null | README.md | Harish-Israel/Hello-World | da8cc1609009e8fdf69153ca40c7e188fa3142aa | [
"MIT"
] | 1 | 2021-05-14T11:47:48.000Z | 2021-05-14T11:47:48.000Z | README.md | Harish-Israel/Hello-World | da8cc1609009e8fdf69153ca40c7e188fa3142aa | [
"MIT"
] | null | null | null | # Hello-World
My first repository on GitHub
I love :coffee: :pizza:, and :dancer:.
Thanks to all ❤️
| 20 | 38 | 0.71 | eng_Latn | 0.992861 |