hexsha
stringlengths
40
40
size
int64
5
1.04M
ext
stringclasses
6 values
lang
stringclasses
1 value
max_stars_repo_path
stringlengths
3
344
max_stars_repo_name
stringlengths
5
125
max_stars_repo_head_hexsha
stringlengths
40
78
max_stars_repo_licenses
listlengths
1
11
max_stars_count
int64
1
368k
max_stars_repo_stars_event_min_datetime
stringlengths
24
24
max_stars_repo_stars_event_max_datetime
stringlengths
24
24
max_issues_repo_path
stringlengths
3
344
max_issues_repo_name
stringlengths
5
125
max_issues_repo_head_hexsha
stringlengths
40
78
max_issues_repo_licenses
listlengths
1
11
max_issues_count
int64
1
116k
max_issues_repo_issues_event_min_datetime
stringlengths
24
24
max_issues_repo_issues_event_max_datetime
stringlengths
24
24
max_forks_repo_path
stringlengths
3
344
max_forks_repo_name
stringlengths
5
125
max_forks_repo_head_hexsha
stringlengths
40
78
max_forks_repo_licenses
listlengths
1
11
max_forks_count
int64
1
105k
max_forks_repo_forks_event_min_datetime
stringlengths
24
24
max_forks_repo_forks_event_max_datetime
stringlengths
24
24
content
stringlengths
5
1.04M
avg_line_length
float64
1.14
851k
max_line_length
int64
1
1.03M
alphanum_fraction
float64
0
1
lid
stringclasses
191 values
lid_prob
float64
0.01
1
9a9c932683a949dbba3cd30969afc982d002d237
200
md
Markdown
programme/armstrong-number/index.md
SpadeQ22/codinasion
239e5a4ce3cdc0fbf3ff45fd2e2b7f8207951448
[ "MIT" ]
null
null
null
programme/armstrong-number/index.md
SpadeQ22/codinasion
239e5a4ce3cdc0fbf3ff45fd2e2b7f8207951448
[ "MIT" ]
4
2022-01-27T11:32:33.000Z
2022-01-29T18:30:13.000Z
programme/armstrong-number/index.md
harshraj8843/codinasion
c4ecdef7933d9c0112185850381d3f40443d3316
[ "MIT" ]
null
null
null
--- title: Check armstrong number description: Write a program to check armstrong number contributors: - Badboy-16 - kzhang01 - vaishnavikumar8 --- import README from "./README.md" <README />
15.384615
54
0.72
eng_Latn
0.463376
9a9ce176cb072adc342f231a8d1e834c1788b9d6
2,398
md
Markdown
docs/workloads/task/example.md
naxty/cortex
89a8128c668e52ed8994c51c095f5e8d405aeca3
[ "Apache-2.0" ]
null
null
null
docs/workloads/task/example.md
naxty/cortex
89a8128c668e52ed8994c51c095f5e8d405aeca3
[ "Apache-2.0" ]
null
null
null
docs/workloads/task/example.md
naxty/cortex
89a8128c668e52ed8994c51c095f5e8d405aeca3
[ "Apache-2.0" ]
null
null
null
# TaskAPI Deploy a task API that trains a model on the iris flower dataset and uploads it to an S3 bucket. ## Key features * Lambda-style execution * Task monitoring * Scale to 0 ## How it works ### Install cortex ```bash $ pip install cortex ``` ### Spin up a cluster on AWS ```bash $ cortex cluster up ``` ### Define a task API ```python # task.py import cortex def train_iris_model(config): import os import boto3, pickle from sklearn.datasets import load_iris from sklearn.model_selection import train_test_split from sklearn.linear_model import LogisticRegression s3_filepath = config["dest_s3_dir"] bucket, key = s3_filepath.replace("s3://", "").split("/", 1) # get iris flower dataset iris = load_iris() data, labels = iris.data, iris.target training_data, test_data, training_labels, test_labels = train_test_split(data, labels) # train the model model = LogisticRegression(solver="lbfgs", multi_class="multinomial") model.fit(training_data, training_labels) accuracy = model.score(test_data, test_labels) print("accuracy: {:.2f}".format(accuracy)) # upload the model pickle.dump(model, open("model.pkl", "wb")) s3 = boto3.client("s3") s3.upload_file("model.pkl", bucket, os.path.join(key, "model.pkl")) requirements = ["scikit-learn==0.23.2", "boto3"] api_spec = { "name": "trainer", "kind": "TaskAPI", } cx = cortex.client("aws") cx.create_api(api_spec, task=train_iris_model, requirements=requirements) ``` ### Deploy to your Cortex cluster on AWS ```bash $ python task.py ``` ### Describe the task API ```bash $ cortex get trainer ``` ### Submit a job ```python import cortex import requests cx = cortex.client("aws") task_endpoint = cx.get_api("trainer")["endpoint"] dest_s3_dir = # specify S3 directory where the trained model will get pushed to job_spec = { "config": { "dest_s3_dir": dest_s3_dir } } response = requests.post(task_endpoint, json=job_spec) print(response.text) # > {"job_id":"69b183ed6bdf3e9b","api_name":"trainer", "config": {"dest_s3_dir": ...}} ``` ### Monitor the job ```bash $ cortex get trainer 69b183ed6bdf3e9b ``` ### View the results Once the job is complete, you should be able to find the trained model of the task job in the S3 directory you've specified. ### Delete the Task API ```bash $ cortex delete trainer ```
20.322034
124
0.687656
eng_Latn
0.691999
9a9cff095396b249bcc198882fc67fcd8ce971a9
160
md
Markdown
docs/components/cascader.md
FLYSASA/KOMA-UI
f4ec41acfedb4071bd9ad83519a08915f6394018
[ "MIT" ]
11
2019-04-23T01:31:19.000Z
2021-04-06T02:04:30.000Z
docs/components/cascader.md
FLYSASA/KOMA-UI
f4ec41acfedb4071bd9ad83519a08915f6394018
[ "MIT" ]
1
2022-03-02T09:51:05.000Z
2022-03-02T09:51:05.000Z
docs/components/cascader.md
FLYSASA/KOMA-UI
f4ec41acfedb4071bd9ad83519a08915f6394018
[ "MIT" ]
null
null
null
--- title: Cascader - 级联选择器 --- # Cascader 级联选择器 <ClientOnly> <cascader-demos></cascader-demos> </ClientOnly> <cascader-attributes></cascader-attributes>
12.307692
43
0.70625
eng_Latn
0.202838
9a9d66bda22aae0d09f1c55ba91e1485e1b0a0f7
1,743
md
Markdown
docs/access/desktop-database-reference/persisting-filtered-and-hierarchical-recordsets.md
isabella232/office-developer-client-docs.de-DE
f244ed2fdf76004aaef1de6b6c24b8b1c5a6942e
[ "CC-BY-4.0", "MIT" ]
2
2020-05-19T18:52:16.000Z
2021-04-21T00:13:46.000Z
docs/access/desktop-database-reference/persisting-filtered-and-hierarchical-recordsets.md
MicrosoftDocs/office-developer-client-docs.de-DE
f244ed2fdf76004aaef1de6b6c24b8b1c5a6942e
[ "CC-BY-4.0", "MIT" ]
2
2021-12-08T03:25:19.000Z
2021-12-08T03:43:48.000Z
docs/access/desktop-database-reference/persisting-filtered-and-hierarchical-recordsets.md
isabella232/office-developer-client-docs.de-DE
f244ed2fdf76004aaef1de6b6c24b8b1c5a6942e
[ "CC-BY-4.0", "MIT" ]
5
2018-07-17T08:19:45.000Z
2021-10-13T10:29:41.000Z
--- title: Speichern von gefilterten und hierarchischen Recordsets TOCTitle: Persisting filtered and hierarchical Recordsets ms:assetid: 3648a997-dac7-d8a3-3cca-a6827f26a4f0 ms:mtpsurl: https://msdn.microsoft.com/library/JJ249120(v=office.15) ms:contentKeyID: 48544162 ms.date: 09/18/2015 mtps_version: v=office.15 ms.localizationpriority: medium ms.openlocfilehash: 9da0cb3e06be240a9782c8fec1e537216290ae2b ms.sourcegitcommit: a1d9041c20256616c9c183f7d1049142a7ac6991 ms.translationtype: MT ms.contentlocale: de-DE ms.lasthandoff: 09/24/2021 ms.locfileid: "59581007" --- # <a name="persisting-filtered-and-hierarchical-recordsets"></a>Speichern von gefilterten und hierarchischen Recordsets **Gilt für**: Access 2013, Office 2013 Wenn die [Filter](filter-property-ado.md)-Eigenschaft für das **Recordset** -Objekt aktiviert ist, werden nur die Zeilen gespeichert, auf die über den Filter zugegriffen werden kann. Wenn das **Recordset** -Objekt hierarchisch ist, werden das aktuelle untergeordnete **Recordset** -Objekt und dessen untergeordneten Elemente gespeichert, einschließlich des übergeordneten **Recordset** -Objekts. Falls die **Save** -Methode eines untergeordneten **Recordset** -Objekts aufgerufen wird, werden das untergeordnete Element und alle ihm untergeordneten Elemente gespeichert, das übergeordnete Element jedoch nicht. Weitere Informationen zu hierarchischen **Recordset** -Objekten finden Sie in [Kapitel 9: Datenstrukturierung](chapter-9-data-shaping.md). > [!NOTE] > [!HINWEIS] Beim Speichern hierarchischer **Recordset** -Objekte (Datenstrukturen) im XML-Format gelten einige Einschränkungen. Weitere Informationen finden Sie unter [Hierarchische Recordset-Objekte in XML](hierarchical-recordsets-in-xml.md).
60.103448
749
0.811819
deu_Latn
0.948379
9a9e05ea63d9943bab6e213595cd03ef2f406f40
16,894
md
Markdown
README.md
brian-intel/video-analytics-serving
5d4900f93c481bc2a5fea7587e689ee96ce007c0
[ "BSD-3-Clause" ]
null
null
null
README.md
brian-intel/video-analytics-serving
5d4900f93c481bc2a5fea7587e689ee96ce007c0
[ "BSD-3-Clause" ]
null
null
null
README.md
brian-intel/video-analytics-serving
5d4900f93c481bc2a5fea7587e689ee96ce007c0
[ "BSD-3-Clause" ]
null
null
null
# Video Analytics Serving | [Getting Started](#getting-started) | [Documentation](#further-reading) | [Reference Guides](#further-reading) | [Related Links](#related-links) | Video Analytics Serving is a python package and microservice for deploying optimized media analytics pipelines. It supports pipelines defined in [GStreamer](https://gstreamer.freedesktop.org/documentation/?gi-language=c)* or [FFmpeg](https://ffmpeg.org/)* and provides APIs to discover, start, stop, customize and monitor pipeline execution. Video Analytics Serving is based on [OpenVINO<sup>&#8482;</sup> Toolkit DL Streamer](https://github.com/opencv/gst-video-analytics) and [FFmpeg Video Analytics](https://github.com/VCDP/FFmpeg-patch). ## Features Include: | | | |---------------------------------------------|------------------| | **Customizable Media Analytics Containers** | Scripts and dockerfiles to build and run container images with the required dependencies for hardware optimized media analytics pipelines. | | **No-Code Pipeline Definitions and Templates** | JSON based definition files, a flexible way for developers to define and parameterize pipelines while abstracting the low level details from their users. | | **Deep Learning Model Integration** | A simple way to package and reference [OpenVINO<sup>&#8482;</sup>](https://software.intel.com/en-us/openvino-toolkit) based models in pipeline definitions. The precision of a model can be auto-selected at runtime based on the chosen inference device. | | **Video Analytics Serving Python API** | A python module to discover, start, stop, customize and monitor pipelines based on their no-code definitions. | | **Video Analytics Serving Microservice** &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;| A RESTful microservice providing endpoints and APIs matching the functionality of the python module. | > **IMPORTANT:** Video Analytics Serving is provided as a _sample_. It > is not intended to be deployed into production environments without > modification. Developers deploying Video Analytics Serving should > review it against their production requirements. # Getting Started The sample microservice includes three media analytics pipelines. | | | |---------------------------------------------|---------| | **object_detection** | Detect and label objects such as bottles and bicycles. | **emotion_recognition** | Detect the emotions of a person within a video stream. | **audio_detection** | Analyze audio streams for events such as breaking glass or barking dogs. ## Prerequisites | | | |---------------------------------------------|------------------| | **Docker** | Video Analytics Serving requires Docker for it's build, development, and runtime environments. Please install the latest for your platform. [Docker](https://docs.docker.com/install). | | **bash** | Video Analytics Serving's build and run scripts require bash and have been tested on systems using versions greater than or equal to: `GNU bash, version 4.3.48(1)-release (x86_64-pc-linux-gnu)`. Most users shouldn't need to update their version but if you run into issues please install the latest for your platform. Instructions for macOS&reg;* users [here](docs/installing_bash_macos.md). | | **curl** | The samples below use the `curl` command line program to issue standard HTTP requests to the microservice. Please install the latest for your platform. Note: any other tool or utility that can issue standard HTTP requests can be used in place of `curl`. | ## Building the Microservice Build the sample microservice with the following command: ```bash ./docker/build.sh ``` The script will automatically include the sample models, pipelines and required dependencies. > **Note:** When running this command for the first time, the default > base image for Video Analytics Serving will take a long time to > build (likely over an hour). For instructions on how to re-use > pre-built base images to speed up the build time please see the > following [documentation](docs/building_video_analytics_serving.md#using-pre-built-media-analytics-base-images). To verify the build succeeded execute the following command: ```bash docker images video-analytics-serving-gstreamer:latest ``` Expected output: ```bash REPOSITORY TAG IMAGE ID CREATED SIZE video-analytics-serving-gstreamer latest f51f2695639f 2 minutes ago 1.39GB ``` ## Running the Microservice Start the sample microservice with the following command: ```bash ./docker/run.sh -v /tmp:/tmp ``` This script issues a standard docker run command to launch the container, run a Tornado based web service on port 8080, and mount the `/tmp` folder. The `/tmp` folder is mounted to share sample results with the host and is optional in actual deployments. Expected output: ``` {"levelname": "INFO", "asctime": "2020-08-06 12:37:12,139", "message": "=================", "module": "pipeline_manager"} {"levelname": "INFO", "asctime": "2020-08-06 12:37:12,139", "message": "Loading Pipelines", "module": "pipeline_manager"} {"levelname": "INFO", "asctime": "2020-08-06 12:37:12,139", "message": "=================", "module": "pipeline_manager"} (gst-plugin-scanner:14): GStreamer-WARNING **: 12:37:12.476: Failed to load plugin '/root/gst-video-analytics/build/intel64/Release/lib/libvasot.so': libopencv_video.so.4.4: cannot open shared object file: No such file or directory {"levelname": "INFO", "asctime": "2020-08-06 12:37:13,207", "message": "FFmpeg Pipelines Not Enabled: ffmpeg not installed\n", "module": "pipeline_manager"} {"levelname": "INFO", "asctime": "2020-08-06 12:37:13,208", "message": "Loading Pipelines from Config Path /home/video-analytics-serving/pipelines", "module": "pipeline_manager"} {"levelname": "INFO", "asctime": "2020-08-06 12:37:13,223", "message": "Loading Pipeline: audio_detection version: 1 type: GStreamer from /home/video-analytics-serving/pipelines/audio_detection/1/pipeline.json", "module": "pipeline_manager"} {"levelname": "INFO", "asctime": "2020-08-06 12:37:13,230", "message": "Loading Pipeline: object_detection version: 1 type: GStreamer from /home/video-analytics-serving/pipelines/object_detection/1/pipeline.json", "module": "pipeline_manager"} {"levelname": "INFO", "asctime": "2020-08-06 12:37:13,240", "message": "Loading Pipeline: emotion_recognition version: 1 type: GStreamer from /home/video-analytics-serving/pipelines/emotion_recognition/1/pipeline.json", "module": "pipeline_manager"} {"levelname": "INFO", "asctime": "2020-08-06 12:37:13,241", "message": "===========================", "module": "pipeline_manager"} {"levelname": "INFO", "asctime": "2020-08-06 12:37:13,241", "message": "Completed Loading Pipelines", "module": "pipeline_manager"} {"levelname": "INFO", "asctime": "2020-08-06 12:37:13,241", "message": "===========================", "module": "pipeline_manager"} {"levelname": "INFO", "asctime": "2020-08-06 12:37:13,333", "message": "Starting Tornado Server on port: 8080", "module": "__main__"} ``` ## Detecting Objects in a Video <br/> <br/> ### Example Request: <table> <tr> <th> Endpoint </th> <th> Verb </th> <th> Request </th> <th> Response </th> </tr> <tr> <td> pipelines/object_detection/1 </td> <td> POST </td> <td> <pre lang="json"> JSON { "source": { "uri": "https://example.mp4", "type": "uri" }, "destination": { "type": "file", "path": "/tmp/results_objects.txt", "format": "json-lines" } } </pre> </td> <td> 200 <br/> <br/> Pipeline Instance Id </td> </tr> </table> ### Curl Command: Start a new shell and execute the following command to issue an HTTP POST request, start a pipeline and analyze a sample [video](https://github.com/intel-iot-devkit/sample-videos/blob/master/preview/bottle-detection.gif). ```bash curl localhost:8080/pipelines/object_detection/1 -X POST -H \ 'Content-Type: application/json' -d \ '{ "source": { "uri": "https://github.com/intel-iot-devkit/sample-videos/blob/master/bottle-detection.mp4?raw=true", "type": "uri" }, "destination": { "type": "file", "path": "/tmp/results_objects.txt", "format": "json-lines" } }' ``` ### Detection Results: To view incremental results, execute the following command from the shell. ```bash tail -f /tmp/results_objects.txt ``` As the video is being analyzed and as objects appear and disappear you will see detection results in the output. Expected Output: ```json {"objects":[{"detection":{"bounding_box":{"x_max":0.9022353887557983,"x_min":0.7940621376037598,"y_max":0.8917602300643921,"y_min":0.30396613478660583},"confidence":0.7093080282211304,"label":"bottle","label_id":5},"h":212,"roi_type":"bottle","w":69,"x":508,"y":109}],"resolution":{"height":360,"width":640},"source":"https://github.com/intel-iot-devkit/sample-videos/blob/master/bottle-detection.mp4?raw=true","timestamp":39553072625} ``` After pretty-printing: ```json { "objects": [ { "detection": { "bounding_box": { "x_max": 0.9022353887557983, "x_min": 0.7940621376037598, "y_max": 0.8917602300643921, "y_min": 0.30396613478660583 }, "confidence": 0.7093080282211304, "label": "bottle", "label_id": 5 }, "h": 212, "roi_type": "bottle", "w": 69, "x": 508, "y": 109 } ], "resolution": { "height": 360, "width": 640 }, "source": "https://github.com/intel-iot-devkit/sample-videos/blob/master/bottle-detection.mp4?raw=true", "timestamp": 39553072625 } ``` # More Examples <details> <summary>Emotion Recognition</summary> ## Recognizing Emotions in a Video ### Example Request: <table> <tr> <th> Endpoint </th> <th> Verb </th> <th> Request </th> <th> Response </th> </tr> <tr> <td> /pipelines/emotion_recognition/1 </td> <td> POST </td> <td> <pre lang="json"> JSON { "source": { "uri": "https://example.mp4", "type": "uri" }, "destination": { "type": "file", "path": "/tmp/results_emotions.txt", "format": "json-lines" } } </pre> </td> <td> 200 <br/> <br/> Pipeline Instance Id </td> </tr> </table> ### Curl Command: Start a new shell and execute the following command to issue an HTTP POST request, start a pipeline and analyze a sample [video](https://github.com/intel-iot-devkit/sample-videos/blob/master/preview/head-pose-face-detection-male.gif). ```bash curl localhost:8080/pipelines/object_detection/1 -X POST -H \ 'Content-Type: application/json' -d \ '{ "source": { "uri": "https://github.com/intel-iot-devkit/sample-videos/blob/master/head-pose-face-detection-male.mp4?raw=true", "type": "uri" }, "destination": { "type": "file", "path": "/tmp/results_emotions.txt", "format": "json-lines" } }' ``` ### Detection Results: To view incremental results, execute the following command from the shell. ```bash tail -f /tmp/results_emotions.txt ``` As the video is being analyzed and as the person's emotions appear and disappear you will see recognition results in the output. Expected Output: ```json {"objects":[{"detection":{"bounding_box":{"x_max":0.567557156085968,"x_min":0.42375022172927856,"y_max":0.5346322059631348,"y_min":0.15673652291297913},"confidence":0.9999996423721313,"label":"face","label_id":1},"emotion":{"label":"neutral","model":{"name":"0003_EmoNet_ResNet10"}},"h":163,"roi_type":"face","w":111,"x":325,"y":68}],"resolution":{"height":432,"width":768},"source":"https://github.com/intel-iot-devkit/sample-videos/blob/master/head-pose-face-detection-male.mp4?raw=true","timestamp":13333333333} ``` After pretty-printing: ```json { "objects": [ { "detection": { "bounding_box": { "x_max": 0.567557156085968, "x_min": 0.42375022172927856, "y_max": 0.5346322059631348, "y_min": 0.15673652291297913 }, "confidence": 0.9999996423721313, "label": "face", "label_id": 1 }, "emotion": { "label": "neutral", "model": { "name": "0003_EmoNet_ResNet10" } }, "h": 163, "roi_type": "face", "w": 111, "x": 325, "y": 68 } ], "resolution": { "height": 432, "width": 768 }, "source": "https://github.com/intel-iot-devkit/sample-videos/blob/master/head-pose-face-detection-male.mp4?raw=true", "timestamp": 13333333333 } ``` </details> <details> <summary>Audio Event Detection</summary> ## Detecting Audio Events in an Audio Recording ### Example Request: <table> <tr> <th> Endpoint </th> <th> Verb </th> <th> Request </th> <th> Response </th> </tr> <tr> <td> /pipelines/audio_detection/1 </td> <td> POST </td> <td> <pre lang="json"> JSON { "source": { "uri": "https://example.wav", "type": "uri" }, "destination": { "type": "file", "path": "/tmp/results_audio_events.txt", "format": "json-lines" } } </pre> </td> <td> 200 <br/> <br/> Pipeline Instance Id </td> </tr> </table> ### Curl Command: Start a new shell and execute the following command to issue an HTTP POST request, start a pipeline and analyze a sample [audio](https://github.com/opencv/gst-video-analytics/blob/preview/audio-detect/samples/gst_launch/audio_detect/how_are_you_doing.wav?raw=true). ```bash curl localhost:8080/pipelines/object_detection/1 -X POST -H \ 'Content-Type: application/json' -d \ '{ "source": { "uri": "https://github.com/opencv/gst-video-analytics/blob/preview/audio-detect/samples/gst_launch/audio_detect/how_are_you_doing.wav?raw=true", "type": "uri" }, "destination": { "type": "file", "path": "/tmp/results_audio_events.txt", "format": "json-lines" } }' ``` ### Detection Results: To view incremental results, execute the following command from the shell. ```bash tail -f /tmp/results_audio_events.txt ``` As the audio is being analyzed and as events start and stop you will see detection results in the output. Expected Output: ```json {"channels":1,"events":[{"detection":{"confidence":1.0,"label":"Speech","label_id":53,"segment":{"end_timestamp":2200000000,"start_timestamp":1200000000}},"end_timestamp":2200000000,"event_type":"Speech","start_timestamp":1200000000}],"rate":16000} ``` After pretty-printing: ```json { "channels": 1, "events": [ { "detection": { "confidence": 1, "label": "Speech", "label_id": 53, "segment": { "end_timestamp": 2200000000, "start_timestamp": 1200000000 } }, "end_timestamp": 2200000000, "event_type": "Speech", "start_timestamp": 1200000000 } ], "rate": 16000 } ``` </details> # Further Reading | **Documentation** | **Reference Guides** | | ------------ | ------------------ | | **-** [Defining Media Analytics Pipelines](docs/defining_pipelines.md) <br/> **-** [Building Video Analytics Serving](docs/building_video_analytics_serving.md) <br/> **-** [Running Video Analytics Serving](docs/running_video_analytics_serving.md) | **-** [Video Analytics Serving Architecture Diagram](docs/images/video_analytics_service_architecture.png) <br/> **-** [Microservice Endpoints](docs/restful_microservice_interfaces.md) <br/> **-** [Build Script Reference](docs/build_script_reference.md) <br/> **-** [Run Script Reference](docs/run_script_reference.md) | ## Related Links | **Media Frameworks** | **Media Analytics** | **Samples and Reference Designs** | ------------ | ------------------ | -----------------| | **-** [GStreamer](https://gstreamer.freedesktop.org/documentation/?gi-language=c)* <br/> **-** [GStreamer* Overview](docs/gstreamer_overview.md) <br/> **-** [FFmpeg](https://ffmpeg.org/)* | **-** [OpenVINO<sup>&#8482;</sup> Toolkit](https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit.html) <br/> **-** [OpenVINO<sup>&#8482;</sup> Toolkit DL Streamer](https://github.com/opencv/gst-video-analytics) <br/> **-** [FFmpeg* Video Analytics](https://github.com/VCDP/FFmpeg-patch) | **-** [Open Visual Cloud Smart City Sample](https://github.com/OpenVisualCloud/Smart-City-Sample) <br/> **-** [Open Visual Cloud Ad-Insertion Sample](https://github.com/OpenVisualCloud/Ad-Insertion-Sample) <br/> **-** [Edge Insights for Retail](https://software.intel.com/content/www/us/en/develop/articles/real-time-sensor-fusion-for-loss-detection.html) --- \* Other names and brands may be claimed as the property of others.
34.904959
858
0.670889
eng_Latn
0.613573
9a9e3b74fb2216ea810577f187a68abd2aaf5567
2,978
md
Markdown
README.md
unt-libraries/edtf-validate
0ca6d3b4b27a35fabe76f93480dccef62bcaae1b
[ "BSD-3-Clause" ]
5
2016-10-12T21:10:06.000Z
2020-05-13T14:09:12.000Z
README.md
unt-libraries/edtf-validate
0ca6d3b4b27a35fabe76f93480dccef62bcaae1b
[ "BSD-3-Clause" ]
28
2015-08-12T21:05:18.000Z
2021-11-09T21:16:11.000Z
README.md
unt-libraries/ExtendedDateTimeFormat
0ca6d3b4b27a35fabe76f93480dccef62bcaae1b
[ "BSD-3-Clause" ]
1
2019-02-13T21:11:20.000Z
2019-02-13T21:11:20.000Z
edtf-validate ========================= [![PyPI](https://img.shields.io/pypi/v/edtf-validate.svg)](https://pypi.python.org/pypi/edtf-validate) [![Build Status](https://github.com/unt-libraries/edtf-validate/actions/workflows/test.yml/badge.svg?branch=master)](https://github.com/unt-libraries/edtf-validate/actions) Valid EDTF provides validity testing against levels 0-2 of the official [EDTF Specification](https://www.loc.gov/standards/datetime/edtf.html) released February 2019. You might find it most useful for tasks involving date validation and comparison. Typical usage often looks like this: ```python >>> from edtf_validate.valid_edtf import is_valid, isLevel2 >>> is_valid('2015-03-05') True >>> is_valid('Jan 12, 1990') False >>> isLevel2('1998?-12-23') True >>> conformsLevel1('-1980-11-01/1989-11-30') True ``` Or just straight from the command line... ```console $ edtf-validate 2015 2015 True ``` NOTE ---- Please take special care to note the name difference between command line usage and the other usage cases: * When importing into python, use an underscore separator, e.g. `import edtf_validate`. * When using the command line (or when talking about the package name), use a dash separator, e.g. `$ edtf-validate`. What exactly does edtf-validate do? =============================================== This program will: * Determine if a string is valid EDTF according to the specifications provided by the Library of Congress. * Allow the user to test if a date is a feature of each level of EDTF using `isLevel*` functions. i.e. '1964/2008' is a feature introduced in Level 0 rules, and '1964~/2008~' is a feature introduced in Level1. * Allow the user to test if a date is valid for each level of EDTF using `conformsLevel*` functions. i.e. '2014' is a feature introduced in Level 0 and valid for it, but also valid in Level 1 and Level 2 as all EDTF levels validate dates of itself and levels below it. Another example, '2001-25' is a feature introduced in Level 2 hence valid for Level 2, but it is not a valid date in Level 0 and Level 1. If you're confused what exactly the different levels of EDTF validation implicate, you can read about it in exhaustive detail [here](https://www.loc.gov/standards/datetime). Installation ------------ The easiest way to install is through pip. To use pip to install edtf-validate, along with all the dependencies, use: ```console $ pip install edtf-validate ``` License ------- See LICENSE.txt Acknowledgements ---------------- The edtf-validate was developed at the UNT Libraries and has been worked on by a number of developers over the years including: [Joey Liechty](https://github.com/yeahdef) [Lauren Ko](https://github.com/ldko) [Mark Phillips](https://github.com/vphill) [Gio Gottardi](https://github.com/somexpert) [Madhulika Bayyavarapu](https://github.com/madhulika95b) If you have questions about the project feel free to contact Mark Phillips at mark.phillips@unt.edu.
36.317073
173
0.731363
eng_Latn
0.957587
9a9f300351e8eb6fb34495605e0acee8c33307b9
1,937
md
Markdown
_posts/2019-6-5-leetcode-hash-tbl.md
kevinsblog/kevinsblog.github.io
f55f64c21869d8897de7bf1e72fa56bd930bff37
[ "MIT" ]
1
2019-06-12T12:14:56.000Z
2019-06-12T12:14:56.000Z
_posts/2019-6-5-leetcode-hash-tbl.md
kevinsblog/kevinsblog.github.io
f55f64c21869d8897de7bf1e72fa56bd930bff37
[ "MIT" ]
null
null
null
_posts/2019-6-5-leetcode-hash-tbl.md
kevinsblog/kevinsblog.github.io
f55f64c21869d8897de7bf1e72fa56bd930bff37
[ "MIT" ]
null
null
null
--- layout: post title: LeetCode专题-哈希表 tags: [Leetcode, C/C++] bigimg: /img/path.jpg comments: true --- * toc {:toc} ## 423. Reconstruct Original Digits from English Medium Given a non-empty string containing an out-of-order English representation of digits 0-9, output the digits in ascending order. Note: Input contains only lowercase English letters. Input is guaranteed to be valid and can be transformed to its original digits. That means invalid inputs such as "abc" or "zerone" are not permitted. Input length is less than 50,000. Example 1: Input: "owoztneoer" Output: "012" 题目大意:一堆代表数字的英文单词被打乱,要求还原出它代表的数字。 解题思路:将英文单词的字符记录在哈希表中,一个个找出其中的数字。 ```c++ class Solution { public: string originalDigits(string s) { const vector<string> digits = { "zero", "one", "two", "three", "four", "five", "six", "seven", "eight", "nine" }; unordered_map<char, int> freq; string ans; for (auto c : s) { freq[c]++; } for (size_t i = 0; i < digits.size(); i++) { int nDigitsNum = INT_MAX; for (auto d : digits[i]) { if (freq.count(d) == 0) { nDigitsNum = INT_MAX; //we can have none break; } else { // the minimum num of digit i we can have nDigitsNum = min(nDigitsNum, freq[d]); } } if (nDigitsNum == INT_MAX) { continue; } //reduce the available digit characters for (auto d : digits[i]) { freq[d] -= nDigitsNum; } //append digits while(nDigitsNum--) ans.append(to_string(i)); } return ans; } }; ``` 测试一下,有一些用例还是不能通过测试, ``` Input "zeroonetwothreefourfivesixseveneightnine" Output "01113356" Expected "0123456789" ```
23.059524
153
0.550852
eng_Latn
0.874187
9a9fceb38ceb564cd109eded742a6a5ff429e62b
199
md
Markdown
_snapshot.md
curiohoiho/ng2-material-design-study
e0a7db54c981e7d617e0fb00bc85884098b90f11
[ "MIT" ]
null
null
null
_snapshot.md
curiohoiho/ng2-material-design-study
e0a7db54c981e7d617e0fb00bc85884098b90f11
[ "MIT" ]
null
null
null
_snapshot.md
curiohoiho/ng2-material-design-study
e0a7db54c981e7d617e0fb00bc85884098b90f11
[ "MIT" ]
null
null
null
# latest commit from the original that this re-write is on 1. Oct 10, 2016 ### previous commit ### current commit ### files that I already re-wrote which have changed between these 2 commits:
15.307692
77
0.718593
eng_Latn
1.000004
9a9fd63d82ebd7fd1a5356a7c1a4fc21a9d2fcf7
98
md
Markdown
content/docs/intelligence/traffic-sign/index.en.md
ducnguyenhuynh/via-docs
5f007a0c2769dadaf5f7939aa0ac4b7ce7876e39
[ "CC0-1.0" ]
1
2022-02-09T10:41:20.000Z
2022-02-09T10:41:20.000Z
content/docs/intelligence/traffic-sign/index.en.md
ducnguyenhuynh/via-docs
5f007a0c2769dadaf5f7939aa0ac4b7ce7876e39
[ "CC0-1.0" ]
1
2021-05-31T09:20:00.000Z
2021-05-31T09:20:00.000Z
content/docs/intelligence/traffic-sign/index.en.md
ducnguyenhuynh/via-docs
5f007a0c2769dadaf5f7939aa0ac4b7ce7876e39
[ "CC0-1.0" ]
5
2021-03-21T07:33:15.000Z
2021-12-02T18:42:27.000Z
--- version: 0 title: Nhận dạng biển báo giao thông weight: 20 draft: true --- Đang cập nhật...
9.8
36
0.663265
vie_Latn
0.999937
9aa048bcd78bd3f6ac29269536d85a1a784e38e5
2,744
md
Markdown
desktop-src/WmiSdk/implementing-the-primary-interface-for-an-event-provider.md
KrupalJoshi/win32
f5099e1e3e455bb162771d80b0ba762ee5c974ec
[ "CC-BY-4.0", "MIT" ]
3
2020-04-24T13:02:42.000Z
2021-07-17T15:32:03.000Z
desktop-src/WmiSdk/implementing-the-primary-interface-for-an-event-provider.md
KrupalJoshi/win32
f5099e1e3e455bb162771d80b0ba762ee5c974ec
[ "CC-BY-4.0", "MIT" ]
null
null
null
desktop-src/WmiSdk/implementing-the-primary-interface-for-an-event-provider.md
KrupalJoshi/win32
f5099e1e3e455bb162771d80b0ba762ee5c974ec
[ "CC-BY-4.0", "MIT" ]
1
2022-03-09T23:50:05.000Z
2022-03-09T23:50:05.000Z
--- Description: An event provider must implement the IWbemEventProvider interface to generate event notifications. ms.assetid: ae33c9f5-61f7-4488-a281-01d7f9c62c46 ms.tgt_platform: multiple title: Implementing the Primary Interface for an Event Provider ms.topic: article ms.date: 05/31/2018 --- # Implementing the Primary Interface for an Event Provider An event provider must implement the [**IWbemEventProvider**](/windows/desktop/api/Wbemprov/nn-wbemprov-iwbemeventprovider) interface to generate event notifications. WMI calls the [**IWbemEventProvider::ProvideEvents**](/windows/desktop/api/Wbemprov/nf-wbemprov-iwbemeventprovider-provideevents) method of the provider and passes in a pointer to the sink object, which is an implementation of the [**IWbemObjectSink**](iwbemobjectsink.md) interface. When the event provider is ready to generate a notification, the provider calls the [**IWbemObjectSink::Indicate**](/windows/desktop/api/Wbemcli/nf-wbemcli-iwbemobjectsink-indicate) method. An event provider should place the notifications generated through [**IWbemEventProvider**](/windows/desktop/api/Wbemprov/nn-wbemprov-iwbemeventprovider) in event objects. You should implement event objects as entries in an array of [**IWbemClassObject**](/windows/desktop/api/WbemCli/nn-wbemcli-iwbemclassobject) interfaces represented by the *ppObjArray* parameter of the [**Indicate**](/windows/desktop/api/Wbemcli/nf-wbemcli-iwbemobjectsink-indicate) method. Because **IWbemClassObjects** are COM objects, the provider must increment the reference count for the sink by calling the [**IWbemObjectSink::AddRef**](https://msdn.microsoft.com/en-us/library/ms691379(v=VS.85).aspx) method. Event providers that must supply many notifications (for example, 400 events) should create a unique event object for each notification by either spawning a new instance or cloning an existing one. WMI never holds onto an event object past the completion of the **Indicate** call, and has no special requirements for **AddRef** above and beyond the COM standard. Consider the following guidelines when implementing an event provider: - Do not make any class changes while servicing a client call. - Do not issue any event-related calls, such as a call that modifies an event filter. - Process any requests that the Windows Management service issues, such as [**CancelQuery**](/windows/desktop/api/Wbemprov/nf-wbemprov-iwbemeventproviderquerysink-cancelquery), before refiring an event. If you do not process the request, then refiring the event might block the event from ever being accepted. - Never call [**IWbemObjectSink::SetStatus**](/windows/desktop/api/Wbemcli/nf-wbemcli-iwbemobjectsink-setstatus) from within a provider.    
85.75
1,051
0.800292
eng_Latn
0.966413
9aa1534a281bd79c7eb36076fe5d59b46b3f26fa
18,121
md
Markdown
docs/relational-databases/backup-restore/restore-a-transaction-log-backup-sql-server.md
v-brlaz/sql-docs
5d902e328b551bb619fd95106ce3d320a8fdfbe9
[ "CC-BY-4.0", "MIT" ]
1
2019-02-06T20:12:14.000Z
2019-02-06T20:12:14.000Z
docs/relational-databases/backup-restore/restore-a-transaction-log-backup-sql-server.md
v-brlaz/sql-docs
5d902e328b551bb619fd95106ce3d320a8fdfbe9
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/relational-databases/backup-restore/restore-a-transaction-log-backup-sql-server.md
v-brlaz/sql-docs
5d902e328b551bb619fd95106ce3d320a8fdfbe9
[ "CC-BY-4.0", "MIT" ]
1
2019-04-09T18:16:06.000Z
2019-04-09T18:16:06.000Z
--- title: "Restore a Transaction Log Backup (SQL Server) | Microsoft Docs" ms.custom: "" ms.date: "03/14/2017" ms.prod: "sql" ms.prod_service: "database-engine" ms.service: "" ms.component: "backup-restore" ms.reviewer: "" ms.suite: "sql" ms.technology: - "dbe-backup-restore" ms.tgt_pltfrm: "" ms.topic: "article" f1_keywords: - "sql13.swb.restoretlog.general.f1" - "sql13.swb.restoretlog.options.f1" helpviewer_keywords: - "restore log" - "backing up transaction logs [SQL Server], restoring" - "transaction log backups [SQL Server], restoring" - "restoring transaction logs [SQL Server], restoring backups" - "transaction log restores [SQL Server], SQL Server Management Studio" ms.assetid: 1de2b888-78a6-4fb2-a647-ba4bf097caf3 caps.latest.revision: 36 author: "MikeRayMSFT" ms.author: "mikeray" manager: "craigg" ms.workload: "On Demand" --- # Restore a Transaction Log Backup (SQL Server) [!INCLUDE[appliesto-ss-xxxx-xxxx-xxx-md](../../includes/appliesto-ss-xxxx-xxxx-xxx-md.md)] This topic describes how to restore a transaction log backup in [!INCLUDE[ssCurrent](../../includes/sscurrent-md.md)] by using [!INCLUDE[ssManStudioFull](../../includes/ssmanstudiofull-md.md)] or [!INCLUDE[tsql](../../includes/tsql-md.md)]. **In This Topic** - **Before you begin:** [Prerequisites](#Prerequisites) [Security](#Security) - **To restore a transaction log backup, using:** [SQL Server Management Studio](#SSMSProcedure) [Transact-SQL](#TsqlProcedure) - [Related Tasks](#RelatedTasks) ## <a name="BeforeYouBegin"></a> Before You Begin ### <a name="Prerequisites"></a> Prerequisites - Backups must be restored in the order in which they were created. Before you can restore a particular transaction log backup, you must first restore the following previous backups without rolling back uncommitted transactions, that is WITH NORECOVERY: - The full database backup and the last differential backup, if any, taken before the particular transaction log backup. Before the most recent full or differential database backup was created, the database must have been using the full recovery model or bulk-logged recovery model. - All transaction log backups taken after the full database backup or the differential backup (if you restore one) and before the particular transaction log backup. Log backups must be applied in the sequence in which they were created, without any gaps in the log chain. For more information about transaction log backups, see [Transaction Log Backups &#40;SQL Server&#41;](../../relational-databases/backup-restore/transaction-log-backups-sql-server.md) and [Apply Transaction Log Backups &#40;SQL Server&#41;](../../relational-databases/backup-restore/apply-transaction-log-backups-sql-server.md). ### <a name="Security"></a> Security #### <a name="Permissions"></a> Permissions RESTORE permissions are given to roles in which membership information is always readily available to the server. Because fixed database role membership can be checked only when the database is accessible and undamaged, which is not always the case when RESTORE is executed, members of the **db_owner** fixed database role do not have RESTORE permissions. ## <a name="SSMSProcedure"></a> Using SQL Server Management Studio > [!WARNING] > The normal process of a restore is to select the log backups in the **Restore Database** dialog box along with the data and differential backups. #### To restore a transaction log backup 1. After connecting to the appropriate instance of the [!INCLUDE[msCoName](../../includes/msconame-md.md)] [!INCLUDE[ssDEnoversion](../../includes/ssdenoversion-md.md)], in Object Explorer, click the server name to expand the server tree. 2. Expand **Databases**, and, depending on the database, either select a user database or expand **System Databases** and select a system database. 3. Right-click the database, point to **Tasks**, point to **Restore**, and then click **Transaction Log**, which opens the **Restore Transaction Log** dialog box. > [!NOTE] > If **Transaction Log** is grayed out, you may need to restore a full or differential backup first. Use the **Database** backup dialog box. 4. On the **General** page, in the **Database** list box, select the name of a database. Only databases in the restoring state are listed. 5. To specify the source and location of the backup sets to restore, click one of the following options: - **From previous backups of database** Select the database to restore from the drop-down list. The list contains only databases that have been backed up according to the **msdb** backup history. - **From file or tape** Click the browse (**...**) button to open the **Select backup devices** dialog box. In the **Backup media type** box, select one of the listed device types. To select one or more devices for the **Backup media** box, click **Add**. After you add the devices you want to the **Backup media** list box, click **OK** to return to the **General** page. 6. In the **Select the transaction log backups to restore** grid, select the backups to restore. This grid lists the transaction log backups available for the selected database. A log backup is available only if its **First LSN** greater than the **Last LSN** of the database. Log backups are listed in the order of the log sequence numbers (LSN) they contain, and they must be restored in this order. The following table lists the column headers of the grid and describes their values. |Header|Value| |------------|-----------| |**Restore**|Selected check boxes indicate the backup sets to be restored.| |**Name**|Name of the backup set.| |**Component**|Backed-up component: **Database**, **File**, or \<blank> (for transaction logs).| |**Database**|Name of the database involved in the backup operation.| |**Start Date**|Date and time when the backup operation began, presented in the regional setting of the client.| |**Finish Date**|Date and time when the backup operation finished, presented in the regional setting of the client.| |**First LSN**|Log sequence number of the first transaction in the backup set. Blank for file backups.| |**Last LSN**|Log sequence number of the last transaction in the backup set. Blank for file backups.| |**Checkpoint LSN**|Log sequence number of the most recent checkpoint at the time the backup was created.| |**Full LSN**|Log sequence number of the most recent full database backup.| |**Server**|Name of the Database Engine instance that performed the backup operation.| |**User Name**|Name of the user who performed the backup operation.| |**Size**|Size of the backup set in bytes.| |**Position**|Position of the backup set in the volume.| |**Expiration**|Date and time the backup set expires.| 7. Select one of the following: - **Point in time** Either retain the default (**Most recent possible**) or select a specific date and time by clicking the browse button, which opens the **Point in Time Restore** dialog box. - **Marked transaction** Restore the database to a previously marked transaction. Selecting this option launches the **Select Marked Transaction** dialog box, which displays a grid listing the marked transactions available in the selected transaction log backups. By default, the restore is up to, but excluding, the marked transaction. To restore the marked transaction also, select **Include marked transaction**. The following table lists the column headers of the grid and describes their values. |Header|Value| |------------|-----------| |\<blank>|Displays a checkbox for selecting the mark.| |**Transaction Mark**|Name of the marked transaction specified by the user when the transaction was committed.| |**Date**|Date and time of the transaction when it was committed. Transaction date and time are displayed as recorded in the **msdbgmarkhistory** table, not in the client computer's date and time.| |**Description**|Description of marked transaction specified by the user when the transaction was committed (if any).| |**LSN**|Log sequence number of the marked transaction.| |**Database**|Name of the database where the marked transaction was committed.| |**User Name**|Name of the database user who committed the marked transaction.| 8. To view or select the advanced options, click **Options** in the **Select a page** pane. 9. In the **Restore options** section, the choices are: - **Preserve the replication settings (WITH KEEP_REPLICATION)** Preserves the replication settings when restoring a published database to a server other than the server where the database was created. This option is available only with the **Leave the database ready for use by rolling back the uncommitted transactions...** option (described later), which is equivalent to restoring a backup with the **RECOVERY** option. Checking this option is equivalent to using the **KEEP_REPLICATION** option in a [!INCLUDE[tsql](../../includes/tsql-md.md)]**RESTORE** statement. - **Prompt before restoring each backup** Before restoring each backup set (after the first), this option brings up the **Continue with Restore** dialog box, which asks you to indicate whether you want to continue the restore sequence. This dialog displays the name of the next media set (if available), the backup set name, and backup set description. This option is particularly useful when you must swap tapes for different media sets. For example, you can use it when the server has only one tape device. Wait until you are ready to proceed before clicking **OK**. Clicking **No** leaves the database in the restoring state. At your convenience, you can continue the restore sequence after the last restore that completed. If the next backup is a data or differential backup, use the **Restore Database** task again. If the next backup is a log backup, use the **Restore Transaction Log** task. - **Restrict access to the restored database (WITH RESTRICTED_USER)** Makes the restored database available only to the members of **db_owner**, **dbcreator**, or **sysadmin**. Checking this option is synonymous to using the **RESTRICTED_USER** option in a [!INCLUDE[tsql](../../includes/tsql-md.md)]**RESTORE** statement. 10. For the **Recovery state** options, specify the state of the database after the restore operation. - **Leave the database ready for use by rolling back uncommitted transactions. Additional transaction logs cannot be restored. (RESTORE WITH RECOVERY)** Recovers the database. This option is equivalent to the **RECOVERY** option in a [!INCLUDE[tsql](../../includes/tsql-md.md)]**RESTORE** statement. Choose this option only if you have no log files you want to restore. - **Leave the database non-operational, and do not roll back uncommitted transactions. Additional transaction logs can be restored. (RESTORE WITH NORECOVERY)** Leaves the database unrecovered, in the **RESTORING** state. This option is equivalent to using the **NORECOVERY** option in a [!INCLUDE[tsql](../../includes/tsql-md.md)]**RESTORE** statement. When you choose this option, the **Preserve replication settings** option is unavailable. > [!IMPORTANT] > For a mirror or secondary database, always select this option. - **Leave the database in read-only mode. Undo uncommitted transactions, but save the undo actions in a file so that recovery effects can be reversed. (RESTORE WITH STANDBY)** Leaves the database in a standby state. This option is equivalent to using the **STANDBY** option in a [!INCLUDE[tsql](../../includes/tsql-md.md)]**RESTORE** statement. Choosing this option requires that you specify a standby file. 11. Optionally, specify a standby file name in the **Standby file** text box. This option is required if you leave the database in read-only mode. You can browse for the standby file or type its pathname in the text box. ## <a name="TsqlProcedure"></a> Using Transact-SQL > [!IMPORTANT] > We recommend that you always explicitly specify either WITH NORECOVERY or WITH RECOVERY in every RESTORE statement to eliminate ambiguity. This is particularly important when writing scripts. #### To restore a transaction log backup 1. Execute the RESTORE LOG statement to apply the transaction log backup, specifying: - The name of the database to which the transaction log will be applied. - The backup device where the transaction log backup will be restored from. - The NORECOVERY clause. The basic syntax for this statement is as follows: RESTORE LOG *database_name* FROM <backup_device> WITH NORECOVERY. Where *database_name* is the name of database and <backup_device>is the name of the device that contains the log backup being restored. 2. Repeat step 1 for each transaction log backup you have to apply. 3. After restoring the last backup in your restore sequence, to recover the database use one of the following statements: - Recover the database as part of the last RESTORE LOG statement: ``` RESTORE LOG <database_name> FROM <backup_device> WITH RECOVERY; GO ``` - Wait to recover the database by using a separate RESTORE DATABASE statement: ``` RESTORE LOG <database_name> FROM <backup_device> WITH NORECOVERY; RESTORE DATABASE <database_name> WITH RECOVERY; GO ``` Waiting to recover the database gives you the opportunity to verify that you have restored all of the necessary log backups. This approach is often advisable when you are performing a point-in-time restore. > [!IMPORTANT] > If you are creating a mirror database, omit the recovery step. A mirror database must remain in the RESTORING state. ### <a name="TsqlExample"></a> Examples (Transact-SQL) By default, the [!INCLUDE[ssSampleDBobject](../../includes/sssampledbobject-md.md)] database uses the simple recovery model. The following examples require modifying the database to use the full recovery model, as follows: ```sql ALTER DATABASE AdventureWorks2012 SET RECOVERY FULL; ``` #### A. Applying a single transaction log backup The following example starts by restoring the [!INCLUDE[ssSampleDBobject](../../includes/sssampledbobject-md.md)] database by using a full database backup that resides on a backup device named `AdventureWorks2012_1`. The example then applies the first transaction log backup that resides on a backup device named `AdventureWorks2012_log`. Finally, the example recovers the database. ```sql RESTORE DATABASE AdventureWorks2012 FROM AdventureWorks2012_1 WITH NORECOVERY; GO RESTORE LOG AdventureWorks2012 FROM AdventureWorks2012_log WITH FILE = 1, WITH NORECOVERY; GO RESTORE DATABASE AdventureWorks2012 WITH RECOVERY; GO ``` #### B. Applying multiple transaction log backups The following example starts by restoring the [!INCLUDE[ssSampleDBobject](../../includes/sssampledbobject-md.md)] database by using a full database backup that resides on a backup device named `AdventureWorks2012_1`. The example then applies, one by one, the first three transaction log backups that reside on a backup device named `AdventureWorks2012_log`. Finally, the example recovers the database. ```sql RESTORE DATABASE AdventureWorks2012 FROM AdventureWorks2012_1 WITH NORECOVERY; GO RESTORE LOG AdventureWorks2012 FROM AdventureWorks2012_log WITH FILE = 1, NORECOVERY; GO RESTORE LOG AdventureWorks2012 FROM AdventureWorks2012_log WITH FILE = 2, WITH NORECOVERY; GO RESTORE LOG AdventureWorks2012 FROM AdventureWorks2012_log WITH FILE = 3, WITH NORECOVERY; GO RESTORE DATABASE AdventureWorks2012 WITH RECOVERY; GO ``` ## <a name="RelatedTasks"></a> Related Tasks - [Back Up a Transaction Log &#40;SQL Server&#41;](../../relational-databases/backup-restore/back-up-a-transaction-log-sql-server.md) - [Restore a Database Backup Using SSMS](../../relational-databases/backup-restore/restore-a-database-backup-using-ssms.md) - [Restore a Database to the Point of Failure Under the Full Recovery Model &#40;Transact-SQL&#41;](../../relational-databases/backup-restore/restore-database-to-point-of-failure-full-recovery.md) - [Restore a SQL Server Database to a Point in Time &#40;Full Recovery Model&#41;](../../relational-databases/backup-restore/restore-a-sql-server-database-to-a-point-in-time-full-recovery-model.md) - [Restore a Database to a Marked Transaction &#40;SQL Server Management Studio&#41;](../../relational-databases/backup-restore/restore-a-database-to-a-marked-transaction-sql-server-management-studio.md) ## See Also [RESTORE &#40;Transact-SQL&#41;](../../t-sql/statements/restore-statements-transact-sql.md) [Apply Transaction Log Backups &#40;SQL Server&#41;](../../relational-databases/backup-restore/apply-transaction-log-backups-sql-server.md)
58.266881
405
0.697533
eng_Latn
0.974536
9aa15d813b13f942ff5bb2642a9f813565690b79
18,368
markdown
Markdown
src/content/es/fundamentals/performance/optimizing-content-efficiency/http-caching.markdown
sshyran/WebFundamentals
5556a0756b410de95e9547b78bce6d7310b836e6
[ "Apache-2.0" ]
4
2017-04-04T04:51:09.000Z
2022-02-10T17:10:28.000Z
src/content/es/fundamentals/performance/optimizing-content-efficiency/http-caching.markdown
sshyran/WebFundamentals
5556a0756b410de95e9547b78bce6d7310b836e6
[ "Apache-2.0" ]
3
2021-05-20T20:19:33.000Z
2022-02-26T09:21:33.000Z
src/content/es/fundamentals/performance/optimizing-content-efficiency/http-caching.markdown
sshyran/WebFundamentals
5556a0756b410de95e9547b78bce6d7310b836e6
[ "Apache-2.0" ]
2
2017-07-20T22:00:47.000Z
2020-01-22T08:18:27.000Z
--- title: "Almacenar HTTP en caché" description: "La tarea de obtener un elemento de la red es lenta y cara: las respuestas de gran tamaño suponen muchos recorridos de ida y vuelta entre el cliente y el servidor, y el proceso se dilata cuando están disponibles y el navegador puede procesarlas. Además, suponen costes de datos para el visitante. Por lo tanto, la capacidad de almacenar en memoria caché y reutilizar recursos obtenidos anteriormente es un aspecto esencial para optimizar el rendimiento." updated_on: 2014-01-05 key-takeaways: validate-etags: - "El servidor informa del token de validación mediante el encabezado HTTP `ETag`." - "El token de validación posibilita las comprobaciones de actualizaciones eficientes: si el recurso no ha cambiado, no se transfieren datos." cache-control: - "Cada recurso puede definir su política de almacenamiento en memoria caché mediante el encabezado HTTP `Cache-Control`." - "Las directivas de `Cache-Control` controlan quién puede almacenar la respuesta en memoria caché, en qué condiciones y durante cuánto tiempo." invalidate-cache: - "Las repuestas almacenadas en la memoria caché del dispositivo se utilizan hasta que el recurso `caduca`." - "La capacidad de insertar una huella de contenido de archivo en la URL nos permite obligar al cliente a actualizar a una versión nueva de la respuesta." - "Cada aplicación tiene que definir su propia jerarquía de memoria caché a fin de optimizar el rendimiento." notes: webview-cache: - "Si utilizas una vista web para obtener y mostrar contenido web en la aplicación, puede que tengas que proporcionar marcas de configuración adicionales para garantizar que la memoria caché HTTP esté habilitada, que su tamaño se haya establecido en un valor razonable para tu caso de uso y que la memoria caché se almacene. Consulta la documentación de la plataforma y verifica tu configuración." boilerplate-configs: - "Consejo: El proyecto HTML5 Boilerplate contiene <a href='https://github.com/h5bp/server-configs'>archivos de configuración de muestra</a> para todos los servidores más utilizados con comentarios detallados para cada marca y parámetro de configuración. Localiza tu servidor preferido en la lista, busca la configuración adecuada y cópiala o verifica que el servidor esté configurado con los parámetros recomendados." cache-control: - "El encabezado `Cache-Control` se ha definido como parte de la especificación HTTP/1.1 y sustituye los encabezados anteriores (por ejemplo, `Expires`) que se utilizaban para definir las políticas de almacenamiento de respuestas en memoria caché. Todos los navegadores modernos son compatibles con `Cache-Control`, así que eso es todo lo que necesitamos." --- <p class="intro"> La tarea de obtener un elemento de la red es lenta y cara: las respuestas de gran tamaño suponen muchos recorridos de ida y vuelta entre el cliente y el servidor, y el proceso se dilata cuando están disponibles y el navegador puede procesarlas. Además, suponen costes de datos para el visitante. Por lo tanto, la capacidad de almacenar en memoria caché y reutilizar recursos obtenidos anteriormente es un aspecto esencial para optimizar el rendimiento. </p> {% include shared/toc.liquid %} Te alegrará saber que todos los navegadores se suministran con una implementación de una memoria caché HTTP. Lo único que tenemos que hacer es asegurarnos de que todas las respuestas del servidor proporcionen directivas correctas de encabezado HTTP que indiquen al navegador cuándo y durante cuánto tiempo puede almacenar la respuesta en memoria caché. {% include shared/remember.liquid character="{" position="left" title="" list=page.notes.webview-cache %} <img src="images/http-request.png" class="center" alt="Solicitud HTTP"> Cuando el servidor ofrece una respuesta, también emite una colección de encabezados HTTP que describen su tipo de contenido, la longitud, las directivas de almacenamiento en memoria caché, el token de validación, etc. Por ejemplo, en el intercambio anterior, el servidor ofrece una respuesta de 1.024 bytes, indica al cliente que la almacene en memoria caché durante un máximo de 120 segundos y proporciona un token de validación (`x234dff`) que se puede utilizar después de que la respuesta haya caducado para comprobar si el recurso se ha modificado. ## Validar respuestas almacenadas en memoria caché con `ETags` {% include shared/takeaway.liquid list=page.key-takeaways.validate-etags %} Supongamos que han pasado 120 segundos desde nuestra tarea de obtención inicial y que el navegador ha iniciado una solicitud nueva para el mismo recurso. En primer lugar, el navegador comprueba la memoria caché local y detecta la respuesta anterior. Por desgracia, no puede utilizarla porque ha `caducado`. Llegado este momento, podría emitir una solicitud nueva y obtener la nueva respuesta completa, pero sería de dudosa eficiencia hacerlo porque, si el recurso no ha cambiado, no hay motivo para descargar los mismos bytes que ya se encuentran en la memoria caché. Ese es el problema que ha servido como base para el diseño de los tokens de validación, tal como se especifica en el encabezado `ETag`: el servidor genera y devuelve un token arbitrario, que suele ser una almohadilla u otro tipo de huella del contenido del archivo. El cliente no necesita saber cómo se genera la huella, sino que solo la tiene que enviar al servidor en la siguiente solicitud. Entonces, si la huella sigue siendo la misma, el recurso no ha cambiado y podemos evitarnos la descarga. <img src="images/http-cache-control.png" class="center" alt="Ejemplo de "Cache-Control" HTTP"> En el ejemplo anterior, el cliente proporciona automáticamente el token de `ETag` dentro del encabezado de solicitud HTTP `If-None-Match`; el servidor compara el token con el recurso actual; y, si no ha cambiado, devuelve una respuesta `304 No modificado` que indica al navegador que la respuesta que tiene en la memoria caché no ha cambiado y se puede renovar durante 120 segundos más. Observa que no tenemos que volver a descargar la respuesta, de modo que supone un ahorro de tiempo y de ancho de banda. Como desarrollador web, ¿cómo sacas aprovecho de una revalidación eficiente? El navegador hace todo el trabajo por ti: detecta automáticamente si se ha especificado un token de validación anteriormente, lo añade a una solicitud saliente y actualiza las marcas de tiempo de la memoria caché según sea necesario en función de la respuesta recibida del servidor. **Lo único que nos queda por hacer es asegurarnos de que el servidor proporcione los tokens de `ETag` necesarios. Para ello, revisa la documentación del servidor sobre marcas de configuración necesarias.** {% include shared/remember.liquid list=page.notes.boilerplate-configs %} ## `Cache-Control` {% include shared/takeaway.liquid list=page.key-takeaways.cache-control %} La mejor solicitud es aquella no que no necesita comunicarse con el servidor. Una copia local de la respuesta nos permite eliminar toda la latencia de red y evitar costes de datos para la transferencia de datos. Para lograrlo, la especificación HTTP permite que el servidor ofrezca una [serie de diferentes directivas de `Cache-Control`](http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.9) que controlan cómo y durante cuánto tiempo el navegador y otras memorias caché intermedias pueden almacenar la respuesta concreta en memoria caché. {% include shared/remember.liquid list=page.notes.cache-control %} <img src="images/http-cache-control-highlight.png" class="center" alt="Ejemplo de `Cache-Control` HTTP"> ### Elementos `no-cache` y `no-store` El elemento `no-cache` indica que la respuesta devuelta no se puede utilizar para satisfacer una solicitud posterior enviada a la misma URL sin antes consultar en el servidor si la respuesta ha cambiado. Por eso, si está presente un token de validación (ETag) adecuado, `no-cache` utilizará un recorrido de ida y vuelta para validar la respuesta almacenada en memoria caché, pero puede eliminar la necesidad de descarga si el recurso no ha cambiado. Por el contrario, el elemento `no-store` es mucho más sencillo, ya que tan solo prohíbe que el navegador y todas las memorias caché intermedias almacenen una versión de la respuesta devuelta; por ejemplo, una que contenga datos personales o bancarios. Cada vez que el usuario solicita este elemento, se envía una solicitud al servidor y se descarga una respuesta. ###Elementos `public` y `private` Si la respuesta está marcada como `public`, se puede almacenar en memoria caché aunque tenga autenticación HTTP asociada y aunque el código de estado de la respuesta normalmente no se pueda almacenar en la memoria caché. En la mayoría de los casos, el elemento `public` no es necesario porque el almacenamiento explícito de información en memoria caché (por ejemplo, `max-age`) indica que la respuesta se puede almacenar en la memoria caché. Por el contrario, el navegador puede almacenar las respuestas marcadas como `private` en la memoria caché, pero normalmente están destinadas para un solo usuario, por lo que no se permite que una memoria caché intermedia las almacene. Por ejemplo, el navegador de un usuario puede almacenar una página HTML con información privada del usuario en la memoria caché, pero una CDN no puede. ### Elemento `max-age` Esta directiva especifica en segundos el tiempo máximo durante el que se puede reutilizar la respuesta obtenida desde el momento de la solicitud. Por ejemplo, `max-age=60` indica que la respuesta se puede almacenar en memoria caché y reutilizar durante los 60 segundos posteriores. ## Definir una política óptima de `Cache-Control` <img src="images/http-cache-decision-tree.png" class="center" alt="Árbol de decisiones de la memoria caché"> Sigue el árbol de decisiones anterior para determinar la política de almacenamiento en memoria caché que resulte óptima para un recurso concreto o para un conjunto de recursos que tu aplicación utilice. Lo ideal es que te marques como objetivo almacenar en memoria caché tantas respuestas como sea posible en el cliente durante el periodo más largo posible, y que proporciones tokens de validación para cada respuesta a fin de poder ofrecer una revalidación eficiente. <table class="mdl-data-table mdl-js-data-table"> <thead> <tr> <th width="30%">Directivas de `Cache-Control`</th> <th>Explicación</th> </tr> </thead> <tr> <td data-th="cache-control">max-age=86400</td> <td data-th="explicación">El navegador y todas las memorias caché intermedias pueden almacenar la respuesta en memoria caché (es decir, es `public`) durante un máximo de un día (60 segundos x 60 minutos x 24 horas).</td> </tr> <tr> <td data-th="cache-control">private, max-age=600</td> <td data-th="explicación">Solo el navegador del cliente puede almacenar la respuesta en memoria caché durante un máximo de diez minutos (60 segundos x 10 minutos).</td> </tr> <tr> <td data-th="cache-control">no-store</td> <td data-th="explicación">No se permite almacenar la respuesta en memoria caché y se tiene que recuperar entera con cada solicitud.</td> </tr> </table> Según el archivo HTTP, entre los 300.000 sitios más importantes (según la clasificación Alexa), el navegador [puede almacenar en memoria caché casi la mitad de todas las respuestas bajadas] (http://httparchive.org/trends.php#maxage0), lo que supone un gran ahorro para las visualizaciones y las visitas de páginas que se repiten. Por supuesto, esto no quiere decir que el 50% de los recursos de tu aplicación en concreto se podrá almacenar en memoria caché. De hecho, algunos sitios pueden almacenar más del 90% de sus recursos, mientras que otros tienen muchos datos privados o de tiempo limitado que no se pueden almacenar en memoria caché. **Revisa tus páginas para identificar los recursos que se pueden almacenar en memoria caché y asegúrate de que devuelvan encabezados `Cache-Control` y `ETag` adecuados.** ## Invalidar y actualizar las respuestas almacenadas en memoria caché {% include shared/takeaway.liquid list=page.key-takeaways.invalidate-cache %} Todas las solicitudes HTTP que el navegador realiza se enrutan primero a la memoria caché del navegador para comprobar si hay una respuesta almacenada en memoria caché que se pueda utilizar para satisfacer la solicitud. Si alguna coincide, la respuesta se lee en la memoria caché y eliminamos tanto la latencia de red como los costes de datos que supone la transferencia. **Sin embargo, ¿qué sucede si queremos actualizar o invalidar una respuesta almacenada en memoria caché? Por ejemplo, supongamos que hemos indicados a nuestros visitantes que almacenen en memoria caché una hoja de estilo CSS durante un máximo de 24 horas (max-age=86400), pero nuestro diseñador acaba de aplicar una actualización que queremos que esté disponible para todos los usuarios. ¿Cómo notificamos a todos los visitantes que tienen una copia anticuada de nuestro CSS almacenada en memoria caché que actualicen sus memorias caché? Es una pregunta complicada, porque no podemos si no cambiamos la URL del recurso. Una vez el navegador almacena la respuesta en memoria caché, la versión almacenada se utiliza hasta que ya no está al día, según lo determina el elemento `max-age`, porque caduca o hasta que se expulsa de la memoria caché por otro motivo (por ejemplo, el usuario borra los datos de la memoria caché del navegador). A resultas de esta situación, es posible que diferentes usuarios acaben utilizando diferentes versiones del archivo cuando se cree la página. Los usuarios que acaben de obtener el recurso verán la versión nueva, mientras que los usuarios que hayan almacenado en memoria caché una copia anterior (pero aún válida) utilizarán una versión anterior de la respuesta. **Por lo tanto, ¿cómo conseguimos lo mejor de ambas situaciones, almacenamiento en memoria caché por parte del cliente y actualizaciones rápidas?** Es muy sencillo: podemos cambiar la URL del recurso y obligar a los usuarios a descargar la respuesta nueva siempre que el contenido cambie. Normalmente, para conseguirlo se inserta una huella del archivo o un número de versión al nombre de archivo (por ejemplo, style.**x234dff**.css). <img src="images/http-cache-hierarchy.png" class="center" alt="Jerarquía de la memoria caché"> La capacidad de definir políticas de almacenamiento en memoria caché por recurso nos permite definir `jerarquías de la memoria caché` que, a su vez, nos permiten no solo controlar durante cuánto tiempo se almacena en memoria caché, sino también con qué rapidez un visitante ve versiones nuevas. Como muestra analizaremos el ejemplo anterior: *El HTML está marcado con el elemento `no-cache`, que quiere decir que el navegador siempre revalidará el documento con cada solicitud y, si el contenido cambia, obtendrá la versión más reciente. Además, dentro del marcado HTML insertamos huellas en las URL para los elementos CSS y JavaScript. Si el contenido de esos archivos cambia, el HTML de la página también cambiará y se descargará una nueva copia de la respuesta HTML. *Los navegadores y las memorias caché intermedias (por ejemplo, una CDN) tienen permiso para almacenar el CSS en memoria caché, y está definido que caduque pasado un año. Ten en cuenta que podemos usar sin problemas fechas de caducidad mucho superiores a un año porque insertamos el nombre de archivo a la huella de archivo, de modo que, si el CSS se actualiza, la URL también cambiará. *El JavaScript también está definido para que caduque en un año, pero está marcado como privado, quizá porque contiene datos de usuario privados que la CDN no debería almacenar en memoria caché. *La imagen está almacenada en memoria caché sin una versión ni una huella única y se ha definido para que caduque pasado un año. La combinación de `ETag`, `Cache-Control` y URLs únicas nos permite ofrecer lo mejor de todas las situaciones: tiempos de caducidad con mucho margen, control sobre dónde se puede almacenar la respuesta en memoria caché y actualizaciones bajo demanda. ## Almacenar listas de comprobación en memoria caché No hay políticas de memoria caché que sean mejor que el resto. En función de los patrones de tráfico que tengas, el tipo de datos mostrados y los requisitos específicos de cada aplicación en cuanto a la actualidad de los datos, tendrás que definir y establecer la configuración adecuada para cada recurso, así como la `jerarquía de almacenamiento en memoria caché` general. A continuación indicamos algunos consejos y técnicas que debes tener en cuenta cuando elabores la estrategia de almacenamiento en memoria caché: 1. **Utiliza URLs coherentes**: si muestras el mismo contenido en diferentes URL, ese contenido se obtiene y se almacena varias veces. Consejo: ten en cuenta que las [URL distinguen entre mayúsculas y minúsculas](http://www.w3.org/TR/WD-html40-970708/htmlweb.html). 2. **Asegúrate de que el servidor proporcione un token de validación (ETag)**: los tokens de validación eliminan la necesidad de transferir los mismos bytes cuando un recurso del servidor no ha cambiado. 3. **Identifica los recursos que los intermediarios pueden almacenar en memoria caché**: los que tengan respuestas idénticas para todos los usuarios son muy buenos candidatos para que una CDN y otros intermediarios los almacenen en memoria caché. 4. **Determina la duración óptima de la memoria caché para cada recurso**: puede que diferentes recursos tengan requisitos de actualidad diferentes. Revisa y determina el elemento `max-age` adecuado para cada uno. 5. **Determina la mejor jerarquía de memoria caché para tu sitio**: la combinación de URLs de recursos con huellas de contenido y duraciones cortas o `no-cache` para documentos HTML te permite controlar la rapidez con la que el cliente aplica actualizaciones. 6. **Agitación mínima**: algunos recursos se actualizan más a menudo que otros. Si hay una parte concreta de un recurso (por ejemplo, una función JavaScript o un conjunto de estilos CSS) que se actualiza a menudo, plantéate la posibilidad de enviar ese código como un archivo independiente. De esa forma el resto del contenido (por ejemplo, el código de biblioteca que no cambia a menudo) se puede obtener de la memoria caché y se minimiza la cantidad de contenido descargado siempre que se obtiene una actualización.
117.74359
676
0.797637
spa_Latn
0.997796
9aa1a98cd79079a88e58144adf3d9442134e7496
2,214
md
Markdown
README.md
MohamedSamir93/Churn-Analysis
d543460dd275e407df34e054bfe45b2739c01427
[ "CNRI-Python", "IBM-pibs" ]
null
null
null
README.md
MohamedSamir93/Churn-Analysis
d543460dd275e407df34e054bfe45b2739c01427
[ "CNRI-Python", "IBM-pibs" ]
null
null
null
README.md
MohamedSamir93/Churn-Analysis
d543460dd275e407df34e054bfe45b2739c01427
[ "CNRI-Python", "IBM-pibs" ]
null
null
null
### Table of Contents 1. [Installation](#installation) 2. [Project Motivation](#motivation) 3. [File Descriptions](#files) 4. [Results](#results) 5. [Licensing, Authors, and Acknowledgements](#licensing) ## Installation <a name="installation"></a> Plotly library must be installed. It is an interactive, open-source, and browser-based graphing library for Python (includes Plotly Express). `pip install plotly==5.4.0` Inside [Jupyter](https://jupyter.org/install) (installable with `pip install "jupyterlab>=3" "ipywidgets>=7.6"`). The code should run with no issues using Python versions 3.*. ## Project Motivation<a name="motivation"></a> In the telecom industry, customers are able to choose from multiple service providers and actively switch from one operator to another. In this highly competitive market, the telecommunications industry experiences an average of 15-25% annual churn rate. Given the fact that it costs 5-10 times more to acquire a new customer than to retain an existing one, customer retention has now become even more important than customer acquisition. So we need to analyse telecom industry data and predict high value customers who are at high risk of churn and identify main indicators of churn. In this project, I will analyze customer-level data of a leading telecom firm and identify the main indicators of churn. ## File Descriptions <a name="files"></a> There is a notebook available here to showcase work related to the required analysis. The notebook is exploratory in searching through the data pertaining to the questions showcased through notebook. Markdown cells were used to assist in walking through the thought process for individual steps. ## Results<a name="results"></a> The main findings of the code can be found at the post available [here](https://medium.com/@eng.m7md.samir/why-customers-churn-7c44b3b169cc). ## Licensing, Authors, Acknowledgements<a name="licensing"></a> Must give credit to IBM for the data. You can find the Licensing for the data and other descriptive information at the Kaggle link available [here](https://www.kaggle.com/blastchar/telco-customer-churn). Otherwise, feel free to use the code here as you would like!
61.5
705
0.776874
eng_Latn
0.996255
9aa21b0fa4e12e67c1b7356ae98cda340a05e3f1
2,945
md
Markdown
about.md
serafdev/serafss2.github.io
27b7c54200e1fa4952bba6f9680176960ac42dd3
[ "MIT" ]
null
null
null
about.md
serafdev/serafss2.github.io
27b7c54200e1fa4952bba6f9680176960ac42dd3
[ "MIT" ]
null
null
null
about.md
serafdev/serafss2.github.io
27b7c54200e1fa4952bba6f9680176960ac42dd3
[ "MIT" ]
null
null
null
--- layout: page title: About permalink: /about/ tags: about --- ### Summary Passionate Developer offering experience in Back-End Development to deliver highly scalable products. Highly organized and have a good sense of architectural planning. Extensive experience in the full cycle of the software design process including requirements definition, prototyping, proof of concept, design, interface implementation, testing, deployment, maintenance and all the blablabla that you end up googling anyway. ---------------------------- ### Technical tools Python, Scala, Docker, Kubernetes, Cloud environment, Linux, etc --------------------------- ### Professional experience #### Bell (Nov 2019 - Present) #### Desjardins (Jan 2019 - Nov 2019) Software Consultant (Network Automation) ##### Infrastructure Put in place our application's network infrastructure (Gunicorn/Nginx + Nginx/Vue.js static files) and contributed in the deployment automation using the ansible-cli. Contributed in the Continuous Testing using Jenkins. ##### F5 BigIP Automation (LTM) Creation of Virtual Servers and sub-tree, aka pools, pool-members, ssl certificate objects, etc. Worked mostly on the backend side using Python. Created validations for our end-user's form with real-time queries to the BigIPs on the clusters. Helped design the network configuration file injected by ansible to create the LTM objects on the BigIP. ##### OpenTrust SSL Certificates Enrollment Automation (IDnomic) Automated the SSL Certificates enrollment using the OpenTrustRA (Registration Authority) SOAP api. Created an app that uses the RA SOAP api. Designed the buffer database to keep security logs and rollbacks easily queried (revokes, renewals, etc). The automation took care of creating the private key on the server and generating the public SSL Certificate on the machine. In the case of the BigIP the application would go and create an SSL Certificate on the BigIP and enrolled a signed public certificate using the OpenTrust APIs. #### Faimdata (Dec 2016 - Dec 2018) Software Engineer (Back-end, DevOps / Scala, Python, BigQuery, Postgresql, Docker, Kubernetes) Worked as a Backend developer using Scala, Python and BigQuery to build a highly scalable CI tool in a data environment. Moved the legacy network architecture from manual deployments to Nginx, Docker and Kubernetes with continuous integration and delivery with Jenkins, saved the developers a * ton of time. Built a Micro-Queries library that took care of reccurent logic in our Data Scientist's Queries, it helped the performance of APIs from >1min to <10 seconds, also added a Memcached server to our APIs which enhanced most requests to <.2second, pretty satisfying if you ask me. --------------------------- ### Education #### Université de Montréal Computer Science Major A lot of algorithmics, data structures, software engineering, mathematics, statistics, etc. Same as everyone else
55.566038
583
0.770458
eng_Latn
0.984857
9aa2c7f16e076b31bea40e46a6e3239149195f68
4,379
md
Markdown
content/post/2011-01-31-how-does-glm-generalize-lm-fit-and-test.md
woody-90/cosx.org
7376b8840cf90e7932fb665e4bf9c32b49e03c0b
[ "MIT" ]
null
null
null
content/post/2011-01-31-how-does-glm-generalize-lm-fit-and-test.md
woody-90/cosx.org
7376b8840cf90e7932fb665e4bf9c32b49e03c0b
[ "MIT" ]
null
null
null
content/post/2011-01-31-how-does-glm-generalize-lm-fit-and-test.md
woody-90/cosx.org
7376b8840cf90e7932fb665e4bf9c32b49e03c0b
[ "MIT" ]
null
null
null
--- title: 从线性模型到广义线性模型(2)——参数估计、假设检验 date: '2011-01-31T19:46:15+00:00' author: 张缔香 categories: - 回归分析 - 统计之都 - 风险精算 tags: - IRWLS - 假设检验 - 学习经历 - 广义线性模型 slug: how-does-glm-generalize-lm-fit-and-test forum_id: 418831 --- # 1.GLM参数估计——极大似然法 为了理论上简化,这里把GLM的分布限定在指数分布族。事实上,实际应用中使用最多的分布就是指数分布族,所以这样的简化可以节省很多理论上的冗长论述,也不会限制实际应用。 如前文如述,指数分布族的概率密度函数可以统一地写为: `$$ f_Y(y;\theta,\Psi)=exp[(y\theta – b(\theta))/{\Psi} + c(y;\Psi)] $$` 这里为了在模型中体现散布参数(dispersion parameter)`\(\phi\)`,把上述密度函数中的`\(\Psi\)`记做 `$$ \Psi=a_i(\phi)={\phi}/w_i $$` 从而响应变量的单个观测值的(加权)对数似然函数可以表示为: `$$ logL({\theta}_i,\phi;y_i)=w_i[(y_i{\theta}_i-b({\theta}_i))/{\phi}]+c(y_i,\phi) $$` 再结合观测值之间的独立性,全体观测值的对数似然函数可记做:`\(\sum_i logL({\theta}_i,\phi;y_i)\)` 一般情况下最大化上述的对数似然函数很难找到解析解(正态分布是特例之一),因而必须使用数值方法求解。McCullagh和Nelder(1989)证明了使用Newton-Raphson方法,结合Fisher scoring算法,上述对数似然函数的最大化等价于连续迭代的加权最小二乘法(iteratively weighted least squares, or IRWLS)。 广义线性模型的IRWLS算法如下: 1.设置线性估计量和响应变量的均值的初始估计值: `\(\hat {\eta}_0\)`和`\(\hat {\mu}_0\)` 这里`\(\hat {\mu}_0\)`是根据经验或是专家意见等信息对`\(\mu=E(Y)\)`的一个估计值,而`\(\hat {\eta}_0\)`可以利用模型建立时选用的联接函数来获得,即`\(\hat {\eta}_0=g(\hat {\mu}_0)\)`。这一函数关系也用于计算步骤2和3中`\(\eta\)`对`\(\mu\)`一阶导数。 2.构造调整的因变量(adjusted dependent variable):`\(z_0=\hat {\eta}_0+(y-{\hat \mu}_0){d\eta \over d\mu}|_{\hat {\eta}_0}\)` 3.构造权重:`\(w^{-1}_0={({d\eta \over d\mu})}^2|{\hat {\eta}_0V(\hat {\mu}_0)}\)` 这里`\(V(\hat {\mu}_0)\)`是利用方差函数(variance function)和`\(\hat {\mu}_0\)`构造的`\(Var(Y)\)`的估计值。 4.利用步骤2和3构造的调整的因变量和权重,拟合普通线性模型(ordinary linear model),预测/拟合(predict)新的线性估计量和均值: `\(\hat {\eta}_1\)`和`\(\hat {\mu}_1\)` 5.重复步骤2-4直到收敛(满足一定的迭代步数或是精度要求)。 此时得到的模型就是极大似然估计方法下的广义线性模型。IRWLS的算法思路也从另一个方面说明了广义线性模型是普通线性模型的推广。在广义线性模型的实际应用中,IRWLS算法是最常用的极大似然估计求解方法。对于特殊的案例,也有其他的特殊的参数估计方法。比如对于在精算学科中最常用的列联表(contigency table)数据或案例就有Bailey-Simon法、边际总和法(marginal totals)、最小二乘法(least squares)、直接法(direct method)等。 # 2.假设检验 ## 2.1 空模型和全模型 一个极端的情况,所有自变量`\(x_i\)`对于响应变量`\(Y\)`都没有影响,也即是为所有的响应变量`\(Y\)`拟合一个共同的均值,即只有一个参数。这样的模型称为空模型(null model)。对于普通线性模型(正态分布下的GLM)而言,空模型的具体形式就是`\(y=\mu + \epsilon\)`。对于特殊的数据或案例类型,可能存在着其他的限制条件(constraints)从而空模型的参数个数大于1。比如非寿险精算中经常用到的列联表(contigency table)数据,其空模型就可能包含了行号、列号、对角线序号等限制。 相反的一个极端情况就是,所有自变量`\(x_i\)`的每一个观测值或称为数据的样本点(data points)对于响应变量`\(Y\)`都有影响,这样的模型称为全模型(full or saturated model)。一般可以通过构造阶数足够高的多项式或者把所有的量化观测值(quantitative)视为质化观测值(qualitive),并且引入适当数量的交叉项(interactions)来构造全模型。 统计建模的目的之一就是把样本数据划分为随机成分和系统成分两大部分。在这一点上,空模型认为响应变量的变动(variation)完全由随机性(random variation)造成,而全模型则认为响应变量的变动完全来自于系统成分(systematic)。一个直观地理解就是全模型是在现有的数据或样本的条件下,针对某一种分布所能拟合的最优模型,因而可以做为检验目标模型拟合优度的一个标准(measure)。 ## 2.2 偏差(Deviance) 如果把全模型的对数似然函数记为`\(l(y,\phi|y)\)`,把目标模型的对数似然函数记为`\(l({\hat {\mu}},\phi|y)\)`,那么目标模型与全模型在拟合优度上的偏离的定义可写成`\(2(l(y,\phi|y)-l({\hat {\mu}},\phi|y))\)`。再结合观测值的独立性假设和指数散布族的假设,那么上述偏离的定义可以简化为: `$$ \sum_i 2w_i(y_i({\hat {\theta}_i} – {\tilde {\theta}_i}) – b({\tilde {\theta}_i}) + b({\hat {\theta}_i})) /{\phi} $$` 其中`\(a_i(\phi)={\phi}/w_i\)`,`\(\tilde {\theta}\)`是全模型下的参数估计值,`\(\hat {\theta}\)`是目标模型下的参数估计值。如果把上式写成`\(D(y,\hat {\mu})/{\phi}\)`,那么`\(D(y,\hat {\mu})\)`称为偏差(Deviance),`\(D(y,\hat {\mu})/{\phi}\)`则称为标准化偏差(scaled deviace)。 此外,皮尔逊卡方统计量(Pearson’s chi-square statistics): `$$ X^2={\sum_i (y_i – {{\hat \mu}_i})^2 \over Var({\hat {\mu}}_i)} $$` 也是衡量模型偏离程度(discrepancy)的统计量之一,在一些场合可以做为偏差的替代选择。 ## 2.3 拟合优度检验 广义线性模型的假设检验可以分为两种:一是检验目标模型相对于数据或预测值的拟合有效性的检验(goodness of fit test);另外一种则是对“大”模型以及对“大”模型的参数施加一定的线性约束(linear restrictions)之后得到的“小”模型之间的拟合优度比较检验。直观上的理解就是,“大”模型具有更多的参数,即从参数的线性约束总可把一个或多个参数用其他参数的线性组合来表示,然后代入“大”模型,从而参数的个数减少,派生出所谓的“小”模型,也就是说“大”和“小”并非任意的,而是具有一种派生关系(nested models)。如果把全模型认为是“大”模型,而目标模型是“小”模型,那么上述两种检验的本质是相同的。因而假设检验的零假设(null hypothsis)可以统一且直观地设定为:“小”模型(目标模型)是正确的模型。 如果把大模型记做`\(\Omega\)`,把小模型记做`\(\omega\)`,其标准化偏差之差记做`\(D_{\omega} – D_{\Omega}\)`,其自由度之差记做`\(df_{\omega}-df_{\Omega}\)`,则构造如下的统计量:`\({(D_{\omega} – D_{\Omega})/(df_{\omega}-df_{\Omega})} \over {\phi}\)`。 当`\(\phi\)`是已知常数时,比如泊松和二项分布的情况下`\(\phi=1\)`,上述统计量在零假设下渐近地(asymptotically)服从卡方分布(正态分布时正好是卡方分布)。当`\(\phi\)`未知时,通常需要用估计值代替。最常用的估计值是`\(\hat {\phi}=X^2/(n-p)\)`这里n是数据中观测值的数量,p是目标模型的参数个数。此时上述的统计量在零假设下近似地(approximately)服从F分布(正态分布时严格服从F分布)。注意上述两种情况下,渐近和近似的区别。 对于某一个参数,可以使用其估计值的标准误(standard error)来构造一个z统计量来检验其显著性,即`\(z=\hat {\beta}/se(\hat {\beta})\)`。在零假设下,z统计量在普通线性模型,也就是正态分布下的广义线性模型中就是我们熟知的t统计量,严格服从t分布。在其他分布下的广义线性模型中,渐近地服从正态分布。z检验也称为Wald检验,在广义线性模型中效果不如上述的偏差检验,因而较少使用。
43.356436
371
0.716602
yue_Hant
0.449922
9aa2e2737aa78c38a8371393c5e17859ecdca691
1,002
md
Markdown
docs/visual-basic/misc/bc31165.md
CodeTherapist/docs.de-de
45ed8badf2e25fb9abdf28c20e421f8da4094dd1
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/visual-basic/misc/bc31165.md
CodeTherapist/docs.de-de
45ed8badf2e25fb9abdf28c20e421f8da4094dd1
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/visual-basic/misc/bc31165.md
CodeTherapist/docs.de-de
45ed8badf2e25fb9abdf28c20e421f8da4094dd1
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: Anfang erwartet &#39; &lt; &#39; für ein XML-Tag ms.date: 07/20/2015 f1_keywords: - vbc31165 - bc31165 helpviewer_keywords: - BC31165 ms.assetid: d6c411f3-06be-4647-a18a-8ff8a24ff94b ms.openlocfilehash: 20f6a616441fe20c758c8c81bc307a808455e01e ms.sourcegitcommit: 3d5d33f384eeba41b2dff79d096f47ccc8d8f03d ms.translationtype: MT ms.contentlocale: de-DE ms.lasthandoff: 05/04/2018 ms.locfileid: "33623093" --- # <a name="expected-beginning-39lt39-for-an-xml-tag"></a>Anfang erwartet &#39; &lt; &#39; für ein XML-Tag Beim XML-Tag für ein XML-Literal fehlt das erforderliche Anfangszeichen „<“. **Fehler-ID:** BC31165 ## <a name="to-correct-this-error"></a>So beheben Sie diesen Fehler - Fügen Sie am Anfang des XML-Tags für das XML-Literal das „<“-Zeichen hinzu. ## <a name="see-also"></a>Siehe auch [XML-Literale](../../visual-basic/language-reference/xml-literals/index.md) [XML](../../visual-basic/programming-guide/language-features/xml/index.md)
34.551724
105
0.722555
deu_Latn
0.469752
9aa31fbb3a65e6eee091e3c2eebd4397bab0fb00
4,401
md
Markdown
CHANGELOG.md
cnheider/CodeAceJumper
b558f6b3f6095d126475231c72a13e1dd7b10101
[ "MIT" ]
73
2016-10-27T06:19:05.000Z
2022-03-06T23:37:03.000Z
CHANGELOG.md
lucax88x/CodeAceJumper
b558f6b3f6095d126475231c72a13e1dd7b10101
[ "MIT" ]
404
2016-10-27T07:46:56.000Z
2022-01-31T07:40:12.000Z
CHANGELOG.md
cnheider/CodeAceJumper
b558f6b3f6095d126475231c72a13e1dd7b10101
[ "MIT" ]
22
2016-10-27T15:56:33.000Z
2022-01-14T09:00:23.000Z
# Changelog All notable changes to this project will be documented in this file. The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html). ## [1.0.0] Initial release ## [1.1.0] New group system when we have more matches than the alphabet can have (26) [details](https://github.com/lucax88x/CodeAceJumper/issues/6) ## [1.1.1] using SVG instead of TEXT for the decorations, this means a HUGE performance boost ## [1.1.2] thanks to [ncthis](https://github.com/lucax88x/CodeAceJumper/pull/8) we now are able to make ace jump placeholders only for a "range" where the cursor is, instead of full page. Genius workaround until vscode releases the APIs for getting only viewable area of the screen. ## [1.1.3] placeholder now have more configurations, such has font size, family, etc added ', " and < in the pattern ## [1.1.4] - Added new command that let Ace Jump [details](https://github.com/lucax88x/CodeAceJumper/issues/6) - Correctly disposing the `AceJump: Type` and `AceJump: Jump To messages` ## [1.1.5] - possibility to search inside words using the new setting `aceJump.finder.onlyInitialLetter=false` - possibility to skip the search on the selections using the new setting `aceJump.finder.skipSelection=true` ## [1.1.6] - Resolve non-intuitive behavior when search query matches separator regex [details](https://github.com/lucax88x/CodeAceJumper/pull/20) ## [1.1.7] - Fixed "AceJump: Jump To" message always in status bar #18 [details](https://github.com/lucax88x/CodeAceJumper/issues/18) ## [1.1.8] - Now the icon does not move the charaters in vscode anymore #23 [details](https://github.com/lucax88x/CodeAceJumper/issues/23) ## [1.1.9] - Now uses a new vscode api for detecing the visible ranges in the screen [details](https://github.com/lucax88x/CodeAceJumper/issues/5) ## [1.1.10] - Now should work together with extensions using the TYPE command (like VIM extensions) thanks to [matklad](https://github.com/lucax88x/CodeAceJumper/pull/25) ## [2.0.0] - Total refactor of the code - added a new command that support multichar [info](https://github.com/lucax88x/CodeAceJumper/issues/21) ## [2.0.1] - if while restricting we don't match a letter but we match a placeholder we jump directly for it ## [2.1.0] - dimming the editor when we start to ace jump, can be disabled with `aceJump.dim.enabled` ## [2.1.1] - reduced bundle size with webpack ## [2.1.2] - fixes [bug](https://github.com/lucax88x/CodeAceJumper/issues/29) ## [2.1.3] - fixes [bug](https://github.com/lucax88x/CodeAceJumper/issues/30) ## [2.1.4] - changed way to render highlights in the multichar and removed the limitation of 10 ## [2.1.5] - now it uses full power of vscode api for multiple visible areas, for example when we collapse functions or classes ## [2.1.6] - updated readme, thanks to [pr](https://github.com/lucax88x/CodeAceJumper/pull/35) - audited node packages for security - when set to "only initial letter", first word of each line now works even with tabs indentation, [issue](https://github.com/lucax88x/CodeAceJumper/issues/33) ## [2.1.7] - audited node packages for security - when using "selection mode", correctly selects also last character, [issue](https://github.com/lucax88x/CodeAceJumper/issues/108) ## [2.1.8] - fixed [#34](https://github.com/lucax88x/CodeAceJumper/issues/34) - fixed [#153](https://github.com/lucax88x/CodeAceJumper/issues/153) ## [3.0.0] - fixed [#160](https://github.com/lucax88x/CodeAceJumper/issues/160) - implemented [#151](https://github.com/lucax88x/CodeAceJumper/issues/151) ![video](https://media.giphy.com/media/jUQixLErR27iPssBYq/giphy.gif) - implemented [#174](https://github.com/lucax88x/CodeAceJumper/issues/174) - fixed [164](https://github.com/lucax88x/CodeAceJumper/issues/164) which is the reason of the breaking change ## [3.1.0] - implemented [#162](https://github.com/lucax88x/CodeAceJumper/issues/162) ![video](https://media.giphy.com/media/VF63dhXmQggquKwFYn/giphy.gif) ## [3.3.0] - contains [#187](https://github.com/lucax88x/CodeAceJumper/pull/187) - contains [#189](https://github.com/lucax88x/CodeAceJumper/pull/189) ## [3.3.1] - fixed [#196](https://github.com/lucax88x/CodeAceJumper/issues/196) ## [3.3.2] - fixed [#228](https://github.com/lucax88x/CodeAceJumper/issues/228)
33.090226
271
0.735515
eng_Latn
0.802047
9aa33e13954b67469e089716b96849112e261f17
2,296
md
Markdown
README.md
npmdoc/node-npmdoc-domready
0d0ae5dcf45975474063e35ebdddff7ce16f6b59
[ "MIT" ]
null
null
null
README.md
npmdoc/node-npmdoc-domready
0d0ae5dcf45975474063e35ebdddff7ce16f6b59
[ "MIT" ]
null
null
null
README.md
npmdoc/node-npmdoc-domready
0d0ae5dcf45975474063e35ebdddff7ce16f6b59
[ "MIT" ]
null
null
null
# npmdoc-domready #### basic api documentation for [domready (v1.0.8)](https://github.com/ded/domready) [![npm package](https://img.shields.io/npm/v/npmdoc-domready.svg?style=flat-square)](https://www.npmjs.org/package/npmdoc-domready) [![travis-ci.org build-status](https://api.travis-ci.org/npmdoc/node-npmdoc-domready.svg)](https://travis-ci.org/npmdoc/node-npmdoc-domready) #### modern domready [![NPM](https://nodei.co/npm/domready.png?downloads=true&downloadRank=true&stars=true)](https://www.npmjs.com/package/domready) - [https://npmdoc.github.io/node-npmdoc-domready/build/apidoc.html](https://npmdoc.github.io/node-npmdoc-domready/build/apidoc.html) [![apidoc](https://npmdoc.github.io/node-npmdoc-domready/build/screenCapture.buildCi.browser.%252Ftmp%252Fbuild%252Fapidoc.html.png)](https://npmdoc.github.io/node-npmdoc-domready/build/apidoc.html) ![npmPackageListing](https://npmdoc.github.io/node-npmdoc-domready/build/screenCapture.npmPackageListing.svg) ![npmPackageDependencyTree](https://npmdoc.github.io/node-npmdoc-domready/build/screenCapture.npmPackageDependencyTree.svg) # package.json ```json { "author": { "name": "Dustin Diaz", "url": "http://dustindiaz.com" }, "bugs": { "url": "https://github.com/ded/domready/issues" }, "dependencies": {}, "description": "modern domready", "devDependencies": { "smoosh": ">=0.3.0" }, "directories": {}, "dist": { "shasum": "91f252e597b65af77e745ae24dd0185d5e26d58c", "tarball": "https://registry.npmjs.org/domready/-/domready-1.0.8.tgz" }, "ender": "./src/ender.js", "gitHead": "bddb9f9374d699289953c813961b7121bd040ce7", "homepage": "https://github.com/ded/domready", "keywords": [ "ender", "domready", "dom" ], "main": "./ready.js", "maintainers": [ { "name": "ded" }, { "name": "fat" } ], "name": "domready", "optionalDependencies": {}, "repository": { "type": "git", "url": "git+https://github.com/ded/domready.git" }, "scripts": {}, "version": "1.0.8", "bin": {} } ``` # misc - this document was created with [utility2](https://github.com/kaizhu256/node-utility2)
31.027027
361
0.63284
yue_Hant
0.225304
9aa3b50200cda72052ca88195055f4b561ca2178
65
md
Markdown
README.md
fabioindaiatuba/algamoney-api
43cdc736481d6a75405d796021a895c2598eb9d1
[ "Unlicense" ]
null
null
null
README.md
fabioindaiatuba/algamoney-api
43cdc736481d6a75405d796021a895c2598eb9d1
[ "Unlicense" ]
null
null
null
README.md
fabioindaiatuba/algamoney-api
43cdc736481d6a75405d796021a895c2598eb9d1
[ "Unlicense" ]
null
null
null
# algamoney-api Projeto da api Restfull com authenticaçao Oauth2
21.666667
48
0.830769
por_Latn
0.674357
9aa3f55c50218d48e5c41ef1faaa912c12b8c744
15,439
md
Markdown
docs/vs-2015/extensibility/walkthrough-displaying-quickinfo-tooltips.md
Birgos/visualstudio-docs.de-de
64595418a3cea245bd45cd3a39645f6e90cfacc9
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/vs-2015/extensibility/walkthrough-displaying-quickinfo-tooltips.md
Birgos/visualstudio-docs.de-de
64595418a3cea245bd45cd3a39645f6e90cfacc9
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/vs-2015/extensibility/walkthrough-displaying-quickinfo-tooltips.md
Birgos/visualstudio-docs.de-de
64595418a3cea245bd45cd3a39645f6e90cfacc9
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: 'Exemplarische Vorgehensweise: Anzeigen von QuickInfos | Microsoft-Dokumentation' ms.custom: '' ms.date: 11/15/2016 ms.prod: visual-studio-dev14 ms.reviewer: '' ms.suite: '' ms.technology: - vs-ide-sdk ms.tgt_pltfrm: '' ms.topic: article helpviewer_keywords: - editors [Visual Studio SDK], new - QuickInfo ms.assetid: 23fb8384-4f12-446f-977f-ce7910347947 caps.latest.revision: 28 ms.author: gregvanl manager: ghogen ms.openlocfilehash: 9cd0e331536c194acdde95bdd74e5f41668a23e1 ms.sourcegitcommit: af428c7ccd007e668ec0dd8697c88fc5d8bca1e2 ms.translationtype: MT ms.contentlocale: de-DE ms.lasthandoff: 11/16/2018 ms.locfileid: "51806281" --- # <a name="walkthrough-displaying-quickinfo-tooltips"></a>Exemplarische Vorgehensweise: Anzeigen von QuickInfos [!INCLUDE[vs2017banner](../includes/vs2017banner.md)] QuickInfo wird eine IntelliSense-Funktion, die Methodensignaturen anzeigt und Beschreibungen, wenn ein Benutzer den Mauszeiger über einen Methodennamen. Sie können die Sprache basierenden Features wie QuickInfo implementieren, definieren die Bezeichner für die Sie die QuickInfo-Beschreibungen bereitstellen möchten, und erstellen dann eine QuickInfo, in dem den Inhalt angezeigt. Sie können die QuickInfo im Kontext von einem Sprachdienst definieren oder können Sie definieren Sie eine eigene Erweiterung und Inhalt Dateinamentyp und Anzeigen der QuickInfo für nur diesen Typ, oder Sie können die QuickInfo anzeigen, für die einem vorhandenen Inhaltstyp (z. B. "Text"). Dieser exemplarischen Vorgehensweise beim Anzeigen von QuickInfos für den Inhaltstyp "Text". Das QuickInfo-Beispiel in dieser exemplarischen Vorgehensweise zeigt die QuickInfo an, wenn ein Benutzer den Zeiger über einen Methodennamen. Dieser Entwurf erfordert, dass Sie diese vier Schnittstellen implementieren: - Source-Schnittstelle - Anbieterschnittstelle für die Quelle - Controller-Schnittstelle - Controller-Provider-Schnittstelle Die Quell- und Controller-Anbieter sind Komponenten des Managed Extensibility Framework (MEF) und sind verantwortlich für die Quell- und Controller-Klassen exportieren und importieren und z. B.-Broker die <xref:Microsoft.VisualStudio.Text.ITextBufferFactoryService>, die den QuickInfo-Text erstellt Puffer, und die <xref:Microsoft.VisualStudio.Language.Intellisense.IQuickInfoBroker>, der die QuickInfo-Sitzung ausgelöst. In diesem Beispiel die QuickInfo-Quelle verwendet eine hartcodierte Liste mit Namen und Beschreibungen, aber in vollständige Implementierungen der Sprachdienst und der zugehörigen Dokumentation sind dafür verantwortlich, dass der Inhalt. ## <a name="prerequisites"></a>Vorraussetzungen Ab Visual Studio 2015, sind Sie nicht Visual Studio SDK aus dem Downloadcenter installieren. Er ist als optionales Feature in Visual Studio-Setup enthalten. Sie können das VS-SDK auch später installieren. Weitere Informationen finden Sie unter [Installieren von Visual Studio SDK](../extensibility/installing-the-visual-studio-sdk.md). ## <a name="creating-a-mef-project"></a>Erstellen eines MEF-Projekts #### <a name="to-create-a-mef-project"></a>So erstellen Sie ein MEF-Projekt 1. Erstellen Sie ein C#-VSIX-Projekt. (In der **neues Projekt** wählen Sie im Dialogfeld **Visual c# / Erweiterbarkeit**, klicken Sie dann **VSIX-Projekt**.) Nennen Sie die Projektmappe `QuickInfoTest`. 2. Fügen Sie eine Elementvorlage Editor Klassifizierer zum Projekt hinzu. Weitere Informationen finden Sie unter [Erstellen einer Erweiterung mit einer Editor-Elementvorlage](../extensibility/creating-an-extension-with-an-editor-item-template.md). 3. Löschen Sie die vorhandenen Klassendateien. ## <a name="implementing-the-quickinfo-source"></a>Implementieren der QuickInfo-Quelle Die QuickInfo-Quelle ist verantwortlich für das Sammeln des Satz von IDs und Beschreibungen, und den Inhalt auf den QuickInfo-Text-Puffer hinzugefügt werden, wenn einer der Bezeichner gefunden wird. In diesem Beispiel werden die IDs und Beschreibungen nur in den Konstruktor des Quelle hinzugefügt. #### <a name="to-implement-the-quickinfo-source"></a>Um die QuickInfo-Quelle zu implementieren. 1. Fügen Sie eine Klassendatei hinzu, und nennen Sie sie `TestQuickInfoSource`. 2. Fügen Sie einen Verweis auf Microsoft.VisualStudio.Language.IntelliSense hinzu. 3. Fügen Sie die folgenden Importe hinzu. [!code-csharp[VSSDKQuickInfoTest#1](../snippets/csharp/VS_Snippets_VSSDK/vssdkquickinfotest/cs/testquickinfosource.cs#1)] [!code-vb[VSSDKQuickInfoTest#1](../snippets/visualbasic/VS_Snippets_VSSDK/vssdkquickinfotest/vb/testquickinfosource.vb#1)] 4. Deklarieren Sie eine Klasse, die implementiert <xref:Microsoft.VisualStudio.Language.Intellisense.IQuickInfoSource>, und nennen Sie sie `TestQuickInfoSource`. [!code-csharp[VSSDKQuickInfoTest#2](../snippets/csharp/VS_Snippets_VSSDK/vssdkquickinfotest/cs/testquickinfosource.cs#2)] [!code-vb[VSSDKQuickInfoTest#2](../snippets/visualbasic/VS_Snippets_VSSDK/vssdkquickinfotest/vb/testquickinfosource.vb#2)] 5. Hinzufügen von Feldern für der Quellenanbieter QuickInfos, das den Textpuffer und einen Satz von Namen und Signaturen. In diesem Beispiel den Namen und Signaturen werden initialisiert die `TestQuickInfoSource` Konstruktor. [!code-csharp[VSSDKQuickInfoTest#3](../snippets/csharp/VS_Snippets_VSSDK/vssdkquickinfotest/cs/testquickinfosource.cs#3)] [!code-vb[VSSDKQuickInfoTest#3](../snippets/visualbasic/VS_Snippets_VSSDK/vssdkquickinfotest/vb/testquickinfosource.vb#3)] 6. Fügen Sie einen Konstruktor, der festlegt, der Quellenanbieter QuickInfo und den Textpuffer und füllt die Gruppe von Methoden und Signaturen und Beschreibungen. [!code-csharp[VSSDKQuickInfoTest#4](../snippets/csharp/VS_Snippets_VSSDK/vssdkquickinfotest/cs/testquickinfosource.cs#4)] [!code-vb[VSSDKQuickInfoTest#4](../snippets/visualbasic/VS_Snippets_VSSDK/vssdkquickinfotest/vb/testquickinfosource.vb#4)] 7. Implementieren Sie die <xref:Microsoft.VisualStudio.Language.Intellisense.IQuickInfoSource.AugmentQuickInfoSession%2A>-Methode. In diesem Beispiel sucht die Methode das aktuelle Wort oder vorherigen Wort, wenn der Cursor befindet sich am Ende einer Zeile oder einen Textpuffer. Wenn das Wort auf eine der Methode handelt, wird die Beschreibung dieser Methodenname der QuickInfo-Inhalt hinzugefügt. [!code-csharp[VSSDKQuickInfoTest#5](../snippets/csharp/VS_Snippets_VSSDK/vssdkquickinfotest/cs/testquickinfosource.cs#5)] [!code-vb[VSSDKQuickInfoTest#5](../snippets/visualbasic/VS_Snippets_VSSDK/vssdkquickinfotest/vb/testquickinfosource.vb#5)] 8. Sie müssen auch eine Dispose()-Methode implementieren, da <xref:Microsoft.VisualStudio.Language.Intellisense.IQuickInfoSource> implementiert <xref:System.IDisposable>: [!code-csharp[VSSDKQuickInfoTest#6](../snippets/csharp/VS_Snippets_VSSDK/vssdkquickinfotest/cs/testquickinfosource.cs#6)] [!code-vb[VSSDKQuickInfoTest#6](../snippets/visualbasic/VS_Snippets_VSSDK/vssdkquickinfotest/vb/testquickinfosource.vb#6)] ## <a name="implementing-a-quickinfo-source-provider"></a>Implementieren eines Anbieters für die QuickInfo-Quelle Der Anbieter, der den QuickInfo-Quelle dient in erster Linie zum Exportieren von sich selbst als einer MEF-Komponente und die QuickInfo-Quelle zu instanziieren. Da es sich um einen MEF-Komponente handelt, können sie andere MEF-Komponententeilen importieren. #### <a name="to-implement-a-quickinfo-source-provider"></a>Implementierung eines Anbieters der QuickInfo-Quelle 1. Deklarieren einen QuickInfo-Source-Anbieter mit dem Namen `TestQuickInfoSourceProvider` , implementiert <xref:Microsoft.VisualStudio.Language.Intellisense.IQuickInfoSourceProvider>, und exportieren Sie es mit einer <xref:Microsoft.VisualStudio.Utilities.NameAttribute> "QuickInfo QuickInfo Source" eine <xref:Microsoft.VisualStudio.Utilities.OrderAttribute> von vor = "Default", und ein <xref:Microsoft.VisualStudio.Utilities.ContentTypeAttribute> "Text". [!code-csharp[VSSDKQuickInfoTest#7](../snippets/csharp/VS_Snippets_VSSDK/vssdkquickinfotest/cs/testquickinfosource.cs#7)] [!code-vb[VSSDKQuickInfoTest#7](../snippets/visualbasic/VS_Snippets_VSSDK/vssdkquickinfotest/vb/testquickinfosource.vb#7)] 2. Importieren von zwei Editor Dienste <xref:Microsoft.VisualStudio.Text.Operations.ITextStructureNavigatorSelectorService> und <xref:Microsoft.VisualStudio.Text.ITextBufferFactoryService>, als Eigenschaften des `TestQuickInfoSourceProvider`. [!code-csharp[VSSDKQuickInfoTest#8](../snippets/csharp/VS_Snippets_VSSDK/vssdkquickinfotest/cs/testquickinfosource.cs#8)] [!code-vb[VSSDKQuickInfoTest#8](../snippets/visualbasic/VS_Snippets_VSSDK/vssdkquickinfotest/vb/testquickinfosource.vb#8)] 3. Implementieren <xref:Microsoft.VisualStudio.Language.Intellisense.IQuickInfoSourceProvider.TryCreateQuickInfoSource%2A> zurückzugebenden ein neues `TestQuickInfoSource`. [!code-csharp[VSSDKQuickInfoTest#9](../snippets/csharp/VS_Snippets_VSSDK/vssdkquickinfotest/cs/testquickinfosource.cs#9)] [!code-vb[VSSDKQuickInfoTest#9](../snippets/visualbasic/VS_Snippets_VSSDK/vssdkquickinfotest/vb/testquickinfosource.vb#9)] ## <a name="implementing-a-quickinfo-controller"></a>Implementieren eine QuickInfo-Controller QuickInfo-Controller bestimmen, wenn QuickInfos angezeigt werden soll. In diesem Beispiel wird die QuickInfo angezeigt, wenn der Zeiger über ein Wort, das eine der Methode entspricht, wird. Der QuickInfo-Controller implementiert einen Mausereignishandler gezeigt wird, der eine QuickInfo-Sitzung ausgelöst. #### <a name="to-implement-a-quickinfo-controller"></a>Um ein QuickInfo-Controller zu implementieren. 1. Deklarieren Sie eine Klasse, die implementiert <xref:Microsoft.VisualStudio.Language.Intellisense.IIntellisenseController>, und nennen Sie sie `TestQuickInfoController`. [!code-csharp[VSSDKQuickInfoTest#10](../snippets/csharp/VS_Snippets_VSSDK/vssdkquickinfotest/cs/testquickinfosource.cs#10)] [!code-vb[VSSDKQuickInfoTest#10](../snippets/visualbasic/VS_Snippets_VSSDK/vssdkquickinfotest/vb/testquickinfosource.vb#10)] 2. Fügen Sie private Felder für die Textansicht, das die Textpuffer dargestellt, in der Textansicht, die QuickInfo-Sitzung und der QuickInfo-Controller-Anbieter hinzu. [!code-csharp[VSSDKQuickInfoTest#11](../snippets/csharp/VS_Snippets_VSSDK/vssdkquickinfotest/cs/testquickinfosource.cs#11)] [!code-vb[VSSDKQuickInfoTest#11](../snippets/visualbasic/VS_Snippets_VSSDK/vssdkquickinfotest/vb/testquickinfosource.vb#11)] 3. Fügen Sie einen Konstruktor, der die Felder festgelegt, und fügt den Ereignishandler der Maus gezeigt wird. [!code-csharp[VSSDKQuickInfoTest#12](../snippets/csharp/VS_Snippets_VSSDK/vssdkquickinfotest/cs/testquickinfosource.cs#12)] [!code-vb[VSSDKQuickInfoTest#12](../snippets/visualbasic/VS_Snippets_VSSDK/vssdkquickinfotest/vb/testquickinfosource.vb#12)] 4. Fügen Sie den Maus zeigen Sie Ereignishandler, der die QuickInfo-Sitzung auslöst. [!code-csharp[VSSDKQuickInfoTest#13](../snippets/csharp/VS_Snippets_VSSDK/vssdkquickinfotest/cs/testquickinfosource.cs#13)] [!code-vb[VSSDKQuickInfoTest#13](../snippets/visualbasic/VS_Snippets_VSSDK/vssdkquickinfotest/vb/testquickinfosource.vb#13)] 5. Implementieren der <xref:Microsoft.VisualStudio.Language.Intellisense.IIntellisenseController.Detach%2A> Methode, sodass die It den Ereignishandler der Maus gezeigt wird entfernt, wenn der Controller von der Textansicht getrennt wird. [!code-csharp[VSSDKQuickInfoTest#14](../snippets/csharp/VS_Snippets_VSSDK/vssdkquickinfotest/cs/testquickinfosource.cs#14)] [!code-vb[VSSDKQuickInfoTest#14](../snippets/visualbasic/VS_Snippets_VSSDK/vssdkquickinfotest/vb/testquickinfosource.vb#14)] 6. Implementieren der <xref:Microsoft.VisualStudio.Language.Intellisense.IIntellisenseController.ConnectSubjectBuffer%2A> Methode und die <xref:Microsoft.VisualStudio.Language.Intellisense.IIntellisenseController.DisconnectSubjectBuffer%2A> Methode als leere Methoden für dieses Beispiel. [!code-csharp[VSSDKQuickInfoTest#15](../snippets/csharp/VS_Snippets_VSSDK/vssdkquickinfotest/cs/testquickinfosource.cs#15)] [!code-vb[VSSDKQuickInfoTest#15](../snippets/visualbasic/VS_Snippets_VSSDK/vssdkquickinfotest/vb/testquickinfosource.vb#15)] ## <a name="implementing-the-quickinfo-controller-provider"></a>Implementieren des QuickInfo-Controller-Anbieters Der Anbieter, der den QuickInfo-Controller dient in erster Linie zum Exportieren von sich selbst als einer MEF-Komponente, und instanziieren Sie den QuickInfo-Controller. Da es sich um einen MEF-Komponente handelt, können sie andere MEF-Komponententeilen importieren. #### <a name="to-implement-the-quickinfo-controller-provider"></a>Den QuickInfo-Controller-Anbieter implementiert 1. Deklarieren Sie eine Klasse, die mit dem Namen `TestQuickInfoControllerProvider` implementiert <xref:Microsoft.VisualStudio.Language.Intellisense.IIntellisenseControllerProvider>, und exportieren Sie es mit einer <xref:Microsoft.VisualStudio.Utilities.NameAttribute> "QuickInfo-QuickInfo-Controllers" und ein <xref:Microsoft.VisualStudio.Utilities.ContentTypeAttribute> "Text": [!code-csharp[VSSDKQuickInfoTest#16](../snippets/csharp/VS_Snippets_VSSDK/vssdkquickinfotest/cs/testquickinfosource.cs#16)] [!code-vb[VSSDKQuickInfoTest#16](../snippets/visualbasic/VS_Snippets_VSSDK/vssdkquickinfotest/vb/testquickinfosource.vb#16)] 2. Importieren Sie <xref:Microsoft.VisualStudio.Language.Intellisense.IQuickInfoBroker> als Eigenschaft. [!code-csharp[VSSDKQuickInfoTest#17](../snippets/csharp/VS_Snippets_VSSDK/vssdkquickinfotest/cs/testquickinfosource.cs#17)] [!code-vb[VSSDKQuickInfoTest#17](../snippets/visualbasic/VS_Snippets_VSSDK/vssdkquickinfotest/vb/testquickinfosource.vb#17)] 3. Implementieren der <xref:Microsoft.VisualStudio.Language.Intellisense.IIntellisenseControllerProvider.TryCreateIntellisenseController%2A> Methode durch die Instanziierung des QuickInfo-Controllers. [!code-csharp[VSSDKQuickInfoTest#18](../snippets/csharp/VS_Snippets_VSSDK/vssdkquickinfotest/cs/testquickinfosource.cs#18)] [!code-vb[VSSDKQuickInfoTest#18](../snippets/visualbasic/VS_Snippets_VSSDK/vssdkquickinfotest/vb/testquickinfosource.vb#18)] ## <a name="building-and-testing-the-code"></a>Erstellen und Testen des Codes Um diesen Code zu testen, erstellen Sie die Projektmappe QuickInfoTest, und führen Sie es in der experimentellen Instanz. #### <a name="to-build-and-test-the-quickinfotest-solution"></a>Zum Erstellen und Testen der Lösung QuickInfoTest 1. Erstellen Sie die Projektmappe. 2. Wenn Sie dieses Projekt im Debugger ausführen, wird eine zweite Instanz von Visual Studio instanziiert. 3. Erstellen Sie eine Textdatei und Typ, der die Wörter enthält Text "hinzufügen" und "subtrahieren". 4. Zeigen Sie auf ein Vorkommen von "hinzufügen". Die Signatur und die Beschreibung der `add` Methode angezeigt werden soll. ## <a name="see-also"></a>Siehe auch [Exemplarische Vorgehensweise: Verknüpfen eines Inhaltstyps mit einer Dateinamenerweiterung](../extensibility/walkthrough-linking-a-content-type-to-a-file-name-extension.md)
82.561497
765
0.800052
deu_Latn
0.860117
9aa471927b48f71a348ba180bbe7fc01434a1850
55
md
Markdown
README.md
liuchengyuan/LCYPhonebook
555f1944c46405fa238aa657f10be4c8599e2e5d
[ "MIT" ]
null
null
null
README.md
liuchengyuan/LCYPhonebook
555f1944c46405fa238aa657f10be4c8599e2e5d
[ "MIT" ]
null
null
null
README.md
liuchengyuan/LCYPhonebook
555f1944c46405fa238aa657f10be4c8599e2e5d
[ "MIT" ]
null
null
null
# LCYPhonebook LCY系列(1)号项目,用于通讯录管理。包含了通讯录读取并且排序,智能搜索等。
18.333333
39
0.818182
yue_Hant
0.772248
9aa54202deb3b0897d29ca5eee2dbfd5e81ec001
172
md
Markdown
.github/PULL_REQUEST_TEMPLATE.md
dylan-k/Obsidian-For-Business
f3888857ba16d6a1b915db8b0575661b42a9eb33
[ "MIT" ]
59
2021-04-05T00:43:53.000Z
2022-03-17T13:45:08.000Z
.github/PULL_REQUEST_TEMPLATE.md
dylan-k/Obsidian-For-Business
f3888857ba16d6a1b915db8b0575661b42a9eb33
[ "MIT" ]
16
2021-04-04T22:59:15.000Z
2022-03-13T16:34:37.000Z
.github/PULL_REQUEST_TEMPLATE.md
dylan-k/Obsidian-For-Business
f3888857ba16d6a1b915db8b0575661b42a9eb33
[ "MIT" ]
7
2021-04-05T05:39:20.000Z
2021-12-30T19:44:48.000Z
--- name: Pull request about: Create a pull request to help us improve --- # Pull request Create a pull request to help us improve. See [CONTRIBUTING](/CONTRIBUTING.md).
19.111111
79
0.732558
eng_Latn
0.880463
9aa5efe8645883d846c39c0bbf21154f073e3d4b
1,102
md
Markdown
libraries/rc-switch-master/README.md
BenjaminFair/makeathon2016
e0e3bc28508e4e670af41652d77405fea2886ad7
[ "MIT" ]
null
null
null
libraries/rc-switch-master/README.md
BenjaminFair/makeathon2016
e0e3bc28508e4e670af41652d77405fea2886ad7
[ "MIT" ]
null
null
null
libraries/rc-switch-master/README.md
BenjaminFair/makeathon2016
e0e3bc28508e4e670af41652d77405fea2886ad7
[ "MIT" ]
null
null
null
# rc-switch Use your Arduino or Raspberry Pi to operate remote radio controlled devices ##Download https://github.com/sui77/rc-switch/releases/latest ##Wiki https://github.com/sui77/rc-switch/wiki ##Info ###Send RC codes Use your Arduino or Raspberry Pi to operate remote radio controlled devices. This will most likely work with all popular low cost power outlet sockets. If yours doesn't work, you might need to adjust the pulse length for some to work. All you need is a Arduino or Raspberry Pi, a 315/433MHz AM transmitter and one or more devices with a SC5262 / SC5272, HX2262 / HX2272, PT2262 / PT2272, EV1527, RT1527, FP1527 or HS1527 chipset. Also supports Intertechno outlets. ###Receive and decode RC codes Find out what codes your remote is sending. Use your remote to control your Arduino. All you need is a Arduino, a 315/433MHz AM receiver (altough there is no instruction yet, yes it is possible to hack an existing device) and a remote hand set. For Raspberry Pi, clone the https://github.com/ninjablocks/433Utils project to compile a sniffer tool and transmission commands.
44.08
234
0.780399
eng_Latn
0.996108
9aa61ddec9fdf70af36b84391e821038be9b1da9
252
md
Markdown
iambismark.net/content/post/2009/02/1234413436.md
bismark/iambismark.net
1ef89663cfcf4682fbfd60781bb143a7fd276312
[ "MIT" ]
null
null
null
iambismark.net/content/post/2009/02/1234413436.md
bismark/iambismark.net
1ef89663cfcf4682fbfd60781bb143a7fd276312
[ "MIT" ]
null
null
null
iambismark.net/content/post/2009/02/1234413436.md
bismark/iambismark.net
1ef89663cfcf4682fbfd60781bb143a7fd276312
[ "MIT" ]
null
null
null
--- alturls: - https://twitter.com/bismark/status/1201617556 archive: - 2009-02 date: '2009-02-12T04:37:16+00:00' slug: '1234413436' --- i cannot stand how flakey google calendar's webdav support is. i have closed and reopened ical 5-6 times today.
21
112
0.730159
eng_Latn
0.604945
9aa67454e31ced5955caeaa44dfa0aa1c6eaa3dd
162
md
Markdown
README.md
togglefox/create-xxtf-application-blog-post
3a2c727891629e67277e6f5b8245692f3bf5abd5
[ "Apache-2.0" ]
1
2020-06-06T12:14:01.000Z
2020-06-06T12:14:01.000Z
README.md
togglefox/create-xxtf-application-blog-post
3a2c727891629e67277e6f5b8245692f3bf5abd5
[ "Apache-2.0" ]
null
null
null
README.md
togglefox/create-xxtf-application-blog-post
3a2c727891629e67277e6f5b8245692f3bf5abd5
[ "Apache-2.0" ]
null
null
null
# create-xxtf-application-blog-post Creating the xxtf custom application blog post See blog post https://togglefox.com/blog/creating-the-xxtf-custom-application/
40.5
78
0.820988
kor_Hang
0.377192
9aa6a79a63409fdf2aab8413f10c06dd817b4157
112
md
Markdown
_posts/0000-01-02-JohnRomeis.md
JohnRomeis/github-slideshow
67b1c7f17cbe1fcd91234e2d8c6f3675ef78528f
[ "MIT" ]
null
null
null
_posts/0000-01-02-JohnRomeis.md
JohnRomeis/github-slideshow
67b1c7f17cbe1fcd91234e2d8c6f3675ef78528f
[ "MIT" ]
5
2020-05-11T15:31:14.000Z
2022-02-26T07:49:48.000Z
_posts/0000-01-02-JohnRomeis.md
JohnRomeis/github-slideshow
67b1c7f17cbe1fcd91234e2d8c6f3675ef78528f
[ "MIT" ]
null
null
null
--- layout: slide title: "Welcome to our second slide!" My **text goes** _here!_ Use the left arrow to go back!
18.666667
37
0.696429
eng_Latn
0.995124
9aa77aec836f633e77bd77812bd77e27e81c9597
594
md
Markdown
content/navigation/chainsecurity/index.zh-cn.md
itey/metabd
263376ea70afea08cbc752c7117928826d2ec61a
[ "MIT" ]
null
null
null
content/navigation/chainsecurity/index.zh-cn.md
itey/metabd
263376ea70afea08cbc752c7117928826d2ec61a
[ "MIT" ]
6
2022-03-14T18:35:35.000Z
2022-03-28T18:43:54.000Z
content/navigation/chainsecurity/index.zh-cn.md
itey/metabd
263376ea70afea08cbc752c7117928826d2ec61a
[ "MIT" ]
null
null
null
--- weight: title: "ChainSecurity" description: "关于智能合约的首个代码审计平台,针对 Ethereum 和 Hyperledger Fabric 智能合约的安全性扫描,源于 Securify、ChainCode Scanner 创始者,基于苏黎世联邦理工学院 ICE 中心的最..." date: 2022-03-25T21:57:40+08:00 lastmod: 2022-03-25T16:45:40+08:00 draft: false authors: ["Metabd"] featuredImage: "chainsecurity.jpg" link: "" tags: ["安全机构","ChainSecurity"] categories: ["navigation"] navigation: ["安全机构"] lightgallery: true toc: true pinned: false recommend: false recommend1: false --- 关于智能合约的首个代码审计平台,针对 Ethereum 和 Hyperledger Fabric 智能合约的安全性扫描,源于 Securify、ChainCode Scanner 创始者,基于苏黎世联邦理工学院 ICE 中心的最新研究。
28.285714
132
0.776094
yue_Hant
0.686326
9aa8e3bbd808acc2902dd4c63112846e17710785
3,684
md
Markdown
README.md
charles2910/GRATOSS
f56e9a7a09bdf510fbe53a3bd163043aa31732ea
[ "MIT" ]
2
2021-09-29T15:20:48.000Z
2021-10-01T15:23:20.000Z
README.md
charles2910/GRATOSS
f56e9a7a09bdf510fbe53a3bd163043aa31732ea
[ "MIT" ]
null
null
null
README.md
charles2910/GRATOSS
f56e9a7a09bdf510fbe53a3bd163043aa31732ea
[ "MIT" ]
1
2021-10-04T16:42:12.000Z
2021-10-04T16:42:12.000Z
# GRATOSS ## GRupo de Apoio e Tutoria para Open Source Software Grupo de (ex-) alunos da USP querendo ajudar outros alunos que estao interessados em contribuir para projetos Open Source, mas nao sabem por onde comecar ou nao acham que estao prontos (spoilers: voces estao) Esse repositorio eh uma coleção de informações úteis para quem quer começar. Os links abaixo sao um bom ponto para comecar * Motivacao: TBD * Por onde comecar: TBD * Como se comunicar: TBD * outros? ### Como saber se voce esta pronto para contribuir A resposta eh simples, voce esta. Mas vamos expandir um pouco para voces entenderem por que a resposta eh essa: * Contribuicoes nao precisam ser codigo. Muitos projetos tem documentacoes ruins, designs feios, ou coisas do tipo. Apesar de acharmos - como programadores - que isso eh fru-fru, muitos usuarios normais nao confiam em projetos Open Source por nao _parecem_ maduros o suficiente. * "Eu nao sei fazer essas coisas bonitas". Reportar bugs tambem eh contribuicao. Especialmente quando voce usa o tempo para criar e refinar um reprodutor de bugs, ou mesmo identificando no codigo onde esta o problema. E nao adianta pensar que "um novato nao iria achar um bug importante:. Nao so eu achei 3 problemas no GDB acidentamente enquanto resolvia um deles, eu (flango) tambem levantei um problema MUITO conhecido no GDB que parece um bug por causa do output dele (CTRL + C escrevendo Quit mas nao saindo do GDB), que vai ser resolvido em breve. * "Mas levantar problemas nao eh contribuir, eu quero fazer codigo". Mesmo assim, varios projetos tem coisas bugs/tarefas simples para ajudar novatos a conhecerem o codigo. Um exemplo sao as Bite sized Tasks do QEMU https://wiki.qemu.org/Contribute/BiteSizedTasks ## Contatos Primeiro, entre no grupo do telegrao: https://t.me/ joinchat/ BTOwLxI3WtM0NWQx (Separado para evitar bots, so juntar no seu browser). Caso voce tenha alguma duvida (ou esta tao perdido que nem sabe qual eh a sua duvida) que nao quer colocar no grupo por algum motivo, as pessoas que se comprometeram a ajudar sao as seguintes: 1. Bruno Larsen: blarsen (at) redhat (dot) com; (at)flango no telegrão Experiencia: Ja contribui um pouco para FreeBSD, um bom tanto para QEMU e no momento contribuo para GDB. Tenho experiencia com projetos de C/C++, que gostam de usar email como metodo para revisar patches. Nao sei lidar com Gerrit (nao lembro nada de lidar com phabricator, desculpem) mas de resto estamos ai :) Notas: Por favor, colocar GRATOSS no assunto do e-mail, assim eu posso organizar melhor na lista de emails. Respostas durante horario comercial (mesmo no telegram) 2. [Gabriel Fontes](https://misterio.me): gratoss@misterio.me; [@Misterio7x no Telegrama](https://t.me/misterio7x) Experiência: Já empacotei um pouco no AUR (Arch Linux), e hoje ativamente no Nixpkgs (Nix/NixOS). Tenho alguns projetos open-source (a maioria relacionados à personalização). Tenho alguma experiência com os quirks de projetos em Rust, Python, e JS/TS, e também sou grande fã de workflow baseada em email. 3. Carlos Henrique Lima Melara: charlesmelara@outlook.com; @charles2910 (github e gitlab), @charles ([Salsa](https://salsa.debian.org)) Experiência: Debian - empacotamento de software ([mantenho alguns pacotes](https://qa.debian.org/developer.php?email=charlesmelara%40outlook.com)) e tradução para o português ([equipe de localização pt-BR](https://wiki.debian.org/Brasil/Traduzir)); Linux - contribuições _simples_ e fluxo de trabalho (e-mails com patches); Relatórios de Bugs - fiz vários relatórios de bugs e contribui pontualmente consertando os bugs reportados em vários projetos.
78.382979
553
0.776602
por_Latn
0.999013
9aa9d2a709ee0c52b6aee50931dc10e400cd32fc
54
md
Markdown
README.md
Aqudi/Youtube-clonecoding
1ecbc46dac227d3a283e3c5d4bbf9343ab27b891
[ "MIT" ]
null
null
null
README.md
Aqudi/Youtube-clonecoding
1ecbc46dac227d3a283e3c5d4bbf9343ab27b891
[ "MIT" ]
null
null
null
README.md
Aqudi/Youtube-clonecoding
1ecbc46dac227d3a283e3c5d4bbf9343ab27b891
[ "MIT" ]
null
null
null
# Youtube-clonecoding 동영상 스트리밍 사이트 Youtube를 흡사하게 만든다.
18
31
0.796296
kor_Hang
1.000006
9aaa237f8629b30898b94e416c25b26dce0d128b
5,478
md
Markdown
content/zh/blog/Helping data security: Ant and Intel work together to create a verified PPML solution/index.md
horizonzy/sofastack.tech
068765f9ec95e35199de192a70b1d22f7f124b93
[ "Apache-2.0" ]
100
2019-06-20T02:28:22.000Z
2022-03-27T14:02:43.000Z
content/zh/blog/Helping data security: Ant and Intel work together to create a verified PPML solution/index.md
horizonzy/sofastack.tech
068765f9ec95e35199de192a70b1d22f7f124b93
[ "Apache-2.0" ]
148
2019-06-23T14:46:10.000Z
2022-03-28T11:26:59.000Z
content/zh/blog/Helping data security: Ant and Intel work together to create a verified PPML solution/index.md
horizonzy/sofastack.tech
068765f9ec95e35199de192a70b1d22f7f124b93
[ "Apache-2.0" ]
155
2019-06-23T14:39:49.000Z
2022-03-28T11:37:26.000Z
--- title: "助力数据安全:蚂蚁携手英特尔共同打造验证PPML解决方案" author: "" authorlink: "https://github.com/sofastack" description: "助力数据安全:蚂蚁携手英特尔共同打造验证PPML解决方案" categories: "SOFAStack" tags: ["SOFAStack"] date: 2021-06-01T15:00:00+08:00 cover: "https://gw.alipayobjects.com/mdn/sofastack/afts/img/A*XpvoSIy2cOkAAAAAAAAAAAAAARQnAQ" --- >机器学习(ML)和深度学习(DL)在众多真实的应用场景中愈发重要。这些模型使用已知数据进行训练,并部署在图像分类、内容推荐等场景中进行新数据的处理。总体而言,数据越多,ML/DL 模型就越完善。但囤积和处理海量数据也带来了隐私、安全和监管等风险。 隐私保护机器学习(PPML)有助于化解这些风险。其采用加密技术差分隐私、硬件技术等,旨在处理机器学习任务的同时保护敏感用户数据和训练模型的隐私。 在英特尔® 软件防护扩展(英特尔® SGX)和蚂蚁集团用于英特尔® SGX 的内存安全多进程用户态操作系统 Occlum 的基础上,蚂蚁集团与英特尔合作搭建了 PPML 平台。在本篇博客中,我们将介绍这项运行在 Analytics Zoo 上的解决方案,并展示该解决方案在第三代英特尔® 至强® 可扩展处理器上得到英特尔® 深度学习加速(英特尔® DL Boost)技术助力时的性能优势。 英特尔® SGX 是英特尔的受信任执行环境(TEE),它提供基于硬件的内存加密,隔离内存中的特定应用代码和数据。英特尔® SGX 使得用户层代码可以分配内存中的受保护区域,即 “飞地”,这些区域不受更高权限等级程序运行的任何影响(如图一所示)。 >![](https://gw.alipayobjects.com/mdn/sofastack/afts/img/A*HIV7Sr9YKvUAAAAAAAAAAAAAARQnAQ) 图一 通过英特尔® SGX 加强防护 与同态加密和差分隐私相比,英特尔® SGX 在操作系统、驱动、BIOS、虚拟机管理器或系统管理模型已瘫痪的情况下仍可帮助防御软件攻击。因此,英特尔® SGX 在攻击者完全控制平台的情况下仍可增强对隐私数据和密钥的保护。第三代英特尔® 至强® 可扩展处理器可使 CPU 受信任内存区域增加到 512GB,使得英特尔® SGX 技术能够为隐私保护机器学习解决方案打下坚实的基础。 2014 年正式成立的蚂蚁集团服务于超 10 亿用户,是全球领先的金融科技企业之一。蚂蚁集团一直积极探索隐私保护机器学习领域,并发起了开源项目 Occlum。Occlum 是用于英特尔® SGX 的内存安全多进程用户态操作系统(LibOS)。使用 Occlum 后,机器学习工作负载等只需修改极少量(甚至无需修改)源代码即可在英特尔® SGX 上运行,以高度透明的方式保护了用户数据的机密性和完整性。用于英特尔® SGX 的 Occlum 架构如图二所示。 >![](https://gw.alipayobjects.com/mdn/sofastack/afts/img/A*jmJbQ7YDja4AAAAAAAAAAAAAARQnAQ) 图二 用于英特尔® SGX 的 Occlum 架构(图片来源:Occlum · GitHub) ### Analytics Zoo 赋能端到端 PPML 解决方案 Analytics Zoo 是面向基于 Apache Spark、Flink 和 Ray 的分布式 TensorFlow、Keras 和 PyTorch 的统一的大数据分析和人工智能平台。使用 Analytics Zoo 后,分析框架、ML/DL 框架和 Python 库可以在 Occlum LibOS 以受保护的方式作为一个整体运行。此外,Analytics Zoo 还提供安全数据访问、安全梯度与参数管理等安全性功能,赋能联邦学习等隐私保护机器学习用例。端到端 Analytics Zoo PPML 解决方案如图三所示。 >![](https://gw.alipayobjects.com/mdn/sofastack/afts/img/A*E7hlQ5pA4I0AAAAAAAAAAAAAARQnAQ) 图三 端到端 PPML 解决方案为金融服务、医疗卫生、云服务等应用领域提供安全分布式计算 在 Analytics Zoo PPML 平台上,蚂蚁集团与英特尔共同打造了一个更加安全的分布式端到端推理服务流水线(如图四所示)。 该流水线采用 Analytics Zoo Cluster Serving 打造,后者是轻量级分布式实时服务解决方案,支持多种深度学习模型,包括 TensorFlow、PyTorch、Caffe、BigDL 和 OpenVINOTM。 Analytics Zoo Cluster Serving 包括 web 前端、内存数据结构存储 Redis、推理引擎(如面向英特尔® 架构优化的 TensorFlow 或 OpenVINO™ 工具套件),以及分布式流处理框架(如 Apache Flink)。 推理引擎和流处理框架在 Occlum 和英特尔® SGX “飞地” 上运行。web 前端和 Redis 受到传输层安全(TLS)协议加密,因此推理流水线中的数据(包括用户数据和模型)在存储、传输、使用的过程中都受到更多地保护。 >![](https://gw.alipayobjects.com/mdn/sofastack/afts/img/A*Jz2CTacYL64AAAAAAAAAAAAAARQnAQ) 图四 推理服务流水线 ### 共创美好未来:英特尔® DL Boost 加速端到端 PPML 解决方案 1.该解决方案执行如下端到端推理流水线: RESTful http API 接收用户输入,Analytics Zoo pub/sub API 将用户输入转化成输入队列,并由 Redis 管理。用户数据受加密保护。 2.Analytics Zoo 从输入队列中抓取数据。它在分布式流处理框架(如 Apache Flink)上采用推理引擎进行推理。英特尔® SGX 使用 Occlum 来保护推理引擎和分布式流处理框架。英特尔® oneAPI 深度神经网络库(oneDNN)利用支持 Int8 指令集的英特尔® DL Boost 提高分布式推理流水线的性能。 3.Analytics Zoo 从分布式环境中收集推理输出,并送回到由 Redis 管理的输出队列。随后,解决方案使用 RESTful http API 将推理结果作为预测返回给用户。输出队列中的数据和 http 通信内容都被加密。 ### 性能分析 Analytics Zoo PPML 解决方案的性能进行了验证。 >![](https://gw.alipayobjects.com/mdn/sofastack/afts/img/A*p39aSLErv4kAAAAAAAAAAAAAARQnAQ) 表一 测试配置 图五为测试结果。与不受英特尔® SGX 保护的推理流水线相比,当推理解决方案受到英特尔® SGX 保护,ResNet50 推理流水线的吞吐量会有少许损失。而采用支持 INT8 指令集的英特尔® DL Boost 后,受英特尔® SGX 保护的推理流水线吞吐量翻了一番。 >![](https://gw.alipayobjects.com/mdn/sofastack/afts/img/A*_ZFERI3Kt3wAAAAAAAAAAAAAARQnAQ) 图五 英特尔® SGX、英特尔® DL Boost 和第三代英特尔® 至强® 可扩展处理器提供高性能安全能力 基于英特尔® SGX 打造的 Analytics Zoo PPML 解决方案继承了受信任执行环境(TEE)的优点。和其它数据安全解决方案相比,它的安全性和数据效用性十分突出,性能方面仅略逊于纯文本。英特尔® DL Boost 和英特尔® oneDNN 则进一步提升了 Analytics Zoo PPML 推理解决方案的性能。表二总结了该解决方案(TEE)相对于同态加密(HE)、差分隐私(DP)、安全多方计算(MPC)和纯文本的优势。 >![](https://gw.alipayobjects.com/mdn/sofastack/afts/img/A*_ZFERI3Kt3wAAAAAAAAAAAAAARQnAQ) 表二 Analytics Zoo PPML 解决方案(TEE)与其他方案的比较 ### 总结 在日益复杂的法律和监管环境中,对于企业和组织来说,保护客户数据隐私比以往任何时候都更加重要。在隐私保护机器学习的助力下,企业和组织就能在继续探索强大的人工智能技术的同时,面对大量敏感数据处理降低安全性风险。 Analytics Zoo 隐私保护机器学习解决方案基于 Occlum、英特尔® SGX、英特尔® DL Boost 和 Analytics Zoo 打造,为助力确保数据的安全性和大数据人工智能工作负载性能提供了平台解决方案。蚂蚁集团和英特尔共同打造并验证了这一 PPML 解决方案,并将继续合作探索人工智能和数据安全性领域的最佳实践。 ### 测试配置 **系统配置** :2 节点,双路英特尔® 至强® 铂金 8369B 处理器,每路 32 核心,超线程开启,睿频开启,总内存 1024 GB(16 个插槽/ 64GB/ 3200 MHz),EPC 512GB,SGX DCAP 驱动程序 1.36.2,微代码: 0x8d05a260,Ubuntu 18.04.4 LTS,4.15.0-112-generic 内核,英特尔截至 2021 年 3 月 20日的测试。 **软件配置**:LibOS Occlum 0.19.1,Flink 1.10.1,Redis 0.6.9,OpenJDK 11.0.10,Python 3.6.9. **工作负载配置**:模型:Resnet50,深度学习框架:Analytics Zoo 0.9.0,OpenVINOTM 2020R2,数据集:Imagenet,BS=16/实例,16 个实例/双路,数据类型:FP32/INT8. **所有性能数据均为实验室环境下测试所得。** ### 本周推荐阅读 - [蚂蚁云原生应用运行时的探索和实践 - ArchSummit 上海](https://mp.weixin.qq.com/s?__biz=MzUzMzU5Mjc1Nw==&mid=2247487717&idx=1&sn=ca9452cdc10989f61afbac2f012ed712&chksm=faa0ff3fcdd77629d8e5c8f6c42af3b4ea227ee3da3d5cdf297b970f51d18b8b1580aac786c3&scene=21) - [带你走进云原生技术:云原生开放运维体系探索和实践](https://mp.weixin.qq.com/s?__biz=MzUzMzU5Mjc1Nw==&mid=2247488044&idx=1&sn=ef6300d4b451723aa5001cd3deb17fbc&chksm=faa0fdf6cdd774e03ccd9130099674720a81e7e109ecf810af147e08778c6582636769646490&scene=21) - [稳定性大幅度提升:SOFARegistry v6 新特性介绍](https://mp.weixin.qq.com/s?__biz=MzUzMzU5Mjc1Nw==&mid=2247488131&idx=1&sn=cd0b101c2db86b1d28e9f4fe07b0446e&chksm=faa0fd59cdd7744f14deeffd3939d386cff6cecdde512aa9ad00cef814c033355ac792001377&scene=21) - [金融级能力成核心竞争力,服务网格驱动企业创新](https://mp.weixin.qq.com/s?__biz=MzUzMzU5Mjc1Nw==&mid=2247487660&idx=1&sn=d5506969b7eb25efcbf52b45a864eada&chksm=faa0ff76cdd77660de430da730036022fff6d319244731aeee5d41d08e3a60c23af4ee6e9bb2&scene=21) 更多文章请扫码关注“金融级分布式架构”公众号 >![](https://gw.alipayobjects.com/mdn/rms_95b965/afts/img/A*s3UzR6VeQ6cAAAAAAAAAAAAAARQnAQ)
57.663158
263
0.820372
yue_Hant
0.726576
9aaa5f69bfeadcd3d1e92e2f96f5d0009ea905e4
2,928
markdown
Markdown
_posts/2019-08-13-rails_with_javascript_project_mode.markdown
amandena/amandena.github.io
0e3af1bcd64bd5e9ee5426c8f2e38e3c946ce8a2
[ "MIT" ]
1
2020-08-07T17:48:46.000Z
2020-08-07T17:48:46.000Z
_posts/2019-08-13-rails_with_javascript_project_mode.markdown
amandena/amandena.github.io
0e3af1bcd64bd5e9ee5426c8f2e38e3c946ce8a2
[ "MIT" ]
null
null
null
_posts/2019-08-13-rails_with_javascript_project_mode.markdown
amandena/amandena.github.io
0e3af1bcd64bd5e9ee5426c8f2e38e3c946ce8a2
[ "MIT" ]
null
null
null
--- layout: post title: "Rails with Javascript: Project Mode" date: 2019-08-13 16:25:50 +0000 permalink: rails_with_javascript_project_mode --- For the fourth portfolio project, the task was to add JavaScript to the previous Ruby on Rails project. At first, this seemed to be a very daunting task, so I hesitated to start. Not only was JavaScript still new to me, but I also felt that it would be easier to start from scratch than to work from a previously completed application. Converting a mostly Ruby and Active Record-based project to JavaScript was difficult to wrap my head around. Fortunately for me, once I finally got started, it appeared to be more and more plausible that this was going to work. I was able to finish the project within a couple of weeks, which is actually record time for me. I would describe the application as a portal where users can create costume parties and costumes that both belong to a user and a particular party that they are "attending." I used my costume parties model to complete the new JavaScript requirements for this project. The costume parties index page and costume party show page were converted with JavaScript and an Active Model Serialization JSON backend. On the show page of a costume party is where I demonstrated my 'has_many" relationship, which displays the many costumes going to the party. Keeping with the theme, the form submission requirement was met by utilizing a submit request click handler on the new costume party which will render the newly created party solely with JavaScript. What I found really beneficial with this particular project were the tutorials that were provided by Flatiron's Cernan Bernardo. Following along with his videos was what helped me get a real jump start on the tasks at hand and to lay down the basic foundation of the JavaScript file. I think that I would have been at a loss as to where to start, if not for his videos. Another helpful resource that I seem to be using more and more are the study groups for the portfolio projects. When I get stuck with specific errors and Google fails me, the study groups have been my go to. Working with Dalia Sawaya for this project has not only been fun, but also provided me with valuable tools that I can now implement on my own. One of those such tools was debugging, something that I didn't have much experience with using. Before our study sessions, I was a slave to using console.log(). Console.log() is extremely helpful, don't get me wrong, but because JavaScript was made for the web it makes more sense to be able to use the browser tools to debug issues in real time. From starting this project to now, I feel that I have gained a deeper understanding of JavaScript and how to implement it dynamically in web applications. It has shown me why JavaScript is so widely used today and how I will be able to utilize it for future, more complicated applications.
162.666667
745
0.796448
eng_Latn
0.999977
9aaa94c1f3441b98e5601f1de905b05ea5395229
389
md
Markdown
tests/test_biolink_model/output/markdown_no_image/process_positively_regulated_by_process.md
bpow/linkml
ab83c0caee9c02457ea5a748e284dee6b547fcd6
[ "CC0-1.0" ]
25
2019-07-05T01:16:18.000Z
2021-03-22T20:49:25.000Z
tests/test_biolink_model/output/markdown_no_image/process_positively_regulated_by_process.md
bpow/linkml
ab83c0caee9c02457ea5a748e284dee6b547fcd6
[ "CC0-1.0" ]
299
2019-03-05T15:15:30.000Z
2021-04-08T23:25:41.000Z
tests/test_biolink_model/output/markdown_no_image/process_positively_regulated_by_process.md
bpow/linkml
ab83c0caee9c02457ea5a748e284dee6b547fcd6
[ "CC0-1.0" ]
19
2019-05-23T17:46:47.000Z
2021-03-25T06:45:55.000Z
# Slot: process_positively_regulated_by_process URI: [biolink:process_positively_regulated_by_process](https://w3id.org/biolink/vocab/process_positively_regulated_by_process) ## Domain and Range [Occurrent](Occurrent.md) -> <sub>0..*</sub> [Occurrent](Occurrent.md) ## Parents * is_a: [process regulated by process](process_regulated_by_process.md) ## Children ## Used by
16.913043
126
0.760925
eng_Latn
0.674446
9aaae5154d292de35ff09d4f95ad361d1dc366b1
1,141
md
Markdown
docs/framework/wcf/diagnostics/tracing/system-servicemodel-security-securitysessionclosedresponsesendfailure.md
Graflinger/docs.de-de
9dfa50229d23e2ee67ef4047b6841991f1e40ac4
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/framework/wcf/diagnostics/tracing/system-servicemodel-security-securitysessionclosedresponsesendfailure.md
Graflinger/docs.de-de
9dfa50229d23e2ee67ef4047b6841991f1e40ac4
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/framework/wcf/diagnostics/tracing/system-servicemodel-security-securitysessionclosedresponsesendfailure.md
Graflinger/docs.de-de
9dfa50229d23e2ee67ef4047b6841991f1e40ac4
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: System.ServiceModel.Security.SecuritySessionClosedResponseSendFailure ms.date: 03/30/2017 ms.assetid: 214e88fe-0476-4604-bca6-1b2f25fe1194 ms.openlocfilehash: cc78860f04275049b1413b8d6ab82e56cbda37ac ms.sourcegitcommit: 5b6d778ebb269ee6684fb57ad69a8c28b06235b9 ms.translationtype: MT ms.contentlocale: de-DE ms.lasthandoff: 04/08/2019 ms.locfileid: "59136005" --- # <a name="systemservicemodelsecuritysecuritysessionclosedresponsesendfailure"></a>System.ServiceModel.Security.SecuritySessionClosedResponseSendFailure System.ServiceModel.Security.SecuritySessionClosedResponseSendFailure ## <a name="description"></a>Beschreibung Beim Senden einer "Sicherheitssitzung wurde geschlossen"-Antwort an den Client ist ein Fehler aufgetreten. ## <a name="see-also"></a>Siehe auch - [Ablaufverfolgung](../../../../../docs/framework/wcf/diagnostics/tracing/index.md) - [Verwenden der Ablaufverfolgung zum Beheben von Anwendungsfehlern](../../../../../docs/framework/wcf/diagnostics/tracing/using-tracing-to-troubleshoot-your-application.md) - [Verwaltung und Diagnose](../../../../../docs/framework/wcf/diagnostics/index.md)
49.608696
173
0.797546
deu_Latn
0.372777
9aaaf32841c3a919069faad20bc77c991614c20b
90,270
md
Markdown
articles/blockchain-workbench/blockchain-workbench-database-views.md
andreatosato/azure-docs.it-it
7023e6b19af61da4bb4cdad6e4453baaa94f76c3
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/blockchain-workbench/blockchain-workbench-database-views.md
andreatosato/azure-docs.it-it
7023e6b19af61da4bb4cdad6e4453baaa94f76c3
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/blockchain-workbench/blockchain-workbench-database-views.md
andreatosato/azure-docs.it-it
7023e6b19af61da4bb4cdad6e4453baaa94f76c3
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: Viste di database in Azure Blockchain Workbench description: Panoramica delle viste di database del database SQL in Azure Blockchain Workbench. services: azure-blockchain keywords: '' author: PatAltimore ms.author: patricka ms.date: 5/1/2018 ms.topic: article ms.service: azure-blockchain ms.reviewer: mmercuri manager: femila ms.openlocfilehash: b1e12da5a9c83f06cdc6f541b521b4feb1a73231 ms.sourcegitcommit: e221d1a2e0fb245610a6dd886e7e74c362f06467 ms.translationtype: HT ms.contentlocale: it-IT ms.lasthandoff: 05/07/2018 ms.locfileid: "33767033" --- # <a name="database-views-in-azure-blockchain-workbench"></a>Viste di database in Azure Blockchain Workbench Azure Blockchain Workbench invia dati da libri mastri distribuiti a un database SQL *off-chain*. In questo modo, è possibile usare SQL e gli strumenti esistenti, come [SQL Server Management Studio](https://docs.microsoft.com/sql/ssms/download-sql-server-management-studio-ssms?view=sql-server-2017), per interagire con i dati delle blockchain. Azure Blockchain Workbench offre un set di viste di database che forniscono accesso a dati utili quando si eseguono le query. Queste viste sono notevolmente denormalizzate per permettere di iniziare rapidamente a creare report e analisi e utilizzare in altri modi i dati delle blockchain con gli strumenti esistenti, senza dover formare di nuovo il personale che si occupa dei database. Questa sezione include una panoramica delle viste di database e dei dati che contengono. > [!NOTE] > Anche se possibile, oltre a queste viste non è supportato alcun utilizzo diretto delle tabelle di database presenti nel database. > ## <a name="vwapplication"></a>vwApplication Questa vista fornisce informazioni dettagliate sulle **applicazioni** che sono state caricate in Azure Blockchain Workbench. | NOME | type | Può essere Null | DESCRIZIONE | |----------------------------------|---------------|-------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | ApplicationId | int | No | Identificatore univoco dell'applicazione. | | ApplicationName | nvarchar(50) | No | Il nome dell'applicazione. | | ApplicationDescription | nvarchar(255) | Sì | Descrizione dell'applicazione | | ApplicationDisplayName | nvarchar(255) | No | Nome da visualizzare in un'interfaccia utente. | | ApplicationEnabled | bit | No | Specifica se l'applicazione è attualmente abilitata.</br> **Nota:** anche se un'applicazione può essere indicata come disabilitata nel database, i contratti associati restano nella blockchain e i dati su questi contratti restano nel database. | | UploadedDtTm | datetime2(7) | No | Data e ora in cui un contratto è stato caricato. | | UploadedByUserId | int | No | ID dell'utente che ha caricato l'applicazione. | | UploadedByUserExternalId | nvarchar(255) | No | Identificatore esterno dell'utente che ha caricato l'applicazione. Per impostazione predefinita, è l'ID dell'utente di Azure Active Directory per il consorzio. | | UploadedByUserProvisioningStatus | int | No | Identifica lo stato corrente del processo di provisioning per l'utente. Valori possibili:</br>0: l'utente è stato creato dall'API<br>1: una chiave è stata associata all'utente nel database</br>2: è stato effettuato il provisioning completo per l'utente | | UploadedByUserFirstName | nvarchar(50) | Sì | Nome dell'utente che ha caricato il contratto. | | UploadedByUserLastName | nvarchar(50) | Sì | Cognome dell'utente che ha caricato il contratto. | | UploadedByUserEmailAddress | nvarchar(255) | Sì | Indirizzo di posta elettronica dell'utente che ha caricato il contratto. | ## <a name="vwapplicationrole"></a>vwApplicationRole Questa vista fornisce informazioni dettagliate sui ruoli definiti nelle applicazioni Azure Blockchain Workbench. In un'applicazione di *trasferimento di asset*, ad esempio, è possibile definire ruoli come *Acquirente* e *Venditore*. | NOME | type | Può essere Null | DESCRIZIONE | |------------------------|------------------|-------------|---------------------------------------------------| | ApplicationId | int | No | Identificatore univoco dell'applicazione | | ApplicationName | nvarchar(50) | No | Nome dell'applicazione | | ApplicationDescription | nvarchar(255) | Sì | Descrizione dell'applicazione | | ApplicationDisplayName | nvarchar(255) | No | Nome da visualizzare in un'interfaccia utente | | RoleId | int | No | Identificatore univoco di un ruolo nell'applicazione | | RoleName | nvarchar(50) | No | Nome del ruolo | | RoleDescription | description(255) | Sì | Descrizione del ruolo | ## <a name="vwapplicationroleuser"></a>vwApplicationRoleUser Questa vista fornisce informazioni dettagliate sui ruoli definiti nelle applicazioni Azure Blockchain Workbench e sui rispettivi utenti associati. In un'applicazione di *trasferimento degli asset*, ad esempio, *John Smith* può essere associato al ruolo *Acquirente*. | NOME | type | Può essere Null | DESCRIZIONE | |----------------------------|---------------|-------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | ApplicationId | int | No | Identificatore univoco dell'applicazione | | ApplicationName | nvarchar(50) | No | Nome dell'applicazione | | ApplicationDescription | nvarchar(255) | Sì | Descrizione dell'applicazione | | ApplicationDisplayName | nvarchar(255) | No | Nome da visualizzare in un'interfaccia utente | | ApplicationRoleId | int | No | Identificatore univoco di un ruolo nell'applicazione | | ApplicationRoleName | nvarchar(50) | No | Nome del ruolo | | ApplicationRoleDescription | nvarchar(255) | Sì | Descrizione del ruolo | | UserId | int | No | ID dell'utente associato al ruolo. | | UserExternalId | nvarchar(255) | No | Identificatore esterno dell'utente associato al ruolo. Per impostazione predefinita, è l'ID dell'utente di Azure Active Directory per il consorzio. | | UserProvisioningStatus | int | No | Identifica lo stato corrente del processo di provisioning per l'utente. I valori possibili sono:</br>0: l'utente è stato creato dall'API</br>1: una chiave è stata associata all'utente nel database<br>2: è stato effettuato il provisioning completo per l'utente | | UserFirstName | nvarchar(50) | Sì | Nome dell'utente associato al ruolo. | | UserLastName | nvarchar(255) | Sì | Cognome dell'utente associato al ruolo. | | UserEmailAddress | nvarchar(255) | Sì | Indirizzo di posta elettronica dell'utente associato al ruolo. | ## <a name="vwconnectionuser"></a>vwConnectionUser Questa vista fornisce informazioni dettagliate sulle connessioni definite in Azure Blockchain Workbench e sui rispettivi utenti associati. Per ogni connessione, questa vista contiene i dati seguenti: - Informazioni dettagliate sui libri mastri associati - Informazioni sugli utenti associati | NOME | type | Può essere Null | DESCRIZIONE | |--------------------------|---------------|-------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | ConnectionId | int | No | Identificatore univoco di una connessione in Azure Blockchain Workbench. | | ConnectionEndpointUrl | nvarchar(50) | No | URL dell'endpoint per una connessione. | | ConnectionFundingAccount | nvarchar(255) | Sì | Conto finanziario associato a una connessione, se applicabile. | | LedgerId | int | No | Identificatore univoco di un libro mastro. | | LedgerName | nvarchar(50) | No | Nome del libro mastro. | | LedgerDisplayName | nvarchar(255) | No | Nome del libro mastro da visualizzare nell'interfaccia utente. | | UserId | int | No | ID dell'utente associato alla connessione. | | UserExternalId | nvarchar(255) | No | Identificatore esterno dell'utente associato alla connessione. Per impostazione predefinita, è l'ID dell'utente di Azure Active Directory per il consorzio. | | UserProvisioningStatus | int | No |Identifica lo stato corrente del processo di provisioning per l'utente. I valori possibili sono:</br>0: l'utente è stato creato dall'API</br>1: una chiave è stata associata all'utente nel database<br>2: è stato effettuato il provisioning completo per l'utente | | UserFirstName | nvarchar(50) | Sì | Nome dell'utente associato alla connessione. | | UserLastName | nvarchar(255) | Sì | Cognome dell'utente associato alla connessione. | | UserEmailAddress | nvarchar(255) | Sì | Indirizzo di posta elettronica dell'utente associato alla connessione. | ## <a name="vwcontract"></a>vwContract Questa vista fornisce informazioni dettagliate sui contratti distribuiti. Per ogni contratto, questa vista contiene i dati seguenti: - Definizione dell'applicazione associata - Definizione del flusso di lavoro associato - Implementazione del libro mastro associato per la funzione - Informazioni dettagliate sull'utente che ha avviato l'azione - Informazioni dettagliate correlate al blocco e alla transazione della blockchain | NOME | type | Può essere Null | DESCRIZIONE | |------------------------------------------|----------------|-------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | ConnectionId | int | No | Identificatore univoco di una connessione in Azure Blockchain Workbench. | | ConnectionEndpointUrl | nvarchar(50) | No | URL dell'endpoint per una connessione. | | ConnectionFundingAccount | nvarchar(255) | Sì | Conto finanziario associato a una connessione, se applicabile. | | LedgerId | int | No | Identificatore univoco di un libro mastro. | | LedgerName | nvarchar(50) | No | Nome del libro mastro. | | LedgerDisplayName | nvarchar(255) | No | Nome del libro mastro da visualizzare nell'interfaccia utente. | | ApplicationId | int | No | Identificatore univoco dell'applicazione. | | ApplicationName | nvarchar(50) | No | Il nome dell'applicazione. | | ApplicationDisplayName | nvarchar(255) | No | Nome da visualizzare in un'interfaccia utente. | | ApplicationEnabled | bit | No | Specifica se l'applicazione è attualmente abilitata.</br> **Nota:** anche se un'applicazione può essere indicata come disabilitata nel database, i contratti associati restano nella blockchain e i dati su questi contratti restano nel database. | | WorkflowId | int | No | Identificatore univoco del flusso di lavoro associato a un contratto. | | WorkflowName | nvarchar(50) | No | Nome del flusso di lavoro associato a un contratto. | | WorkflowDisplayName | nvarchar(255) | No | Nome del flusso di lavoro associato al contratto da visualizzare nell'interfaccia utente. | | WorkflowDescription | nvarchar(255) | Sì | Descrizione del flusso di lavoro associato a un contratto. | | ContractCodeId | int | No | Identificatore univoco del codice del contratto associato al contratto. | | ContractFileName | int | No | Nome del file contenente il codice del contratto intelligente per questo flusso di lavoro. | | ContractUploadedDtTm | int | No | Data e ora in cui il codice del contratto è stato caricato. | | ContractId | int | No | Identificatore univoco del contratto. | | ContractProvisioningStatus | int | No | Cognome dell'utente che ha distribuito il contratto. | | ContractLedgerIdentifier | nvarchar(255) | | Indirizzo di posta elettronica dell'utente che ha distribuito il contratto. | | ContractDeployedByUserId | int | No | Identificatore esterno dell'utente che ha distribuito il contratto. Per impostazione predefinita, è il GUID che rappresenta l'ID Azure Active Directory per l'utente. | | ContractDeployedByUserExternalId | nvarchar(255) | No | Identificatore esterno dell'utente che ha distribuito il contratto. Per impostazione predefinita, è il GUID che rappresenta l'ID Azure Active Directory per l'utente. | | ContractDeployedByUserProvisioningStatus | int | No | Identifica lo stato corrente del processo di provisioning per l'utente. I valori possibili sono:</br>0: l'utente è stato creato dall'API</br>1: una chiave è stata associata all'utente nel database </br>2: è stato effettuato il provisioning completo per l'utente | | ContractDeployedByUserFirstName | nvarchar(50) | Sì | Nome dell'utente che ha distribuito il contratto. | | ContractDeployedByUserLastName | nvarchar(255) | Sì | Cognome dell'utente che ha distribuito il contratto. | | ContractDeployedByUserEmailAddress | nvarchar(255) | Sì | Indirizzo di posta elettronica dell'utente che ha distribuito il contratto. | ## <a name="vwcontractaction"></a>vwContractAction Questa vista rappresenta la maggior parte delle informazioni correlate ad azioni eseguite sui contratti ed è progettata per semplificare notevolmente scenari comuni di creazione di report. Per ogni azione eseguita, questa vista contiene i dati seguenti: - Definizione dell'applicazione associata - Definizione del flusso di lavoro associato - Definizione di funzioni e parametri del contratto intelligente associato - Implementazione del libro mastro associato per la funzione - Valori di istanza specifici forniti per i parametri - Informazioni dettagliate sull'utente che ha avviato l'azione - Informazioni dettagliate correlate al blocco e alla transazione della blockchain | NOME | type | Può essere Null | DESCRIZIONE | |------------------------------------------|---------------|-------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | ApplicationId | int | No | Identificatore univoco dell'applicazione. | | ApplicationName | nvarchar(50) | No | Il nome dell'applicazione. | | ApplicationDisplayName | nvarchar(255) | No | Nome da visualizzare in un'interfaccia utente. | | ApplicationEnabled | bit | No | Questo campo specifica se l'applicazione è attualmente abilitata. Nota: anche se un'applicazione può essere indicata come disabilitata nel database, i contratti associati restano nella blockchain e i dati sui questi contratti restano nel database. | | WorkflowId | int | No | Identificatore univoco di un flusso di lavoro. | | WorkflowName | nvarchar(50) | No | Nome del flusso di lavoro. | | WorkflowDisplayName | nvarchar(255) | No | Nome del flusso di lavoro da visualizzare in un'interfaccia utente. | | WorkflowDescription | nvarchar(255) | Sì | Descrizione del flusso di lavoro. | | ContractId | int | No | Identificatore univoco del contratto. | | ContractProvisioningStatus | int | No | Identifica lo stato corrente del processo di provisioning per il contratto. I valori possibili sono:</br>0: il contratto è stato creato dall'API nel database</br>1: il contratto è stato inviato al libro mastro</br>2: il contratto è stato distribuito correttamente nel libro mastro | | ContractCodeId | int | No | Identificatore univoco per l'implementazione del codice del contratto. | | ContractLedgerIdentifier | nvarchar(255) | Sì | Identificatore univoco associato alla versione distribuita di un contratto intelligente per un libro mastro distribuito specifico. Ad esempio, Ethereum. | | ContractDeployedByUserId | int | No | Identificatore univoco dell'utente che ha distribuito il contratto. | | ContractDeployedByUserFirstName | nvarchar(50) | Sì | Nome dell'utente che ha distribuito il contratto. | | ContractDeployedByUserLastName | nvarchar(255) | Sì | Cognome dell'utente che ha distribuito il contratto. | | ContractDeployedByUserExternalId | nvarchar(255) | No | Identificatore esterno dell'utente che ha distribuito il contratto. Per impostazione predefinita, è il GUID che rappresenta l'identità dell'utente in Azure Active Directory per il consorzio. | | ContractDeployedByUserEmailAddress | nvarchar(255) | Sì | Indirizzo di posta elettronica dell'utente che ha distribuito il contratto. | | WorkflowFunctionId | int | No | Identificatore univoco di una funzione del flusso di lavoro. | | WorkflowFunctionName | nvarchar(50) | No | Nome della funzione. | | WorkflowFunctionDisplayName | nvarchar(255) | No | Nome di una funzione da visualizzare nell'interfaccia utente. | | WorkflowFunctionDescription | nvarchar(255) | No | Descrizione della funzione. | | ContractActionId | int | No | Identificatore univoco di un'azione del contratto. | | ContractActionProvisioningStatus | int | No | Identifica lo stato corrente del processo di provisioning per l'azione del contratto. I valori possibili sono:</br>0: l'azione del contratto è stata creata dall'API nel database</br>1: l'azione del contratto è stata inviata al libro mastro</br>2: l'azione del contratto è stata distribuita correttamente nel libro mastro | | ContractActionTimestamp | datetime(2,7) | No | Timestamp dell'azione del contratto. | | ContractActionExecutedByUserId | int | No | Identificatore univoco dell'utente che ha eseguito l'azione del contratto. | | ContractActionExecutedByUserFirstName | int | Sì | Nome dell'utente che ha eseguito l'azione del contratto. | | ContractActionExecutedByUserLastName | nvarchar(50) | Sì | Cognome dell'utente che ha eseguito l'azione del contratto. | | ContractActionExecutedByUserExternalId | nvarchar(255) | Sì | Identificatore esterno dell'utente che ha eseguito l'azione del contratto. Per impostazione predefinita, è il GUID che rappresenta l'identità dell'utente in Azure Active Directory per il consorzio. | | ContractActionExecutedByUserEmailAddress | nvarchar(255) | Sì | Indirizzo di posta elettronica dell'utente che ha eseguito l'azione del contratto. | | WorkflowFunctionParameterId | int | No | Identificatore univoco di un parametro della funzione. | | WorkflowFunctionParameterName | nvarchar(50) | No | Nome di un parametro della funzione. | | WorkflowFunctionParameterDisplayName | nvarchar(255) | No | Nome di un parametro della funzione da visualizzare nell'interfaccia utente. | | WorkflowFunctionParameterDataTypeId | int | No | Identificatore univoco del tipo di dati associato a un parametro della funzione del flusso di lavoro. | | WorkflowParameterDataTypeName | nvarchar(50) | No | Nome del tipo di dati associato a un parametro della funzione del flusso di lavoro. | | ContractActionParameterValue | nvarchar(255) | No | Valore del parametro archiviato nel contratto intelligente. | | BlockHash | nvarchar(255) | Sì | Hash del blocco. | | BlockNumber | int | Sì | Numero del blocco nel libro mastro. | | BlockTimestamp | datetime(2,7) | Sì | Timestamp del blocco. | | TransactionId | int | No | Identificatore univoco della transazione. | | TransactionFrom | nvarchar(255) | Sì | Parte da cui ha avuto origine la transazione. | | TransactionTo | nvarchar(255) | Sì | Parte con cui è stata effettuata la transazione. | | TransactionHash | nvarchar(255) | Sì | Hash di una transazione. | | TransactionIsWorkbenchTransaction | bit | Sì | Bit che specifica se la transazione è una transazione di Azure Blockchain Workbench. | | TransactionProvisioningStatus | int | Sì | Identifica lo stato corrente del processo di provisioning per la transazione. I valori possibili sono:</br>0: la transazione è stata creata dall'API nel database</br>1: la transazione è stata inviata al libro mastro</br>2: la transazione è stata distribuita correttamente nel libro mastro | | TransactionValue | decimal(32,2) | Sì | Valore della transazione. | ## <a name="vwcontractproperty"></a>vwContractProperty Questa vista rappresenta la maggior parte delle informazioni correlate alle proprietà associate a un contratto ed è progettata per semplificare notevolmente scenari comuni di creazione di report. Per ogni proprietà acquisita, questa vista contiene i dati seguenti: - Definizione dell'applicazione associata - Definizione del flusso di lavoro associato - Informazioni dettagliate sull'utente che ha distribuito il flusso di lavoro - Definizione della proprietà del contratto intelligente associata - Valori di istanza specifici per le proprietà - Informazioni dettagliate per la proprietà di stato del contratto | NOME | type | Può essere Null | DESCRIZIONE | |------------------------------------|---------------|-------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | ApplicationId | int | No | Identificatore univoco dell'applicazione. | | ApplicationName | nvarchar(50) | No | Il nome dell'applicazione. | | ApplicationDisplayName | nvarchar(255) | No | Nome da visualizzare in un'interfaccia utente. | | ApplicationEnabled | bit | No | Specifica se l'applicazione è attualmente abilitata.</br>**Nota:** anche se un'applicazione può essere indicata come disabilitata nel database, i contratti associati restano nella blockchain e i dati su questi contratti restano nel database. | | WorkflowId | int | No | Identificatore univoco del flusso di lavoro. | | WorkflowName | nvarchar(50) | No | Nome del flusso di lavoro. | | WorkflowDisplayName | nvarchar(255) | No | Nome del flusso di lavoro da visualizzare in un'interfaccia utente. | | WorkflowDescription | nvarchar(255) | Sì | Descrizione del flusso di lavoro. | | ContractId | int | No | Identificatore univoco del contratto. | | ContractProvisioningStatus | int | No | Identifica lo stato corrente del processo di provisioning per il contratto. I valori possibili sono:</br>0: il contratto è stato creato dall'API nel database</br>1: il contratto è stato inviato al libro mastro</br>2: il contratto è stato distribuito correttamente nel libro mastro | | ContractCodeId | int | No | Identificatore univoco per l'implementazione del codice del contratto. | | ContractLedgerIdentifier | nvarchar(255) | Sì | Identificatore univoco associato alla versione distribuita di un contratto intelligente per un libro mastro distribuito specifico. Ad esempio, Ethereum. | | ContractDeployedByUserId | int | No | Identificatore univoco dell'utente che ha distribuito il contratto. | | ContractDeployedByUserFirstName | nvarchar(50) | Sì | Nome dell'utente che ha distribuito il contratto. | | ContractDeployedByUserLastName | nvarchar(255) | Sì | Cognome dell'utente che ha distribuito il contratto. | | ContractDeployedByUserExternalId | nvarchar(255) | No | Identificatore esterno dell'utente che ha distribuito il contratto. Per impostazione predefinita, è il GUID che rappresenta l'identità dell'utente in Azure Active Directory per il consorzio. | | ContractDeployedByUserEmailAddress | nvarchar(255) | Sì | Indirizzo di posta elettronica dell'utente che ha distribuito il contratto. | | WorkflowPropertyId | int | | Identificatore univoco di una proprietà di un flusso di lavoro. | | WorkflowPropertyDataTypeId | int | No | ID del tipo di dati della proprietà. | | WorkflowPropertyDataTypeName | nvarchar(50) | No | Nome del tipo di dati della proprietà. | | WorkflowPropertyName | nvarchar(50) | No | Nome della proprietà del flusso di lavoro. | | WorkflowPropertyDisplayName | nvarchar(255) | No | Nome visualizzato della proprietà del flusso di lavoro. | | WorkflowPropertyDescription | nvarchar(255) | Sì | Descrizione della proprietà. | | ContractPropertyValue | nvarchar(255) | No | Valore di una proprietà nel contratto. | | StateName | nvarchar(50) | Sì | Se la proprietà contiene lo stato del contratto, questo è il nome visualizzato per lo stato. Se non è associata allo stato, il valore sarà null. | | StateDisplayName | nvarchar(255) | No | Se la proprietà contiene lo stato, questo è il nome visualizzato dello stato. Se non è associata allo stato, il valore sarà null. | | StateValue | nvarchar(255) | Sì | Se la proprietà contiene lo stato, questo è il valore dello stato. Se non è associata allo stato, il valore sarà null. | ## <a name="vwcontractstate"></a>vwContractState Questa vista rappresenta la maggior parte delle informazioni correlate allo stato di un contratto specifico ed è progettata per semplificare notevolmente scenari comuni di creazione di report. Ogni record in questa vista contiene i dati seguenti: - Definizione dell'applicazione associata - Definizione del flusso di lavoro associato - Informazioni dettagliate sull'utente che ha distribuito il flusso di lavoro - Definizione della proprietà del contratto intelligente associata - Informazioni dettagliate per la proprietà di stato del contratto | NOME | type | Può essere Null | DESCRIZIONE | |------------------------------------|---------------|-------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | ApplicationId | int | No | Identificatore univoco dell'applicazione. | | ApplicationName | nvarchar(50) | No | Il nome dell'applicazione. | | ApplicationDisplayName | nvarchar(255) | No | Nome da visualizzare in un'interfaccia utente. | | ApplicationEnabled | bit | No | Specifica se l'applicazione è attualmente abilitata.</br>**Nota:** anche se un'applicazione può essere indicata come disabilitata nel database, i contratti associati restano nella blockchain e i dati su questi contratti restano nel database. | | WorkflowId | int | No | Identificatore univoco del flusso di lavoro. | | WorkflowName | nvarchar(50) | No | Nome del flusso di lavoro. | | WorkflowDisplayName | nvarchar(255) | No | Nome da visualizzare nell'interfaccia utente. | | WorkflowDescription | nvarchar(255) | Sì | Descrizione del flusso di lavoro. | | ContractLedgerImplementationId | nvarchar(255) | Sì | Identificatore univoco associato alla versione distribuita di un contratto intelligente per un libro mastro distribuito specifico. Ad esempio, Ethereum. | | ContractId | int | No | Identificatore univoco del contratto. | | ContractProvisioningStatus | int | No |Identifica lo stato corrente del processo di provisioning per il contratto. I valori possibili sono:</br>0: il contratto è stato creato dall'API nel database</br>1: il contratto è stato inviato al libro mastro</br>2: il contratto è stato distribuito correttamente nel libro mastro | | ConnectionId | int | No | Identificatore univoco dell'istanza di blockchain in cui viene distribuito il flusso di lavoro | | ContractCodeId | int | No | Identificatore univoco per l'implementazione del codice del contratto. | | ContractDeployedByUserId | int | No | Identificatore univoco dell'utente che ha distribuito il contratto. | | ContractDeployedByUserExternalId | nvarchar(255) | No | Identificatore esterno dell'utente che ha distribuito il contratto. Per impostazione predefinita, è il GUID che rappresenta l'identità dell'utente in Azure Active Directory per il consorzio. | | ContractDeployedByUserFirstName | nvarchar(50) | Sì | Nome dell'utente che ha distribuito il contratto. | | ContractDeployedByUserLastName | nvarchar(255) | Sì | Cognome dell'utente che ha distribuito il contratto. | | ContractDeployedByUserEmailAddress | nvarchar(255) | Sì | Indirizzo di posta elettronica dell'utente che ha distribuito il contratto. | | WorkflowPropertyId | int | No | Identificatore univoco di una proprietà del flusso di lavoro. | | WorkflowPropertyDataTypeId | int | No | ID del tipo di dati della proprietà del flusso di lavoro. | | WorkflowPropertyDataTypeName | nvarchar(50) | No | Nome del tipo di dati della proprietà del flusso di lavoro. | | WorkflowPropertyName | nvarchar(50) | No | Nome della proprietà del flusso di lavoro. | | WorkflowPropertyDisplayName | nvarchar(255) | No | Nome visualizzato della proprietà da visualizzare in un'interfaccia utente. | | WorkflowPropertyDescription | nvarchar(255) | Sì | Descrizione della proprietà. | | ContractPropertyValue | nvarchar(255) | No | Valore di una proprietà archiviata nel contratto. | | StateName | nvarchar(50) | Sì | Se la proprietà contiene lo stato, questo è il nome visualizzato dello stato. Se non è associata allo stato, il valore sarà null. | | StateDisplayName | nvarchar(255) | No | Se la proprietà contiene lo stato, questo è il nome visualizzato dello stato. Se non è associata allo stato, il valore sarà null. | | StateValue | nvarchar(255) | Sì | Se la proprietà contiene lo stato, questo è il valore dello stato. Se non è associata allo stato, il valore sarà null. | ## <a name="vwuser"></a>vwUser Questa visualizzazione fornisce informazioni dettagliate sui membri del consorzio di cui viene effettuato il provisioning per l'uso di Azure Blockchain Workbench. Per impostazione predefinita, i dati vengono popolati tramite il provisioning iniziale dell'utente. | NOME | type | Può essere Null | DESCRIZIONE | |--------------------|---------------|-------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | ID | int | No | Identificatore univoco di un utente. | | ExternalID | nvarchar(255) | No | Identificatore esterno di un utente. Per impostazione predefinita, è il GUID che rappresenta l'ID Azure Active Directory per l'utente. | | ProvisioningStatus | int | No |Identifica lo stato corrente del processo di provisioning per l'utente. I valori possibili sono:</br>0: l'utente è stato creato dall'API</br>1: una chiave è stata associata all'utente nel database<br>2: è stato effettuato il provisioning completo per l'utente | | FirstName | nvarchar(50) | Sì | Nome dell'utente. | | LastName | nvarchar(50) | Sì | Cognome dell'utente. | | EmailAddress | nvarchar(255) | Sì | Indirizzo di posta elettronica dell'utente. | ## <a name="vwworkflow"></a>vwWorkflow Questa vista rappresenta informazioni dettagliate sui metadati del flusso di lavoro di base, insieme alle funzioni e ai parametri del flusso di lavoro. Progettata per la creazione di report, contiene anche i metadati relativi all'applicazione associata al flusso di lavoro. Questa vista contiene dati di più tabelle sottostanti per semplificare la creazione di report sui flussi di lavoro. Per ogni flusso di lavoro, questa vista contiene i dati seguenti: - Definizione dell'applicazione associata - Definizione del flusso di lavoro associato - Informazioni sullo stato iniziale del flusso di lavoro associato | NOME | type | Può essere Null | DESCRIZIONE | |-----------------------------------|---------------|-------------|--------------------------------------------------------------------------------------------------------------------------------------------| | ApplicationId | int | No | Identificatore univoco dell'applicazione | | ApplicationName | nvarchar(50) | No | Nome dell'applicazione | | ApplicationDisplayName | nvarchar(255) | No | Nome da visualizzare in un'interfaccia utente | | ApplicationEnabled | bit | No | Specifica se l'applicazione è abilitata | | WorkflowId | int | Sì | Identificatore univoco del flusso di lavoro | | WorkflowName | nvarchar(50) | No | Nome del flusso di lavoro | | WorkflowDisplayName | nvarchar(255) | No | Nome da visualizzare nell'interfaccia utente | | WorkflowDescription | nvarchar(255) | Sì | Descrizione del flusso di lavoro. | | WorkflowConstructorFunctionId | int | No | Identificatore della funzione del flusso di lavoro che funge da costruttore per il flusso di lavoro. | | WorkflowStartStateId | int | No | Identificatore univoco dello stato. | | WorkflowStartStateName | nvarchar(50) | No | Nome dello stato. | | WorkflowStartStateDisplayName | nvarchar(255) | No | Nome dello stato da visualizzare nell'interfaccia utente. | | WorkflowStartStateDescription | nvarchar(255) | Sì | Descrizione dello stato del flusso di lavoro. | | WorkflowStartStateStyle | nvarchar(50) | Sì | Questo valore identifica la percentuale di completamento del flusso di lavoro quando è in questo stato. | | WorkflowStartStateValue | int | No | Valore dello stato. | | WorkflowStartStatePercentComplete | int | No | Descrizione di testo che fornisce un'indicazione ai client su come eseguire il rendering di questo stato nell'interfaccia utente. Gli stati supportati includono *Operazione riuscita* e *Errore* | ## <a name="vwworkflowfunction"></a>vwWorkflowFunction Questa vista rappresenta informazioni dettagliate sui metadati del flusso di lavoro di base, insieme alle funzioni e ai parametri del flusso di lavoro. Progettata per la creazione di report, contiene anche i metadati relativi all'applicazione associata al flusso di lavoro. Questa vista contiene dati di più tabelle sottostanti per semplificare la creazione di report sui flussi di lavoro. Per ogni funzione del flusso di lavoro, questa vista contiene i dati seguenti: - Definizione dell'applicazione associata - Definizione del flusso di lavoro associato - Dettagli della funzione del flusso di lavoro | NOME | type | Può essere Null | DESCRIZIONE | |--------------------------------------|---------------|-------------|--------------------------------------------------------------------------------------| | ApplicationId | int | No | Identificatore univoco dell'applicazione | | ApplicationName | nvarchar(50) | No | Nome dell'applicazione | | ApplicationDisplayName | nvarchar(255) | No | Nome da visualizzare in un'interfaccia utente | | ApplicationEnabled | bit | No | Specifica se l'applicazione è abilitata | | WorkflowId | int | No | Identificatore univoco di un flusso di lavoro. | | WorkflowName | nvarchar(50) | No | Nome del flusso di lavoro. | | WorkflowDisplayName | nvarchar(255) | No | Nome del flusso di lavoro da visualizzare in un'interfaccia utente. | | WorkflowDescription | nvarchar(255) | Sì | Descrizione del flusso di lavoro. | | WorkflowFunctionId | int | No | Identificatore univoco di una funzione. | | WorkflowFunctionName | nvarchar(50) | Sì | Nome della funzione. | | WorkflowFunctionDisplayName | nvarchar(255) | No | Nome di una funzione da visualizzare nell'interfaccia utente | | WorkflowFunctionDescription | nvarchar(255) | Sì | Descrizione della funzione del flusso di lavoro. | | WorkflowFunctionIsConstructor | bit | No | Specifica se la funzione del flusso di lavoro è il costruttore per il flusso di lavoro. | | WorkflowFunctionParameterId | int | No | Identificatore univoco di un parametro di una funzione. | | WorkflowFunctionParameterName | nvarchar(50) | No | Nome di un parametro della funzione. | | WorkflowFunctionParameterDisplayName | nvarchar(255) | No | Nome di un parametro della funzione da visualizzare nell'interfaccia utente | | WorkflowFunctionParameterDataTypeId | int | No | Identificatore univoco del tipo di dati associato a un parametro della funzione del flusso di lavoro. | | WorkflowParameterDataTypeName | nvarchar(50) | No | Nome del tipo di dati associato a un parametro della funzione del flusso di lavoro. | ## <a name="vwworkflowproperty"></a>vwWorkflowProperty Questa vista rappresenta le proprietà definite per un flusso di lavoro. Per ogni proprietà, questa vista contiene i dati seguenti: - Definizione dell'applicazione associata - Definizione del flusso di lavoro associato - Dettagli della proprietà del flusso di lavoro | NOME | type | Può essere Null | DESCRIZIONE | |------------------------------|---------------|-------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | ApplicationId | int | No | Identificatore univoco dell'applicazione. | | ApplicationName | nvarchar(50) | No | Il nome dell'applicazione. | | ApplicationDisplayName | nvarchar(255) | No | Nome da visualizzare in un'interfaccia utente. | | ApplicationEnabled | bit | No | Specifica se l'applicazione è attualmente abilitata.</br>**Nota:** anche se un'applicazione può essere indicata come disabilitata nel database, i contratti associati restano nella blockchain e i dati su questi contratti restano nel database. | | WorkflowId | int | No | Identificatore univoco del flusso di lavoro. | | WorkflowName | nvarchar(50) | No | Nome del flusso di lavoro. | | WorkflowDisplayName | nvarchar(255) | No | Nome del flusso di lavoro da visualizzare in un'interfaccia utente. | | WorkflowDescription | nvarchar(255) | Sì | Descrizione del flusso di lavoro. | | WorkflowPropertyID | int | No | Identificatore univoco di una proprietà di un flusso di lavoro. | | WorkflowPropertyName | nvarchar(50) | No | Nome della proprietà | | WorkflowPropertyDescription | nvarchar(255) | Sì | Descrizione della proprietà | | WorkflowPropertyDisplayName | nvarchar(255) | No | Nome da visualizzare in un'interfaccia utente | | WorkflowPropertyWorkflowId | int | No | ID del flusso di lavoro cui è associata questa proprietà | | WorkflowPropertyDataTypeId | int | No | ID del tipo di dati definito per la proprietà | | WorkflowPropertyDataTypeName | nvarchar(50) | No | Nome del tipo di dati definito per la proprietà | | WorkflowPropertyIsState | bit | No | Questo campo specifica se la proprietà del flusso di lavoro contiene lo stato del flusso di lavoro. | ## <a name="vwworkflowstate"></a>vwWorkflowState Questa vista rappresenta le proprietà associate a un flusso di lavoro. Per ogni contratto, questa vista contiene i dati seguenti: - Definizione dell'applicazione associata - Definizione del flusso di lavoro associato - Informazioni sullo stato del flusso di lavoro | NOME | type | Può essere Null | DESCRIZIONE | |------------------------------|---------------|-------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | ApplicationId | int | No | Identificatore univoco dell'applicazione. | | ApplicationName | nvarchar(50) | No | Il nome dell'applicazione. | | ApplicationDisplayName | nvarchar(255) | No | Descrizione dell'applicazione | | ApplicationEnabled | bit | No | Specifica se l'applicazione è attualmente abilitata.</br>**Nota:** anche se un'applicazione può essere indicata come disabilitata nel database, i contratti associati restano nella blockchain e i dati su questi contratti restano nel database. | | WorkflowId | int | No | Identificatore univoco del flusso di lavoro. | | WorkflowName | nvarchar(50) | No | Nome del flusso di lavoro. | | WorkflowDisplayName | nvarchar(255) | No | Nome del flusso di lavoro da visualizzare nell'interfaccia utente. | | WorkflowDescription | nvarchar(255) | Sì | Descrizione del flusso di lavoro. | | WorkflowStateID | int | No | Identificatore univoco dello stato. | | WorkflowStateName | nvarchar(50) | No | Nome dello stato. | | WorkflowStateDisplayName | nvarchar(255) | No | Nome dello stato da visualizzare nell'interfaccia utente. | | WorkflowStateDescription | nvarchar(255) | Sì | Descrizione dello stato del flusso di lavoro. | | WorkflowStatePercentComplete | int | No | Questo valore identifica la percentuale di completamento del flusso di lavoro quando è in questo stato. | | WorkflowStateValue | nvarchar(50) | No | Valore dello stato. | | WorkflowStateStyle | nvarchar(50) | No | Descrizione di testo che fornisce un'indicazione ai client su come eseguire il rendering di questo stato nell'interfaccia utente. Gli stati supportati includono *Operazione riuscita* e *Errore* |
218.571429
477
0.313382
ita_Latn
0.994616
9aab250b1d32656bef4b7056e4845e17f48684a8
773
md
Markdown
CustomMarquee/README.md
posburn/tidbyt-apps
29cd6779f1ec7c73010d912ff331d4e6adc1248d
[ "MIT" ]
null
null
null
CustomMarquee/README.md
posburn/tidbyt-apps
29cd6779f1ec7c73010d912ff331d4e6adc1248d
[ "MIT" ]
null
null
null
CustomMarquee/README.md
posburn/tidbyt-apps
29cd6779f1ec7c73010d912ff331d4e6adc1248d
[ "MIT" ]
null
null
null
# Custom Marquee 'widget' Scroll text vertically in your Tidbyt apps with this custom marquee code. Features: * Delay before the text begins scrolling * Use in animations * Supports centered text * Add a simple drop shadow * Simple support for recongizing hyphenation See the [custommarquee.star](https://github.com/posburn/tidbyt-apps/blob/main/CustomMarquee/custommarquee.star) file for an example of how to use the code. ### Examples: #### Simple with initial delay: ![](https://github.com/posburn/tidbyt-apps/blob/main/CustomMarquee/default.gif) #### Centered: ![](https://github.com/posburn/tidbyt-apps/blob/main/CustomMarquee/centered.gif) #### Centered with drop shadow: ![](https://github.com/posburn/tidbyt-apps/blob/main/CustomMarquee/centered-shadow.gif)
32.208333
155
0.767141
eng_Latn
0.765299
9aab695c2175ef54ab2df45e768403c65c3ffbf6
867
md
Markdown
includes/free-trial-note.md
mikaelkrief/azure-docs.fr-fr
4cdd4a1518555b738f72dd53ba366849013f258c
[ "CC-BY-4.0", "MIT" ]
null
null
null
includes/free-trial-note.md
mikaelkrief/azure-docs.fr-fr
4cdd4a1518555b738f72dd53ba366849013f258c
[ "CC-BY-4.0", "MIT" ]
null
null
null
includes/free-trial-note.md
mikaelkrief/azure-docs.fr-fr
4cdd4a1518555b738f72dd53ba366849013f258c
[ "CC-BY-4.0", "MIT" ]
3
2020-03-31T11:56:12.000Z
2021-06-04T06:51:19.000Z
> [!NOTE] > <a name="note"></a>Pour suivre ce didacticiel, vous avez besoin d’un compte Azure : > > * Vous pouvez [ouvrir un compte Azure gratuitement](https://azure.microsoft.com/pricing/free-trial/?WT.mc_id=A261C142F): vous obtenez alors des crédits dont vous pouvez vous servir pour tester les services Azure payants, et même lorsqu’ils sont épuisés, vous pouvez conserver le compte et utiliser les services Azure gratuits, notamment Sites Web. Votre carte de crédit ne sera pas débitée tant que vous n'aurez pas explicitement modifié vos paramètres pour demander à l'être. > * Vous pouvez [activer les avantages de l’abonnement MSDN](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/?WT.mc_id=A261C142F): votre abonnement MSDN vous donne droit chaque mois à des crédits dont vous pouvez vous servir pour les services Azure payants. > >
96.333333
478
0.78316
fra_Latn
0.992712
9aac347489b8210c56819c4b39907ec6f0707977
4,171
md
Markdown
articles/active-directory/hybrid/how-to-connect-create-custom-sync-rule.md
tsunami416604/azure-docs.hu-hu
aeba852f59e773e1c58a4392d035334681ab7058
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/active-directory/hybrid/how-to-connect-create-custom-sync-rule.md
tsunami416604/azure-docs.hu-hu
aeba852f59e773e1c58a4392d035334681ab7058
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/active-directory/hybrid/how-to-connect-create-custom-sync-rule.md
tsunami416604/azure-docs.hu-hu
aeba852f59e773e1c58a4392d035334681ab7058
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: Szinkronizálási szabály testreszabása Azure AD Connectban | Microsoft Docs " description: Megtudhatja, hogyan szerkesztheti vagy hozhat létre új szinkronizálási szabályt a szinkronizációs szabály szerkesztőjével. services: active-directory documentationcenter: '' author: billmath manager: daveba editor: curtand ms.service: active-directory ms.workload: identity ms.tgt_pltfrm: na ms.devlang: na ms.topic: how-to ms.date: 01/31/2019 ms.subservice: hybrid ms.author: billmath ms.collection: M365-identity-device-management ms.openlocfilehash: e2bb86988454141dc692b4a9967997c4ff7574a2 ms.sourcegitcommit: 829d951d5c90442a38012daaf77e86046018e5b9 ms.translationtype: MT ms.contentlocale: hu-HU ms.lasthandoff: 10/09/2020 ms.locfileid: "90530488" --- # <a name="how-to-customize-a-synchronization-rule"></a>Szinkronizálási szabály testre szabása ## <a name="recommended-steps"></a>**Javasolt lépések** A szinkronizálási szabály szerkesztőjével szerkesztheti vagy létrehozhatja az új szinkronizálási szabályt. A szinkronizálási szabályok módosításához speciális felhasználónak kell lennie. Előfordulhat, hogy a helytelen módosítások miatt törölheti az objektumokat a cél könyvtárából. Kérjük, olvassa el az [ajánlott dokumentumokat](#recommended-documents) a szinkronizálási szabályokkal kapcsolatos szakértelem beszerzéséhez. A szinkronizálási szabály módosításához kövesse az alábbi lépéseket: * Indítsa el a szinkronizációs szerkesztőt az asztal alkalmazás menüjében az alábbi ábrán látható módon: ![Szinkronizációs szabály szerkesztője menü](media/how-to-connect-create-custom-sync-rule/how-to-connect-create-custom-sync-rule/syncruleeditormenu.png) * Az alapértelmezett szinkronizálási szabály testreszabásához a szinkronizálási szabályok szerkesztőjének Szerkesztés gombjára kattintva hozza létre a meglévő szabályt, amely létrehozza a szabványos alapértelmezett szabály másolatát, és letiltja azt. Mentse a klónozott szabályt a 100-nál kisebb prioritással. A prioritás határozza meg, hogy milyen szabály nyeri a WINS (alacsonyabb numerikus érték) ütközéses feloldást, ha az attribútum folyamata ütközik. ![Szinkronizációs szabály szerkesztője](media/how-to-connect-create-custom-sync-rule/how-to-connect-create-custom-sync-rule/clonerule.png) * Egy adott attribútum módosításakor ideális esetben csak a módosított attribútumot kell megőrizni a klónozott szabályban. Ezután engedélyezze az alapértelmezett szabályt, hogy a módosított attribútum klónozott szabályból származik, és a többi attribútumot az alapértelmezett standard szabályból válassza ki. * Vegye figyelembe, hogy abban az esetben, ha a módosított attribútum számított értéke NULL értékű a klónozott szabályban, és nem NULL az alapértelmezett standard szabályban, akkor a nem NULL érték fog megjelenni, és a NULL értéket fogja cserélni. Ha nem szeretné, hogy NULL értékű helyett null értéket adjon meg, akkor a AuthoritativeNull a klónozott szabályban kell kiosztania. * Egy **kimenő** szabály módosításához módosítsa a szűrést a szinkronizálási szabály szerkesztőjéből. ## <a name="recommended-documents"></a>**Ajánlott dokumentumok** * [Azure AD Connect Sync: technikai fogalmak](./how-to-connect-sync-technical-concepts.md) * [Azure AD Connect Sync: az architektúra megismerése](./concept-azure-ad-connect-sync-architecture.md) * [Azure AD Connect szinkronizálás: a deklaratív kiépítés ismertetése](./concept-azure-ad-connect-sync-declarative-provisioning.md) * [Azure AD Connect szinkronizálás: a deklaratív kiépítési kifejezések ismertetése](./concept-azure-ad-connect-sync-declarative-provisioning-expressions.md) * [Az Azure AD Connect szinkronizálása: az alapértelmezett konfiguráció ismertetése](./concept-azure-ad-connect-sync-default-configuration.md) * [Azure AD Connect szinkronizálás: a felhasználók, csoportok és névjegyek ismertetése](./concept-azure-ad-connect-sync-user-and-contacts.md) * [Azure AD Connect Sync: Shadow attributes](./how-to-connect-syncservice-shadow-attributes.md) ## <a name="next-steps"></a>Következő lépések - [Azure ad Connect szinkronizálás](how-to-connect-sync-whatis.md). - [Mi az a hibrid identitás?](whatis-hybrid-identity.md).
74.482143
492
0.820906
hun_Latn
0.999981
9aac8bff6b2a1294f4c02393c128b0089b12cc74
750
md
Markdown
content/publication/2021-02-02_AI.md
frigal001/academic-kickstart
5838c069acf00c8512e2ddd2739472c8afca831e
[ "MIT" ]
null
null
null
content/publication/2021-02-02_AI.md
frigal001/academic-kickstart
5838c069acf00c8512e2ddd2739472c8afca831e
[ "MIT" ]
null
null
null
content/publication/2021-02-02_AI.md
frigal001/academic-kickstart
5838c069acf00c8512e2ddd2739472c8afca831e
[ "MIT" ]
null
null
null
+++ title = "Automated Discovery of Relationships, Models, and Principles in Ecology" date = "2020-12-11" authors = ["P. Cardoso", "V.V. Branco", "P.A.V. Borges", "J.C. Carvalho", "**F.Rigal**", "R. Gabriel", "S. Mammola", "J. Cascalho", "L. Correia"] publication_types = ["2"] publication = "**_Frontiers in Ecology and Evolution_**, 8:530135" publication_short = "**_Frontiers in Ecology and Evolution_**, 8:530135" abstract = "" abstract_short = "" image_preview = "" selected = false projects = [] tags = [] url_pdf = "publication/fevo-08-530135.pdf" url_preprint = "" url_code = "" url_dataset = "" url_project = "" url_slides = "" url_video = "" url_poster = "" url_source = "" math = true highlight = true [header] image = "" caption = "" +++
25.862069
145
0.661333
eng_Latn
0.318534
9aacad014a9e26c3df91b9c76b81b3dde34c5b5e
8,198
md
Markdown
articles/active-directory-domain-services/concepts-replica-sets.md
BielinskiLukasz/azure-docs.pl-pl
952ecca251b3e6bdc66e84e0559bbad860a886b9
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/active-directory-domain-services/concepts-replica-sets.md
BielinskiLukasz/azure-docs.pl-pl
952ecca251b3e6bdc66e84e0559bbad860a886b9
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/active-directory-domain-services/concepts-replica-sets.md
BielinskiLukasz/azure-docs.pl-pl
952ecca251b3e6bdc66e84e0559bbad860a886b9
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: Replika koncepcji dla Azure AD Domain Services | Microsoft Docs description: Dowiedz się, jakie zestawy replik znajdują się w Azure Active Directory Domain Services oraz jak zapewnia nadmiarowość dla aplikacji, które wymagają usług Identity Services. services: active-directory-ds author: MicrosoftGuyJFlo manager: daveba ms.service: active-directory ms.subservice: domain-services ms.workload: identity ms.topic: conceptual ms.date: 07/16/2020 ms.author: joflore ms.openlocfilehash: 499f4df303993d97ebb4eb38de98828b085aff00 ms.sourcegitcommit: d103a93e7ef2dde1298f04e307920378a87e982a ms.translationtype: MT ms.contentlocale: pl-PL ms.lasthandoff: 10/13/2020 ms.locfileid: "91961072" --- # <a name="replica-sets-concepts-and-features-for-azure-active-directory-domain-services-preview"></a>Funkcja Replica ustawia koncepcje i funkcje Azure Active Directory Domain Services (wersja zapoznawcza) Podczas tworzenia domeny zarządzanej Azure Active Directory Domain Services (Azure AD DS) należy zdefiniować unikatową przestrzeń nazw. Ta przestrzeń nazw jest nazwą domeny, na przykład *aaddscontoso.com*, a dwa kontrolery domeny (DC) są wdrażane w wybranym regionie platformy Azure. To wdrożenie kontrolerów domeny jest znane jako zestaw replik. Można rozszerzyć domenę zarządzaną, aby mieć więcej niż jeden zbiór replik dla dzierżawy usługi Azure AD. Zestawy replik można dodawać do dowolnej komunikacji równorzędnej sieci wirtualnej w dowolnym regionie świadczenia usługi Azure, który obsługuje usługę Azure AD DS. Dodatkowe zestawy replik w różnych regionach platformy Azure zapewniają geograficzne odzyskiwanie po awarii dla starszych aplikacji, jeśli region platformy Azure przejdzie w tryb offline. Zestawy replik są obecnie dostępne w wersji zapoznawczej. > [!NOTE] > Zestawy replik nie umożliwiają wdrażania wielu unikatowych domen zarządzanych w jednej dzierżawie platformy Azure. Każdy zestaw replik zawiera te same dane. ## <a name="how-replica-sets-work"></a>Jak działają zestawy replik Podczas tworzenia domeny zarządzanej, takiej jak *aaddscontoso.com*, zostaje utworzony początkowy zestaw replik. Dodatkowe zestawy replik mają tę samą przestrzeń nazw i konfigurację. Zmiany w usłudze Azure AD DS, w tym konfiguracja, tożsamość użytkownika i poświadczenia, grupy, obiekty zasad grupy, obiekty komputerów i inne zmiany są stosowane do wszystkich zestawów replik w domenie zarządzanej przy użyciu replikacji AD DS. Każdy zestaw replik można utworzyć w sieci wirtualnej. Każda sieć wirtualna musi być połączona za pomocą komunikacji równorzędnej z każdą inną siecią wirtualną, która hostuje zestaw replik domeny zarządzanej. Ta konfiguracja służy do tworzenia topologii sieci siatki, która obsługuje replikację katalogu. Sieć wirtualna może obsługiwać wiele zestawów replik, pod warunkiem, że każdy zestaw replik znajduje się w innej podsieci wirtualnej. Wszystkie zestawy replik są umieszczane w tej samej lokacji Active Directory. W związku z tym wszystkie zmiany są propagowane przy użyciu replikacji wewnątrzlokacyjnej na potrzeby szybkiego wywoływania zbieżności. > [!NOTE] > Nie można definiować oddzielnych lokacji i definiować ustawień replikacji między zestawami replik. Na poniższym diagramie przedstawiono domenę zarządzaną z dwoma zestawami replik. Pierwszy zestaw replik jest tworzony z przestrzeni nazw domeny. Drugi zestaw replik jest tworzony po: ![Diagram przykładowej domeny zarządzanej z dwoma zestawami replik](./media/concepts-replica-sets/two-replica-set-example.png) > [!NOTE] > Zestawy replik zapewniają dostępność usług uwierzytelniania w regionach, w których skonfigurowano zestaw replik. Aby aplikacja miała geograficzną nadmiarowość w przypadku awarii regionalnej, platforma aplikacji, która opiera się na domenie zarządzanej, musi również znajdować się w innym regionie. > > Zestaw replik nie zapewnia odporności innych usług wymaganych do działania aplikacji, takich jak maszyny wirtualne platformy Azure lub usługa Azure App Services. W projekcie dostępności innych składników aplikacji należy wziąć pod uwagę funkcje odporności dla usług, które tworzą aplikację. Poniższy przykład przedstawia domenę zarządzaną z trzema zestawami replik w celu dodatkowego zapewnienia odporności i zapewnienia dostępności usług uwierzytelniania. W obu przykładach obciążenia aplikacji znajdują się w tym samym regionie, w którym znajduje się zestaw replik domeny zarządzanej: ![Diagram przykładowej domeny zarządzanej z trzema zestawami replik](./media/concepts-replica-sets/three-replica-set-example.png) ## <a name="deployment-considerations"></a>Zagadnienia dotyczące wdrażania Domyślną *jednostką* SKU dla domeny zarządzanej jest jednostka SKU przedsiębiorstwa, która obsługuje wiele zestawów replik. Aby utworzyć dodatkowe zestawy replik w przypadku zmiany *standardowej* jednostki SKU, należy [uaktualnić domenę zarządzaną](change-sku.md) do *wersji Enterprise* lub *Premium*. Maksymalna liczba zestawów replik obsługiwanych w ramach wersji zapoznawczej to cztery, łącznie z pierwszą repliką utworzoną podczas tworzenia domeny zarządzanej. Rozliczenia dla każdego zestawu replik bazują na JEDNOSTKAch konfiguracji domeny. Na przykład jeśli masz domenę zarządzaną, która korzysta z jednostki SKU *przedsiębiorstwa* i masz trzy zestawy replik, subskrypcja jest rozliczana za godzinę dla każdego z trzech zestawów replik. ## <a name="frequently-asked-questions"></a>Często zadawane pytania ### <a name="can-i-use-my-production-managed-domain-with-this-preview"></a>Czy mogę używać mojej produkcyjnej domeny zarządzanej z tą wersją zapoznawczą? Zestawy replik są funkcją publicznej wersji zapoznawczej w Azure AD Domain Services. Można użyć produkcyjnej domeny zarządzanej, ale należy pamiętać o różnicach w zakresie pomocy technicznej istniejących dla funkcji w wersji zapoznawczej. Aby uzyskać więcej informacji o wersjach zapoznawczych, [Azure Active Directory wersji zapoznawczej umowy SLA](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). ### <a name="can-i-create-a-replica-set-in-subscription-different-from-my-managed-domain"></a>Czy można utworzyć zestaw replik w ramach subskrypcji innej niż domena zarządzana? Nie. Zestawy replik muszą znajdować się w tej samej subskrypcji co domena zarządzana. ### <a name="how-many-replica-sets-can-i-create"></a>Ile zestawów replik można utworzyć? Wersja zapoznawcza jest ograniczona do maksymalnie czterech zestawów replik — początkowej repliki zestawu dla domeny zarządzanej oraz trzech dodatkowych zestawów replik. ### <a name="how-does-user-and-group-information-get-synchronized-to-my-replica-sets"></a>Jak informacje o użytkownikach i grupach są synchronizowane z zestawami replik? Wszystkie zestawy replik są połączone ze sobą za pomocą komunikacji równorzędnej sieci wirtualnej siatki. Jeden zestaw replik otrzymuje aktualizacje użytkowników i grup z usługi Azure AD. Te zmiany są następnie replikowane do innych zestawów replik za pomocą replikacji wewnątrzlokacyjnej AD DS za pośrednictwem sieci równorzędnej. Podobnie jak w przypadku AD DS lokalnego, rozszerzony stan rozłączenia może powodować zakłócenia w replikacji. Ponieważ wirtualne sieci równorzędne nie są przechodnie, wymagania projektowe dla zestawów replik wymagają w pełni oczka topologii sieci. ### <a name="how-do-i-make-changes-in-my-managed-domain-after-i-have-replica-sets"></a>Jak mogę wprowadzić zmiany w mojej domenie zarządzanej po skonfigurowaniu zestawów replik? Zmiany w domenie zarządzanej działają tak samo jak wcześniej. Można [utworzyć maszynę wirtualną zarządzania i korzystać z niej przy użyciu narzędzi RSAT, które są dołączone do domeny zarządzanej](tutorial-create-management-vm.md). Do domeny zarządzanej można przyłączyć dowolną liczbę maszyn wirtualnych zarządzania. ## <a name="next-steps"></a>Następne kroki Aby rozpocząć pracę z zestawami replik, [Utwórz i skonfiguruj domenę zarządzaną platformy Azure AD DS][tutorial-create-advanced]. Po wdrożeniu [Utwórz i użyj dodatkowych zestawów replik][create-replica-set]. <!-- LINKS - INTERNAL --> [tutorial-create-advanced]: tutorial-create-instance-advanced.md [create-replica-set]: tutorial-create-replica-set.md
87.212766
458
0.825811
pol_Latn
0.999984
9aacee2c93d8f867ea0b995c9fe482764037f7d4
4,549
md
Markdown
README.md
AAFC-BICoE/snakemake-barcoding-assembly-pipeline
3798585eb78b948116323b1f787e98bc141b518c
[ "MIT" ]
null
null
null
README.md
AAFC-BICoE/snakemake-barcoding-assembly-pipeline
3798585eb78b948116323b1f787e98bc141b518c
[ "MIT" ]
null
null
null
README.md
AAFC-BICoE/snakemake-barcoding-assembly-pipeline
3798585eb78b948116323b1f787e98bc141b518c
[ "MIT" ]
1
2019-02-23T00:20:32.000Z
2019-02-23T00:20:32.000Z
# Snakemake Barcoding Assembly Pipeline Pipeline for processing Illumina sequencing data consisting of AAFC Diptera COI PCR amplicons. Current release is not compatible with other multi-fragment COI strategies. 1) Trims adapters and bases below 10 quality score [BBDuk](https://jgi.doe.gov/data-and-tools/bbtools/bb-tools-user-guide/bbduk-guide/) 2) Merges paired end reads into either Fragment A or Fragment B corresponding to PCR amplicons 3) Removes degenerate primers from the fragments 4) Dereplicates merged reads using VSEARCH [VSearch](https://github.com/torognes/vsearch) 5) Merges top two merged reads using Emboss Merger [Emboss](http://emboss.open-bio.org/) 6) Evaluates results and generates a multi-fasta ### Prerequisites * [Conda Installation](https://conda.io/docs/user-guide/install/index.html) * [Snakemake Installation](https://snakemake.readthedocs.io/en/stable/getting_started/installation.html) * Git Installing Miniconda + Snakemake ``` wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh bash Miniconda3-latest-Linux-x86_64.sh conda install -c bioconda -c conda-forge snakemake ``` ## Getting Started * Create and enter a working directory: * Clone repository ```bash git clone https://github.com/AAFC-BICoE/snakemake-barcoding-assembly-pipeline.git . ``` * Create a folder named "fastq" and populate with COI Illumina reads in fastq.gz format * Initialize conda environment containing snakemake ```bash source ~/miniconda3/bin/activate ``` * Invoke pipeline from within working directory. Adjust cores to suit computer ``` snakemake --use-conda -k --cores 32 ``` * Alternative pipeline to map reads to COI reference gene ``` snakemake -s barcoding_snakefile --use-conda -k --cores 32 ``` ## Methodology Pipeline was designed to handle COI genes amplified in overlapping fragments from thousands of Diptera specimens. Reads are trimmed of adaptors and poor quality bases. Paired end reads are merged into single long fragments. Each fragment is examined for primers, specifically Fragment A forward primer, and Fragment B reverse primer. If Fragment A forward primer is detected, read is trimmed of all Fragment A primers. If Fragment B reverse primer is detected, read is trimmed of all Fragment B primers. Reads are then de-replicated and abundance calculated. Ideal scenario is two very high abundance reads representing Fragment A and B, with minimal or no other reads. The top two reads are merged together to form a single CO1 sequence. Curated CO1 sequences that contain <10% contamination/error reads, have a minimum of 40 combined reads that pass all filtering steps, and are the correct 646 bp length are placed in Curated_Barcodes.fasta Quality checks for all sequences are listed in Summary_Output.csv ## Built With * [Python](https://www.python.org/doc/) - Programming language * [Conda](https://conda.io/docs/index.html) - Package, dependency and environment management * [Snakemake](https://snakemake.readthedocs.io/en/stable/) - Workflow management system * [BioPython](https://biopython.org/) - Tools for biological computation ## Copyright Government of Canada, Agriculture & Agri-Food Canada ## License This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details ## Citations * Snakemake Köster, Johannes and Rahmann, Sven. “Snakemake - A scalable bioinformatics workflow engine”. Bioinformatics 2012. * Bold Retriever Vesterinen, E. J., Ruokolainen, L., Wahlberg, N., Peña, C., Roslin, T., Laine, V. N., Vasko, V., Sääksjärvi, I. E., Norrdahl, K., and Lilley, T. M. (2016) What you need is what you eat? Prey selection by the bat Myotis daubentonii. Molecular Ecology, 25(7), 1581–1594. doi:10.1111/mec.13564 * BBTools Brian-JGI (2018) BBTools is a suite of fast, multithreaded bioinformatics tools designed for analysis of DNA and RNA sequence data.https://jgi.doe.gov/data-and-tools/bbtools/ * BOLD Ratnasingham, S. & Hebert, P. D. N. (2007). BOLD : The Barcode of Life Data System (www.barcodinglife.org). Molecular Ecology Notes 7, 355–364. DOI: 10.1111/j.1471-8286.2006.01678.x * VSearch Rognes T, Flouri T, Nichols B, Quince C, Mahé F. (2016) VSEARCH: a versatile open source tool for metagenomics. PeerJ 4:e2584. doi: 10.7717/peerj.2584 * Emboss Rice P., Longden I. and Bleasby A. EMBOSS: The European Molecular Biology Open Software Suite. Trends in Genetics. 2000 16(6):276-277 ## Author Jackson Eyres \ Bioinformatics Programmer \ Agriculture & Agri-Food Canada \ jackson.eyres@canada.ca
43.32381
135
0.770719
eng_Latn
0.87583
9aacee562e466f6824597559985f6f7580317231
11,197
md
Markdown
controls/radchart/populating-with-data/creating-chart-declaratively.md
kylemurdoch/xaml-docs
724c2772d5b1bf5c3fe254fdc0653c24d51824fc
[ "MIT", "Unlicense" ]
null
null
null
controls/radchart/populating-with-data/creating-chart-declaratively.md
kylemurdoch/xaml-docs
724c2772d5b1bf5c3fe254fdc0653c24d51824fc
[ "MIT", "Unlicense" ]
null
null
null
controls/radchart/populating-with-data/creating-chart-declaratively.md
kylemurdoch/xaml-docs
724c2772d5b1bf5c3fe254fdc0653c24d51824fc
[ "MIT", "Unlicense" ]
null
null
null
--- title: Creating a Chart Declaratively page_title: Creating a Chart Declaratively description: Check our &quot;Creating a Chart Declaratively&quot; documentation article for the RadChart {{ site.framework_name }} control. slug: radchart-populating-with-data-creating-chart-declaratively tags: creating,a,chart,declaratively published: True position: 1 --- # Creating a Chart Declaratively In some situations you might want to create a chart declaratively - in XAML. This is useful when you have static data that will not change in time. This tutorial will walk you through the common tasks of: * [Adding RadChart and setting the DefaultView property](#adding-radchart-and-setting-the-defaultview-property) * [Adding ChartTitle](#adding-charttitle) * [Adding ChartLegend](#adding-chartlegend) * [Adding ChartArea](#adding-chartarea) * [Adding DataSeries](#adding-dataseries) >In order to use __RadChart__ control in your projects you have to add references to __Telerik.Windows.Controls.Charting.dll__. ## Adding RadChart and Setting the DefaultView Property For most of the cases, where a chart has a title, a legend and a chart area, __RadChart__ control provides you with a __DefaultView__ that contains these three elements. In your XAML file, add a new __RadChart__ declaration and make a new instance of __ChartDefaultView__ which has to be set to the __RadChart.DefaultView__ property. #### __XAML__ {{region xaml-radchart-populating-with-data-creating-chart-declaratively_0}} <telerik:RadChart> <telerik:RadChart.DefaultView> <telerik:ChartDefaultView /> </telerik:RadChart.DefaultView> </telerik:RadChart> {{endregion}} >To use __RadChart__ in your XAML file, add a reference to the following namespaces: __xmlns:telerikChart="clr-namespace:Telerik.Windows.Controls;assembly=Telerik.Windows.Controls.Charting"__ and __xmlns:telerikCharting="clr-namespace:Telerik.Windows.Controls.Charting;assembly=Telerik.Windows.Controls.Charting"__ The __ChartDefaultView__ contains [ChartTitle]({%slug radchart-features-chart-title%}), [ChartLegend]({%slug radchart-features-chart-legend%}) and [ChartArea]({%slug radchart-features-chart-area%}) and you have to use them to build the chart in the way you need. The next several sections will show you how to set these properties. ## Adding ChartTitle After declaring the default view, the next step is to add a chart title. Find your __RadChart__ declaration and add chart title "Year 2009" which is positioned in the center: #### __XAML__ {{region xaml-radchart-populating-with-data-creating-chart-declaratively_1}} <telerik:RadChart VerticalAlignment="Top"> <telerik:RadChart.DefaultView> <telerik:ChartDefaultView> <telerik:ChartDefaultView.ChartTitle> <telerik:ChartTitle Content="Year 2009" HorizontalAlignment="Center"/> </telerik:ChartDefaultView.ChartTitle> </telerik:ChartDefaultView> </telerik:RadChart.DefaultView> </telerik:RadChart> {{endregion}}   ![](images/RadChart_PopulatingWithData_CreatingChartDeclaratively_010.png) ## Adding ChartLegend Add a __ChartLegend__ declaration to your default view declaration. If you want the data series which is added later in the __ChartArea__ to be automatically added in the legend, set the __UseAutoGeneratedItems__ property to __True__. #### __XAML__ {{region xaml-radchart-populating-with-data-creating-chart-declaratively_2}} <telerik:RadChart VerticalAlignment="Top"> <telerik:RadChart.DefaultView> <telerik:ChartDefaultView> <telerik:ChartDefaultView.ChartTitle> <telerik:ChartTitle Content="Year 2009" HorizontalAlignment="Center"/> </telerik:ChartDefaultView.ChartTitle> <telerik:ChartDefaultView.ChartLegend> <telerik:ChartLegend x:Name="chartLegend" UseAutoGeneratedItems="True" /> </telerik:ChartDefaultView.ChartLegend> </telerik:ChartDefaultView> </telerik:RadChart.DefaultView> </telerik:RadChart> {{endregion}} ## Adding ChartArea The third step in the populating a __RadChart__ with data is adding a __ChartArea__. >Note that to show the __DataSeries__ in the legend you have to also set the __ChartArea.LegendName__ property with the name of the __ChartLegend__ you want. #### __XAML__ {{region xaml-radchart-populating-with-data-creating-chart-declaratively_3}} <telerik:RadChart VerticalAlignment="Top"> <telerik:RadChart.DefaultView> <telerik:ChartDefaultView> <telerik:ChartDefaultView.ChartTitle> <telerik:ChartTitle Content="Year 2009" HorizontalAlignment="Center"/> </telerik:ChartDefaultView.ChartTitle> <telerik:ChartDefaultView.ChartLegend> <telerik:ChartLegend x:Name="chartLegend" UseAutoGeneratedItems="True" /> </telerik:ChartDefaultView.ChartLegend> <telerik:ChartDefaultView.ChartArea> <telerik:ChartArea LegendName="chartLegend" /> </telerik:ChartDefaultView.ChartArea> </telerik:ChartDefaultView> </telerik:RadChart.DefaultView> </telerik:RadChart> {{endregion}} ## Adding DataSeries The __ChartArea__ has a list of __DataSeries__, where you have to add one __DataSeries__ per the chart graphic you want to see. The __DataSeries__ has a property __Definition__ of type __ISeriesDefinition__ where the chart type has to be specified - __LineSeriesDefinition__ for line chart, __BarSeriesDefinition__ for bar chart, etc. The last thing you have to define is the data for each __DataSeries__: simply add as many __DataPoints__ as you need - one for each point of the chart. The __DataPoint__ class represents a single piece of data that is visualized in a chart series. For each __DataPoint__, you can define several values depending on the chart type: __XValue__ and __YValue__, __High__ and __Low__, __Open__ and __Close__, etc. These values are used later to visually calculate and draw the chart graphic. Other properties of __DataPoint__ are the __LegendLable__ and the __LegendFormat.__ The first one specifies the text displayed in the __ChartLegend__ related to that __DataSeries,__ while the second one defines the format of the labels. The XAML bellow defines __LineSeriesDefinition__, that represents the line chart showing the Turnover for year 2009. The data is defined as __DataPoints__, where __Y-Axis__ (__YValue__) is set with the desired value. The second data series is __BarSeriesDefinition__ for the Expenses defined in a similar way. #### __XAML__ {{region xaml-radchart-populating-with-data-creating-chart-declaratively_4}} <telerik:RadChart VerticalAlignment="Top"> <telerik:RadChart.DefaultView> <telerik:ChartDefaultView> <telerik:ChartDefaultView.ChartTitle> <telerik:ChartTitle Content="Year 2009" HorizontalAlignment="Center"/> </telerik:ChartDefaultView.ChartTitle> <telerik:ChartDefaultView.ChartLegend> <telerik:ChartLegend x:Name="chartLegend" UseAutoGeneratedItems="True" /> </telerik:ChartDefaultView.ChartLegend> <telerik:ChartDefaultView.ChartArea> <telerik:ChartArea LegendName="chartLegend"> <telerik:ChartArea.DataSeries> <!-- Line Chart --> <telerik:DataSeries LegendLabel="Turnover"> <telerik:DataSeries.Definition> <telerik:LineSeriesDefinition /> </telerik:DataSeries.Definition> <telerik:DataPoint YValue="154" XCategory="Jan"/> <telerik:DataPoint YValue="138" XCategory="Feb"/> <telerik:DataPoint YValue="143" XCategory="Mar"/> <telerik:DataPoint YValue="120" XCategory="Apr"/> <telerik:DataPoint YValue="135" XCategory="May"/> <telerik:DataPoint YValue="125" XCategory="Jun"/> <telerik:DataPoint YValue="179" XCategory="Jul"/> <telerik:DataPoint YValue="170" XCategory="Aug"/> <telerik:DataPoint YValue="198" XCategory="Sep"/> <telerik:DataPoint YValue="187" XCategory="Oct"/> <telerik:DataPoint YValue="193" XCategory="Nov"/> <telerik:DataPoint YValue="176" XCategory="Dec"/> </telerik:DataSeries> <!-- Bar Chart --> <telerik:DataSeries LegendLabel="Expenses"> <telerik:DataSeries.Definition> <telerik:BarSeriesDefinition /> </telerik:DataSeries.Definition> <telerik:DataPoint YValue="45" XCategory="Jan"/> <telerik:DataPoint YValue="48" XCategory="Feb"/> <telerik:DataPoint YValue="53" XCategory="Mar"/> <telerik:DataPoint YValue="41" XCategory="Apr"/> <telerik:DataPoint YValue="32" XCategory="May"/> <telerik:DataPoint YValue="28" XCategory="Jun"/> <telerik:DataPoint YValue="63" XCategory="Jul"/> <telerik:DataPoint YValue="74" XCategory="Aug"/> <telerik:DataPoint YValue="77" XCategory="Sep"/> <telerik:DataPoint YValue="85" XCategory="Oct"/> <telerik:DataPoint YValue="89" XCategory="Nov"/> <telerik:DataPoint YValue="80" XCategory="Dec"/> </telerik:DataSeries> </telerik:ChartArea.DataSeries> </telerik:ChartArea> </telerik:ChartDefaultView.ChartArea> </telerik:ChartDefaultView> </telerik:RadChart.DefaultView> </telerik:RadChart> {{endregion}} The result is [categorical chart]({%slug radchart-features-categorical-charts%}), where on __X-Axis__ you can see the months: ![](images/RadChart_PopulatingWithData_CreatingChartDeclaratively_020.png) ## See Also * [Overview]({%slug radchart-populating-with-data-overview%}) * [Creating a Chart in Code-behind]({%slug radchart-populating-with-data-creating-chart-in-code-behind%}) * [Data Binding Support Overview]({%slug radchart-populating-with-data-data-binding-support-overview%}) * [Data Binding with Automatic Series Mappings]({%slug radchart-populating-with-data-data-binding-with-automatic-series-binding%}) * [Data Binding with Manual Series Mapping]({%slug radchart-populating-with-data-data-binding-with-manual-series-mapping%})
51.127854
487
0.664642
eng_Latn
0.486104
9aad893ed0cb65fcff360c01b5c10a14f4e51968
1,087
md
Markdown
markdown/python/pypi.sh.md
hdknr/annotated-django
5843908dd6586a54b92d974f45049fa87e64db8b
[ "PSF-2.0", "BSD-3-Clause" ]
null
null
null
markdown/python/pypi.sh.md
hdknr/annotated-django
5843908dd6586a54b92d974f45049fa87e64db8b
[ "PSF-2.0", "BSD-3-Clause" ]
55
2016-02-27T06:02:24.000Z
2021-11-01T07:53:20.000Z
markdown/python/pypi.sh.md
hdknr/annotated-django
5843908dd6586a54b92d974f45049fa87e64db8b
[ "PSF-2.0", "BSD-3-Clause" ]
null
null
null
# sh - [https://github.com/amoffat/sh](https://github.com/amoffat/sh) - [https://amoffat.github.io/sh/](https://amoffat.github.io/sh/) ## install ``` $ pip install sh Collecting sh Downloading sh-1.09.tar.gz /home/vagrant/.pyenv/versions/wordpress/lib/python2.7/site-packages/setuptools-8.3-py2.7.egg/setuptools/dist.py:284: UserWarning: The version specified requires normalization, consider using '1.9' instead of '1.09'. Installing collected packages: sh Running setup.py install for sh /home/vagrant/.pyenv/versions/wordpress/lib/python2.7/site-packages/setuptools-8.3-py2.7.egg/setuptools/dist.py:284: UserWarning: The version specified requires normalization, consider using '1.9' instead of '1.09'. Successfully installed sh-1.9 ``` ## ためす ### uptime ``` >>> from sh import uptime >>> uptime() 12:24:33 up 22:10, 1 user, load average: 0.01, 0.02, 0.05 >>> res = _ >>> type(res) <class 'sh.RunningCommand'> >>> res 12:24:33 up 22:10, 1 user, load average: 0.01, 0.02, 0.05 ``` ``` >>> res.cmd ['/usr/bin/uptime'] >>> res.pid 25549 >>> res.exit_code 0 ```
25.27907
219
0.691812
eng_Latn
0.525871
9aada814d98da6c084d8cc1997d0e6a1bf2e3a1c
184
md
Markdown
README.md
BruceKangCN/clush-client
d0953b4ae23b747eed1c8a28e82aa4a8545e45d5
[ "MIT" ]
null
null
null
README.md
BruceKangCN/clush-client
d0953b4ae23b747eed1c8a28e82aa4a8545e45d5
[ "MIT" ]
null
null
null
README.md
BruceKangCN/clush-client
d0953b4ae23b747eed1c8a28e82aa4a8545e45d5
[ "MIT" ]
null
null
null
# Clush Client a `C++/Qt` based IM software client using markdown to communicate, the server is written in `Rust` called [clush-server](https://github.com/BruceKangCN/clush-server)
46
167
0.755435
eng_Latn
0.976202
9aae3429be46aa2d3009cea85f1f6ddc30ac9018
2,121
md
Markdown
panos/r/panos_panorama_bgp_auth_profile.md
chrisjaimon2012/tfwriter
1ea629ed386bbe6a8f21617a430dae19ba536a98
[ "MIT" ]
78
2021-01-15T14:10:30.000Z
2022-02-14T09:17:40.000Z
panos/r/panos_panorama_bgp_auth_profile.md
chrisjaimon2012/tfwriter
1ea629ed386bbe6a8f21617a430dae19ba536a98
[ "MIT" ]
5
2021-04-09T15:21:28.000Z
2022-01-28T19:02:05.000Z
panos/r/panos_panorama_bgp_auth_profile.md
chrisjaimon2012/tfwriter
1ea629ed386bbe6a8f21617a430dae19ba536a98
[ "MIT" ]
30
2021-01-17T13:16:57.000Z
2022-03-21T12:52:08.000Z
# panos_panorama_bgp_auth_profile [back](../panos.md) ### Index - [Example Usage](#example-usage) - [Variables](#variables) - [Resource](#resource) - [Outputs](#outputs) ### Terraform ```terraform terraform { required_providers { panos = ">= 1.8.1" } } ``` [top](#index) ### Example Usage ```terraform module "panos_panorama_bgp_auth_profile" { source = "./modules/panos/r/panos_panorama_bgp_auth_profile" # name - (required) is a type of string name = null # secret - (optional) is a type of string secret = null # template - (optional) is a type of string template = null # template_stack - (optional) is a type of string template_stack = null # virtual_router - (required) is a type of string virtual_router = null } ``` [top](#index) ### Variables ```terraform variable "name" { description = "(required)" type = string } variable "secret" { description = "(optional)" type = string default = null } variable "template" { description = "(optional)" type = string default = null } variable "template_stack" { description = "(optional)" type = string default = null } variable "virtual_router" { description = "(required)" type = string } ``` [top](#index) ### Resource ```terraform resource "panos_panorama_bgp_auth_profile" "this" { # name - (required) is a type of string name = var.name # secret - (optional) is a type of string secret = var.secret # template - (optional) is a type of string template = var.template # template_stack - (optional) is a type of string template_stack = var.template_stack # virtual_router - (required) is a type of string virtual_router = var.virtual_router } ``` [top](#index) ### Outputs ```terraform output "id" { description = "returns a string" value = panos_panorama_bgp_auth_profile.this.id } output "secret_enc" { description = "returns a string" value = panos_panorama_bgp_auth_profile.this.secret_enc sensitive = true } output "this" { value = panos_panorama_bgp_auth_profile.this } ``` [top](#index)
18.128205
63
0.661009
eng_Latn
0.74205
9aaea8efcebef192339aacd4269bdc5564b7ac29
14
md
Markdown
README.md
clrbit/gazer
13b72f26fb7e3cf66bb0ff1bb6dc156fa808d439
[ "MIT" ]
null
null
null
README.md
clrbit/gazer
13b72f26fb7e3cf66bb0ff1bb6dc156fa808d439
[ "MIT" ]
null
null
null
README.md
clrbit/gazer
13b72f26fb7e3cf66bb0ff1bb6dc156fa808d439
[ "MIT" ]
null
null
null
# gazer gazer
4.666667
7
0.714286
por_Latn
0.989258
9aaf8aef8e202cd4254bf9f02e69a1f437862724
2,571
md
Markdown
README.md
kei-g/emojify-js
53c28fdddd2b8f26ea9f5295c84c2640f8d0ef3f
[ "BSD-3-Clause" ]
null
null
null
README.md
kei-g/emojify-js
53c28fdddd2b8f26ea9f5295c84c2640f8d0ef3f
[ "BSD-3-Clause" ]
51
2021-12-21T19:24:58.000Z
2022-03-28T19:30:39.000Z
README.md
kei-g/emojify-js
53c28fdddd2b8f26ea9f5295c84c2640f8d0ef3f
[ "BSD-3-Clause" ]
null
null
null
# Emojify [![licence][license-image]][license-url] [![npm][npm-image]][npm-url] [![coverage][nyc-cov-image]][github-url] [![dependency][dependency-image]][dependency-url] [![maintenance][maintenance-image]][npmsio-url] [![quality][quality-image]][npmsio-url] [![GitHub CI (Build)][github-build-image]][github-build-url] [![GitHub CI (Coverage)][github-coverage-image]][github-coverage-url] [![travis][travis-image]][travis-url] Emojify - a text formatter for `:emoji:` style ## Installation ```shell npm i @kei-g/emojify -g ``` ## Usage To format emojis simply, then you'll see :star: Hello world :tada:, ```shell echo :star: Hello world :tada: | emojify ``` And to see available emojis list, ```shell emojify -l ``` ### emojify with git To see emojified git logs, ```shell mkdir play-with-emojify cd play-with-emojify git init touch .gitkeep git add . git commit -m ":tada: Initial commit" git log --color | emojify ``` To configure `git` to use `emojify` as pager; for example, on :penguin: linux, ```shell git config --global core.pager 'emojify | less -R' ``` ## TODO - features - customizable dictionary of emojis - provide a method for escaped colons - quality - coverage - failure cases of parsing emojis' dictionary [dependency-image]:https://img.shields.io/librariesio/release/npm/@kei-g/emojify?logo=nodedotjs [dependency-url]:https://npmjs.com/package/@kei-g/emojify?activeTab=dependencies [github-build-image]:https://github.com/kei-g/emojify-js/actions/workflows/build.yml/badge.svg [github-build-url]:https://github.com/kei-g/emojify-js/actions/workflows/build.yml [github-coverage-image]:https://github.com/kei-g/emojify-js/actions/workflows/coverage.yml/badge.svg [github-coverage-url]:https://github.com/kei-g/emojify-js/actions/workflows/coverage.yml [github-url]:https://github.com/kei-g/emojify-js [license-image]:https://img.shields.io/github/license/kei-g/emojify-js [license-url]:https://opensource.org/licenses/BSD-3-Clause [maintenance-image]:https://img.shields.io/npms-io/maintenance-score/@kei-g/emojify?logo=npm [npm-image]:https://img.shields.io/npm/v/@kei-g/emojify?logo=npm [npm-url]:https://npmjs.com/@kei-g/emojify [npmsio-url]:https://npms.io/search?q=%40kei-g%2Femojify [nyc-cov-image]:https://img.shields.io/nycrc/kei-g/emojify-js?config=.nycrc.json&label=coverage&logo=mocha [quality-image]:https://img.shields.io/npms-io/quality-score/@kei-g/emojify?logo=npm [travis-image]:https://img.shields.io/travis/com/kei-g/emojify-js/main?logo=travis [travis-url]:https://app.travis-ci.com/github/kei-g/emojify-js
35.219178
347
0.735511
kor_Hang
0.142325
9ab02b9864533f786d77919bec329505cd4e051b
971
md
Markdown
README.md
deralex/untar
d9ce47d841c2986a5c6982d881a8004052f92a19
[ "MIT" ]
null
null
null
README.md
deralex/untar
d9ce47d841c2986a5c6982d881a8004052f92a19
[ "MIT" ]
null
null
null
README.md
deralex/untar
d9ce47d841c2986a5c6982d881a8004052f92a19
[ "MIT" ]
null
null
null
# untar Unpack tar archives with various compression algorithms (gz, bzip2 etc.) ## Build Install `cargo` and builing `untar` is quite simple: cargo build --release If you want `untar` without printing colors in your terminal build with the following feature option: cargo build --release --features nocolor ## Usage Quite simple: untar $PATH_TO_ARCHIVE It'll extract your tar archive to the current directory. Currently `untar` supports tar archive compressed with the GZIP compression algorithm, more to come. ## Disclaimer I'm new to Rust, to this code is far from being Rusty-perfect. If you got severel improvements or suggestions just open up a PR, I'm happy about it! Besides that `untar` is still a work in progress. More compression algorithms (bzip2, zlib) are yet to come as well as some more conventient functionallity. ## License This project is licensed under the terms and conditions of the MIT license; see LICENSE for more details.
32.366667
119
0.77034
eng_Latn
0.997593
9ab07f574d8edea83d27ecaa6f67737a6462abbc
71,436
markdown
Markdown
_posts/2009-04-30-collecting-information-from-user-devices.markdown
LizetteO/Tracking.virtually.unlimited.number.tokens.and.addresses
89c6b0fa3defa14758d6794027cc15150d005bbd
[ "Apache-2.0" ]
null
null
null
_posts/2009-04-30-collecting-information-from-user-devices.markdown
LizetteO/Tracking.virtually.unlimited.number.tokens.and.addresses
89c6b0fa3defa14758d6794027cc15150d005bbd
[ "Apache-2.0" ]
null
null
null
_posts/2009-04-30-collecting-information-from-user-devices.markdown
LizetteO/Tracking.virtually.unlimited.number.tokens.and.addresses
89c6b0fa3defa14758d6794027cc15150d005bbd
[ "Apache-2.0" ]
2
2019-10-31T13:03:55.000Z
2020-08-13T12:57:08.000Z
--- title: Collecting information from user devices abstract: An information collection system may include a server and one or more user devices that are in electronic communication with each other. Information may be collected by the user devices. For example, a user device may collect information regarding an error that occurred on the device. A server may monitor the user devices and receive information reports from those devices. The server may also instruct the user devices to perform self-corrective actions based on information received from those devices. url: http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-adv.htm&r=1&f=G&l=50&d=PALL&S1=08171351&OS=08171351&RS=08171351 owner: Amazon Technologies, Inc. number: 08171351 owner_city: Reno owner_country: US publication_date: 20090430 --- This application is related to and claims priority from U.S. Provisional Patent Application Ser. No. 61 145 479 filed on Jan. 16 2009 for COLLECTING INFORMATION FROM USER DEVICES with inventors Beryl Tomay Howard S. Kapustein William D. Dozier John B. Roy Paul F. Ferraro and David Berbessou. Electronic distribution of information has gained in importance with the proliferation of personal computers and has undergone a tremendous upsurge in popularity as the Internet has become widely available. With the widespread use of the Internet it has become possible to distribute large coherent units of information using electronic technologies. Advances in electronic and computer related technologies have permitted computers to be packaged into smaller and more powerful electronic devices. An electronic device may be used to receive and process information. The electronic device may provide compact storage of the information as well as ease of access to the information. For example a single electronic device may store a large quantity of information that might be downloaded at any time via the Internet. In addition the electronic device may be backed up so that physical damage to the device does not necessarily correspond to a loss of the information stored on the device. Electronic devices may experience problems from certain hardware certain software from data or messages on the device etc. Sometimes a user may be aware of a problem or error while at other times a user may be unaware of a problem. Benefits may be realized from improved systems and methods for collecting and using error information from a user device. The present disclosure relates generally to an information collection system. Such a system may include a server and one or more user devices e.g. electronic book readers that are in electronic communication with each other. Information such as errors that occur on a user device may be collected. For example a user device may unexpectedly shut down during operation. The user device may create an error report when an error occurs. The error report may include information about the error and information about the system parameters of the user device when the error occurred. The server may monitor the user devices on a system wide basis. For example the server may monitor events that occur on the user devices such as user statistics and errors that have occurred on the user devices. The server may monitor events that occur on the user devices by requesting information from the user devices such as error reports. The server may also instruct the user devices to perform self corrective actions such as fixing corrupt data files. As used herein the term item may correspond to any type of content. In one case the item corresponds to a digital media item. The media item may include without limitation text content image content audio content video content hypertext protocol content and so on or any combination of these kinds of content. In addition or alternatively the item may include instruction bearing content such as machine readable program code markup language content script content and so forth. For instance an item may correspond to a software upgrade or the like. More specifically in one case the term item may refer to a specific unit of merchandisable content such as an electronic book referred to herein as an eBook an issue of a magazine and so on. Alternatively an item may refer to smaller parts of a merchandisable unit such as a chapter of a book or a song in an album. Alternatively an item may refer to a larger compilation of component items which are related in any manner. For instance an item may refer to multiple issues of a magazine in a particular year. In general the various features described in the implementations may be regarded as optional features meaning that these features may be omitted or replaced with other features. Further the various implementations described herein may be supplemented by adding additional features. As explained above the term item has broad connotation. The following list which is non exhaustive identifies representative types of items. An item may correspond to an eBook item. An eBook item in turn may refer to a book in electronic form or to one or more portions of a book such as a chapter of a book or to a compilation of multiple books such as a book series and so on. An eBook is an example of a general class of items referred to herein as pre generated items. The term pre generated item refers to content typically although not necessarily provided to a user in response to the user s on demand request for the content after it has been received and stored by the IPS . An item of content may also correspond to a subscription related item. A subscription related item refers to any item the user receives based on a schedule or based on some other type of pre established arrangement. Without limitation representative forms of subscription related items include magazines journals newspapers newsletters and so on. Other forms of subscription related items include electronic feeds of various types such as Really Simple Syndication RSS feeds and so on. In contrast to a pre generated item a subscription related item is typically provided to a user in response to the receipt of the item by the IPS rather than the user s on demand request for a pre generated item. An item may also correspond to a personal document item or simply personal item. A personal item refers to a document the user forwards in advance to the IPS whereupon the IPS converts the item to a device readable format. An item may also correspond to audio content such as a piece of music a collection of music an audio book and so on. An item may also correspond to a bundle of information generated in response to a query made by the user. An item may also correspond to instruction bearing content such as a software update. An item may also correspond to advertising material downloaded to the user device by any entity or combination of entities. Various rules may be applied to govern the downloading of this type of item. An item may also correspond to a sample of a more complete version of the item. In one case a sample type item may embed one or more links to allow the user to acquire its full version counterpart or another part e.g. chapter of the item. In another case a publisher or author may release an eBook or other item in a series of installments. Each installment may be regarded as an item. The item providing system IPS corresponds to any functionality or combination of functionality for forwarding items to the user device . In one case the IPS may correspond to network accessible server based functionality various data stores and or other data processing equipment. The IPS may be implemented by a single collection of functionality provided at a single physical site. Alternatively the IPS may be implemented by multiple collections of functionality optionally provided at plural physical sites. The IPS may be administered by a single entity or plural entities. In one case the IPS corresponds to an entity that provides items to users upon the users purchase of the items. In this role the IPS may essentially act as a bookseller or the like. In one particular commercial environment the IPS may also offer services which allow users to purchase hard copy books for physical delivery to the users in this context the IPS may allow users to download electronic items to respective user devices as part of its entire suite of services. In other cases the IPS corresponds to an entity which provides items to users on a non fee basis or on the basis of some other type of alternative compensation arrangement. Thus the term provider of items should be construed broadly to encompass educational institutions governmental organizations libraries non profit organizations and so on or some cooperative combination of any two or more entities. The user device corresponds to any type of electronic processing device for receiving items from the IPS . In one implementation the user device is readily portable meaning the user may freely carry the user device from one location to another. In one particular case the user device is designed as a book reader device also known as an eBook reader device. In this case the user device functions as the electronic counterpart of a paper based book. The user may hold the user device in a manner similar to a physical book the user may electronically turn the pages of the book and so on. Without limitation illustrates a particular type of eBook reader device. Additional details regarding this particular type of reader device are provided below. Alternatively the user device may correspond to any other type of portable device such as a portable music player a personal digital assistant PDA a mobile telephone a game module a laptop computer and so on or any combination of these types of devices. Alternatively or in addition the user device may correspond to a device which is not readily portable such as a personal computer a set top box associated with a television a gaming console and so on. A communication infrastructure bi directionally couples the IPS to the user device . Namely the IPS downloads items upgrades or other information to the user device via the communication infrastructure . The IPS receives various instructions and other data from the user device via the communication infrastructure . The communication infrastructure may include any combination of communication functionality including any combination of hardwired links wireless links etc. For instance to be discussed below in turn shows one implementation of the communication infrastructure which includes a combination of a wide area network WAN and wireless infrastructure. By virtue of the wireless component of the communication infrastructure the user may use the user device to purchase items and consume items without being tethered to the IPS via hardwired links. Thus for instance a user may purchase and consume an eBook using the device while riding in a car as a passenger while hiking in a park while boating on a lake and so forth. The user device receives the list from the IPS in response to the second message note does not specifically identify the transmission of the list from the IPS to the user device . If the instructions identify items to be downloaded in a third message the user device sends a request to the IPS asking the IPS to provide for download of the items identified in the list. In a fourth message the requested items are downloaded from the IPS to the user device . In effect the user device retrieves the items using a pull approach but the pull approach is initiated by a push operation by virtue of the IPS pushing a notification message to the user device . In a fifth message the IPS may request information from the user device . For example the IPS may request error and health information from the user device . Error and health information are discussed in more detail below in relation to . In a sixth message the user device may then provide the requested information to the IPS . In effect the IPS retrieves the requested information using a pull approach. Requested information may also be pushed to the IPS . The communication infrastructure may include multiple components. A first component may be a wireless provider system . The wireless provider system corresponds to any infrastructure for providing a wireless exchange with the user device . In one case the wireless provider system is implemented using various data processing equipment communication towers and so forth not shown . Alternatively or in addition the wireless provider system may rely on satellite technology to exchange information with the user device . The wireless provider system may use any form of electromagnetic energy to transfer signals such as without limitation radio wave signals. The wireless provider system may use any communication technology to transfer signals such as without limitation spread spectrum technology implemented for instance using the Code Division Multiple Access CDMA protocol. The wireless provider system may be administered by a single entity or by a cooperative combination of multiple entities. The communication infrastructure may also include a communication enabling system . One purpose of the communication enabling system is to serve as an intermediary in passing information between the IPS and the wireless provider system . The communication enabling system may be implemented in any manner such as without limitation by one or more server type computers data stores and or other data processing equipment. The communication enabling system may include one or more application programming interfaces APIs . The communication enabling system may communicate with the wireless provider system via a dedicated channel also referred to as a dedicated communication pipe or private pipe. The channel is dedicated in the sense it is exclusively used to transfer information between the communication enabling system and the wireless provider system . In contrast the communication enabling system communicates with the IPS via a non dedicated communication mechanism such as a public Wide Area Network WAN . For example the WAN may represent the Internet. The users may access the IPS through alternative communication routes which bypass the wireless provider system . For instance as indicated by alternative access path a user may use a personal computer or the like to access the IPS via the wide area network circumventing the wireless provider system and the communication enabling system . The user may download items through this route in conventional fashion. The user may then transfer the items from the personal computer to the user device e.g. via a Universal Serial Bus USB transfer mechanism through the manual transfer of a portable memory device and so on. This mode of transfer may be particularly appropriate for large files such as audio books and the like. Transferring such a large amount of data in wireless fashion may have a relatively high cost. However the system may also be configured to transfer large files such as audio files via the wireless exchange . Other network accessible resources may be accessed by the IPS through the wide area network . For example the IPS may access additional databases through the wide area network . The wireless provider system may facilitate communication traffic with one or more user devices and store statistics for the communication traffic in a database. The wireless provider system may also facilitate merchant related communication traffic with one or more merchants and store statistics for the merchant related communication traffic in a database. In another implementation the system may use some other communication infrastructure than is shown in which may optionally omit the use of wireless communication. Addressing the details of the IPS first this system performs various functions. Different modules are associated with these different functions. One module is a content reception system . The content reception system receives content from one or more sources of content . The sources may represent any type of provider of content such as eBook publishers newspaper publishers other publishers of periodicals various feed sources music sources and so on. The content reception system may also access network accessible resources through the network such as databases. The sources may be administered by a single entity or may be administered by separate respective entities. Further the entity administering the IPS may correspond to a same entity which administers one or more of the sources . Alternatively or in addition the entity administering the IPS may interact with one or more different entities administering one or more respective sources . In the latter case the entity administering the IPS may enter into an agreement with the source entities to receive content from these source entities. In the above example the entities associated with the sources may correspond to commercial organizations or other types of organizations. In another case one or more of the sources may correspond to individual users such as the creators of the items. For example a user may directly provide items to the IPS . Alternatively or in addition a user may supply content to a community repository of items and the IPS may receive content from this repository and so on. The content reception system may obtain the content through various mechanisms. In one case the content reception system obtains the content via one or more networks . The networks may represent a WAN such as the Internet a Local Area Network LAN or some combination thereof. The content reception system may receive the information in various forms using any protocol or combination of protocols. For instance the content reception system may receive the information by making a Hypertext Transfer Protocol HTTP request by making a File Transfer Protocol FTP request by receiving a feed e.g. an RSS feed and so forth. In another case the IPS may obtain content via a peer to peer P2P network of sources . More generally the content reception system may proactively request the content in an on demand manner based on a pull method of information transfer or the content reception system may receive the content in response to independent transfer operations initiated and performed by the sources based on a push method of information transfer . Alternatively the content reception system may use a combination of pull and push transfer mechanisms to receive the content. The content reception system may receive content in the form of items. Without limitation the items may include eBooks audio books music magazine issues journal issues newspaper editions various feeds and so forth. In one case the content reception system may receive some items expressed in a format not readable by the user device where the user device may optionally be configured to receive process and present content expressed in one or more predefined formats . To address this situation the content reception system may convert the items from their original format into a device readable format. The content reception system stores the items received and optionally converts them to another format in a content store . The content store includes one or more storage systems for retaining items in electronic form located at a single site or distributed over plural sites administered by one or more entities. The IPS also includes a subscription module . The subscription module manages users subscriptions to subscription related items. Generally a subscription entitles a user to receive one or more subscription related items which are yet to be received and stored by the content reception system based on any type of consideration or combination of considerations. Without limitation subscription related item types include magazines journals newsletters newspapers various feeds and so forth. Users may arrange to receive subscription related items by purchasing such subscriptions or more generally by registering to receive such subscriptions which in some cases may not involve the payment of a fee . Alternatively or in addition the IPS may automatically register users to receive subscription related items without the involvement of the users and possibly without the approval of the users . The latter scenario may be appropriate in the case in which the IPS or some other entity registers a user to receive unsolicited advertisements newsletters and so on. The system may allow the user to opt out of receiving such unsolicited information. The IPS may consult the subscription module to determine which user devices should receive a newly received subscription related item. For instance upon receiving an electronic issue of the magazine Forbes the IPS consults the subscription module to determine the users who have paid to receive this magazine. The IPS then sends the issue to the appropriate user devices. An item delivery system represents the functionality which actually performs the transfer of content to the user device . In one illustrative representation the item delivery system includes two components a to do list server module and a content delivery module . The to do list server module generally provides instructions for the user device . The instructions direct the user device to retrieve items and perform other operations. The content delivery module allows the user device to obtain the items identified in the instructions received from the to do list server module . More specifically in a first phase of information retrieval the to do list server module sends a notification message to the user device . The user device responds to the notification message by waking up if asleep which may involve switching from a first power state to a second power state where the second power state consumes more power than the first power state . The user device may then contact the to do list server module to request instructions from the to do list server module . More specifically for each user device the to do list server module maintains a list of entries also referred to herein as a to do queue. An entry provides an instruction for a user device to perform an action. As will be described in greater detail below there are different instructions that a device may be directed to perform wherein a collection of instructions defines an IPS device interaction protocol. One such action e.g. associated with a GET instruction of the protocol directs the user device to retrieve an item from a specified location by specifying an appropriate network address e.g. a URL and appropriate arguments. In a first phase of the downloading procedure the user device may retrieve n such entries wherein n is an integer. In one scenario the number n may be a subset of a total number of items in the to do queue associated with the user device . In a second phase of the downloading procedure the user device may contact the content delivery module to retrieve one or more items identified in the GET related entries. In general after receiving the notification message the item delivery system may interact with the user device in a data mode e.g. using the Hypertext Transfer Protocol HTTP or some other protocol or combination of protocols. The IPS may also include a merchant store module . The merchant store module may provide access to an item catalog which in turn may provide information regarding a plurality of items such as eBooks audio books subscription related items and so on . As will be described in greater detail below the merchant store module may include functionality allowing a user to search and browse though the item catalog . The merchant store module may also include functionality allowing a user to purchase items or more generally acquire items based on any terms . In one case a user may interact with the merchant store module via the user device using wireless communication. Alternatively or in addition the user may interact with the merchant store module via another type of device such as a personal computer optionally via wired links. In either case when the user purchases or otherwise acquires an item via the merchant store module the IPS may invoke the item delivery system to deliver the item to the user. The IPS may also include a personal media library module . The personal media library module may store for each user a list of the user s prior purchases. More specifically in one case the personal media library module may provide metadata information regarding eBook items and other on demand selections e.g. a la carte selections such as subscription issues etc. which a user already owns. The personal media library module may also provide links to the items in the content store . As will be described in greater detail below to download an eBook item or the like which the user has already purchased the user device contacts the content delivery module . The content delivery module may interact with permission information and linking information in the personal media library module in order to download the item to the user. In one use scenario the user device may access the content delivery module in this manner to initiate downloading of an item which has been previously purchased by the user but has been deleted by the user device for any reason. The IPS may also include an information reporting information analyzing system . The information reporting information analyzing system may request error information from the user device including information related to the health and status of the user device . For example the information reporting information analyzing system may send error configuration messages to the user devices which configure the user devices to produce error reports and send the error reports to the IPS . The information reporting information analyzing system may also process and analyze the error reports received from the user device . The information reporting information analyzing system is discussed in more detail below. The IPS may also include various security related features such as one or more authorization stores . The authorization stores may provide information which enables various components of the IPS to determine whether to allow the user to perform various functions such as access the merchant store module download items change settings and so on. The above enumerated list of modules is representative and is not exhaustive of the types of functions performed by the IPS . As indicated by the label Other Server Side Functionality the IPS may include additional functions many of which are described below. Now turning to the device side features of the system the user device may include a device to do list processing module . The purpose of the device to do list processing module may be to interact with the item delivery system to download items from the item delivery system . Namely in a first phase of the downloading procedure the device to do list processing module may first receive a notification message from the to do list server module which prompts it to wake up if asleep and contact the to do list server module to retrieve a set of n entries. Each entry may include an instruction which directs the device to do list processing module to perform an action. In a second phase for a GET type entry the device to do list processing module may contact the content delivery module to request and retrieve an item identified by the GET type entry. As will be described in greater detail below the user device may signal a successful completion of the download process or a failure in the download process. Upon downloading an item the user device may store the item in a device side memory which in one example is a flash type memory and may be any other type of memory in other examples. Although not shown the user device may also exchange information with any other source of content . In one illustrative case the other source of content may represent a personal computer or other data processing device. Such other source of content may transfer an item to the user device via a Universal Serial Bus USB connection and or any other type s of connection s . In this scenario the other source of content in turn may receive the item from the IPS or other source via hardwired connection e.g. non wireless connection . For example to receive an audio book the user may use a personal computer to non wirelessly download the audio book from a network accessible source of such content. The user may then transfer the audio book to the user device via USB connection. In another illustrative case the other source of content may represent a portable memory module of any type such as a flash type memory module a magnetic memory module an optical memory module and so on. The user device may also include a reader module . The illustrative purpose of the reader module is to present media items for consumption by the user using the user device . For example the reader module may be used to display an eBook to the user to provide a user experience which simulates the reading of a paper based physical book. The user device may also include a content manager module . The purpose of the content manager module is to allow the user to manage items available for consumption using the user device . For example the content manager module may allow the user to view a list of items available for consumption. The content manager module may also identify the sources of respective items one such source corresponds to the device memory another source corresponds to an attached portable memory e.g. represented by the other source another source corresponds to items identified in the personal media library module as may be revealed in turn by device side metadata provided by the IPS another source corresponds to subscription related items identified by the subscription module and so on. The content manager module may allow the user to filter and sort the items in various ways. For example the user may selectively view items which originate from the device memory . The user device may also include a store interaction module . The store interaction module may allow the user device to interact with the merchant store module . The user may engage the store interaction module to search and browse through items to purchase items to read and author customer reviews and so on. As described above the user may also use a personal computer or the like to interact with the merchant store module via hardwired links. The user device may also include an information reporting module . The information reporting module may communicate information from the user device to the IPS . For example the information reporting module may communicate error information from the user device to the IPS . The information reporting module may receive configuration messages from the information reporting information analyzing system of the IPS . For example the information reporting module may receive error configuration messages from the information reporting information analyzing system of the IPS . The configuration messages may instruct the information reporting module to collect store or send information. For example the configuration messages may instruct the information reporting module to collect store or send error information. The information reporting module may collect and store information pertaining to the user device such as error information. The information reporting module may collect information such as error information upon receiving an information configuration message from the information reporting information analyzing system . Alternatively the information reporting module may periodically collect information. Alternatively still the information reporting module may only collect information when an event occurs on the user device . The information reporting module may then send the information collected to the information reporting information analyzing system of the IPS in an information report. The information reporting module may send the information report to the information reporting information analyzing system upon receiving an error configuration message from the information reporting information analyzing system . Alternatively the information reporting module may periodically send an information report to the information reporting information analyzing system of the IPS . Alternatively still the information reporting module may send an information report to the information reporting information analyzing system of the IPS after an event has occurred on the user device . The information reporting module is discussed in more detail below. The data received in an information report may be used by the IPS to track and fix errors occurring on the user device . The data received in an information report may also be used by the IPS for a variety of reasons including but not limited to bandwidth reports determining the impact of radio usage on the battery annotation synchronization usage and the impact on the battery by contacting a user device . The above enumerated list of modules is representative and is not exhaustive of the types of functions performed by the user device . As indicated by the label Other Device Side Functionality the user device may include additional functions many of which are described below. In fact shows additional device side functionality. For completeness also identifies the various modules described above including the device to do list processing module the device memory the reader module the content manager module the store interaction module and the information reporting module . These features perform the functions described above. The user device may also include a home presentation module . The home presentation module may provide a home page when the user first turns on the user device and or at other junctures. The home page may act as a general portal allowing a user to access media items and various features provided by the user device . In one illustrative case the home page may present a summary of some or all of the items available for consumption using the user device . The user device may also include an audio player module . The audio player module may provide an interface which allows the user to play back and interact with audio items such as music audio books and the like. As discussed above in relation to the information reporting module may receive configuration messages from the IPS and may collect store and send information pertaining to the user device . The information reporting module may include error information such as a list of errors that the user device has encountered or is currently encountering. The information reporting module may also include other types of information such as events pertaining to the user device . As another example the information reporting module may include a listing of the current event on the user device such as the user device downloading an item the user device running diagnostic software or the like. The information reporting module may also include information such as the health of the user device . The health of the user device may be an overall indicator of the health of the user device . For example the health of the user device may include usage statistics of the user device the rate of errors of the user device present and past errors that the user device has encountered etc. As another example the information reporting module may include information concerning the current and prior battery status of the user device . The information reporting module may also include information concerning the current and prior memory status of the user device . The memory status of the user device may include information about the total memory on the user device or the percentage of memory used to store eBooks on the user device . Alternatively the memory status of the user device may include information about the usage of temporary storage locations such as random access memory RAM by the user device . The information reporting module may also allow the IPS to access and or gain control of the user device . For example the IPS may access the user device to shutdown and or lock the user device . The IPS may shutdown lock the user device if for example the user device is causing errors at the IPS or if the device has been reported stolen and the owner of the user device wishes to prevent it from being used to purchase download additional items to the user device . Alternatively the IPS may shutdown lock the user device to disable features on the user device that are no longer necessary or available to the user. The IPS may also access the user device to perform debugging operations. For example the IPS may gain control of the user device to fix errors on the user device and prevent future errors. The IPS may perform additional debugging operations such as applying software patches deleting unnecessary software initiating a reboot of the user device etc. The above described features of the user device may pertain to applications with which the user may interact or which otherwise play a high level role in the user s interaction with the user device . The user device may include a number of other features to perform various lower level tasks possibly as background type operations. Power management functionality performs one such background type operation. More specifically the power management functionality corresponds to a collection of hardware and or software features operating to manage the power consumed by the user device . The power management functionality generally operates to reduce the power consumed by the device . The power management functionality achieves this goal by selectively powering down features not actively being used or for which there is an assumption these features are not actively being used . The power management functionality achieves particularly noteworthy power savings by powering down features which make large power demands such as one or more features associated with wireless communication. The user device may also include performance Monitoring and Testing MT functionality . The MT functionality maintains a performance log identifying the behavior of the device . The IPS and or other entities may access the performance log along with other information gleaned from the communication infrastructure to help diagnose anomalies in the operation of the user device and the system as a whole. The MT functionality may also interact with testing functionality provided by the IPS and or other entities. For example the MT functionality may respond to test probes generated by the IPS . The MT functionality may be used by the information reporting module to collect information for the user device . For example the performance log may collect and store error information. The user device may also include an upgrade related functionality . The upgrade related functionality allows the user device to receive and integrate instruction bearing update items such as software updates . In one case the upgrade related functionality may automatically receive instruction bearing items provided by the IPS and or by other entities . An administrator at the IPS may manually initiate the upgrade procedure by which an instruction bearing update item is forwarded to the user device . Or an automated IPS side routine may initiate the upgrade procedure. In any event the user device may receive the instruction bearing update item without the involvement of the user or with minimal involvement from the user. In this sense the upgrade procedure may be viewed as transparent. In another case the upgrade related functionality may be operated by the user to manually access a source of instruction bearing items such as a prescribed website or the like and download an item from this source. To repeat the above enumerated list of modules is representative and is not exhaustive of the types of functions performed by the user device . As indicated by the label Other Device Side Functionality the user device may include additional functions. The IPS described above may interact with any type of user device . In one case the user device is a portable type device meaning a device designed to be readily carried from location to location. In one specific case the user device allows the user to consume the media items while holding the user device e.g. in a manner which simulates the way a user might hold a physical book. A portable user device may take the form of an eBook reader device a portable music player a personal digital assistant a mobile telephone a game module a laptop computer and so forth and or any combination of these types of devices. Alternatively or in addition the user device may correspond to a device not readily portable such as a personal computer set top box associated with a television gaming console and so on. Without limitation shows one type of user device which may be used to interact with the IPS . The user device may include a wedge shaped body designed to fit easily in the hands of a user generally having the size of a paperback book. Other user devices may adopt different shapes and sizes. In one representative design the user device includes two display parts a main display part and a supplemental display part . The main display part presents various pages provided by the store interaction module the reader module and so on. In one case the supplemental display part is used to present a cursor. The user may position the cursor to identify laterally adjacent portions in the main display part . Without limitation in one illustrative case the main display part and or the supplemental display part may be implemented using electronic paper technology such as provided by E Ink Corporation of Cambridge Mass. This technology presents information using a non volatile mechanism using this technology the user device may retain information on its display even when the device is powered off. The user device includes various input keys and mechanisms. A cursor movement mechanism allows a user to move a cursor within the supplemental display part . In one representative case the cursor movement mechanism may include a cursor wheel that may be rotated to move a cursor up and down within the supplemental display part . The cursor movement mechanism may be configured to allow the user to make a selection by pressing down the wheel. Other types of selection mechanisms may be used such as a touch sensitive display a series of vertically and or horizontally arrayed keys along the edge s of the main display part one or more graphical scroll bar s in the main display part and so on. The user device also includes various page turning buttons such as next page buttons and a previous page button . The next page buttons advance the user to a next page in an item relative to a page that is currently being displayed . The previous page button advances the user to a previous page in an item relative to a page that is currently being displayed . The user device may also include a page turning input mechanism actuated by the user s thumb as it passes over the mechanism . This user experience simulates the manner in which a user turns a page in a physical book e.g. by thumbing through a book . The user device may also include a back button allowing the user to advance to a previous page when using the browsing module . Although not shown the user device may include a switch for turning power on and off a switch for enabling and disabling a wireless interface and so on. The user device may also include a keyboard . The keyboard may include alphanumeric keys. The keys may be shaped and oriented in a manner which facilitates the user s interaction with the keys while the user holds the device in the manner of a physical book. The user may use the keyboard to enter search terms annotations URLs and so forth. The keyboard may also include various special function keys. The IPS may include one or more databases. For example the IPS may include a database of users . The database of users may include information about each of the user devices that are in electronic communication with the IPS . Alternatively the database of users may include information about each of the user devices that are authorized to communicate with the IPS . Alternatively still the database of users may include information about each of the user devices that have previously reported information to the IPS . The IPS may also include an information database . The information database may include the information received pertaining to each user device . For example the information database may include the errors that user devices have encountered. The information database may also include solutions to errors that user devices have encountered. In addition the information database may include a listing of errors that do not yet have prescribed solutions. The information reporting information analyzing system may communicate with the databases included on the IPS . For example the information reporting information analyzing system may access the database of users or the information database to determine which user devices are to be monitored and or to determine which user devices are to be sent configuration messages. The IPS may provide outside access to the information reporting information analyzing system . For example a creator or distributor of a digital item may access the information reporting information analyzing system from outside the IPS to determine the cause of reported errors. Customer support may also access the information reporting information analyzing system from outside the IPS when helping users with problems they may be having on user devices . The system may include one or more user devices that are in electronic communication with the IPS through a network . For example user devices and may communicate wirelessly with a wireless provider . The wireless provider may then communicate with the IPS through the network . A user device may also communicate with the IPS through the network without intermediary modules or devices. A user device may communicate with the network using wired or wireless means. A user device may also communicate with a host computer . The user device may communicate with a host computer using wired or wireless means. The host computer may then communicate with the IPS through the network . Alternatively the host computer may bypass the network when communicating with the IPS . Instead the host computer may copy information received from a user device to a physical media . Physical media may include a CD ROM a USB flash card Isonlinear data crystals etc. The physical media may then be transferred to the IPS in person or by a carrier as a physical media transfer . Likewise the host computer may receive information from the IPS over a physical media transferred by a physical media transfer . A user device may bypass both the network and the host computer when communicating with the IPS . The user device may copy information to a physical media and the physical media may be transferred to the IPS using a physical media transfer . The user device may also receive information from the IPS over a physical media transferred by a physical media transfer . The information reporting information analyzing system may include analysis functionality and user device configuration messages . The analysis functionality may allow the information reporting information analyzing system to analyze user device information such as errors and events. For example the analysis functionality may include a list of critical errors that have occurred on a user device . The analysis functionality may also include a list of the most frequent errors that have occurred on one or more user devices . For example the analysis functionality may allow the information reporting information analyzing system to determine which user devices have had the most frequent number of errors reported. Alternatively the analysis functionality may allow the information reporting information analyzing system to determine which error has occurred on the highest number of user devices . The analysis functionality may also allow the information reporting information analyzing system to determine the user devices that have the highest bandwidth consumption . The analysis functionality may also allow the information reporting information analyzing system to create a list of errors without solutions and a list of errors with solutions . The user device configuration messages may include configuration messages requests to be sent to user devices . Many different kinds of requests for information may be sent to the user device . The user device configuration messages may include a request to report device health which may be sent to a user device requesting the user device return device health information to the information reporting information analyzing system . A request for directory listing may request a user device to send information indicating the files filenames located on the user device to the information reporting information analyzing system . The user device configuration messages may also include a request for a user device to show the last N events with N being an integer. The request to show last N events may ask a user device to prepare and send information concerning the previous N events on the user device . The user device configuration messages may also include a request for a user device to return date information such as the date and times that specific events have occurred on the user device . The user device configuration messages may also include a function to disable enable a user device . A request to initiate download may be used to request that a user device allow the information reporting information analyzing system to download an error report from the user device . Alternatively the function to initiate download may request that a user device download one or more software updates from the IPS . The user device configuration messages may also include a request for a screenshot from a user device . The request for a screenshot may require that the user device prepare and send a screenshot from the user device display to the information reporting information analyzing system . The user device configuration messages may also include a list of errors to report . The list of errors to report may include all errors that have occurred on the device or some subset of the errors that have occurred on the device . Likewise the user device configuration messages may also include a list of errors to log locally . The list of errors to log locally may be used to request the device to log certain errors locally to the device but to not yet report these errors to the information reporting information analyzing system . The user device configuration messages may further include additional system changes . The additional system changes may include data and or actions which provide for the user device to perform self corrective action. Self corrective action may include deleting files to free disk space rebooting the user device fixing corrupt data deleting corrupt data altering configuration settings and other actions that may be performed by the user device . The user device configuration messages may also include a request for a user device to show all the events over a range of time . The request to show events over a range of time may ask a user device to prepare and send information concerning the events on the user device that occur during a specific time period. The specific time period may include past time period present time periods and future time periods. A request to show events for a future time period may be processed after the time period has expired. As discussed above in relation to the system may include one or more databases such as a database of users and an information database that the information reporting information analyzing system may use. The system may include an information logging module . The information logging module may determine which user devices have logged information. The information logging module may then store this information in the databases . The system may include an information delivery system . The information delivery system may communicate with the user devices through the communication infrastructure . The information delivery system may deliver information such as error reports to the information logging module . Alternatively the information delivery system may deliver information directly to the information reporting information analyzing system . The user device may then prepare an information report. The user device may then send the information report to the server. The user device may also log the information locally on the user device . Until it is determined that an information report has been requested the device may continue to operate. This method illustrates information being pulled from the device . The device may also be programmed to automatically report specific information to the server at periodic time intervals or upon the occurrence of particular events. For example the device may be programmed to perform steps whenever an event occurs rather than waiting for an information request. For example the device may perform steps every day at a certain time. The device may be programmed such that steps have no customer impact. In other words the device may prepare and send information reports unbeknownst to the user of the device . The information report may include a listing of the type of information being reported. The type or information may identify one or more aspects of the information being reported such as a downloading error a playback error battery information usage data etc. The information report may include a listing of memory data . The memory data may include memory related information about the device or about the information being reported. For example memory data may identify a file that caused an error to be reported the memory location where an error occurred a memory dump etc. The information report may also include a listing of the software configuration and hardware configuration of the user device e.g. what software was running installed what hardware components are part of the device . The information report may also include information about the network such as the network connection and the network status . The information report may also include a more detailed description of the information being reported. For example when reporting a subsystem error the information report may include a description of the subsystem error and a description of the subsystem parameters before during or after the error. Within the subsystem error the information report may also include a description of an application error and a description of the application parameters application configuration and application state before during or after the error. The application error may correspond to an error provided by or associated with a specific application or program on the device. The application state may be stored in memory or on a disk. Within the application error the information report may also include a description of a module error and a description of the module parameters before during or after the error. The module error may correspond to an error provided by or associated with a specific function procedure or module of an application or program on the device. Within the module error the information report may also include a description of a line number error and a description of the line number parameters before during or after the error. The information report may also include separate information for a subsystem error an application error a module error and a line number error . The information report may further include separate information such as the subsystem parameters the application parameters the application configuration the application state the module parameters and the line number parameters . Additional information descriptions that are not shown such as the date and time of the information report may be included in the information report . The computer system is shown with a processor and memory . The processor may control the operation of the computer system and may be embodied as a microprocessor a microcontroller a digital signal processor DSP or other device known in the art. The processor typically performs logical and arithmetic operations based on program instructions stored within the memory . The instructions may be executable to implement the methods described herein. The processor may also be referred to as a central processing unit CPU . Memory which may include both read only memory ROM and random access memory RAM provides instructions and data to the processor . Portions of the instructions and the data are illustrated as being currently executed or read by the processor . A portion of the memory may also include non volatile random access memory NVRAM . The data in the memory may include one or more information reporting configurations . Each information reporting configuration may pertain to a single item of information or a single user device . The data in the memory may also include one or more information reports that have been received from one or more user devices . An information report may include the device ID of the user device that it pertains to. The information report may include additional information not shown see . The data in the memory may also include one or more error logs that link user devices to the logged errors. The instructions in the memory may include instructions for preparing information reporting configurations . The instructions in the memory may also include instructions for sending information reporting configurations to user devices . The instructions in the memory may also include instructions for receiving information reports from user devices . The instructions in the memory may also include instructions for associating information reports with user devices . The instructions in the memory may also include instructions for analyzing information in the information reports received. The instructions in the memory may also include instructions for determining actions to be performed by the user devices . The instructions in the memory may also include instructions for sending action requests to user devices . The computer system may also include one or more communication interfaces and or network interfaces for communicating with other electronic devices. The communication interface s and the network interface s may be based on wired communication technology wireless communication technology or both. The computer system may also include one or more input devices and one or more output devices . The input devices and output devices may facilitate user input. Other components may also be provided as part of the computer system . The various components of the device may be coupled together by a bus system which may include a power bus a control signal bus and a status signal bus in addition to a data bus. However for the sake of clarity the various busses are illustrated in as the bus system . Memory which may include both read only memory ROM and random access memory RAM provides instructions and data to the processor . A portion of the memory may also include non volatile random access memory NVRAM . The processor typically performs logical and arithmetic operations based on program instructions stored within the memory . The instructions in the memory may be executable to implement the methods described herein. The data in the memory may include one or more information reports . Each information report may include one or more pieces of information data pertaining to information or events that have occurred on the eBook reader . Information reports and data have been discussed in more detail above in relation to . The data in the memory may also include one or more error logs that include error information that has not yet been added to an information report . The instructions in the memory may include instructions for receiving information reporting configurations for gathering and preparing data for preparing information reports for sending an information report to a server and for logging an event locally as has been described above. The eBook reader may also include a housing that may include a transmitter and a receiver to allow transmission and reception of data between the device and a remote location. The transmitter and receiver may be combined into a transceiver . An antenna may be attached to the housing and electrically coupled to the transceiver . The eBook reader may also include not shown multiple transmitters multiple receivers multiple transceivers and or multiple antennas. The eBook reader may also include a signal detector not shown that may be used to detect and quantify the level of signals received by the transceiver . The signal detector may detect such signals as total energy pilot energy per pseudo noise PN chips power spectral density and other signals. The eBook reader may also include a digital signal processor DSP for use in processing signals. The eBook reader may also include one or more communication ports . Such communication ports may allow direct wired connections to be easily made with the device . Additionally input components and output components may be included with the eBook reader for various input to and output from the eBook reader . Examples of different kinds of input components include a keyboard keypad mouse microphone remote control device buttons joystick trackball touchpad light pen etc. Examples of different kinds of output components include a speaker printer etc. Instead of using the receiver the eBook reader may receive action requests such as actions to prepare error reports via the input components . For example the eBook reader may receive action requests over a USB port. The eBook reader may send error reports to a server through the output components instead of the transmitter. For example the eBook reader may send error reports to a server over a USB port. One specific type of output component is a display . A display controller may also be provided for converting data stored in the memory into text graphics and or moving images as appropriate shown on the display . The display may be an electronic paper display which is a display that is capable of holding text and images indefinitely without drawing electricity while allowing the text and images to be changed later. There are several different technologies that may be used to create an electronic paper display including electrophoretic display technology bistable liquid crystal display LCD technology cholesteric LCD display technology etc. Alternatively the display may utilize another image projection technology such as liquid crystal display LCD gas plasma light emitting diode LED etc. The various components of the device may be coupled together by a bus system which may include a power bus a control signal bus and a status signal bus in addition to a data bus. However for the sake of clarity the various busses are illustrated in as the bus system . Server side events may also be placed on the queue . Server side events may include notifications sent to a device to perform certain activities items downloaded server side errors etc. A device activity recorder service may receive notifications of the information logs and server side events from the queue . For example the device activity recorder service may pull notifications of the information logs and server side events from the queue . The device activity recorder service may process the information logs and server side events . The device activity recorder service may process the information logs by retrieving the information log files from the server side storage parsing the information log files and writing each event to a database . The device activity recorder service may also process the server side events . The device activity recorder service may take other actions for event analysis if necessary. Device events may be queried by a client . A client may include a process or device that is interested in the event data stored on the database . For example a client may include an external request for reports concerning the event data stored on the database . A client may also include external monitoring of the event data stored on the database . It may be beneficial to interleave server side events and device side events. For example time stamps of events may be used in order to interleave client side reports and server side reports to form a timeline of events. In other words a client may be able to view the correlation between server side events and device side events. Other tools and reports may be run off of the database . In one configuration only the data from the previous month may be queried by the database . The data may be stored long term in a central data repository where browse sales fulfillment and replenishment information is consolidated for all legal entities. As used herein the term determining encompasses a wide variety of actions and therefore determining can include calculating computing processing deriving investigating looking up e.g. looking up in a table a database or another data structure ascertaining and the like. Also determining can include receiving e.g. receiving information accessing e.g. accessing data in a memory and the like. Also determining can include resolving selecting choosing establishing and the like. The phrase based on does not mean based only on unless expressly specified otherwise. In other words the phrase based on describes both based only on and based at least on. The various illustrative logical blocks modules and circuits described herein may be implemented or performed with a general purpose processor a digital signal processor DSP an application specific integrated circuit ASIC a field programmable gate array signal FPGA or other programmable logic device discrete gate or transistor logic discrete hardware components or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor but in the alternative the processor may be any conventional processor controller microcontroller or state machine. A processor may also be implemented as a combination of computing devices e.g. a combination of a DSP and a microprocessor a plurality of microprocessors one or more microprocessors in conjunction with a DSP core or any other such configuration. The steps of a method or algorithm described herein may be embodied directly in hardware in a software module executed by a processor or in a combination of the two. A software module may reside in any form of storage medium that is known in the art. Some examples of storage media that may be used include RAM memory flash memory ROM memory EPROM memory EEPROM memory registers a hard disk a removable disk a CD ROM and so forth. A software module may comprise a single instruction or many instructions and may be distributed over several different code segments among different programs and across multiple storage media. An exemplary storage medium may be coupled to a processor such that the processor can read information from and write information to the storage medium. In the alternative the storage medium may be integral to the processor. The methods disclosed herein comprise one or more steps or actions for achieving the described method. The method steps and or actions may be interchanged with one another without departing from the scope of the claims. In other words unless a specific order of steps or actions is required for proper operation of the method that is being described the order and or use of specific steps and or actions may be modified without departing from the scope of the claims. The functions described may be implemented in hardware software firmware or any combination thereof. If implemented in software the functions may be stored as one or more instructions on a computer readable medium. A computer readable medium may be any available medium that can be accessed by a computer. By way of example and not limitation a computer readable medium may comprise RAM ROM EEPROM CD ROM or other optical disk storage magnetic disk storage or other magnetic storage devices or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Disk and disc as used herein includes compact disc CD laser disc optical disc digital versatile disc DVD floppy disk and Blu ray disc where disks usually reproduce data magnetically while discs reproduce data optically with lasers. Software or instructions may also be transmitted over a transmission medium. For example if the software is transmitted from a website server or other remote source using a coaxial cable fiber optic cable twisted pair digital subscriber line DSL or wireless technologies such as infrared radio and microwave then the coaxial cable fiber optic cable twisted pair DSL or wireless technologies such as infrared radio and microwave are included in the definition of transmission medium. Functions such as executing processing performing running determining notifying sending receiving storing requesting and or other functions may include performing the function using a web service. Web services may include software systems designed to support interoperable machine to machine interaction over a computer network such as the Internet. Web services may include various protocols and standards that may be used to exchange data between applications or systems. For example the web services may include messaging specifications security specifications reliable messaging specifications transaction specifications metadata specifications XML specifications management specifications and or business process specifications. Commonly used specifications like SOAP WSDL XML and or other specifications may be used. It is to be understood that the claims are not limited to the precise configuration and components illustrated above. Various modifications changes and variations may be made in the arrangement operation and details of the systems methods and apparatus described herein without departing from the scope of the claims.
260.715328
1,345
0.822372
eng_Latn
0.999948
9ab0fae11544f5c1998f503db5a7813e562c915a
2,029
md
Markdown
docs/fr/3.1.0/cordova/file/fileuploadoptions/fileuploadoptions.md
jsoref/cordova-docs
bd265628de93a0cd2070e984cc0bb32448445494
[ "Apache-2.0" ]
1
2016-03-06T18:14:12.000Z
2016-03-06T18:14:12.000Z
docs/fr/3.1.0/cordova/file/fileuploadoptions/fileuploadoptions.md
jsoref/cordova-docs
bd265628de93a0cd2070e984cc0bb32448445494
[ "Apache-2.0" ]
null
null
null
docs/fr/3.1.0/cordova/file/fileuploadoptions/fileuploadoptions.md
jsoref/cordova-docs
bd265628de93a0cd2070e984cc0bb32448445494
[ "Apache-2.0" ]
1
2019-02-15T19:10:38.000Z
2019-02-15T19:10:38.000Z
--- license: Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to you under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --- # FileUploadOptions Un objet `FileUploadOptions` peut être passé à la méthode `upload()` de l'objet `FileTransfer` pour spécifier des paramètres supplémentaires au script d'upload. ## Propriétés * **fileKey** : le nom de l'élément form. La valeur par défaut est `file`. (DOMString) * **fileName** : le nom de fichier à utiliser pour l'enregistrement sur le serveur. La valeur par défaut est `image.jpg`. (DOMString) * **mimeType** : le type mime des données à envoyer. La valeur par défaut est `image/jpeg`. (DOMString) * **params** : un ensemble de paires clé/valeur facultative à passer dans la requête HTTP. (Objet) * **chunkedMode** : s'il faut transmettre ou non les données en mode streaming de bloc. La valeur par défaut est `true`. (Boolean) * **headers** : un objet représentant les noms et valeurs d'en-têtes à transmettre. Utiliser un tableau permet de spécifier plusieurs valeurs. (Objet) ## Description Un objet `FileUploadOptions` peut être passé à la méthode `upload()` de l'objet `FileTransfer` pour spécifier des paramètres supplémentaires au script d'upload. ## Particularités de WP7 * la propriété **chunkedMode** est simplement ignorée sous WP7.
49.487805
406
0.737309
fra_Latn
0.806363
9ab1692b22eb373371524e638996e412a4b3ac25
15,763
md
Markdown
articles/azure-resource-manager/custom-providers/tutorial-resource-onboarding.md
tsunami416604/azure-docs.hu-hu
aeba852f59e773e1c58a4392d035334681ab7058
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/azure-resource-manager/custom-providers/tutorial-resource-onboarding.md
tsunami416604/azure-docs.hu-hu
aeba852f59e773e1c58a4392d035334681ab7058
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/azure-resource-manager/custom-providers/tutorial-resource-onboarding.md
tsunami416604/azure-docs.hu-hu
aeba852f59e773e1c58a4392d035334681ab7058
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: Oktatóanyag – erőforrás-előkészítés description: Az egyéni szolgáltatókon keresztüli erőforrás-előkészítés lehetővé teszi a meglévő Azure-erőforrások kezelését és kiterjesztését. ms.topic: tutorial ms.author: jobreen author: jjbfour ms.date: 09/17/2019 ms.openlocfilehash: 22d1dcd997a4ddb94aba184c5dace4c00509054d ms.sourcegitcommit: 829d951d5c90442a38012daaf77e86046018e5b9 ms.translationtype: MT ms.contentlocale: hu-HU ms.lasthandoff: 10/09/2020 ms.locfileid: "75649938" --- # <a name="tutorial-resource-onboarding-with-azure-custom-providers"></a>Oktatóanyag: erőforrás-előkészítés az Azure egyéni szolgáltatókkal Ebben az oktatóanyagban egy egyéni erőforrás-szolgáltatót helyez üzembe az Azure-ban, amely kibővíti a Azure Resource Manager API-t a Microsoft. CustomProviders/társítások erőforrás-típussal. Az oktatóanyag bemutatja, hogyan terjeszthető ki az erőforráscsoport azon erőforrásain kívüli meglévő erőforrások, amelyeken az egyéni szolgáltatói példány található. Ebben az oktatóanyagban az egyéni erőforrás-szolgáltatót egy Azure logikai alkalmazás működteti, de bármilyen nyilvános API-végpontot használhat. ## <a name="prerequisites"></a>Előfeltételek Az oktatóanyag elvégzéséhez ismernie kell a következőket: * Az [Egyéni Azure-szolgáltatók](overview.md)képességei. * Alapszintű információk az erőforrás-előkészítés [Egyéni szolgáltatókkal való](concepts-resource-onboarding.md)bevezetéséről. ## <a name="get-started-with-resource-onboarding"></a>Ismerkedés az erőforrás-előkészítéssel Ebben az oktatóanyagban két darabot kell telepíteni: az egyéni szolgáltatót és a társítást. A folyamat megkönnyítése érdekében igény szerint egyetlen sablont is használhat, amely mindkettőt üzembe helyezi. A sablon ezeket az erőforrásokat fogja használni: * Microsoft. CustomProviders/resourceProviders * Microsoft. Logic/munkafolyamatok * Microsoft. CustomProviders/társítások ```json { "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "parameters": { "location": { "type": "string", "allowedValues": [ "australiaeast", "eastus", "westeurope" ], "metadata": { "description": "Location for the resources." } }, "logicAppName": { "type": "string", "defaultValue": "[uniqueString(resourceGroup().id)]", "metadata": { "description": "Name of the logic app to be created." } }, "customResourceProviderName": { "type": "string", "defaultValue": "[uniqueString(resourceGroup().id)]", "metadata": { "description": "Name of the custom provider to be created." } }, "customResourceProviderId": { "type": "string", "defaultValue": "", "metadata": { "description": "The resource ID of an existing custom provider. Provide this to skip deployment of new logic app and custom provider." } }, "associationName": { "type": "string", "defaultValue": "myAssociationResource", "metadata": { "description": "Name of the custom resource that is being created." } } }, "resources": [ { "type": "Microsoft.Resources/deployments", "apiVersion": "2017-05-10", "condition": "[empty(parameters('customResourceProviderId'))]", "name": "customProviderInfrastructureTemplate", "properties": { "mode": "Incremental", "template": { "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "parameters": { "logicAppName": { "type": "string", "defaultValue": "[parameters('logicAppName')]" } }, "resources": [ { "type": "Microsoft.Logic/workflows", "apiVersion": "2017-07-01", "name": "[parameters('logicAppName')]", "location": "[parameters('location')]", "properties": { "state": "Enabled", "definition": { "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#", "actions": { "Switch": { "cases": { "Case": { "actions": { "CreateCustomResource": { "inputs": { "body": { "properties": "@addProperty(triggerBody().Body['properties'], 'myDynamicProperty', 'myDynamicValue')" }, "statusCode": 200 }, "kind": "Http", "type": "Response" } }, "case": "CREATE" } }, "default": { "actions": { "DefaultHttpResponse": { "inputs": { "statusCode": 200 }, "kind": "Http", "type": "Response" } } }, "expression": "@triggerBody().operationType", "type": "Switch" } }, "contentVersion": "1.0.0.0", "outputs": {}, "parameters": {}, "triggers": { "CustomProviderWebhook": { "inputs": { "schema": {} }, "kind": "Http", "type": "Request" } } } } }, { "type": "Microsoft.CustomProviders/resourceProviders", "apiVersion": "2018-09-01-preview", "name": "[parameters('customResourceProviderName')]", "location": "[parameters('location')]", "properties": { "resourceTypes": [ { "name": "associations", "mode": "Secure", "routingType": "Webhook,Cache,Extension", "endpoint": "[[listCallbackURL(concat(resourceId('Microsoft.Logic/workflows', parameters('logicAppName')), '/triggers/CustomProviderWebhook'), '2017-07-01').value]" } ] } } ], "outputs": { "customProviderResourceId": { "type": "string", "value": "[resourceId('Microsoft.CustomProviders/resourceProviders', parameters('customResourceProviderName'))]" } } } } }, { "type": "Microsoft.CustomProviders/associations", "apiVersion": "2018-09-01-preview", "name": "[parameters('associationName')]", "location": "global", "properties": { "targetResourceId": "[if(empty(parameters('customResourceProviderId')), reference('customProviderInfrastructureTemplate').outputs.customProviderResourceId.value, parameters('customResourceProviderId'))]", "myCustomInputProperty": "myCustomInputValue", "myCustomInputObject": { "Property1": "Value1" } } } ], "outputs": { "associationResource": { "type": "object", "value": "[reference(parameters('associationName'), '2018-09-01-preview', 'Full')]" } } } ``` ### <a name="deploy-the-custom-provider-infrastructure"></a>Az egyéni szolgáltatói infrastruktúra üzembe helyezése A sablon első része az egyéni szolgáltatói infrastruktúrát telepíti. Ez az infrastruktúra határozza meg a társítások erőforrásának hatását. Ha még nem ismeri az egyéni szolgáltatókat, tekintse meg az [egyéni szolgáltató alapjai](overview.md)című témakört. Üzembe helyezzük az egyéni szolgáltatói infrastruktúrát. Másolja, mentse és telepítse az előző sablont, vagy kövesse az infrastruktúra következő lépéseit, és telepítse az infrastruktúrát a Azure Portal használatával. 1. Nyissa meg az [Azure Portalt](https://portal.azure.com). 2. A **sablonok** keresése az **összes szolgáltatásban** vagy a fő keresőmező használatával: ![Sablonok keresése](media/tutorial-resource-onboarding/templates.png) 3. Válassza a **Hozzáadás** lehetőséget a **sablonok** ablaktáblán: ![Hozzáadás kiválasztása](media/tutorial-resource-onboarding/templatesadd.png) 4. Az **általános**területen adja meg az új sablon **nevét** és **leírását** : ![Sablon neve és leírása](media/tutorial-resource-onboarding/templatesdescription.png) 5. Hozzon létre egy Resource Manager-sablont a jelen cikk "az erőforrások bevezetésének első lépései" című szakaszának JSON-sablonban történő másolásával: ![Resource Manager-sablon létrehozása](media/tutorial-resource-onboarding/templatesarmtemplate.png) 6. A sablon létrehozásához válassza a **Hozzáadás** lehetőséget. Ha az új sablon nem jelenik meg, válassza a **frissítés**lehetőséget. 7. Válassza ki az újonnan létrehozott sablont, majd válassza a **telepítés**lehetőséget: ![Válassza ki az új sablont, majd válassza a telepítés lehetőséget.](media/tutorial-resource-onboarding/templateselectspecific.png) 8. Adja meg a kötelező mezők beállításait, majd válassza ki az előfizetést és az erőforráscsoportot. Az **egyéni erőforrás-szolgáltató azonosítója** mezőt üresen hagyhatja. | Beállítás neve | Kötelező? | Leírás | | ------------ | -------- | ----------- | | Hely | Igen | A sablon erőforrásainak helye. | | Logikai alkalmazás neve | Nem | A logikai alkalmazás neve. | | Egyéni erőforrás-szolgáltató neve | Nem | Az egyéni erőforrás-szolgáltató neve. | | Egyéni erőforrás-szolgáltató azonosítója | Nem | Egy meglévő egyéni erőforrás-szolgáltató, amely támogatja a társítási erőforrást. Ha itt értéket ad meg, a rendszer kihagyja a logikai alkalmazást és az egyéni szolgáltató üzembe helyezését. | | Társítás neve | Nem | A társítási erőforrás neve. | Minta paramétereinek: ![Adja meg a sablon paramétereit](media/tutorial-resource-onboarding/templatescustomprovider.png) 9. Nyissa meg a központi telepítést, és várjon, amíg befejeződik. A következő képernyőképhez hasonlóan kell megjelennie. Az új társítási erőforrásnak kimenetként kell megjelennie: ![Sikeres üzembe helyezés](media/tutorial-resource-onboarding/customproviderdeployment.png) Itt található az erőforráscsoport, a **rejtett típusok megjelenítése** beállítással: ![Egyéni szolgáltató üzembe helyezése](media/tutorial-resource-onboarding/showhidden.png) 10. Fedezze fel a logikai alkalmazás **futtatási előzményei** lapot, ahol megtekintheti a társítások létrehozásához szükséges hívásokat: ![A Logic app-futtatások előzményei](media/tutorial-resource-onboarding/logicapprun.png) ## <a name="deploy-additional-associations"></a>További társítások telepítése Miután beállította az egyéni szolgáltatói infrastruktúrát, egyszerűen üzembe helyezhet több társítást. A további társítások erőforráscsoporthoz nem kell megegyeznie az egyéni szolgáltatói infrastruktúrát üzembe helyező erőforráscsoporthoz. Társítás létrehozásához Microsoft. CustomProviders/resourceproviders/Write engedélyekkel kell rendelkeznie a megadott egyéni erőforrás-szolgáltatói AZONOSÍTÓhoz. 1. Nyissa meg a **Microsoft. CustomProviders/resourceProviders** erőforrást az előző üzemelő példány erőforráscsoporthoz. Be kell jelölnie a **rejtett típusok megjelenítése** jelölőnégyzetet: ![Ugrás az erőforráshoz](media/tutorial-resource-onboarding/showhidden.png) 2. Másolja az egyéni szolgáltató erőforrás-azonosító tulajdonságát. 3. A **sablonok** keresése az **összes szolgáltatásban** vagy a fő keresőmező használatával: ![Sablonok keresése](media/tutorial-resource-onboarding/templates.png) 4. Válassza ki a korábban létrehozott sablont, majd válassza a **telepítés**lehetőséget: ![Válassza ki a korábban létrehozott sablont, majd válassza a telepítés lehetőséget.](media/tutorial-resource-onboarding/templateselectspecific.png) 5. Adja meg a kötelező mezők beállításait, majd válassza ki az előfizetést és egy másik erőforráscsoportot. Az **egyéni erőforrás-szolgáltató azonosítója** beállításnál adja meg azt az erőforrás-azonosítót, amelyet a korábban telepített egyéni szolgáltatótól másolt. 6. Nyissa meg a központi telepítést, és várjon, amíg befejeződik. Most csak az új társítási erőforrást kell telepítenie: ![Új társítások erőforrás](media/tutorial-resource-onboarding/createdassociationresource.png) Ha szeretné, térjen vissza a logikai alkalmazás **futtatási előzményeihez** , és tekintse meg, hogy a logikai alkalmazásnak van-e másik hívása. A logikai alkalmazás frissítésével további funkciókat is kibővítheti az egyes létrehozott társításokhoz. ## <a name="getting-help"></a>Segítség kérése Ha kérdése van az egyéni Azure-szolgáltatókkal kapcsolatban, próbálja meg megkérdezni őket [stack Overflowon](https://stackoverflow.com/questions/tagged/azure-custom-providers). Egy hasonló kérdés megválaszolása már megtörtént, ezért először A feladás előtt érdemes megnéznie. A címke hozzáadásával `azure-custom-providers` gyors választ kaphat!
53.982877
504
0.557381
hun_Latn
0.999776
9ab2321225bc1236c4d6eae04b3363da0f8b1837
1,877
md
Markdown
README.md
IBM/appIdr
d429b438d7610e3d664c1707d8fd46abf88c67d9
[ "Apache-2.0" ]
null
null
null
README.md
IBM/appIdr
d429b438d7610e3d664c1707d8fd46abf88c67d9
[ "Apache-2.0" ]
null
null
null
README.md
IBM/appIdr
d429b438d7610e3d664c1707d8fd46abf88c67d9
[ "Apache-2.0" ]
null
null
null
# AppIdR <img src='man/figures/hexsticker.png' align="right" height="139" /> <!-- badges: start --> [![Lifecycle: experimental](https://img.shields.io/badge/lifecycle-experimental-orange.svg)](https://www.tidyverse.org/lifecycle/#experimental) [![R build status](https://github.com/ibm/AppIdR/workflows/R-CMD-check/badge.svg)](https://github.com/ibm/AppIdR/actions) <!-- badges: end --> The `AppIdR` is a package to get authentication with [App ID IBM](https://www.ibm.com/cloud/app-id) service in the Shiny Apps. ## Install ``` r remotes::install_github("IBM/AppIdR") ``` ## Configure In the first, you need generate a config yaml file with: ``` r gen_appid_config(name = "Myapp") ``` And resulting # appid_config.yml name: Myapp config: key: !expr Sys.getenv("APPID_KEY") # clientId secret: !expr Sys.getenv("APPID_SECRET") # secret redirect_uri: !expr Sys.getenv("APP_URL") # app url base_url: !expr Sys.getenv("APPID_URL") # oAuthServerUrl authorize: authorization access: token scope: openid password: !expr Sys.getenv("SECRET") # encrypt token You should too, create a `.Renviron` file with the credentials. ## Example ``` r require(shiny) require(shinydashboard) require(AppIdR) ui <- dashboardPage( dashboardHeader(user_info(), # show user info title = "My dashboard"), dashboardSidebar(), dashboardBody() ) server <- function(input, output, session) { # if you want get user info in app userinfo <- callModule(get_user_info, "userinfo") output$user <- renderText({userinfo()}) } # modified shinyApp shinyAppId(ui, server) ``` ## References 1. Package using Auth0 service [curso-r/auth0](https://github.com/curso-r/auth0) 2. Gist with Sketch of Shiny + Oauth [hadley/shiny-oauth.r](https://gist.github.com/hadley/144c406871768d0cbe66b0b810160528)
24.064103
129
0.690464
eng_Latn
0.363157
9ab25215f47a88e4b01b6883e4746dc4802e7e06
4,644
md
Markdown
docs/Mineable-Pool-List.md
XDagger/docs.xdag.io
5482719716370751c1b310d25247b02986877375
[ "MIT" ]
1
2021-03-16T03:17:00.000Z
2021-03-16T03:17:00.000Z
docs/Mineable-Pool-List.md
xrdavies/docs.xdag.io
5482719716370751c1b310d25247b02986877375
[ "MIT" ]
null
null
null
docs/Mineable-Pool-List.md
xrdavies/docs.xdag.io
5482719716370751c1b310d25247b02986877375
[ "MIT" ]
null
null
null
--- id: pool-list title: Pool list --- ### Donate Minable Pools > **fee: pool fee.** > **block: reward to the miner who find the block.** > **direct: reward to the miners who help to find the block.** > **donate: donation to community.** *** Website: http://pool.xdag.us Location: Japan Pool: fee:2% - block:10% - direct:10% - donate:5% Mining port: **pool.xdag.us:13654** Owner: Frozen *** Website: http://xdagger.corpopool.com/ Location: Europe Pool: fee:1% - block:1% - direct:1% - donate:1% Mining port: **xdag.corpopool.com:443** Owner: AnotherYou *** Website: http://xdagpool.com Location: Germany Pool: fee:1% - block:0% - direct:0% - donate:1% Mining port: **pool.xdagpool.com:13654** Owner: kbs1 *** Website: https://www.vspool.com/xdag Location: Germany, Singapore Pool: fee:1% block reward:5% direct reward:10% donate:1% Mining port: **xdag.vspool.com:13654 | cn.xdag.vspool.com:13654** Owner: Thomasab *** Website: http://xdag.coolmine.top/ Location: Singapore Pool: fee:1% - block:1% - direct:1% - donate:1% Mining port: **xdag.coolmine.top:13654** Owner: Alex *** Website: http://172.105.216.53/stats.txt Location: Japan Pool: fee:1% - block:6% - direct:8% - donate:1% Mining port: **172.105.216.53:3355** Owner: Khaak *** Website: https://xdag.poolaroid.com/ Location: Germany Pool: fee:1% - block:0% - direct:80% - donate:1% Mining port: **xdag.poolaroid.cash:443** Owner: asdSd1n23ffd *** Website: https://xdagger.yourspool.com/ Location: Europe Pool: fee:1% - block:15% - direct:5% - donate:1% Mining port: **xdag.yourspool.com:443** Owner: Zyzakus *** Website: https://xdag.signal2noi.se/ Location: Germany Pool: fee:0% - block:0% - direct:0% - donate:1% Mining port: **pool1.xdag.signal2noi.se:13655** | **pool1.xdag.signal2noi.se:443** Owner: Gribbly *** Website: http://xdag.pool.ga/ Location: Canada Pool: fee:1% - block:1% - direct:1% - donate:1% Mining port: **142.44.143.234:777** Owner: RamsesIII *** Website: https://xdagpool.org/ Location: Finland Pool: fee:1% - block:0% - direct:0% - donate:1% Mining port: **95.216.36.234:13654** Owner: Filipo *** Website: https://xdagmining.com/ Location: Germany Pool: fee:1% - block:10% - direct:5% - donate:1% Mining port: **78.46.82.220:13654** Owner: Maximous *** Website: http://119.28.37.154/status Location: Hongkong Pool: fee:2% - block:0% - direct:0% - donate:1% Mining port: **xdag.f2pool.com:13654** Owner: w e *** Website: http://xdag.jeepool.com/ Location: Germany Pool: fee:1% - block:1% - direct:1% - donate:1% Mining port: **xdag.jeepool.com:13654** Owner: Bitopia *** *** ### 捐赠社区矿池列表 > **矿池费: 矿池使用费用** > **爆款奖励: 从总收益中奖励给挖到块的矿工** > **参与奖励: 从总收益中奖励给参与挖块的矿工** > **捐赠社区: 捐赠给社区账户用于社区开发维护** *** 网址 http://pool.xdag.us 位置 日本 配置 矿池费0% 爆块奖励10% 参与奖励5% 捐赠社区1% 挖矿 **pool.xdag.us:13654** 池主 Frozen *** 网址 http://xdagger.corpopool.com/ 位置 欧洲中部 配置 矿池费1% 爆块奖励1% 参与奖励1% 捐赠社区1% 挖矿 **xdag.corpopool.com:443** 池主 AnotherYou *** 网址 http://xdagpool.com 位置 德国 配置 矿池费1% 爆块奖励0% 参与奖励0% 捐赠社区1% 挖矿 **pool.xdagpool.com:13654** 池主 kbs1 *** 网址 https://www.vspool.com/xdag 位置 德国、新加坡 配置 矿池费1% 爆块奖励5% 参与奖励10% 捐赠社区1% 挖矿 **xdag.vspool.com:13654** | **cn.xdag.vspool.com:13654** 池主 Thomasab *** 网址 http://xdag.coolmine.top/ 位置 新加坡 配置 矿池费1% 爆块奖励1% 参与奖励1% 捐赠社区1% 挖矿 **xdag.coolmine.top:13654** 池主 Alex *** 网址 http://172.105.216.53/stats.txt 位置 日本 配置 矿池费1% 爆块奖励6% 参与奖励8% 捐赠社区1% 挖矿 **172.105.216.53:3355** 池主 Khaak *** 网址 https://xdag.poolaroid.com/ 位置 德国 配置 矿池费0% 爆块奖励0% 参与奖励80% 捐赠社区1% 挖矿 **xdag.poolaroid.cash:443** 池主 asdSd1n23ffd *** 网址 https://xdagger.yourspool.com/ 位置 欧洲中部 配置 矿池费1% 爆块奖励15% 参与奖励5% 捐赠社区1% 挖矿 **xdag.yourspool.com:443** 池主 Zyzakus *** 网址 https://xdag.signal2noi.se/ 位置 德国 配置 矿池费0% 爆块奖励0% 参与奖励0% 捐赠社区1% 挖矿 **pool1.xdag.signal2noi.se:13655** | **pool1.xdag.signal2noi.se:443** 池主 Gribbly *** 网站 http://xdag.pool.ga/ 位置 加拿大 配置 矿池费1% 爆块奖励1% 参与奖励1% 捐赠社区1% 挖矿 **142.44.143.234:777** 池主 RamsesIII *** 网站 https://xdagpool.org/ 位置 芬兰 配置 矿池费1% 爆块奖励0% 参与奖励0% 捐赠社区1% 挖矿 **95.216.36.234:13654** 池主 Filipo *** 网站 https://xdagmining.com/ 位置 德国 配置 矿池费1% 爆块奖励10% 参与奖励5% 捐赠社区1% 挖矿 **78.46.82.220:13654** 池主 Maximous *** 网站 http://119.28.37.154/status 位置 香港 配置 矿池费2% 爆块奖励0% 参与奖励0% 捐赠社区1% 挖矿 **xdag.f2pool.com:13654** 池主 w e *** 网站 http://xdag.jeepool.com/ 位置 德国 配置 矿池费1% 爆块奖励1% 参与奖励1% 捐赠社区1% 挖矿 **xdag.jeepool.com:13654** 池主 Bitopia
21.400922
84
0.63006
yue_Hant
0.22516
9ab2a0053024354108146661b4345373648d0cde
1,379
md
Markdown
2020/11/12/2020-11-12 03:10.md
zhzhzhy/WeiBoHot_history
32ce4800e63f26384abb17d43e308452c537c902
[ "MIT" ]
3
2020-07-14T14:54:15.000Z
2020-08-21T06:48:24.000Z
2020/11/12/2020-11-12 03:10.md
zhzhzhy/WeiBoHot_history
32ce4800e63f26384abb17d43e308452c537c902
[ "MIT" ]
null
null
null
2020/11/12/2020-11-12 03:10.md
zhzhzhy/WeiBoHot_history
32ce4800e63f26384abb17d43e308452c537c902
[ "MIT" ]
null
null
null
2020年11月12日03时数据 Status: 200 1.周深 裸辞某个程度上是在解救自己 微博热度:280814 2.百香果遇害女童母亲回应案件再审 微博热度:171978 3.刘涛双11直播11小时 微博热度:166643 4.令人心动的offer 微博热度:164864 5.私生跟车追尾致木子洋受伤 微博热度:161803 6.大润发 女装尺码建议表 微博热度:136723 7.言承旭张熙恩 微博热度:124855 8.专家建议女性退休年龄延至55岁 微博热度:110514 9.特朗普团队称出现死人票 微博热度:100403 10.王骁太凡尔赛了 微博热度:95827 11.李易峰 酒窝和发色都是原厂配件 微博热度:95484 12.李诞 娱乐圈90%艺人靠运气 微博热度:90582 13.从结婚开始恋爱 微博热度:75470 14.福克斯新闻掐断白宫发布会直播 微博热度:65572 15.顶楼 微博热度:64520 16.隐秘而伟大 微博热度:62352 17.土耳其一工人在奶厂泡牛奶浴 微博热度:62322 18.哔哩哔哩注册呵呵呵商标 微博热度:62278 19.韩国试飞载人空中出租车 微博热度:62232 20.白敬亭夜礼服假面 微博热度:62186 21.双11 微博热度:62160 22.双十一最该打折的东西 微博热度:62101 23.双11啥都没买是省了还是亏了 微博热度:62060 24.南京拦截一批新冠阳性进口冷链食品 微博热度:61997 25.凑单 微博热度:61977 26.cat回归 微博热度:59769 27.杜锋被驱逐 微博热度:56488 28.CBA 微博热度:55375 29.国台办回应美大选是否会影响两岸关系 微博热度:54955 30.著名播音艺术家关山去世 微博热度:54916 31.美国1.5万只貂死于新冠病毒 微博热度:54142 32.天津一份大比目鱼外包装样本核酸阳性 微博热度:53335 33.13年前被烧伤双手的消防员仍在一线 微博热度:51366 34.新修改的著作权法明年6月1日起施行 微博热度:49030 35.燕云台 微博热度:48453 36.今日闵行 微博热度:46507 37.使徒行者3大结局 微博热度:46368 38.易烊千玺衣服设计灵感来自奥特曼吧 微博热度:46197 39.57岁妈妈和儿子同年高考上大学 微博热度:45485 40.如意芳霏 微博热度:42544 41.李晋晔情商好高 微博热度:41747 42.北大保安小哥英语词汇量一万五 微博热度:40294 43.九尾狐传 微博热度:38116 44.赵丽颖演的幸福好可爱 微博热度:37566 45.褚嬴被发现了 微博热度:34451 46.你最后网购的那个东西拥有了2万倍 微博热度:34038 47.棋魂 微博热度:30955 48.空降兵与月亮同框了 微博热度:29689 49.郑爽陈立农撞衫 微博热度:29302 50.双十一后的凡尔赛人 微博热度:29061
6.759804
20
0.774474
yue_Hant
0.261488
9ab316e8693ff8e6e5a436aee632feae8c8158cb
26
md
Markdown
README.md
maSchoeller/dhbw-vorlesung-datenbank
049dd7309b636ae3fb73b088321f518ee396db4b
[ "MIT" ]
null
null
null
README.md
maSchoeller/dhbw-vorlesung-datenbank
049dd7309b636ae3fb73b088321f518ee396db4b
[ "MIT" ]
null
null
null
README.md
maSchoeller/dhbw-vorlesung-datenbank
049dd7309b636ae3fb73b088321f518ee396db4b
[ "MIT" ]
null
null
null
# dhbw-vorlesung-datenbank
26
26
0.846154
deu_Latn
0.445794
9ab34789a48ea59b42c027b0dbf95b83f8e4aa0f
199
md
Markdown
README.md
crmacd/adventofcode
826eff0b80ad859803385ce245e9dad9752abeb7
[ "CC0-1.0" ]
null
null
null
README.md
crmacd/adventofcode
826eff0b80ad859803385ce245e9dad9752abeb7
[ "CC0-1.0" ]
null
null
null
README.md
crmacd/adventofcode
826eff0b80ad859803385ce245e9dad9752abeb7
[ "CC0-1.0" ]
null
null
null
# Advent Of Code ## Challenges * Results May Vary ## Note * Repo for www.adventofcode.com * Many solutions are not the best programming practices these were written for function not form/function.
22.111111
106
0.768844
eng_Latn
0.98779
9ab526ff6ba266f32ce9536a6f7a31905dd86c9c
1,504
md
Markdown
website/content/ChapterFour/0028.Implement-strStr.md
lhfelis/LeetCode-Go
d46930712e24e79900c3f45efa4ee41cfc8d27bc
[ "MIT" ]
1
2021-01-25T04:26:10.000Z
2021-01-25T04:26:10.000Z
website/content/ChapterFour/0028.Implement-strStr.md
kecode/LeetCode-Go
bfb9839828e37b4897d99b2e94f732e8b22b85a3
[ "MIT" ]
null
null
null
website/content/ChapterFour/0028.Implement-strStr.md
kecode/LeetCode-Go
bfb9839828e37b4897d99b2e94f732e8b22b85a3
[ "MIT" ]
1
2021-08-12T06:32:09.000Z
2021-08-12T06:32:09.000Z
# [28. Implement strStr()](https://leetcode.com/problems/implement-strstr/) ## 题目 Implement strStr(). Return the index of the first occurrence of needle in haystack, or -1 if needle is not part of haystack. **Example 1**: ``` Input: haystack = "hello", needle = "ll" Output: 2 ``` **Example 2**: ``` Input: haystack = "aaaaa", needle = "bba" Output: -1 ``` **Clarification**: What should we return when needle is an empty string? This is a great question to ask during an interview. For the purpose of this problem, we will return 0 when needle is an empty string. This is consistent to C's strstr() and Java's indexOf(). ## 题目大意 实现一个查找 substring 的函数。如果在母串中找到了子串,返回子串在母串中出现的下标,如果没有找到,返回 -1,如果子串是空串,则返回 0 。 ## 解题思路 这一题比较简单,直接写即可。 ## 代码 ```go package leetcode import "strings" // 解法一 func strStr(haystack string, needle string) int { for i := 0; ; i++ { for j := 0; ; j++ { if j == len(needle) { return i } if i+j == len(haystack) { return -1 } if needle[j] != haystack[i+j] { break } } } } // 解法二 func strStr1(haystack string, needle string) int { return strings.Index(haystack, needle) } ``` ---------------------------------------------- <div style="display: flex;justify-content: space-between;align-items: center;"> <p><a href="https://books.halfrost.com/leetcode/ChapterFour/0027.Remove-Element/">⬅️上一页</a></p> <p><a href="https://books.halfrost.com/leetcode/ChapterFour/0029.Divide-Two-Integers/">下一页➡️</a></p> </div>
16
138
0.633644
eng_Latn
0.572501
9ab575a5ffca1a398765a764be02e7494ce64377
1,522
md
Markdown
docs/interfaces/_src_customer_structures_.getdynamicconfigurationrequest.md
livechat/lc-sdk-js
65c22c23af60f3737a4c0880cf57ffca34e2fa64
[ "Apache-2.0" ]
2
2020-08-26T08:13:41.000Z
2020-09-22T13:14:22.000Z
docs/interfaces/_src_customer_structures_.getdynamicconfigurationrequest.md
livechat/lc-sdk-js
65c22c23af60f3737a4c0880cf57ffca34e2fa64
[ "Apache-2.0" ]
10
2020-09-18T18:55:19.000Z
2022-02-13T20:53:41.000Z
docs/interfaces/_src_customer_structures_.getdynamicconfigurationrequest.md
livechat/lc-sdk-js
65c22c23af60f3737a4c0880cf57ffca34e2fa64
[ "Apache-2.0" ]
4
2021-03-09T14:03:24.000Z
2021-11-19T17:30:59.000Z
[@livechat/lc-sdk-js](../README.md) › [Globals](../globals.md) › ["src/customer/structures"](../modules/_src_customer_structures_.md) › [GetDynamicConfigurationRequest](_src_customer_structures_.getdynamicconfigurationrequest.md) # Interface: GetDynamicConfigurationRequest ## Hierarchy * **GetDynamicConfigurationRequest** ## Index ### Properties * [channel_type](_src_customer_structures_.getdynamicconfigurationrequest.md#optional-channel_type) * [group_id](_src_customer_structures_.getdynamicconfigurationrequest.md#optional-group_id) * [test](_src_customer_structures_.getdynamicconfigurationrequest.md#optional-test) * [url](_src_customer_structures_.getdynamicconfigurationrequest.md#optional-url) ## Properties ### `Optional` channel_type • **channel_type**? : *undefined | string* *Defined in [src/customer/structures.ts:189](https://github.com/livechat/lc-sdk-js/blob/ac28f06/src/customer/structures.ts#L189)* ___ ### `Optional` group_id • **group_id**? : *undefined | number* *Defined in [src/customer/structures.ts:187](https://github.com/livechat/lc-sdk-js/blob/ac28f06/src/customer/structures.ts#L187)* ___ ### `Optional` test • **test**? : *undefined | false | true* *Defined in [src/customer/structures.ts:190](https://github.com/livechat/lc-sdk-js/blob/ac28f06/src/customer/structures.ts#L190)* ___ ### `Optional` url • **url**? : *undefined | string* *Defined in [src/customer/structures.ts:188](https://github.com/livechat/lc-sdk-js/blob/ac28f06/src/customer/structures.ts#L188)*
31.061224
229
0.764783
yue_Hant
0.285001
9ab5be2066bcf210127d8f6820871c606900c61c
1,174
md
Markdown
CHANGELOG.md
YunWisdom/nutui
d7718fcac218a17d6e3d71d481f71ba505e6e53f
[ "MIT" ]
1
2021-04-22T01:23:31.000Z
2021-04-22T01:23:31.000Z
CHANGELOG.md
Miazzy/nutui
d7718fcac218a17d6e3d71d481f71ba505e6e53f
[ "MIT" ]
null
null
null
CHANGELOG.md
Miazzy/nutui
d7718fcac218a17d6e3d71d481f71ba505e6e53f
[ "MIT" ]
1
2021-04-22T01:23:34.000Z
2021-04-22T01:23:34.000Z
## v3.0.0-beta.13 `2021-04-18` * :zap: feat(cell-group): 列表组组件 @richard1015 * :bug: fix(toast): function undefined bug #444 @richard1015 * :bug: fix(address): style bug @szg2008 * :bug: fix(pxCheck): typeof number check (#441) @xjh22222228 * :zap: feat: steps @szg2008 @ailululu ## v3.0.0-beta.12 `2021-04-14` * :zap: chore(dialog): 重构函数式,标签式 @richard1015 * :zap: chore(textarea): 重构展示方式 & 部分Api @richard1015 * :zap: doc: icon size type desc modify @richard1015 * :bug: fix(inputnumber): blur event calc error @richard1015 * :bug: fix: button icon style bug @richard1015 * :bug: fix: tabbar 样式修改 (#445) @Drjingfubo * :bug: fix: radio @szg2008 ## v3.0.0-beta.11 `2021-04-01` * :zap: chore(popup): dep component 'show' replace 'visible' @richard1015 ## v3.0.0-beta.10 `2021-04-7` * :bug: fix(datepicker):修复日期展示问题 #428 @yangkaixuan ## v3.0.0-beta.9 `2021-3-31` > :tada: :tada: :tada: NutUI 3.0 来了! ### 新特性 - :zap: 全新的架构,基于 vite 构建 - :lipstick: 全新的视觉样式,参照京东 APP v9.0视觉规范 - :sparkles: 全新的按需加载方式 - :art: 支持定制主题 - :sparkles: 支持 TypeScript - :sparkles: 重构所有 2.0 组件 - :sparkles: 详尽的文档和示例 ### ⚠️ 升级必读 - 3.x 基于 Vue3 版本 3.x 不兼容 2.x,建议直接升级到 3.x 的最新版本
19.898305
74
0.660136
yue_Hant
0.255505
9ab672d41a9a8a47f96c852ff240858245bb63ce
6,986
md
Markdown
docs/standard/datetime/enumerate-time-zones.md
mtorreao/docs.pt-br
e080cd3335f777fcb1349fb28bf527e379c81e17
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/standard/datetime/enumerate-time-zones.md
mtorreao/docs.pt-br
e080cd3335f777fcb1349fb28bf527e379c81e17
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/standard/datetime/enumerate-time-zones.md
mtorreao/docs.pt-br
e080cd3335f777fcb1349fb28bf527e379c81e17
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: 'Como: enumerar os fusos horários presentes em um computador' ms.date: 04/10/2017 dev_langs: - csharp - vb helpviewer_keywords: - time zones [.NET], enumerating - enumerating time zones [.NET] ms.assetid: bb7a42ab-6bd9-4c5c-b734-5546d51f8669 ms.openlocfilehash: 276c13bb95685e9588e25238f1a6e45cd57a6c91 ms.sourcegitcommit: 965a5af7918acb0a3fd3baf342e15d511ef75188 ms.translationtype: MT ms.contentlocale: pt-BR ms.lasthandoff: 11/18/2020 ms.locfileid: "94817959" --- # <a name="how-to-enumerate-time-zones-present-on-a-computer"></a>Como: enumerar os fusos horários presentes em um computador Trabalhar com êxito com um fuso horário designado requer que informações sobre o fuso horário em questão estejam disponíveis no sistema. Os sistemas operacionais Windows XP e Windows Vista armazenam essas informações no registro. Embora o número total de fusos horários existentes seja grande, o Registro contém informações apenas sobre um subconjunto deles. Além disso, o Registro em si é uma estrutura dinâmica cujo conteúdo está sujeito a alterações deliberadas ou acidentais. Como resultado, um aplicativo nem sempre pode presumir que um determinado fuso horário esteja definido e disponível no sistema. A primeira etapa para muitos aplicativos que usam informações de fuso horário é determinar se os fusos horários necessários estão disponíveis no sistema local ou dar ao usuário uma lista dos fusos horários entre os quais escolher. Isso requer que o aplicativo enumere os fusos horários definidos no sistema local. > [!NOTE] > Se um aplicativo depender da presença de um fuso horário específico que não possa ser definido em um sistema local, o aplicativo poderá garantir sua presença serializando e desserializando as informações sobre o fuso horário. O fuso horário pode então ser adicionado a um controle de lista para que o usuário do aplicativo possa selecioná-lo. Para obter detalhes, consulte [como: salvar fusos horários em um recurso incorporado](save-time-zones-to-an-embedded-resource.md) e [como restaurar fusos horários de um recurso inserido](restore-time-zones-from-an-embedded-resource.md). ### <a name="to-enumerate-the-time-zones-present-on-the-local-system"></a>Para enumerar os fusos horários presentes no sistema local 1. Chame o método <xref:System.TimeZoneInfo.GetSystemTimeZones%2A?displayProperty=nameWithType> . O método retorna uma <xref:System.Collections.ObjectModel.ReadOnlyCollection%601> coleção genérica de <xref:System.TimeZoneInfo> objetos. As entradas na coleção são classificadas por sua <xref:System.TimeZoneInfo.DisplayName%2A> propriedade. Por exemplo: [!code-csharp[System.TimeZone2.Concepts#1](../../../samples/snippets/csharp/VS_Snippets_CLR_System/system.TimeZone2.Concepts/CS/TimeZone2Concepts.cs#1)] [!code-vb[System.TimeZone2.Concepts#1](../../../samples/snippets/visualbasic/VS_Snippets_CLR_System/system.TimeZone2.Concepts/VB/TimeZone2Concepts.vb#1)] 2. Enumere os <xref:System.TimeZoneInfo> objetos individuais na coleção usando um `foreach` loop (em C#) ou um `For Each` ...`Next` loop (em Visual Basic) e execute qualquer processamento necessário em cada objeto. Por exemplo, o código a seguir enumera a <xref:System.Collections.ObjectModel.ReadOnlyCollection%601> coleção de <xref:System.TimeZoneInfo> objetos retornados na etapa 1 e lista o nome de exibição de cada fuso horário no console. [!code-csharp[System.TimeZone2.Concepts#12](../../../samples/snippets/csharp/VS_Snippets_CLR_System/system.TimeZone2.Concepts/CS/TimeZone2Concepts.cs#12)] [!code-vb[System.TimeZone2.Concepts#12](../../../samples/snippets/visualbasic/VS_Snippets_CLR_System/system.TimeZone2.Concepts/VB/TimeZone2Concepts.vb#12)] ### <a name="to-present-the-user-with-a-list-of-time-zones-present-on-the-local-system"></a>Para apresentar ao usuário uma lista de fusos horários presentes no sistema local 1. Chame o método <xref:System.TimeZoneInfo.GetSystemTimeZones%2A?displayProperty=nameWithType> . O método retorna uma <xref:System.Collections.ObjectModel.ReadOnlyCollection%601> coleção genérica de <xref:System.TimeZoneInfo> objetos. 2. Atribua a coleção retornada na etapa 1 à `DataSource` propriedade de um controle de lista do Windows Forms ou ASP.net. 3. Recupere o <xref:System.TimeZoneInfo> objeto que o usuário selecionou. O exemplo fornece uma ilustração para um aplicativo do Windows. ## <a name="example"></a>Exemplo O exemplo inicia um aplicativo do Windows que exibe os fusos horários definidos em um sistema em uma caixa de listagem. Em seguida, o exemplo exibe uma caixa de diálogo que contém o valor da <xref:System.TimeZoneInfo.DisplayName%2A> Propriedade do objeto de fuso horário selecionado pelo usuário. [!code-csharp[System.TimeZone2.Concepts#2](../../../samples/snippets/csharp/VS_Snippets_CLR_System/system.TimeZone2.Concepts/CS/TimeZone2Concepts.cs#2)] [!code-vb[System.TimeZone2.Concepts#2](../../../samples/snippets/visualbasic/VS_Snippets_CLR_System/system.TimeZone2.Concepts/VB/TimeZone2Concepts.vb#2)] A maioria dos controles de lista (como o <xref:System.Windows.Forms.ListBox?displayProperty=nameWithType> <xref:System.Web.UI.WebControls.BulletedList?displayProperty=nameWithType> controle ou) permite que você atribua uma coleção de variáveis de objeto à sua `DataSource` propriedade, desde que essa coleção implemente a <xref:System.Collections.IEnumerable> interface. (A <xref:System.Collections.ObjectModel.ReadOnlyCollection%601> classe genérica faz isso.) Para exibir um objeto individual na coleção, o controle chama o método desse objeto `ToString` para extrair a cadeia de caracteres usada para representar o objeto. No caso de <xref:System.TimeZoneInfo> objetos, o `ToString` método retorna o <xref:System.TimeZoneInfo> nome de exibição do objeto (o valor de sua <xref:System.TimeZoneInfo.DisplayName%2A> Propriedade). > [!NOTE] > Como os controles de lista chamam o método de um objeto `ToString` , você pode atribuir uma coleção de <xref:System.TimeZoneInfo> objetos ao controle, fazer com que o controle exiba um nome significativo para cada objeto e recuperar o <xref:System.TimeZoneInfo> objeto que o usuário selecionou. Isso elimina a necessidade de extrair uma cadeia de caracteres para cada objeto na coleção, atribuir a cadeia de caracteres a uma coleção que, por sua vez, é atribuída à propriedade do controle `DataSource` , recuperar a cadeia de caracteres que o usuário selecionou e, em seguida, usar essa cadeia de caracteres para extrair o objeto que ele descreve. ## <a name="compiling-the-code"></a>Compilando o código Este exemplo requer: - Que os seguintes namespaces sejam importados: <xref:System> (em código C#) <xref:System.Collections.ObjectModel> ## <a name="see-also"></a>Confira também - [Datas, horas e fusos horários](index.md) - [Como: salvar fusos horários em um recurso inserido](save-time-zones-to-an-embedded-resource.md) - [Como: restaurar fusos horários de um recurso inserido](restore-time-zones-from-an-embedded-resource.md)
94.405405
921
0.800601
por_Latn
0.992622
9ab6e53924e0495a468e62aa9c9dd9eef956f0d4
15,778
md
Markdown
site/en/docs/webstore/get_started_simple/index.md
AnupBS28/developer.chrome.com
7b0f1be6febf1cdd0dce36234c874242941f4b16
[ "Apache-2.0" ]
3
2021-01-11T14:26:17.000Z
2022-01-06T21:36:07.000Z
site/en/docs/webstore/get_started_simple/index.md
AnupBS28/developer.chrome.com
7b0f1be6febf1cdd0dce36234c874242941f4b16
[ "Apache-2.0" ]
null
null
null
site/en/docs/webstore/get_started_simple/index.md
AnupBS28/developer.chrome.com
7b0f1be6febf1cdd0dce36234c874242941f4b16
[ "Apache-2.0" ]
1
2022-01-30T21:27:30.000Z
2022-01-30T21:27:30.000Z
--- layout: 'layouts/doc-post.njk' title: "Tutorial: Getting Started" date: 2017-08-30 description: > How to add an existing web app to the Chrome Web Store. --- Got a web app? Follow this tutorial to add your existing web app to the Chrome Web Store. If you don't already have a web app, you can still follow this tutorial with any other website you own. Any website can be an installable web app, although the site should follow a few [design principles][1]. !!!.aside.aside--note If you're interested in developing a [Chrome Extension][2] or [Chrome App][3] instead of a web app, follow the [extension tutorial][4] or the [Chrome App tutorial][5]. Then return to this page, and start at [Step 5: Zip up your app][6]. !!! ## Step 1: Get ready Before you start, choose your web app and make sure that you can verify your ownership of its site. Also, note the location of the [Chrome Web Store Developer Dashboard][7]. 1. Choose the web app that you want to publish, making a note of its homepage URL and any other URLs that it includes. Don't have a web app? For the purposes of this tutorial, you can [create a blog][8] on Blogger and treat it like an app. Why Blogger? Because it automatically registers you as the owner of the blog's subdomain, which makes publishing your app a bit easier. 2. Make sure that you'll be able to prove ownership of the site that hosts your web app. (Don't worry about this if you created a Blogger blog; you already own it.) For information about claiming ownership, see the Google Webmaster Tools help article [Adding a site][9]. !!!.aside.aside--note You can wait to actually verify ownership until just before you publish the app. !!! 3. Bookmark the [Chrome Web Store Developer Dashboard][10]. ## Step 2: Create a manifest file Every app needs a manifest—a JSON-formatted file named `manifest.json` that describes the app. 1. Create a directory called `myapp` to contain your app's manifest. !!!.aside.aside--note You can use a different directory name, if you like. The only name that can't change is `manifest.json`. !!! 2. In this new directory, create a text file named `manifest.json` and copy the following code into it, changing the italicized text to reflect your app: ```json { "name": "Great App Name", "description": "Pithy description (132 characters or less, no HTML)", "version": "0.0.0.1", "manifest_version": 2, "icons": { "128": "icon_128.png" }, "app": { "urls": [ "http://mysubdomain.example.com/" ], "launch": { "web_url": "http://mysubdomain.example.com/" } } } ``` All new apps must specify manifest version 2 without quotes (see [manifest version documentation][12]). The remaining fields are fairly straightforward, except for "urls" and "web_url". In this example, those fields have the same values but mean different things. * "web_url": Specifies the page that the browser shows when a user launches the app. * "urls": Specifies the starting URLs of all other web pages that are in the app. In this example, both http://mysubdomain.example.com/**page1.html** and http://mysubdomain.example.com/**subdir/page2.html\*\* would be in the app. You don't need to specify the URLs for included files or for assets such as images. Currently, the domain name must be followed by at least one character ("/"). !!!.aside.aside--note If your app has only one page, you can omit the "urls" field. !!! For more information about what the manifest for a hosted app can contain, see [Hosted Apps][13]. ## Step 3: Get images Your app needs an icon, plus at least one screenshot and one promotional image. The icon is used in the New Tab page and may also be used in the store; the screenshot is used in the store's listing for your app; the promotional image is used on the store's main page and in other places where the store presents multiple apps. 1. Create or find an icon that follows the guidelines in the [App icon][15] section of Supplying Images. 2. Put the icon in the `myapp` directory, with the name `icon_128.png`. 3. Take at least one screenshot of your app, following the guidelines in the [Screenshots][16] section of Supplying Images. 4. Edit each screenshot to be 1280x800 or 640x400 pixels. Larger screenshots look better on high-resolution displays. 5. Create at least one promotional image. It must be 440x280 and follow the guidelines in the [Promotional images][17] section of Supplying Images. You can also provide 920x680 and 1400x560 promotional images. !!!.aside.aside--note Promotional images are the main way that people will notice your app. Make them pretty and informative! !!! !!!.aside.aside--note If you don't have an icon yet, you can continue with this tutorial by temporarily removing the following lines from the `manifest.json` file: `"icons": { "128": "_icon_128.png_" },` !!! ## Step 4: Verify that the app works In this step, you'll load the unpacked app into the browser, so you can confirm that the app is valid. 1. Bring up the extensions management page: chrome://extensions. 2. If **Developer mode** has a + by it, click the +. The + changes to a -, and more buttons and information appear. 3. Click the **Load unpacked extension** button. A file dialog appears. 4. In the file dialog, choose the `myapp` directory. Unless you get an error dialog, you've now installed the app. !!!.aside.aside--note If you get an error dialog saying you could not load the extension, make sure your manifest file has valid JSON formatting and is called `manifest.json` (not `manifest.json.txt` or `manifest.json.rtf`, for example). You can use a [JSON validator][19] to make sure your manifest's format is valid. Often, formatting issues are related to commas (`,`) and quotation marks (`"`)—either too many or not enough. For example, the last entry in the manifest should not have a comma after it. !!! 5. Create a new tab. The icon for the newly installed app appears in Google Chrome's launcher on the New Tab page. If the Apps area is minimized, then instead of the icon you specified, you should see the website favicon. 6. Click the icon for the app. You've now launched the app. You should see its website in your browser window. If you don't, try changing the value of "web_url" in the manifest, reload the app, and click its icon again. ## Step 5: Zip up your app Create a ZIP archive of the directory that contains `manifest.json` and the icon. On Windows, you can do this by right-clicking `myapp` and choosing the menu item **Send to > Compressed (zipped) folder**. On Mac OS X, control-click `myapp` and choose **Compress "myapp"**. If you like to use the command line, you might enter this: ```text zip -r myapp.zip myapp ``` !!!.aside.aside--note If your app is an extension or a packaged app that uses Native Client, you can structure your application directory hierarchy and ZIP file in a way that reduces the size of the user download package. For details, see [Reducing the size of the user download package][21]. !!! ## Step 6: Upload your app In this step, you upload the ZIP file that you created in Step 5. 1. Decide which Google Account is your **developer account**—the account that you'll use to verify ownership and publish your app. Instead of using your personal Gmail account, you might want to create a dedicated account for your apps. For details, see [Choose a developer account][23] in Publishing Your App. !!!.aside.aside--note If you created a blog, choose the account that you used to create that blog. !!! 2. Go to the [Chrome Web Store Developer Dashboard][24], and sign into your developer account. !!!.aside.aside--note If you don't have a developer account, then when you try and access the dashboard you'll be asked to [register as a Chrome Web Store developer][25] now and pay a one-time fee. If you already have a developer account, but have never paid the one-time fee, you'll see the same retistration page and will need to pay the one-time fee now. !!! Once you sign in, you'll see a list of any installable web apps, extensions, and themes that you've already uploaded. 3. Click the **Add new item** button in the dashboard. If you've never uploaded an installable web app, extension, or theme before, you need to accept the developer agreement before going on. 4. Click **Choose file**, choose the ZIP file you created in Step 5, and click **Upload**. If you see an error message, fix the error, zip up the directory again, and upload the ZIP file. Within seconds you should see the Edit page for your app. At the top, you might see a warning that you must verify ownership for whatever sites you specified in the "urls" and "web_url" fields. That warning has a link that takes you to Google Webmaster Tools, where you can verify ownership at any time before you publish the app. ## Step 7: Fill out the form, and upload images In this step, you use the dashboard's Edit page to provide information for the store's listing of your app. 1. In the **Detailed description** section, enter text that describes your app. 2. In the **Icon** section, upload the same 128x128 icon that you put into the ZIP file. 3. In the **Screenshots** section, upload at least one screenshot of the app (1280x800 or 640x400). Four or five would be better. 4. In the **Promotional images** section, upload your promotional images (you must upload at least the 440x280 image). 5. Under the **Verified website** section, select the website for which the app is an "official item". The drop-down lists only the domains for which you're the verified owner. 6. In the **Categories** section, select at least one category for your app. For more information, see [Choose your app's category well][27] in Best Practices. 7. If you want to charge for your app, specify the price and the payment system in the **Pricing and Payments** section. You can use the built-in Chrome Web Store Payments system if you are in a [supported region][28]. For more information about payment options, see [Monetizing Your App][29]. 8. At the bottom of the edit page, click **Save draft and return to dashboard**. !!!.aside.aside--note If you are already hosting your app in Google Play, and you want your Chrome Web Store app listing to show an “Available for Android” link, your app must have the same name as your Google Play listing and both apps must be owned by the same developer account. To have your CWS item transferred to a different developer account, you must submit this [form][30]. !!! ## Step 8: Preview and improve your app's listing Now you get to see what your app's listing in the store will look like. 1. In the [dashboard][32], click your app's "Edit" link to return to its Edit page. 2. At the bottom of the Edit page, click **Preview changes**. You should see a page that looks something like this: {% img src="image/BrQidfK9jaQyIHwdw91aVpkPiib2/DnAipknPku2gPKNyqXpG.png", alt="a screenshot of the store listing", height="328", width="550" %} !!!.aside.aside--note If you haven't provided a screenshot yet, your page will have a big blank space at the upper left. !!! 3. Make sure that the images and text look good and that any links you provide are valid. If you refer to Google brands, follow the [Branding Guidelines][33]. 4. At the top of the page, click the "Edit" link to return to the Edit page. 5. After making any improvements you'd like, click **Save draft and return to dashboard**. !!!.aside.aside--note Before finalizing your app's listing, think about how users are going to find your app, and make sure the detailed description includes the terms they're likely to search for. Check out listings for similar apps to get ideas for how you should present your app. !!! ## Step 9: Test, deploy, publish, improve, repeat 1. Test your app to make sure its users will have a good experience with it. If your app uses features such as geolocation or notifications that require user consent, consider adding a permissions field to the [manifest][35]. 2. If you don't already have a support site, create one. Providing a link to a support site in your app's listing gives users an alternative to using the User reviews section as a way of reporting issues. 3. Once your app and the sites it depends on are ready, deploy the app to the web (if it isn't already public). 4. In the [Chrome Web Store Developer Dashboard][36], click the "Publish" link to publish your app's listing in the Chrome Web Store. When you publish your app, its listing becomes visible to you and to anyone who has access to the Chrome Web Store. People can buy your app (if you charge) and install it. When they install your app, they download a `.crx` file that contains everything you uploaded in the ZIP file. 5. After you publish your app, whenever you want to change your app's listing or `.crx` file, use the dashboard to update them. To push an updated `.crx` file for your app, just increment the version number in the manifest, upload a new ZIP file to the dashboard, and publish the change. People who have already installed your app will automatically get the update. ## What next? Here are some choices for where to go next: * [Overview][38]: Get the conceptual background you need to use the Chrome Web Store well. * [Checking for Payment][39]: Learn how to use the Licensing API to check whether the user has paid for your app. * [Samples][40]: Find samples in multiple languages of hosted apps that use the Licensing API. * [Tutorial: Licensing API][41]: Walk through a full example of creating an app that uses the Licensing API to check whether the user has paid. If you just want to write your app, see the developer doc for the type of app you're interested in: - [Installable Web Apps][42] - [Chrome Apps][43] - [Themes][44] - [Extensions][45] [1]: https://developers.google.com/chrome/apps/articles/thinking_in_web_apps [2]: /docs/extensions/mv2/overview [3]: /docs/apps/about_apps [4]: /docs/extensions/mv2/getstarted [5]: /docs/apps/first_app [6]: #step5 [7]: https://chrome.google.com/webstore/developer/dashboard [8]: http://www.blogger.com [9]: http://www.google.com/support/webmasters/bin/answer.py?hl=en&answer=34592 [10]: https://chrome.google.com/webstore/developer/dashboard [12]: /docs/extensions/mv2/manifestVersion [13]: https://developers.google.com/chrome/apps/docs/developers_guide [15]: /docs/webstore/images#icons [16]: /docs/webstore/images#screenshots [17]: /docs/webstore/images#promo [19]: http://www.google.com/search?q=json+validator [21]: https://developers.google.com/native-client/dev/devguide/distributing#multi-platform-zip [23]: /docs/webstore/publish#step1 [24]: https://chrome.google.com/webstore/developer/dashboard [25]: /docs/webstore/register [27]: /docs/webstore/best_practices#categories [28]: /docs/webstore/pricing#seller [29]: /docs/webstore/money [30]: https://support.google.com/chrome_webstore/contact/dev_account_transfer [32]: https://chrome.google.com/webstore/developer/dashboard [33]: /docs/webstore/branding [35]: /docs/extensions/mv2/tabs [36]: https://chrome.google.com/webstore/developer/dashboard [38]: /docs/webstore/ [39]: /docs/webstore/check_for_payment [40]: /docs/webstore/samples [41]: /docs/webstore/get_started [42]: /docs/chrome/apps/ [43]: /docs/apps/about_apps [44]: /docs/extensions/mv2/themes [45]: /docs/extensions/index
46.680473
133
0.730384
eng_Latn
0.996679
9ab7518e2d05bf791f497987f5228fdf9159e4e7
770
markdown
Markdown
app/_posts/2008-08-26-proyectando-de-nuevo.markdown
pacoguzman/pacoguzman.github.io
ef51ef42f661090b596da61d5a7e896b8effce74
[ "MIT" ]
null
null
null
app/_posts/2008-08-26-proyectando-de-nuevo.markdown
pacoguzman/pacoguzman.github.io
ef51ef42f661090b596da61d5a7e896b8effce74
[ "MIT" ]
null
null
null
app/_posts/2008-08-26-proyectando-de-nuevo.markdown
pacoguzman/pacoguzman.github.io
ef51ef42f661090b596da61d5a7e896b8effce74
[ "MIT" ]
null
null
null
--- layout: post title: Proyectando de nuevo date: 2008-08-26 15:34:41 description: '' tags: ['latex'] --- Después de mi regreso a la universidad para completar mis estudios o al menos mis títulos académicos vuelvo a reencontrarme con mi querido amigo LaTeX. Una vez más vuelvo a redactar mi proyecto fin de carrera con LaTeX, después de un año abandonado me ha costado arreglar algunos errores que aparecieron en mi plantilla del proyecto después de muchas actualizaciones de MiKTeX, pero al fin lo he conseguido y ya tengo 73 páginas escritas acerca del diseño de mi aplicación Ruby On Rails. Bien aclarar que muchas páginas son meras introducciones a tecnologías y conceptos. Espero que pronto puede hacer la presentación y acabar la carrera, os iré informando.
55
419
0.796104
spa_Latn
0.999432
9ab760d02376324185858a7d9d1441cc311fae7b
167
md
Markdown
docs/demo.md
pankona/aquaproj.github.io
5c406b780da4d3e35a06e743f9038aeaf1765a32
[ "MIT" ]
null
null
null
docs/demo.md
pankona/aquaproj.github.io
5c406b780da4d3e35a06e743f9038aeaf1765a32
[ "MIT" ]
null
null
null
docs/demo.md
pankona/aquaproj.github.io
5c406b780da4d3e35a06e743f9038aeaf1765a32
[ "MIT" ]
null
null
null
# Demo The demo is built with [asciinema.org](https://asciinema.org/). [![asciicast](https://asciinema.org/a/457021.svg)](https://asciinema.org/a/457021?autoplay=1)
27.833333
93
0.718563
eng_Latn
0.143304
9ab800bc0b7afbed2a00ebd37868f22a52b9fa7a
3,793
md
Markdown
articles/azure-sql/database/scripts/move-database-between-elastic-pools-powershell.md
KreizIT/azure-docs.fr-fr
dfe0cb93ebc98e9ca8eb2f3030127b4970911a06
[ "CC-BY-4.0", "MIT" ]
43
2017-08-28T07:44:17.000Z
2022-02-20T20:53:01.000Z
articles/azure-sql/database/scripts/move-database-between-elastic-pools-powershell.md
KreizIT/azure-docs.fr-fr
dfe0cb93ebc98e9ca8eb2f3030127b4970911a06
[ "CC-BY-4.0", "MIT" ]
676
2017-07-14T20:21:38.000Z
2021-12-03T05:49:24.000Z
articles/azure-sql/database/scripts/move-database-between-elastic-pools-powershell.md
KreizIT/azure-docs.fr-fr
dfe0cb93ebc98e9ca8eb2f3030127b4970911a06
[ "CC-BY-4.0", "MIT" ]
153
2017-07-11T00:08:42.000Z
2022-01-05T05:39:03.000Z
--- title: 'PowerShell : Déplacer une base de données entre les pools élastiques' description: Utilisez un exemple de script Azure PowerShell pour déplacer une base de données dans SQL Database entre deux pools élastiques. services: sql-database ms.service: sql-database ms.subservice: elastic-pools ms.custom: sqldbrb=1, devx-track-azurepowershell ms.devlang: PowerShell ms.topic: sample author: arvindshmicrosoft ms.author: arvindsh ms.reviewer: mathoma ms.date: 03/12/2019 ms.openlocfilehash: cd953a250378745714d8efdd2e62f8f95bdbbe08 ms.sourcegitcommit: 20acb9ad4700559ca0d98c7c622770a0499dd7ba ms.translationtype: HT ms.contentlocale: fr-FR ms.lasthandoff: 05/29/2021 ms.locfileid: "110708002" --- # <a name="use-powershell-to-create-elastic-pools-and-move-a-database-between-them"></a>Utiliser PowerShell pour créer des pools élastiques et déplacer une base de données entre eux [!INCLUDE[appliesto-sqldb](../../includes/appliesto-sqldb.md)] Cet exemple de script PowerShell crée deux pools élastiques, déplace une base de données mise en pool dans SQL Database d’un pool élastique SQL vers un autre, puis déplace la base de données mise en pool hors du pool élastique SQL afin de la convertir en une base de données unique dans Azure SQL Database. [!INCLUDE [quickstarts-free-trial-note](../../../../includes/quickstarts-free-trial-note.md)] [!INCLUDE [updated-for-az](../../../../includes/updated-for-az.md)] [!INCLUDE [cloud-shell-try-it.md](../../../../includes/cloud-shell-try-it.md)] Si vous choisissez d’installer et d’utiliser PowerShell localement, ce tutoriel nécessite Az PowerShell 1.4.0 ou ultérieur. Si vous devez effectuer une mise à niveau, consultez [Installer le module Azure PowerShell](/powershell/azure/install-az-ps). Si vous exécutez PowerShell en local, vous devez également lancer `Connect-AzAccount` pour créer une connexion avec Azure. ## <a name="sample-script"></a>Exemple de script [!code-powershell-interactive[main](../../../../powershell_scripts/sql-database/move-database-between-pools-and-standalone/move-database-between-pools-and-standalone.ps1?highlight=18-19 "Move a database between pools")] ## <a name="clean-up-deployment"></a>Nettoyer le déploiement Utilisez la commande suivante pour supprimer le groupe de ressources et toutes les ressources associées. ```powershell Remove-AzResourceGroup -ResourceGroupName $resourcegroupname ``` ## <a name="script-explanation"></a>Explication du script Ce script utilise les commandes suivantes. Chaque commande du tableau renvoie à une documentation spécifique. | Commande | Notes | |---|---| | [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup) | Crée un groupe de ressources dans lequel toutes les ressources sont stockées. | | [New-AzSqlServer](/powershell/module/az.sql/new-azsqlserver) | Crée un serveur qui héberge des bases de données et des pools élastiques. | | [New-AzSqlElasticPool](/powershell/module/az.sql/new-azsqlelasticpool) | Crée un pool élastique. | | [New-AzSqlDatabase](/powershell/module/az.sql/new-azsqldatabase) | Crée une base de données sur un serveur. | | [Set-AzSqlDatabase](/powershell/module/az.sql/set-azsqldatabase) | Met à jour les propriétés de la base de données ou déplace une base de données vers, hors ou entre des pools élastiques. | | [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) | Supprime un groupe de ressources, y compris toutes les ressources imbriquées. | ||| ## <a name="next-steps"></a>Étapes suivantes Pour plus d’informations sur Azure PowerShell, consultez la [documentation Azure PowerShell](/powershell/azure/). Vous trouverez des exemples supplémentaires de scripts SQL Database PowerShell dans [Scripts PowerShell Azure SQL Database](../powershell-script-content-guide.md).
59.265625
372
0.782758
fra_Latn
0.772059
9ab997d399ecd9ca3c4ec599657d8eec61e0d61e
269
md
Markdown
_posts/2007-09-10-great-ad.md
davidascher/davidascher.github.io
c48eae780cfd5cd77f9b9b456d01accaa1b2239c
[ "MIT" ]
null
null
null
_posts/2007-09-10-great-ad.md
davidascher/davidascher.github.io
c48eae780cfd5cd77f9b9b456d01accaa1b2239c
[ "MIT" ]
null
null
null
_posts/2007-09-10-great-ad.md
davidascher/davidascher.github.io
c48eae780cfd5cd77f9b9b456d01accaa1b2239c
[ "MIT" ]
null
null
null
--- id: 317 title: Great ad date: 2007-09-10T11:55:14+00:00 author: David Ascher layout: post restapi_import_id: - 5780561eab8f6 original_post_id: - "317" categories: - General --- [It&#8217;s an ad](http://www.aglassandahalffullproductions.com/), I warned you.
16.8125
80
0.717472
eng_Latn
0.226943
9ab9f2c409e0250b3ed106fa8a7e8eeb1cb768ed
1,770
md
Markdown
docs/sasautos/metacodaIdentityLoginExtract.md
Metacoda/idsync-utils
73f994301e7743b79ab2a7e8d8e332564680f3de
[ "Apache-2.0" ]
4
2017-07-31T15:06:06.000Z
2020-04-10T01:40:51.000Z
docs/sasautos/metacodaIdentityLoginExtract.md
Metacoda/idsync-utils
73f994301e7743b79ab2a7e8d8e332564680f3de
[ "Apache-2.0" ]
null
null
null
docs/sasautos/metacodaIdentityLoginExtract.md
Metacoda/idsync-utils
73f994301e7743b79ab2a7e8d8e332564680f3de
[ "Apache-2.0" ]
null
null
null
# SAS Macro: %metacodaIdentityLoginExtract ## Purpose This macro is used to extract basic attribute values for SAS metadata Login (account) objects that are associated with Identity (user and/or group) objects. ## Parameters %macro metacodaIdentityLoginExtract( table=, identityType=, append=0, xmlDir=, debug=0 ); This macro accepts several mandatory and optional named parameters and generates a SAS table as output. ***table***: _(MANDATORY)_ The output table name (1 or 2 level) that will be overwritten (or appended to). ***identityType***: _(OPTIONAL)_ The SAS metadata model type for the type of identity whose Logins will be extracted. The value must be either blank, Person, or IdentityGroup. If the value is blank then Logins for both Person (user) and IdentityGroup (group) types will be extracted. The default is blank. ***append***: _(OPTIONAL)_ A flag (0/1) indicating whether to overwrite or append to the specified table. The default is zero to overwrite. This parameter is ignored when the identityType parameter is blank (and the table will be overwritten). ***xmlDir***: _(OPTIONAL)_ Path to a directory where PROC METADATA request, response, and map XML files will be written. If unspecified the work directory path will be used by default. ***debug***: _(OPTIONAL)_ A flag (0/1) indicating whether to generate additional debug info for troubleshooting purposes. The default is zero for no debug. ## Examples Extract login metadata for all users and groups: %metacodaIdentityLoginExtract(table=work.identityLogins) For more examples see [metacodaIdentityLoginExtractSample.sas](https://github.com/Metacoda/idsync-utils/blob/master/samples/metacodaIdentityLoginExtractSample.sas).
32.181818
164
0.759887
eng_Latn
0.982307
9aba48a27f22ae6577e92a466b131d9a4f8671a0
11,967
md
Markdown
wallet_and_tools/wanwallet_desktop.md
wanchain/explore-wanchain
34297bf315a5ba81405cfd8d791383871e913228
[ "Apache-2.0" ]
4
2021-01-19T17:47:17.000Z
2021-12-02T12:53:47.000Z
wallet_and_tools/wanwallet_desktop.md
wanchain/explore-wanchain
34297bf315a5ba81405cfd8d791383871e913228
[ "Apache-2.0" ]
2
2020-12-06T02:47:37.000Z
2021-02-22T10:40:05.000Z
wallet_and_tools/wanwallet_desktop.md
wanchain/explore-wanchain
34297bf315a5ba81405cfd8d791383871e913228
[ "Apache-2.0" ]
7
2020-12-06T02:30:43.000Z
2021-11-22T03:38:17.000Z
# WanWallet Desktop ![](media/wan_wallet_1.png) ## Wallet Features Wanchain's first official desktop light wallet is available on Mac, Windows, and Linux and currently supports these features: * WAN asset management * Standard transfer and receive transactions * Making and managing delegations under Galaxy Consensus Proof of Stake * Ledger hardware wallet support * Staking support * Multi-crypto asset management support * Private transactions * Cross chain transactions ## Setup #### Download and Install Download [WanWallet Desktop](https://www.wanchain.org/getstarted/). Double click the install package and follow the on screen instructions to install. #### Register New Account When you first open Wan Wallet, you must register a new account. First set your password: ![](media/wan_wallet_2.png) IMPORTANT: Next record your backup mnemonic phrase. Do not take a screenshot, rather you should write it by hand. Do not share your phrase with anyone. This phrase is the only way to recover your account.  ![](media/wan_wallet_3.png) *Note: this is a throw away account, NEVER share your seed phrase with anyone Click 'Next' to complete the new account registration process.* #### Generate a New Address  Wan Wallet supports the creation of multiple addresses for one account, simply click: Wallet > WAN > Create ![](media/wan_wallet_4.png) *Wallet > WAN > Create* Accounts may also be renamed as you wish. ## Get Testnet WAN You can get testnet WAN from our [faucet](http://54.201.62.90/). Follow the instructions in the link to have testnet WAN sent to your account. ## Normal Transactions Click 'Send', enter sending and receiving account information along with the transaction amount, select the service fee, and click next, and then click send again to complete your transaction. Under 'Advanced Options' are additional parameters which advanced users may adjust. ![](media/wan_wallet_5.png) ![](media/wan_wallet_6.png) ## Cross Chain Transactions The Desktop Light Wallet lets you quickly and easily make cross chain transactions. To get started, click on the 'Cross Chain' tab on the side menu: ![](media/wan_wallet_24.png) Then choose the asset from the drop down menu you would like to make a cross chain transaction with, and click the 'Convert' button: *(if the asset you wish to make a transaction with is not displayed, you can add additional Wanchain supported assets from the menu in 'Settings' --> 'Config' --> 'Wallet Options')* ![](media/wan_wallet_25.png) After clicking convert, a form will pop up with pre-populated values. * 'From (Ethereum)' - Shows account type and name of the 'From' address * 'Balance' - Shows the current balance of the asset in the 'From' address * 'Storeman' - Shows the address of the Storeman who your transaction will be sent to * 'Capacity' - Shows the total capacity of cross chain assets which the Storeman is able to manage. * 'Capacity Left' - Shows the remaining capacity of the Storeman. *For the above five fields, you do not need to change anything. You only need to make certain that your transaction value does not go over the Storeman's capacity.* * 'To' - Here you can select the target address where the cross chain asset generated from 'From' address will arrive on the target chain. * 'Estimated Fee' - Shows the estimated fees of the cross chain transaction on both chains. * 'Amount' - This is the amount of the asset you wish to send in your cross chain transaction. After carefully filling in the 'To' and 'Amount' fields, click 'Next' to review your information and then 'Send' to begin the transaction. ![](media/wan_wallet_26.png) You may then check your transaction's current state under 'Transaction History' --> 'Status'. The entire transaction should take several minutes, perhaps longer depending on the current network conditions. Immediately after clicking 'Send', the status should say 'Lock Request Sent'. ![](media/wan_wallet_27.png) Shortly after the status will change to 'Locked'. ![](media/wan_wallet_28.png) Next the status will change to 'Redemption Request Sent'. ![](media/wan_wallet_29.png) And finally, the transaction status will change to 'Success', and the newly generated cross chain token will be available for you to transact with in your target 'To' address. ![](media/wan_wallet_30.png) The process for returning your asset back to the original chain is the same, except this time click the 'Convert' button on the cross chained asset instead of the native asset, and follow the same instructions as listed above. ## Delegation Wanchain's newly introduced Galaxy Consensus Proof of Stake has a completely non-custodial delegation mechanism. Users may choose from amongst all available validators the one which they trust to send their delegations too. By delegating their stake to validator nodes in this way, all users have the opportunity to earn consensus rewards. The minimum required amount for delegation is 100 WAN.  Within the delegation interface users may clearly check their amount of staked WAN, their accumulated rewards from delegation, the yearly return rate for the entire network, and pending withdrawals. Users can also check a table of all their previous delegation transaction history.  ![](media/wan_wallet_7.png) In order to make a new delegation, click 'New Delegation', choose a validator from the list, under 'My Account', choose the account from which you would like to delegate, and enter the amount you would like to delegate in the 'Amount' field. During the process of delegation, there are several parameters you should pay attention to. First, the validator's 'Quota' represents the amount of WAN that validator is able to accept in delegations. Please also note the 'Fee' parameter. This parameter is the percent fee charged as commission by the validator. For example, if the network reward is 50 WAN and the fee is 15%, then you will receive 42.5 WAN as your reward, and 7.5 WAN will be paid to the validator. In the images below, a user is delegating 100 WAN to a validator node.  ![](media/wan_wallet_8.png) ![](media/wan_wallet_9.png) After a completing a delegation, you can see 100 WAN displayed under "My Delegations." Previous delegation details may be viewed from within the delegations history. If the user wishes to increase their delegation, they may click on the 'Top up' button. **Note:** Users may exit their delegation at any time, but please note there is ~3 epoch unlocking period after which you can withdraw your WAN. **Note:** Be sure to turn Contract Data on if you are using a Ledger hardware wallet. ![](media/wan_wallet_10.png) ## Validator Node Registration To access the validator node menu, make sure that you have checked the "Enable Validator" box inside the Settings menu. The "Validator" option will then appear under the "Galaxy PoS" tab on the left. ![](media/wan_wallet_16.png) To register your validator node and start staking, click "Register". ![](media/wan_wallet_17.png) Fill in all the required parameters which were obtained during your [validator setup](staking/node-setup-mainnet) under the "Validator Account" section, and fill in all the information from your funding wallet under "My Account". If you are using Ledger, make certain that you have turned Contract Data on in the settings. When selecting locking period, please note that the longer your locking period, the higher your reward rate. Please use the [staking calculator](http://calculator.wandevs.org/) for reward rate estimates. ![](media/wan_wallet_18.png) ## Hardware Wallet Currently the Ledger hardware wallet is supported. Follow the on screen instructions to connect your Ledger. Other hardware wallets may be supported in the future. Make sure to turn Contract Data on for validation or delegation transactions. ![](media/wan_wallet_11.png) ## Privacy Transaction Privacy transactions on Wanchain are a way for a user to send WAN to another user without specifying who is the recipient. With a privacy transaction the world can see that a user made a transaction, and upon a deeper inspection of the transaction can see how much WAN was sent in the transaction, but the world cannot see who the recipient will be. Privacy transactions are carried out on Wanchain by way of a smart contract built into the protocol. Allowed amounts for privacy transactions: - 10 WAN - 20 WAN - 50 WAN - 100 WAN - 200 WAN - 500 WAN - 1000 WAN - 5000 WAN - 50000 WAN When locking funds into the privacy contract, a user can thus only lock an amount in the list above. Likewise, a user cannot redeem partial amounts, but must redeem the full amount that was locked to the one-time address. Go to **Wallet** -> **WAN** -> **@Wanchain**, and choose the WAN address that you want to receive WAN with privacy transaction method. Click the arrow to show a "long address" accordingly, and copy this long address. ![](media/wan_wallet_40.jpeg) Choose your sender address, and click **Send**. Paste your long address to the To row. In this case, the **Transaction Mode** will automatically change to **Private Transaction**. Meanwhile, enter the **amount** and select the **Fee**. Click **Next**. ![](media/wan_wallet_41.jpeg) Review the Confirm Transaction window, and click Send. ![](media/wan_wallet_42.jpeg) After around 1 minute, you will receive WAN in the recipient's long address. Click the arrow on the right and then click the button **Redeem**. ![](media/wan_wallet_43.jpeg) After around half a minute, you will finally receive WAN in your recipient's ordinary address. **Note: Make sure that both of your sender address and recipient address has enough WAN for the gas fees.** ## DApp Store [Video guide](https://youtu.be/dMpabWAR-iw) The DApp Store feature of Wan Wallet allows you to experience DApps from creators in the Wanchain ecosystem. In the DApp Store you can find new DApps and add them to your wallet. After adding them to your wallet, you may then directly interact with the DApps through the wallet interface. ![](media/wan_wallet_23.png) ## Settings There are currently three selections under settings, 'Config', 'Backup', 'Import', 'DApps', 'Restore', and 'Network'. Under 'Config', you may choose to require your password to be input for every transaction. ![](media/wan_wallet_12.png) Under 'Backup', you may enter your password to get your backup mnemonic phrase. ![](media/wan_wallet_13.png) Under 'DApps', you may hide or delete DApps you are not currently using. ![](media/wan_wallet_22.png) Under 'Restore', you may enter your backup phrase to restore a previous wallet. ![](media/wan_wallet_14.png) Under 'Network', you may check network status for troubleshooting purposes. ![](media/wan_wallet_21.png) ## Import From Old Desktop Wallet **IMPORTANT NOTICE:** *Imports from the old desktop wallet will not be backed up with the passphrase generated in the new desktop light wallet. If you import from the old wallet to this wallet, make certain to keep your keystore or private key so that you can import again in case the wallet file is corrupted or you need to reinstall the wallet.* **Method 1:** From the top menu under 'Wan Wallet' --> 'Developer' --> 'Assets' --> 'Wanchain' --> 'Import Keystore File' you may import from the old desktop wallet using your keystore file. ![](media/wan_wallet_20.png) **Method 2:** On the sidebar menu under 'Settings' --> 'Import' you may import accounts from the old Wan Desktop Wallet to the new, Light Desktop Wallet using your private key. ![](media/wan_wallet_19.png) ## Multilingual Support Under Settings > Language, you may choose your wallet language. We currently have support for English, Chinese, Thai, Korean, Spanish, Portuguese, and French. ![](media/wan_wallet_15.png) Thank you for downloading and testing the new Wan Wallet! If you have any questions, please feel free to get in touch at techsupport@wanchain.org.
50.707627
709
0.770619
eng_Latn
0.997496
9aba5008f8cf6531d00cb6397583b38a04f4ac24
6,816
md
Markdown
data-management-library/database/hybrid-adg/set-primary-standby/set-primary-standby.md
JadenMcElvey/learning-library
21ab3a75194ff7f057c5caf76a9f3083b3f8da6f
[ "UPL-1.0" ]
null
null
null
data-management-library/database/hybrid-adg/set-primary-standby/set-primary-standby.md
JadenMcElvey/learning-library
21ab3a75194ff7f057c5caf76a9f3083b3f8da6f
[ "UPL-1.0" ]
null
null
null
data-management-library/database/hybrid-adg/set-primary-standby/set-primary-standby.md
JadenMcElvey/learning-library
21ab3a75194ff7f057c5caf76a9f3083b3f8da6f
[ "UPL-1.0" ]
null
null
null
# Set connectivity between on-premise host and cloud host In a Data Guard configuration, information is transmitted in both directions between primary and standby databases. This requires basic configuration, network tuning and opening of ports at both primary and standby databases. ##Prerequisites This lab assumes you have already completed the following labs: - Prepare on-premise Database - Provision DBCS on OCI In this Lab, you can use 2 terminal windows, one connected to the on-premise host, the other connected to the cloud host. ## Step 1: Name Resolution Configure 1. Connect as the opc user. ``` ssh -i labkey opc@xxx.xxx.xxx.xxx ``` 2. Edit `/etc/hosts` on both sides. ``` <copy>sudo vi /etc/hosts</copy> ``` - From on-premise side, add the cloud host **public ip** and host name in the file like the following: ``` xxx.xxx.xxx.xxx dbstby.sub01230246570.myvcn.oraclevcn.com dbstby ``` - From the cloud side, add the on-premise host **public ip** and host name in the file like the following: ``` xxx.xxx.xxx.xxx workshop.subnet1.examplevcn.oraclevcn.com workshop ``` 3. Validate the connectivity, install telnet on both sides. ``` <copy>sudo yum -y install telnet</copy> ``` - From the on-premise side, telnet the public ip of the cloud host, enter `^]` and return to exist. ``` [opc@adgstudent1 ~]$ telnet xxx.xxx.xxx.xxx 1521 Trying 158.101.136.61... Connected to 158.101.136.61. Escape character is '^]'. ^] telnet> q Connection closed. [opc@adgstudent1 ~]$ ``` - From the cloud side, telnet the public ip of the on-premise host, enter `^]` and return to exist. ``` [opc@dbstby ~]$ telnet xxx.xxx.xxx.xxx 1521 Trying 140.238.18.190... Connected to 140.238.18.190. Escape character is '^]'. ^] telnet> q Connection closed. [opc@dbstby ~]$ ``` ## Step 2: Prompt-less SSH configure Now you will configure the prompt-less ssh for oracle users between on-premise and the cloud. 1. su to **oracle** user in both side. ``` <copy>sudo su - oracle</copy> ``` 2. Configure prompt-less ssh from on-premise to cloud. - From on-premise side, generate the ssh key, and cat the public key, copy all the content in the id_rsa.pub ``` [oracle@workshop ~]$ ssh-keygen -t rsa Generating public/private rsa key pair. Enter file in which to save the key (/home/oracle/.ssh/id_rsa): Enter passphrase (empty for no passphrase): Enter same passphrase again: Your identification has been saved in /home/oracle/.ssh/id_rsa. Your public key has been saved in /home/oracle/.ssh/id_rsa.pub. The key fingerprint is: SHA256:2S+UtAXQdwgNLRA7hjLP4RsMfDM0pW3p75hus8UQaG8 oracle@adgstudent1 The key's randomart image is: +---[RSA 2048]----+ | o.==+= . | | . . * oo.= . | | = X O .o.. | | @ O * + | | * E = | | + = . | | . = . | | o= . | | o=o. | +----[SHA256]-----+ [oracle@workshop ~]$ cat .ssh/id_rsa.pub ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDCLV6NiFihUY4ItgfPLJR1EcjC7DjuVOL86G3VperrA8hEKP2uLSh7AEeKm4MZmPPIzO/HlMw3KkhhUZNX/C+b29tQ2l8+fbCzzMGmZSAGmT2vEmot/9lVT714l/rcfWNXv8qcj6x4wHUqygH87XSDcCRaQt7vUcFNITOb/4yGRc9LcSQdlV1Yf1eOfUnkpB1fOoEXFfkAxgd1UeuFS0pIiejutqbPSeppu9X2RrbAmZymAVa7MiNNG2mZHf9tWJrigXsTwmgOgPlsAIcbutoVRGPcP1xc43ut9oUWk8reBEyDj8X2bgeafG+KeXD6YRh53lqIbTNYz+k1sfHwyuUl oracle@workshop [oracle@workshop ~]$ ``` - From cloud side, edit the `authorized_keys` file, copy all the content in the id_rsa.pub into it, save and close ``` <copy>vi .ssh/authorized_keys</copy> ``` - From on-premise side, test the connect from on-premise to cloud, using the public ip of the cloud hosts. ``` [oracle@workshop ~]$ ssh oracle@xxx.xxx.xxx.xxx echo Test success The authenticity of host '158.101.136.61 (158.101.136.61)' can't be established. ECDSA key fingerprint is SHA256:c3ghvWrZxvOnJc6aKWIPbFC80h65cZCxvQxBVdaRLx4. ECDSA key fingerprint is MD5:a8:34:53:0f:3e:56:64:56:72:a1:cb:47:18:44:ac:4c. Are you sure you want to continue connecting (yes/no)? yes Warning: Permanently added '158.101.136.61' (ECDSA) to the list of known hosts. Test success [oracle@workshop ~]$ ``` 3. Configure prompt-less ssh from cloud to on-premise. - From cloud side, generate the ssh key, and cat the public key, copy all the content in the id_rsa.pub. ``` [oracle@dbstby ~]$ ssh-keygen -t rsa Generating public/private rsa key pair. Enter file in which to save the key (/home/oracle/.ssh/id_rsa): Enter passphrase (empty for no passphrase): Enter same passphrase again: Your identification has been saved in /home/oracle/.ssh/id_rsa. Your public key has been saved in /home/oracle/.ssh/id_rsa.pub. The key fingerprint is: SHA256:60bMHAglf6pIHKjDnQAm+35L79itld48VVg1+HCQxIM oracle@dbstby The key's randomart image is: +---[RSA 2048]----+ |o. ... +o+o.| |+o .o E *...| |o.. .... o= | |ooo.. .o. . .. | |o.+o .+S. . | | + . . =o . | | o + .+ . | | o = =.o. | | o.=o+ o. | +----[SHA256]-----+ [oracle@dbstby ~]$ cat .ssh/id_rsa.pub ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC61WzEm1bYRkPnFf96Loq/eRGJKiSkeh9EFg3NzMBUmRq4rSWMsMkIkrLmrJUNF8I5tFMnSV+AQZo5vrtU23NVvxsQHF7rKYiMm9ARkACQmr1th8kefc/sJMn/3hQDm27FB5RLeZzbxyZoJAq7ZtLMfudlogaYxqLZLBnuHT8Oky/5FOa1EUVOaqiKm8f7pPlqnxpf1QdO8lswMvInWh3Zq9newfTmu/qt56shNd462uOyNjjCgRtmxsYXIxFhJecvDnkGJ+Tekq27nozBI+c3GyQS8tsyPnjt3DRg35sXJFWOeEswmxqxAjP0KWDFlSZ3aNm4ESS3ZPaTfSlgx0E1 oracle@dbstby [oracle@dbstby ~]$ ``` - From on-premise side, edit the `authorized_keys` file, copy all the content in the `id_rsa.pub` into it, save and close ``` <copy>vi .ssh/authorized_keys</copy> ``` - Change mode of the file. ``` <copy>chmod 600 .ssh/authorized_keys</copy> ``` - From cloud side, test the connect from cloud to on-premise, using the public ip of the on-premise hosts. ``` [oracle@dbstby ~]$ ssh oracle@xxx.xxx.xxx.xxx echo Test success The authenticity of host '140.238.18.190 (140.238.18.190)' can't be established. ECDSA key fingerprint is SHA256:1GMD9btUlIjLABsTsS387MUGD4LrZ4rxDQ8eyASBc8c. ECDSA key fingerprint is MD5:ff:8b:59:ac:05:dd:27:07:e1:3f:bc:c6:fa:4e:5d:5c. Are you sure you want to continue connecting (yes/no)? yes Warning: Permanently added '140.238.18.190' (ECDSA) to the list of known hosts. Test success [oracle@dbstby ~]$ ```
35.5
401
0.662559
eng_Latn
0.883419
9ababa2dce9d870fda2dbfb7d521c6c022231f5c
1,218
md
Markdown
_definitions/editionDigital_2016_Sahle_2.md
WoutDLN/lexicon-scholarly-editing
c9b11e32dd786ade453a616bf60fb4f1b6417bbd
[ "CC-BY-4.0" ]
2
2021-04-26T12:28:47.000Z
2021-12-21T13:30:58.000Z
_definitions/editionDigital_2016_Sahle_2.md
WoutDLN/lexicon-scholarly-editing
c9b11e32dd786ade453a616bf60fb4f1b6417bbd
[ "CC-BY-4.0" ]
45
2020-04-04T19:51:35.000Z
2022-03-24T16:56:19.000Z
_definitions/editionDigital_2016_Sahle_2.md
WoutDLN/lexicon-scholarly-editing
c9b11e32dd786ade453a616bf60fb4f1b6417bbd
[ "CC-BY-4.0" ]
3
2020-04-19T14:17:32.000Z
2021-04-08T12:13:06.000Z
--- lemma: edition (digital) source: sahle_what_2016 page: 26-28 language: English contributor: Wout updated_by: Wout --- It can be said that digital editions follow a _digital paradigm_, just as printed editions have been following a paradigm that was shaped by the technical limitations and cultural practices of typography and book printing. With the mere [digitisation](digitization.html) of printed material, the implications of a truly digital paradigm cannot be realised. [...] _A digitised edition is not a digital edition._ As long as the contents and functionalities of a typographically born and typographically envisioned edition do not really change with the conversion to digital data, we should not call these derivate editions ‘digital’. It is the conceptual framework that makes the thing—not the method of storage of the information either on paper or as bits and bytes. We can make this more productive in a more definitional manner by stating that: _A digital edition cannot be given in print without significant loss of content and functionality._ [...] _Scholarly digital editions are scholarly editions that are guided by a digital paradigm in their theory, method and practice._
48.72
435
0.801314
eng_Latn
0.999517
9abace4ad09728651d7424c861db3bd143b90ab6
113
md
Markdown
doc/doc.md
b-wu8/License_Recog
57bf2c76379c106e6572d805225dbf4eed0d6b07
[ "MIT" ]
2
2019-09-29T18:14:29.000Z
2019-11-22T15:41:47.000Z
doc/doc.md
b-wu8/License_Recog
57bf2c76379c106e6572d805225dbf4eed0d6b07
[ "MIT" ]
2
2019-12-05T21:15:40.000Z
2019-12-05T21:17:02.000Z
doc/doc.md
b-wu8/License_Recog
57bf2c76379c106e6572d805225dbf4eed0d6b07
[ "MIT" ]
null
null
null
# Status update https://docs.google.com/document/d/15_oNa_tvSypt1vW13pLvu_8tDFKTPUUvIqxWbb6IQmY/edit?usp=sharing
37.666667
96
0.858407
yue_Hant
0.567641
9abb420f8573db33c2086079e827c1c637124368
14,713
md
Markdown
public/vendors/jqvmap/README.md
swoopfx/cm
9369c3baa1657ef9acee6ce6aedb2c2931647a10
[ "BSD-3-Clause" ]
2
2019-06-04T05:43:02.000Z
2020-02-22T11:00:09.000Z
public/vendors/jqvmap/README.md
swoopfx/cm
9369c3baa1657ef9acee6ce6aedb2c2931647a10
[ "BSD-3-Clause" ]
7
2019-12-22T14:36:04.000Z
2022-02-18T08:55:18.000Z
public/vendors/jqvmap/README.md
swoopfx/cm
9369c3baa1657ef9acee6ce6aedb2c2931647a10
[ "BSD-3-Clause" ]
1
2021-11-26T09:03:01.000Z
2021-11-26T09:03:01.000Z
![JQVMap](http://jqvmap.com/img/logo.png "JQVMap") This project is a heavily modified version of [jVectorMap](https://github.com/bjornd/jvectormap) as it was in April of 2012. I chose to start fresh rather than fork their project as my intentions were to take it in such a different direction that it would become incompatibale with the original source, rendering it near impossible to merge our projects together without extreme complications. **Tests:** [![Circle CI](https://circleci.com/gh/manifestinteractive/jqvmap/tree/master.svg?style=svg&circle-token=7bce3b80868ea5ca32009a195c4436db91e5ea67)](https://circleci.com/gh/manifestinteractive/jqvmap/tree/master) jQuery Vector Map ====== To get started, all you need to do is include the JavaScript and CSS files for the map you want to load ( contained in the `./dist` folder ). #### Here is a sample HTML page for loading the World Map with default settings: ```html <html> <head> <title>JQVMap - World Map</title> <link href="../dist/jqvmap.css" media="screen" rel="stylesheet" type="text/css"> <script type="text/javascript" src="http://code.jquery.com/jquery-1.11.3.min.js"></script> <script type="text/javascript" src="../dist/jquery.vmap.js"></script> <script type="text/javascript" src="../dist/maps/jquery.vmap.world.js" charset="utf-8"></script> <script type="text/javascript"> jQuery(document).ready(function() { jQuery('#vmap').vectorMap({ map: 'world_en' }); }); </script> </head> <body> <div id="vmap" style="width: 600px; height: 400px;"></div> </body> </html> ``` Making it Pretty ====== While initializing a map you can provide parameters to change its look and feel. ```js jQuery('#vmap').vectorMap( { map: 'world_en', backgroundColor: '#a5bfdd', borderColor: '#818181', borderOpacity: 0.25, borderWidth: 1, color: '#f4f3f0', enableZoom: true, hoverColor: '#c9dfaf', hoverOpacity: null, normalizeFunction: 'linear', scaleColors: ['#b6d6ff', '#005ace'], selectedColor: '#c9dfaf', selectedRegions: null, showTooltip: true, onRegionClick: function(element, code, region) { var message = 'You clicked "' + region + '" which has the code: ' + code.toUpperCase(); alert(message); } }); ``` More Examples ------ You can see a variety of examples in the `./examples` folder. Configuration Settings ------ **map** *'world_en'* Map you want to load. Must include the javascript file with the name of the map you want. Available maps with this library are world_en, usa_en, europe_en and germany_en **backgroundColor** *'#a5bfdd'* Background color of map container in any CSS compatible format. **borderColor** *'#818181'* Border Color to use to outline map objects **borderOpacity** *0.5* Border Opacity to use to outline map objects ( use anything from 0-1, e.g. 0.5, defaults to 0.25 ) **borderWidth** *3* Border Width to use to outline map objects ( defaults to 1 ) **color** *'#f4f3f0'* Color of map regions. **colors** Colors of individual map regions. Keys of the colors objects are country codes according to ISO 3166-1 alpha-2 standard. Keys of colors must be in lower case. **enableZoom** *boolean* Whether to Enable Map Zoom ( true or false, defaults to true) **hoverColor** *'#c9dfaf'* Color of the region when mouse pointer is over it. **hoverColors** Colors of individual map regions when mouse pointer is over it. Keys of the colors objects are country codes according to ISO 3166-1 alpha-2 standard. Keys of colors must be in lower case. **hoverOpacity** *0.5* Opacity of the region when mouse pointer is over it. **normalizeFunction** *'linear'* This function can be used to improve results of visualizations for data with non-linear nature. Function gets raw value as the first parameter and should return value which will be used in calculations of color, with which particular region will be painted. **scaleColors** *['#b6d6ff', '#005ace']* This option defines colors, with which regions will be painted when you set option values. Array scaleColors can have more then two elements. Elements should be strings representing colors in RGB hex format. **selectedColor** *'#333333'* Color for a region when you select it **selectedRegions** *['MO', 'FL', 'OR']* This is the Region that you are looking to have preselected (two letter ISO code, defaults to null ). See [REGIONS.md](REGIONS.md) **multiSelectRegion** *boolean* Whether to enable more than one region to be selected at a time. **showLabels** *boolean* Whether to show ISO Code Labels ( true or false, defaults to false ) **showTooltip** *boolean* Whether to show Tooltips on Mouseover ( true or false, defaults to true ) **onLoad** *function(event, map)* Callback function which will be called when map is loading, returning the map event and map details. **onLabelShow** *function(event, label, code)* Callback function which will be called before label is shown. Label DOM object and country code will be passed to the callback as arguments. **onRegionOver** *function(event, code, region)* Callback function which will be called when the mouse cursor enters the region path. Country code will be passed to the callback as argument. **onRegionOut** *function(event, code, region)* Callback function which will be called when the mouse cursor leaves the region path. Country code will be passed to the callback as argument. **onRegionClick** *function(event, code, region)* Callback function which will be called when the user clicks the region path. Country code will be passed to the callback as argument. This callback may be called while the user is moving the map. If you need to distinguish between a "real" click and a click resulting from moving the map, you can inspect **$(event.currentTarget).data('mapObject').isMoving**. **onRegionSelect** *function(event, code, region)* Callback function which will be called when the selects a region. Country code will be passed to the callback as argument. **onRegionDeselect** *function(event, code, region)* Callback function which will be called when the deselects a region. Country code will be passed to the callback as argument. **onResize** *function(event, width, height)* Callback function which will be called when the map is resized. Return event, width & height. **pins** *{ "pk" : "pk_pin_metadata", "ru" : "ru_pin_metadata", ... }* This option defines pins, which will be placed on the regions. The JSON can have only one element against one country code. Elements should be strings containing the HTML or id of the pin (depends on the 'pinMode' option explained next). **pinMode** *content* This option defines if the "pins" JSON contains the HTML strings of the pins or the ids of HTML DOM elements which are to be placed as pins. If the pin mode is "content" (or not specified) then the parameter "pins" contains the stringified html content to be placed as the pins. Example: ```js jQuery('#vmap').vectorMap({ map: 'world_en', pins: { "pk" : "\u003cimg src=\"pk.png\" /\u003e" /*serialized <img src="pk.png" />*/, ... }, pinMode: 'content' }); ``` If the pin mode is "id" then the parameter "pins" contains the value of "id" attribute of the html (DOM) elements to be placed as pins. Example: ```html <script> jQuery('#vmap').vectorMap({ map: 'world_en', pins: { "pk" : "pin_for_pk", "ru" : "pin_for_ru", ... }, pinMode: 'id' }); </script> <div style="display:none"> <img id="pin_for_pk" src="pk.png" /> <div id="pin_for_ru">...</div> </div> ``` *Note:* 1) The pin is placed at the center of the rectangle bounding the country. So depending on the shape of the country, the pin might not land on the country itself. For instance, the pin for 'US' lands in the center of Alaska and rest of the US, which happens to be in the ocean between them. 2) If the "pinMode" is set to "id", then the html DOM elements having those ids are NOT COPIED to the desired position, they are TRANSFERRED. This means that the elements will be removed from their original positions and placed on the map. Dynamic Updating ====== Most of the options can be changed after initialization using the following code: ```js jQuery('#vmap').vectorMap('set', 'colors', {us: '#0000ff'}); ``` Instead of colors can be used any parameter except callbacks. Callbacks can be added and deleted using standard jQuery patterns of working with events. You can define callback function when you initialize JQVMap: ```js jQuery('#vmap').vectorMap( { onLoad: function(event, map) { }, onLabelShow: function(event, label, code) { }, onRegionOver: function(event, code, region) { }, onRegionOut: function(event, code, region) { }, onRegionClick: function(event, code, region) { }, onResize: function(event, width, height) { } }); ``` Or later using standard jQuery mechanism: ```js jQuery('#vmap').bind('load.jqvmap', function(event, map) { } ); jQuery('#vmap').bind('labelShow.jqvmap', function(event, label, code) { } ); jQuery('#vmap').bind('regionMouseOver.jqvmap', function(event, code, region) { } ); jQuery('#vmap').bind('regionMouseOut.jqvmap', function(event, code, region) { } ); jQuery('#vmap').bind('regionClick.jqvmap', function(event, code, region) { } ); jQuery('#vmap').bind('resize.jqvmap', function(event, width, height) { } ); ``` Consider that fact that you can use standard features of jQuery events like event.preventDefault() or returning false from the callback to prevent default behavior of JQVMap (showing label or changing country color on hover). In the following example, when user moves mouse cursor over Canada label won't be shown and color of country won't be changed. At the same label for Russia will have custom text. ```js jQuery('#vmap').vectorMap( { onLabelShow: function(event, label, code) { if (code == 'ca') { // Hide the label event.preventDefault(); } else if (code == 'ru') { // Plain TEXT labels label.text('Bears, vodka, balalaika'); } else if (code == 'us') { // HTML Based Labels. You can use any HTML you want, this is just an example label.html('<div class="map-tooltip"><h1 class="header">Header</h1><p class="description">Some Description</p></div>'); } }, onRegionOver: function(event, code) { if (code == 'ca') { event.preventDefault(); } }, }); ``` Data Visualization ====== Here I want to demonstrate how visualization of some geographical-related data can be done using JQVMap. Let's visualize information about GDP in 2010 for every country. At first we need some data. Let it be site of International Monetary Fond. There we can get information in xsl format, which can be converted first to csv and then to json with any scripting language. Now we have file gdp-data.js with such content (globals are evil, I know, but just for the sake of simplification): ```js var gdpData = {"af":16.63,"al":11.58,"dz":158.97,...}; ``` Then connect it to the page and add some code to make visualization: ```js var max = 0, min = Number.MAX_VALUE, cc, startColor = [200, 238, 255], endColor = [0, 100, 145], colors = {}, hex; //find maximum and minimum values for (cc in gdpData) { if (parseFloat(gdpData[cc]) > max) { max = parseFloat(gdpData[cc]); } if (parseFloat(gdpData[cc]) < min) { min = parseFloat(gdpData[cc]); } } //set colors according to values of GDP for (cc in gdpData) { if (gdpData[cc] > 0) { colors[cc] = '#'; for (var i = 0; i<3; i++) { hex = Math.round(startColor[i] + (endColor[i] - startColor[i]) * (gdpData[cc] / (max - min))).toString(16); if (hex.length == 1) { hex = '0'+hex; } colors[cc] += (hex.length == 1 ? '0' : '') + hex; } } } //initialize JQVMap jQuery('#vmap').vectorMap( { colors: colors, hoverOpacity: 0.7, hoverColor: false }); ``` Functions ====== There are seven functions that can be called on map container: **zoomIn()** *Zoom one step in* Usage: ```js jQuery('#vmap').vectorMap('zoomIn'); ``` **zoomOut()** *Zoom one step out* Usage: ```js jQuery('#vmap').vectorMap('zoomOut'); ``` **getPinId(cc)** *Returns the html attribute "id" of the pin placed on the country whose country code is provided in "cc".* Usage: ```js var pinId = jQuery('#vmap').vectorMap('getPinId', 'pk'); ``` **getPin(cc)** *Returns stringified HTML of the pin placed on the country whose country code is provided in "cc".* Usage: ```js var pinContent = jQuery('#vmap').vectorMap('getPin', 'pk'); ``` **getPins()** *Returns an associative JSON string containing stringified HTML of all the pins.* Usage: ```js var pins = jQuery('#vmap').vectorMap('getPins'); ``` **removePin(cc)** *Removes the pin from the country whose country code is specified in "cc".* Usage: ```js jQuery('#vmap').vectorMap('removePin', 'pk'); ``` **removePins()** *Removes all the pins from the map.* Usage: ```js jQuery('#vmap').vectorMap('removePins'); ``` Events ====== There are three events which you can use to bind your own callbacks to: **drag** *When the map is dragged, this event is triggered.* **zoomIn** *When the map is zoomed in, this event is triggered.* **zoomOut** *When the map is zoomed out, this event is triggered.* You can bind your routines to any of these events by using jQuery on() For example: ```js //Do something when the map is dragged jQuery('#vmap').on('drag', function(event) { console.log('The map is being dragged'); //Do something }); ``` Custom Maps ====== So you want to create your own maps, or change some existing ones. Awesome. Make sure to check out [./create/README.md](./create) for details on how to do this.
29.663306
487
0.652348
eng_Latn
0.975413
9abb423dc25e74d3c470be06db3122e08beff905
1,392
md
Markdown
docs/linkedin.md
carlin-q-scott/AspNet.Security.OAuth.Providers
67572c368e947c9c63f8146fac1f13e4b6a92343
[ "Apache-2.0" ]
1,582
2015-05-03T10:02:11.000Z
2022-03-31T13:07:44.000Z
docs/linkedin.md
carlin-q-scott/AspNet.Security.OAuth.Providers
67572c368e947c9c63f8146fac1f13e4b6a92343
[ "Apache-2.0" ]
505
2015-05-12T06:07:54.000Z
2022-03-27T09:23:34.000Z
docs/linkedin.md
carlin-q-scott/AspNet.Security.OAuth.Providers
67572c368e947c9c63f8146fac1f13e4b6a92343
[ "Apache-2.0" ]
566
2015-05-03T10:02:15.000Z
2022-03-25T07:50:05.000Z
# Integrating the LinkedIn Provider ## Example ```csharp services.AddAuthentication(options => /* Auth configuration */) .AddLinkedIn(options => { options.ClientId = "my-client-id"; options.ClientSecret = "my-client-secret"; }); ``` ## Required Additional Settings _None._ ## Optional Settings | Property Name | Property Type | Description | Default Value | |:--|:--|:--|:--| | `EmailAddressEndpoint` | `string` | The address of the endpoint exposing the email addresses associated with the logged in user. | `LinkedInAuthenticationDefaults.EmailAddressEndpoint` | | `Fields` | `ISet<string>` | The fields to retrieve from the user's profile. The possible values are documented [here](https://docs.microsoft.com/en-us/linkedin/consumer/integrations/self-serve/sign-in-with-linkedin#retrieving-member-profiles "Sign In with LinkedIn"). | `[ "id", "firstName", "lastName", "emailAddress" ]` | | `MultiLocaleStringResolver` | `Func<IReadOnlyDictionary<string, string>, string?, string>` | A delegate to a method that returns a localized value for a field returned for the user's profile. | A delegate to a method that returns either the `preferredLocale`, the value for [`Thread.CurrentUICulture`](https://docs.microsoft.com/en-us/dotnet/api/system.threading.thread.currentuiculture "Thread.CurrentUICulture Property") or the first value. |
55.68
446
0.725575
eng_Latn
0.734434
9abb91075089c30a8e9af5f319403754ef8d30e7
654
md
Markdown
ViewModel App/README.md
Priyanshi-Sharma-142/Androapps
ea502ff5e949e44b97db1cf36d38417ad8637400
[ "MIT" ]
27
2021-09-30T14:27:00.000Z
2022-02-06T16:12:21.000Z
ViewModel App/README.md
Priyanshi-Sharma-142/Androapps
ea502ff5e949e44b97db1cf36d38417ad8637400
[ "MIT" ]
54
2021-09-30T19:32:09.000Z
2021-11-17T11:21:26.000Z
ViewModel App/README.md
Priyanshi-Sharma-142/Androapps
ea502ff5e949e44b97db1cf36d38417ad8637400
[ "MIT" ]
29
2021-09-30T18:16:44.000Z
2022-02-21T08:55:02.000Z
# ViewModel Exemplar App It's a Exemplar App for showing about How ViewModel works and what's the purpose of ViewModel in Android. Best way to learn ViewModel Architecture through this App. What is ViewModel ? Ans : A view model represents the data that you want to display on your view/page, whether it be used for static text or for input values (like textboxes and dropdown lists) that can be added to the database (or edited). It is something different than your domain model. It is a model for the view. # ScreenShots <img src="https://user-images.githubusercontent.com/50077510/136645339-c64972cb-9dd9-457d-9f43-e93248e34d71.gif" width="200">
43.6
137
0.785933
eng_Latn
0.99137
9abbb024b82260b8b63b892240d4a09b0830138e
2,248
md
Markdown
AlchemyInsights/add-remove-prevent-users-from-changing-profile-photos.md
pebaum/OfficeDocs-AlchemyInsights-pr.et-EE
da9e02f84f493e9188f4a5855e6117899feff1eb
[ "CC-BY-4.0", "MIT" ]
null
null
null
AlchemyInsights/add-remove-prevent-users-from-changing-profile-photos.md
pebaum/OfficeDocs-AlchemyInsights-pr.et-EE
da9e02f84f493e9188f4a5855e6117899feff1eb
[ "CC-BY-4.0", "MIT" ]
null
null
null
AlchemyInsights/add-remove-prevent-users-from-changing-profile-photos.md
pebaum/OfficeDocs-AlchemyInsights-pr.et-EE
da9e02f84f493e9188f4a5855e6117899feff1eb
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: Kasutajate lisamine, eemaldamine või vältimine profiilifotode muutmise kaudu ms.author: pebaum author: pebaum manager: mnirkhe ms.audience: Admin ms.topic: article ROBOTS: NOINDEX, NOFOLLOW localization_priority: Normal ms.collection: Adm_O365 ms.custom: - "9001499" - "3552" ms.openlocfilehash: 3165cd1180cf1c1716692d270e27b1ba9e675c8f ms.sourcegitcommit: bc7d6f4f3c9f7060d073f5130e1ec856e248d020 ms.translationtype: MT ms.contentlocale: et-EE ms.lasthandoff: 06/02/2020 ms.locfileid: "44061992" --- # <a name="add-remove-or-prevent-users-from-changing-profile-photos"></a>Kasutajate lisamine, eemaldamine või vältimine profiilifotode muutmise kaudu - **Profiilifotode lisamine:** Profiilifotosid saab lisada administraator [Microsoft 365 halduskeskus, aktiivsed kasutajad](https://admin.microsoft.com/Adminportal/Home?source=applauncher#/users) või [Azure Active Directory Kasutajahaldus](https://portal.azure.com/#blade/Microsoft_AAD_IAM/UsersManagementMenuBlade/AllUsers). Kui te ei näe suvandit "Muuda fotot", siis veenduge, et sellele kasutajale on määratud litsents. Fotosid saab lisada või muuta kasutaja profiili mis tahes Microsoft 365 teenus, klõpsates nende initsiaalid/foto ülemises paremas ülaosas. Profiilifoto lisamise kohta lisateabe saamiseks vaadake teemat [oma profiilifoto lisamine Microsoft 365](https://support.office.com/article/add-your-profile-photo-to-office-365-2eaf93fd-b3f1-43b9-9cdc-bdcd548435b7). - **Profiilifotode eemaldamine:** Profiili fotosid saab eemaldada administraator [Azure Active Directory Kasutajahaldus](https://portal.azure.com/#blade/Microsoft_AAD_IAM/UsersManagementMenuBlade/AllUsers) või kasutaja Microsoft Teams kasutajaprofiili. - **Profiili foto muudatuste blokeerimine:** Foto muudatusi saab blokeerida kõik Microsoft 365 *, lisades Outlook Web Appi poliitika artikli kohta, [lukustades fotosid või piirates õigusi muuta Microsoft 365 profiilifoto](https://answers.microsoft.com/msoffice/forum/msoffice_o365admin-mso_manage/locking-photos-or-restricting-permissions-to/1d19ae4f-de5d-4c3d-a0ad-4b8b8ac32e3d). * Pange tähele, et Microsoft Teams ei toeta praegu Outlook Web Appi poliitika blokeerida foto muudatused, kuid on plaanis lisada toetust selle funktsiooni alguses 2020.
74.933333
778
0.829181
est_Latn
0.98481
9abcc4a6e35af60db2c940591eb50875feb0e04e
5,291
md
Markdown
src/pages/post/2016-review-2017-aims.md
endymion1818/deliciousreverie
fb9f56ac05b89bb432dfd5b1cf817d333055d8da
[ "MIT" ]
2
2020-04-29T15:35:39.000Z
2021-02-17T19:51:08.000Z
src/pages/post/2016-review-2017-aims.md
endymion1818/deliciousreverie
fb9f56ac05b89bb432dfd5b1cf817d333055d8da
[ "MIT" ]
4
2020-03-25T21:21:04.000Z
2022-02-14T23:26:54.000Z
src/pages/post/2016-review-2017-aims.md
endymion1818/deliciousreverie
fb9f56ac05b89bb432dfd5b1cf817d333055d8da
[ "MIT" ]
1
2020-04-29T15:11:29.000Z
2020-04-29T15:11:29.000Z
--- categories: - personal date: "2017-05-12T15:21:21+01:00" description: "2016 as a year was as unconventional as they come. Globally there have been some massive shifts politically, socially and in other ways. My life too has taken some pretty interesting turns. I'm following suit here by posting a quick review of my year and what I hope I can achieve in 2017 from a professional perspective." draft: false tags: - year in review title: '2016 Review / 2017 Aims' --- **2016 as a year was as unconventional as they come. Globally there have been some massive shifts politically, socially and in other ways. My life too has taken some pretty interesting turns. I'm following suit here by posting a quick review of my year and what I hope I can achieve in 2017 from a professional perspective.** When I look back at the start of 2016 it is with a huge amount of mixed feelings. Hannah and I were still getting used to the sleepless nights with Morgan, who was just then a year old. We'd also recently got told to move out of our house with 4 weeks notice. That proved to be incredibly stressful. ## The Summit Media project From a professional perspective, I was doing some of the best work of my career for Ech Design on the Summit website, a monster Wordpress build with multiple custom post types, and some pretty involved animations. The deployment process was also a bit of a minefield for a team that had only just started using git. That build nearly killed the 4 of us, especially Neil and Phil, the project leads, but it's something we are all incredibly proud of even today. It was nice to have the confidence and gratitude of the Summit team after a very intense 4 / 5 month design and build period. It was also my first introduction to using a slew of new technologies, including Composer, Atlassian JIRA, a range of new animation techniques and git branching & deployment strategies. ## Moving on from Ech Until late in October I was still contracting at Ech, a great team with some exciting ambitions. I'm looking forward to hearing what they achieve in the future. I was sad to say farewell after almost a year there to go and work at Indigo Tree, a move which was conducive to a better work / life balance (because it's a much shorter commute). Looking back, I certainly was pushed to move my skills forwards, at times much further than I believed I could. I also was able to leave Ech having imparted to them some useful processes that I had acquired already (kanban boards for project management, git for version control, and working locally instead of via ftp). I also moved to Indigo Tree for the chance to work with the current team and their excellent in-house Wordpress theme. This theme has many fantastic features, including the fact that it uses OO PHP and has command-line interface similar to Laravel's Artisan tool for making meta fields, widgets and custom settings area. ``` $ origin make:metabox mymetabox ``` The one thing that really got to me about it's nearest competitor, Elliot Condon's Advanced Custom Fields, is that you can create everything in the theme in code. This speeds up my development time exponentially, to the point that I can create an entire theme in as few as 3 days. ## Aims for 2017 A few things I had in my goals list which are underway already include the following: ### 1. Build a home web server After a few false starts (I initially used Debian, but struggled without the support as a beginner at this), my home server is up and running . The main reasons for wanting to do this is because we currently maxed out on iCloud storage and don't want to pay increasing amounts ad infinitum to store & share our stuff. It was also a good task to help me discover more about servers and the LAMP stack in general, which will help me understand my role and how it connects to other roles in the workplace. ### 2. Get to grips with SVGs and SVG animations I really believe SVGs and animations are the future of the web. There's an insane amount going on with these at the moment, and I've built sites using some rudimentary animations. Unfortunately, none of them have gone live yet. I would really like to showcase some of what I've done and find out new ways of using animations to tell stories on websites. ### 3. Graduate to Laravel I've already had the opportunity at Indigo Tree to start learning Laravel, and already love it. I think this could quickly develop into a standalone blog post (umm, yep, it probably will) so I won't say anything more about it, other than I know I'm going to love the framework. A colleague has already told me "learning laravel will make you a better Wordpress developer". I can already see the truth of that statement. ### 4. Design More This one is a bit more of a vague idea than a concrete goal:- I only designed one website in 2016 and think I'm losing my sensitivity to design a bit. I want to get that back a little so that I can continue to break down the siloed approach to building websites which is sadly so prevalent still. --- OK I'd better stop there! I'm going to have a lot going on in my personal life this year too with the imminent arrival of baby no.2, so here's hoping I can cope with the lack of sleep and still turn out some stuff I am proud of.
80.166667
419
0.780004
eng_Latn
0.999943
9abd370455046ff11b1eb238163c59aa86cbc24e
7,851
md
Markdown
README.md
fecorreiabr/ngraph.forcelayout
05db79870a585e4538d32204bca86dc8cda847e0
[ "BSD-3-Clause" ]
null
null
null
README.md
fecorreiabr/ngraph.forcelayout
05db79870a585e4538d32204bca86dc8cda847e0
[ "BSD-3-Clause" ]
null
null
null
README.md
fecorreiabr/ngraph.forcelayout
05db79870a585e4538d32204bca86dc8cda847e0
[ "BSD-3-Clause" ]
null
null
null
# ngraph.forcelayout [![build status](https://github.com/anvaka/ngraph.forcelayout/actions/workflows/tests.yaml/badge.svg)](https://github.com/anvaka/ngraph.forcelayout/actions/workflows/tests.yaml) This is a [force directed](http://en.wikipedia.org/wiki/Force-directed_graph_drawing) graph layout algorithm, that works in any dimension (2D, 3D, and above). The library uses quad tree to speed up computation of long-distance forces. This repository is part of [ngraph family](https://github.com/anvaka/ngraph), and operates on [`ngraph.graph`](https://github.com/anvaka/ngraph.graph) data structure. # API All force directed algorithms are iterative. We need to perform multiple iterations of an algorithm, before graph starts looking good: ``` js // graph is an instance of `ngraph.graph` object. var createLayout = require('ngraph.forcelayout'); var layout = createLayout(graph); for (var i = 0; i < ITERATIONS_COUNT; ++i) { layout.step(); } // now we can ask layout where each node/link is best positioned: graph.forEachNode(function(node) { console.log(layout.getNodePosition(node.id)); // Node position is pair of x,y coordinates: // {x: ... , y: ... } }); graph.forEachLink(function(link) { console.log(layout.getLinkPosition(link.id)); // link position is a pair of two positions: // { // from: {x: ..., y: ...}, // to: {x: ..., y: ...} // } }); ``` If you'd like to perform graph layout in space with more than two dimensions, just add one argument to this line: ``` js let layout = createLayout(graph, {dimensions: 3}); // 3D layout let nodePosition = layout.getNodePosition(nodeId); // has {x, y, z} attributes ``` Even higher dimensions are not a problem for this library: ``` js let layout = createLayout(graph, {dimensions: 6}); // 6D layout // Every layout with more than 3 dimensions, say N, gets additional attributes: // c4, c5, ... cN let nodePosition = layout.getNodePosition(nodeId); // has {x, y, z, c4, c5, c6} ``` Note: Higher dimensionality comes at exponential cost of memory for every added dimension. See a performance section below for more details. ## Node position and object reuse Recently immutability became a ruling principle of javascript world. This library doesn't follow the rules, and results of `getNodePosition()`/`getLinkPosition()` will be always the same for the same node. This is true: ``` js layout.getNodePosition(1) === layout.getNodePosition(1); ``` Reason for this is performance. If you are interested in storing positions somewhere else, you can do it and they still will be updated after each force directed layout iteration. ## "Pin" node and initial position Sometimes it's desirable to tell layout algorithm not to move certain nodes. This can be done with `pinNode()` method: ``` js var nodeToPin = graph.getNode(nodeId); layout.pinNode(nodeToPin, true); // now layout will not move this node ``` If you want to check whether node is pinned or not you can use `isNodePinned()` method. Here is an example how to toggle node pinning, without knowing it's original state: ``` js var node = graph.getNode(nodeId); layout.pinNode(node, !layout.isNodePinned(node)); // toggle it ``` What if you still want to move your node according to some external factor (e.g. you have initial positions, or user drags pinned node)? To do this, call `setNodePosition()` method: ``` js layout.setNodePosition(nodeId, x, y); ``` ## Monitoring changes Like many other algorithms in `ngraph` family, force layout monitors graph changes via [graph events](https://github.com/anvaka/ngraph.graph#listening-to-events). It keeps layout up to date whenever graph changes: ``` js var graph = require('ngraph.graph')(); // empty graph var layout = require('ngraph.layout')(graph); // layout of empty graph graph.addLink(1, 2); // create node 1 and 2, and make link between them layout.getNodePosition(1); // returns position. ``` If you want to stop monitoring graph events, call `dispose()` method: ``` js layout.dispose(); ``` ## Physics Simulator Simulator calculates forces acting on each body and then deduces their position via Newton's law. There are three major forces in the system: 1. Spring force keeps connected nodes together via [Hooke's law](http://en.wikipedia.org/wiki/Hooke's_law) 2. Each body repels each other via [Coulomb's law](http://en.wikipedia.org/wiki/Coulomb's_law) 3. The drag force slows the entire simulation down, helping with convergence. Body forces are calculated in `n*lg(n)` time with help of Barnes-Hut algorithm implemented with quadtree. ``` js // Configure var physicsSettings = { timeStep: 0.5, dimensions: 2, gravity: -12, theta: 0.8, springLength: 10, springCoefficient: 0.8, dragCoefficient: 0.9, }; // pass it as second argument to layout: var layout = require('ngraph.forcelayout')(graph, physicsSettings); ``` You can get current physics simulator from layout by checking `layout.simulator` property. This is a read only property. ## Space occupied by graph Finally, it's often desirable to know how much space does our graph occupy. To quickly get bounding box use `getGraphRect()` method: ``` js var rect = layout.getGraphRect(); // rect.min_x, rect.min_y - left top coordinates of the bounding box // rect.max_x, rect.max_y - right bottom coordinates of the bounding box ``` ## Manipulating bodies This is advanced technique to get to internal state of the simulator. If you need to get a node position use regular `layout.getNodePosition(nodeId)` described above. In some cases you really need to manipulate physic attributes on a body level. To get to a single body by node id: ``` js var graph = createGraph(); graph.addLink(1, 2); // Get body that represents node 1: var body = layout.getBody(1); assert( typeof body.pos.x === 'number' && typeof body.pos.y === 'number', 'Body has position'); assert(body.mass, 'Body has mass'); ``` To iterate over all bodies at once: ``` js layout.forEachBody(function(body, nodeId) { assert( typeof body.pos.x === 'number' && typeof body.pos.y === 'number', 'Body has position'); assert(graph.getNode(nodeId), 'NodeId is coming from the graph'); }); ``` # Section about performance This library is focused on performance of physical simulation. We use quad tree data structure in 2D space to approximate long distance forces, and reduce amount of required computations. When layout is performed in higher dimensions we use analogues tree data structure. By design such tree requires to store `2^dimensions_count` child nodes on each node. In practice, performing layout in 6 dimensional space on a graph with a few thousand nodes yields decent performance on modern mac book (graph can be both rendered and layed out at 60FPS rate). Additionally, the vector algebra is optimized by a ad-hoc code generation. Essentially this means that upon first load of the library, we check the dimension of the space where you want to perform layout, and generate all required data structure to run fast in this space. The code generation happens only once when dimension is requested. Any subsequent layouts in the same space would reuse generated codes. It is pretty fast and cool. # install With [npm](https://npmjs.org) do: ``` npm install ngraph.forcelayout ``` Or download from CDN: ``` html <script src='https://unpkg.com/ngraph.forcelayout@3.0.0/dist/ngraph.forcelayout.min.js'></script> ``` If you download from CDN the library will be available under `ngraphCreateGraph` global name. # license MIT # Feedback? I'd totally love it! Please email me, open issue here, [tweet](https://twitter.com/anvaka) to me, or join discussion [on gitter](https://gitter.im/anvaka/VivaGraphJS). If you love this library, please consider sponsoring it at https://github.com/sponsors/anvaka or at https://www.patreon.com/anvaka
32.576763
177
0.739524
eng_Latn
0.977797
9abd8ccb9255bf55ec56575a7eb658a1b143e182
946
md
Markdown
docs/Plan.md
zimmermanc/esp-sdk-python
cdef13c0dc6c3996b6c444160c71b2f1e3910c97
[ "MIT" ]
6
2017-06-05T20:37:19.000Z
2019-04-10T08:43:59.000Z
docs/Plan.md
zimmermanc/esp-sdk-python
cdef13c0dc6c3996b6c444160c71b2f1e3910c97
[ "MIT" ]
18
2016-06-22T16:14:33.000Z
2018-10-29T21:53:15.000Z
docs/Plan.md
zimmermanc/esp-sdk-python
cdef13c0dc6c3996b6c444160c71b2f1e3910c97
[ "MIT" ]
18
2016-07-27T19:20:01.000Z
2020-11-17T02:09:58.000Z
# Plan ## Properties Name | Type | Description | Notes ------------ | ------------- | ------------- | ------------- **id** | **int** | Unique ID | [optional] **name** | **str** | The name of the plan | [optional] **amount** | **int** | Cost of the plan per interval | [optional] **max_external_accounts** | **int** | Number of external accounts allowed for this plan. | [optional] **max_users** | **int** | Number of users allowed for this plan. | [optional] **max_custom_signatures** | **int** | Number of Custom Signatures allowed for this plan. | [optional] **plan_type** | **str** | Kind of plan. Valid values are appliance, enterprise, free, stripe | [optional] **grandfathered** | **bool** | Changes to the plan are prohibited if grandfathered is true. | [optional] [[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
52.555556
161
0.625793
eng_Latn
0.813997
9abdcbcc701194df40978b2ddce6b902fb6771eb
2,317
md
Markdown
jweb/README.md
sawied/swp
576c10bd4dd9124dcc02cbf2cdc1da63bdfcbbc2
[ "Apache-2.0" ]
2
2017-04-01T13:11:40.000Z
2018-08-04T13:25:01.000Z
jweb/README.md
sawied/swp
576c10bd4dd9124dcc02cbf2cdc1da63bdfcbbc2
[ "Apache-2.0" ]
8
2017-04-02T13:42:15.000Z
2021-12-14T21:11:08.000Z
jweb/README.md
sawied/swp
576c10bd4dd9124dcc02cbf2cdc1da63bdfcbbc2
[ "Apache-2.0" ]
null
null
null
Bootstrap is a popular, open source framework. Complete with pre-built components it allows web designers of all skill levels to quickly build a site. first of all, you need to install compass dependencies of ruby runtime. **1.install ruby runtime** >*I am test installation in ubuntu 16 LTS,if you are in china,you are better to use mirrors of [aliyun](http://mirrors.aliyun.com/).* ``` $ sudo apt-get update $ apt-cache search ruby $ sudo apt-get install ruby-dev ``` **2.verify installation of ruby runtime** ``` $ ruby -v ruby 2.3.1p112 (2016-04-26) [x86_64-linux-gnu] $ gem -v 2.5.1 ``` OK ,by now all is ready to install **sass**,**compass** and **bootstrap** **3.change the gem sources** ``` $ gem sources --add https://gems.ruby-china.org/ --remove https://rubygems.org/ https://gems.ruby-china.org/ added to sources https://rubygems.org/ removed from sources $ gem sources -l *** CURRENT SOURCES *** https://gems.ruby-china.org/ ``` ***4.install compass and bootstrap*** ``` $ sudo gem install compass $ sudo gem install bootstrap-sass $ gem list --local sass (3.4.25) compass (1.0.3) compass-core (1.0.3) compass-import-once (1.0.5) autoprefixer-rails (7.2.5) bootstrap-sass (3.3.7) ``` ***5.create a project using compass*** ``` $ compass create sweb -r bootstrap-sass --using bootstrap ``` Usage: compass help [command] Primary Commands: * clean - Remove generated files and the sass cache * compile - Compile Sass stylesheets to CSS * create - Create a new compass project * init - Add compass to an existing project * watch - Compile Sass stylesheets to CSS when they change ***6.custom bootstrap imports*** As usual ,we need to custom bootstrap components for using. you can create a file of bootstrap-custom.scss . then import you liked.for example: ``` @import "bootstrap/code"; @import "bootstrap/grid"; @import "bootstrap/tables"; @import "bootstrap/forms"; @import "bootstrap/buttons"; ``` > some time ,you will encounter install error of node-sass, while downloading or installing. can try run following comands to fix the issue: 1.Error: ENOENT: no such file or directory, scandir '**/node_modules/node-sass/vendor' solution: run 'npm rebuild node-sass ' or download binary file form [github node-sass](https://github.com/sass/node-sass/releases)
33.57971
151
0.716012
eng_Latn
0.867092
9abdce0ff0e0d5399d83e1b5b04ebb64f04a7672
174
md
Markdown
README.md
geoangelotti/wudc_pushNotifications
126ea99659418487a81383fc06a5783a9c167c9b
[ "MIT" ]
1
2022-03-27T14:43:18.000Z
2022-03-27T14:43:18.000Z
README.md
geoangelotti/wudc_pushNotifications
126ea99659418487a81383fc06a5783a9c167c9b
[ "MIT" ]
null
null
null
README.md
geoangelotti/wudc_pushNotifications
126ea99659418487a81383fc06a5783a9c167c9b
[ "MIT" ]
null
null
null
# wudc_pushNotifications A Windows Presentation Foundation solution that sends toast notifications to the Azure Push Notifications applications, formatted for Windows Phone.
58
148
0.862069
eng_Latn
0.836266
9abeb7798bf6d509dd12faf56d5cfbe518a20b0e
2,348
md
Markdown
docs/mfc/copying-a-device-image-image-editor-for-icons.md
OpenLocalizationTestOrg/cpp-docs.it-it
05d8d2dcc95498d856f8456e951d801011fe23d1
[ "CC-BY-4.0" ]
1
2020-05-21T13:04:35.000Z
2020-05-21T13:04:35.000Z
docs/mfc/copying-a-device-image-image-editor-for-icons.md
OpenLocalizationTestOrg/cpp-docs.it-it
05d8d2dcc95498d856f8456e951d801011fe23d1
[ "CC-BY-4.0" ]
null
null
null
docs/mfc/copying-a-device-image-image-editor-for-icons.md
OpenLocalizationTestOrg/cpp-docs.it-it
05d8d2dcc95498d856f8456e951d801011fe23d1
[ "CC-BY-4.0" ]
null
null
null
--- title: Copying a Device Image (Image Editor for Icons) | Microsoft Docs ms.custom: ms.date: 11/04/2016 ms.reviewer: ms.suite: ms.technology: - devlang-cpp ms.tgt_pltfrm: ms.topic: article dev_langs: - C++ helpviewer_keywords: - display devices, copying images - cursors, copying - icons, copying ms.assetid: 0510c20c-f820-4770-92a5-e9263a63d8be caps.latest.revision: 11 author: mikeblome ms.author: mblome manager: ghogen translation.priority.ht: - cs-cz - de-de - es-es - fr-fr - it-it - ja-jp - ko-kr - pl-pl - pt-br - ru-ru - tr-tr - zh-cn - zh-tw translationtype: Human Translation ms.sourcegitcommit: 5187996fc377bca8633360082d07f7ec8a68ee57 ms.openlocfilehash: 8918a7991828553d1fe26e211110e30f9efbfe75 --- # Copying a Device Image (Image Editor for Icons) ### To copy a device image 1. On the **Image** menu, click **Open Device Image** and select an image from the current images list. For example, choose the 32 × 32, 16-color version of an icon. 2. Copy the currently displayed icon image (**CTRL+C**). 3. Open a different image of the icon in another **Image Editor** window. For example, open the 16 × 16, 16-color version of the icon. 4. Paste the icon image (**CTRL+V**) from one **Image Editor** window to the other. If you are pasting a larger size into a smaller size, you can use the icon handles to resize the image. For information on adding resources to managed projects, please see [Resources in Applications](http://msdn.microsoft.com/library/8ad495d4-2941-40cf-bf64-e82e85825890) in the *.NET Framework Developer's Guide.* For information on manually adding resource files to managed projects, accessing resources, displaying static resources, and assigning resources strings to properties, see [Walkthrough: Localizing Windows Forms](http://msdn.microsoft.com/en-us/9a96220d-a19b-4de0-9f48-01e5d82679e5) and [Walkthrough: Using Resources for Localization with ASP.NET](http://msdn.microsoft.com/library/bb4e5b44-e2b0-48ab-bbe9-609fb33900b6). Requirements None ## See Also [Icons and Cursors: Image Resources for Display Devices](../mfc/icons-and-cursors-image-resources-for-display-devices-image-editor-for-icons.md) [Accelerator Keys](../mfc/accelerator-keys-image-editor-for-icons.md) <!--HONumber=Jan17_HO2-->
34.529412
633
0.73552
eng_Latn
0.622562
9abed2462bcf0660346d74df0422772f7cd132a5
964
md
Markdown
catalog/kuroi-tsubasa-fukushuuya-hana/en-US_kuroi-tsubasa-fukushuuya-hana.md
htron-dev/baka-db
cb6e907a5c53113275da271631698cd3b35c9589
[ "MIT" ]
3
2021-08-12T20:02:29.000Z
2021-09-05T05:03:32.000Z
catalog/kuroi-tsubasa-fukushuuya-hana/en-US_kuroi-tsubasa-fukushuuya-hana.md
zzhenryquezz/baka-db
da8f54a87191a53a7fca54b0775b3c00f99d2531
[ "MIT" ]
8
2021-07-20T00:44:48.000Z
2021-09-22T18:44:04.000Z
catalog/kuroi-tsubasa-fukushuuya-hana/en-US_kuroi-tsubasa-fukushuuya-hana.md
zzhenryquezz/baka-db
da8f54a87191a53a7fca54b0775b3c00f99d2531
[ "MIT" ]
2
2021-07-19T01:38:25.000Z
2021-07-29T08:10:29.000Z
# Kuroi Tsubasa: Fukushuuya Hana ![kuroi-tsubasa-fukushuuya-hana](https://cdn.myanimelist.net/images/manga/2/107345.jpg) - **type**: manga - **volumes**: 1 - **original-name**: 黒い翼 復讐屋・華 - **start-date**: 2010-04-13 ## Tags - drama - shoujo ## Authors - Minase - Akira (Story & Art) ## Sinopse "If someone sins, someday they will pay for it. Retribution will be received." Plain and unassuming high school student Sannomiya Hana runs the 'Revenge Shop." As a child, her entire family was slaughtered while Hana alone survived. But the people trying to escape their hatred for the criminal suffer more. So Hana saves people wounded by the chains of hatred formed from bullying, betrayal, etc, and splendidly metes out retribution for the client! Yet for Hana herself, what can cut apart these chains of hatred...!? (Source: Nevermore Scans) ## Links - [My Anime list](https://myanimelist.net/manga/19957/Kuroi_Tsubasa__Fukushuuya_Hana)
33.241379
519
0.731328
eng_Latn
0.961461
9abf656bc06ca5a12ca8c584cdc13010549a254a
1,057
md
Markdown
docs/parameters/ms.work.unit.pacing.seconds.md
chris9692/data-integration-library
4e286b6ea0b05934ea14d5e7af753d6cdc0f274f
[ "BSD-2-Clause" ]
11
2021-05-26T21:02:40.000Z
2022-02-11T03:09:02.000Z
docs/parameters/ms.work.unit.pacing.seconds.md
chris9692/data-integration-library
4e286b6ea0b05934ea14d5e7af753d6cdc0f274f
[ "BSD-2-Clause" ]
6
2021-08-31T04:34:19.000Z
2022-02-03T23:43:05.000Z
docs/parameters/ms.work.unit.pacing.seconds.md
booddu/data-integration-library
ce4d5f2b1cf56388edd3d745677c8fe83f39ea70
[ "BSD-2-Clause" ]
3
2021-08-09T19:43:51.000Z
2021-10-05T22:26:44.000Z
# ms.work.unit.pacing.seconds **Tags**: [watermark & work unit](categories.md#watermark-work-unit-properties) **Type**: integer **Default value**: 0 **Related**: - [job property: ms.work.unit.partition](ms.work.unit.partition.md) ## Description ms.work.unit.pacing.seconds can spread out work unit execution by adding a waiting time in the front of each work unit's execution. The amount of wait time is based on the order of the work units. It is calculated as `i * ms.work.unit.pacing.seconds`, where `i` is the sequence number of the work unit. **Note**: this property can be easily used inappropriately. When there are 3600 work units, and `ms.work.unit.pacing.seconds=1`, the last work unit will not start processing until 1 hour later, no matter how fast other work units are processed. ## Example Assuming there are 100 work units, and we set `ms.work.unit.pacing.seconds=10`, then the second work unit will not start processing until 10th second. Therefore, work units are spread out by 10 second gaps. [back to summary](summary.md)
34.096774
103
0.750237
eng_Latn
0.996962
9abfc9410ddbc6bd35ee31e889ecd42364b8e45e
341
md
Markdown
README.md
pkkrsingh877/Javascript_doc_actions
b410e0c2ade7efa2e8f1dc769f3a0786c25c7a37
[ "MIT" ]
null
null
null
README.md
pkkrsingh877/Javascript_doc_actions
b410e0c2ade7efa2e8f1dc769f3a0786c25c7a37
[ "MIT" ]
5
2021-12-07T07:03:07.000Z
2022-02-26T04:14:24.000Z
README.md
pkkrsingh877/Javascript_doc_actions
b410e0c2ade7efa2e8f1dc769f3a0786c25c7a37
[ "MIT" ]
1
2021-12-09T18:45:23.000Z
2021-12-09T18:45:23.000Z
<h2>Javascript Documentation</h2> <h4>Github Action</h4> Add a new issue, github bot will add documentation label. <br> This action is performed using Github Actions.<br> <br><br> <h4>Contribution</h4> New contributors are most welcome :)<br> Add concepts of javascript in "doc/" Eg - ES6.md (add images and explanation of the concept)
24.357143
62
0.741935
eng_Latn
0.949866
9abfdc9da842bfe19d49efdf1f96bb12ad7f06c5
73
md
Markdown
ledcontrol/README.md
mzahnd/ITBA_RaspberryPi
f38ff8f9a28711235a00eb7f99f1c5f7d55fea9c
[ "MIT" ]
1
2020-07-04T18:31:58.000Z
2020-07-04T18:31:58.000Z
ledcontrol/README.md
mzahnd/ITBA_RaspberryPi
f38ff8f9a28711235a00eb7f99f1c5f7d55fea9c
[ "MIT" ]
null
null
null
ledcontrol/README.md
mzahnd/ITBA_RaspberryPi
f38ff8f9a28711235a00eb7f99f1c5f7d55fea9c
[ "MIT" ]
null
null
null
# ledcontrol Enable or disable using GPIO pins 23 and 24 by autohotspot
18.25
58
0.794521
eng_Latn
0.995902
9ac07839e59e4a8760ef785363ac90d553fad092
45
md
Markdown
content/time-machine/2020-08-06.md
jonblatho/covid-19
79a009c2de245eb513e4b9164cdf08ac9dda7293
[ "MIT" ]
1
2021-06-23T05:21:51.000Z
2021-06-23T05:21:51.000Z
content/time-machine/2020-08-06.md
jonblatho/covid-19
79a009c2de245eb513e4b9164cdf08ac9dda7293
[ "MIT" ]
66
2021-03-30T23:25:53.000Z
2022-03-28T05:49:17.000Z
content/time-machine/2020-08-06.md
jonblatho/covid-19
79a009c2de245eb513e4b9164cdf08ac9dda7293
[ "MIT" ]
null
null
null
--- date: 2020-08-06 layout: time-machine ---
11.25
20
0.644444
eng_Latn
0.311959
9ac245bb8676689401a175bb6a0ab94dee71b969
1,980
md
Markdown
README.md
zerenli1992/PS630-R-Lab
e866bd6c5cbf94c718563ba8e2c72b7768890e82
[ "Apache-2.0" ]
3
2020-08-01T01:44:05.000Z
2021-04-08T12:44:51.000Z
README.md
zerenli1992/PS630-R-Lab
e866bd6c5cbf94c718563ba8e2c72b7768890e82
[ "Apache-2.0" ]
null
null
null
README.md
zerenli1992/PS630-R-Lab
e866bd6c5cbf94c718563ba8e2c72b7768890e82
[ "Apache-2.0" ]
4
2020-01-28T04:04:05.000Z
2021-11-18T04:57:45.000Z
# PS630 R Lab ## Zeren Li This repository contains the lab session material for the first course of the quantitative method series of Duke Political Science Ph.D. Program: **Probability and Regressionbility (PS630)**. [R Quick Guide](../../tree/master/r-quick-guide): `R`, `RMarkdown` installation and resources [Lab 1: R Basics](../../tree/master/lab-1): R base function, importing data; manually compute summary stastics [Lab 2: `dplyr` & T-test ](../../tree/master/lab-2): clean and manage data using `dplyr`, `magrittr`, etc.; manually perform T-test [Lab 3: `ggplot` & OLS ](../../tree/master/lab-3): data visualization using using `ggplot`, manually perform OLS, export regression table using `stargazer` [Lab 4: Hypothesis tests, Heteroskedasticity, Regression Diagnostics, Non-linearity ](../../tree/master/lab-4): using `R` to conduct hypothesis tests mannually, run regression diagnostics using `plot()`, logged transformation, and quadratic regression. [Lab 5: Matrix & Intro to Multivariate Regression](../../tree/master/lab-5): matrix, multivariate regression, omitted variable bias, multicollinearity [Lab 6: Heteroskedasticity](../../tree/master/lab-6): test for heteroskedasticity (visualization, rvf, BP, and White test),heteroskedasticity-robust standard error, Weighted Least Squares & Feasible Generalizable Least Squares [Lab 7: Goodness of Fit](../../tree/master/lab-7): functional programming in R, goodness of fit, joint significance tests [Lab 8: Dummy Variable & Interaction](../../tree/master/lab-8): dummy variable & categortical variable, interaction effect, marginal Effect [Lab 9: Difference-in-Differences](../../tree/master/lab-9): model setup, assumptions, generalized DiDs [Lab 10: Sampling](../../tree/master/lab-10): power calculator, sampling methods Some materials are from *Introduction to Econometrics with R* by Christoph Hanck, Martin Arnold, Alexander Gerber, and Martin Schmelzer https://www.econometrics-with-r.org/.
63.870968
252
0.750505
eng_Latn
0.772209
9ac24bb2f08a8e733ec67f4693f6b3ca411f6877
5,499
md
Markdown
docs/fr/guides/common-tips.md
affiliatescoder/vue-test-utils
386cc2ca14aa71c37a206e0c2343070ad2b8f6dc
[ "MIT" ]
null
null
null
docs/fr/guides/common-tips.md
affiliatescoder/vue-test-utils
386cc2ca14aa71c37a206e0c2343070ad2b8f6dc
[ "MIT" ]
1
2022-03-02T13:10:59.000Z
2022-03-02T13:10:59.000Z
docs/fr/guides/common-tips.md
affiliatescoder/vue-test-utils
386cc2ca14aa71c37a206e0c2343070ad2b8f6dc
[ "MIT" ]
1
2018-04-23T10:51:59.000Z
2018-04-23T10:51:59.000Z
# Astuces ## Savoir quoi tester Pour les composants graphiques (UI), nous ne recommandons pas une couverture complète. En effet, cela mène à trop d'attention sur les détails de l'implémentation interne des composants et pourrait produire des tests instables. A contrario, nous recommandons d'écrire des tests qui vérifient le bon fonctionnement de l'interface public de vos composants et ainsi traiter le cœur de ceux-ci telle une boîte noire. Un simple test pourrait vérifier qu'une entrée utilisateur (due à une interaction ou un changement de props) passée au composant nous donnerait le résultat attendu (cela peut être un nouveau rendu ou l'envoi d'un évènement). Par exemple, pour le composant `Counter`, qui incrémente un compteur visible de 1 à chaque fois qu'un bouton est cliqué, le scénario de test devrait simuler le clic puis s'assurer que le rendu visuel a bien été incrémenté d'un aussi. Le test se fiche de savoir comment le compteur a incrémenté la valeur, il s'occupe seulement de l'entrée et de la sortie (du résultat). Le bénéfice de cette approche est que tant que l'interface public de votre composant reste la même, vos tests passeront et ce peu importe le comportement interne de votre composant, qui pourrait changer avec le temps. Ce sujet est discuté plus en détails dans une [très bonne présentation de Matt O'Connell](http://slides.com/mattoconnell/deck#/). ## Rendu superficiel Dans des tests unitaires, on souhaite s'intéresser au composant qui est en train d'être testé comme une unité isolée et ainsi éviter de s'assurer du bon comportement des composants enfants. De plus, pour les composants qui contiennent beaucoup de composants enfants, l'intégralité de l'arbre de rendu peut être énorme. Répétitivement rendre tous les composants pourrait réduire la vitesse de nos tests. `vue-test-utils` vous permets de monter un composant sans avoir à rendre ses composants enfants (en les ignorants) avec la méthode `shallow` : ```js import { shallow } from '@vue/test-utils' const wrapper = shallow(Component) // retourne un wrapper contenant une instance de composant montée wrapper.vm // l'instance de Vue montée ``` ## Assertion d'évènements émis Chaque wrapper monté va automatiquement enregistrer les évènements émis par l'instance de Vue en question. Vous pouvez récupérer ces évènements en utilisant la méthode `wrapper.emitted()` : ``` js wrapper.vm.$emit('foo') wrapper.vm.$emit('foo', 123) /* `wrapper.emitted()` retourne l'objet suivant : { foo: [[], [123]] } */ ``` Vous pouvez ensuite réaliser des assertions sur ces données : ``` js // asserte que l'évènement est bien émit expect(wrapper.emitted().foo).toBeTruthy() // asserte la taille du compteur d'évènement expect(wrapper.emitted().foo.length).toBe(2) // asserte le contenu lié à l'évènement expect(wrapper.emitted().foo[1]).toEqual([123]) ``` Vous pouvez aussi récupérer un tableau des évènements dans l'ordre d'émition en appelant [`wrapper.emittedByOrder()`](../api/wrapper/emittedByOrder.md). ## Manipuler l'état d'un composant Vous pouvez directement manipuler l'état d'un composant en utilisant la méthode `setData` ou `setProps` sur le wrapper : ```js wrapper.setData({ count: 10 }) wrapper.setProps({ foo: 'bar' }) ``` ## Simuler des props Vous pouvez passer des props au composant en utilisant l'option `propsData` de Vue : ```js import { mount } from '@vue/test-utils' mount(Component, { propsData: { aProp: 'une valeur' } }) ``` Vous pouvez aussi mettre à jour les props d'un composant déjà monté avec la méthode `wrapper.setProps({})`. *Pour une liste complète des options, veuillez regarder [la section sur les options de montage](../api/options.md) de la documentation.* ## Appliquer des plugins globaux et des mixins Des composants pourraient se fier à des fonctionnalités injectées par un plugin global ou un mixin, par exemple `vuex` ou `vue-router`. Si vous écrivez des tests pour des composants dans une application spécifique, vous pouvez mettre en place les mêmes plugins globaux et mixins en une seule fois dans vos tests. Dans certains cas, comme tester un composant générique utilisé par des applications différentes, il est favorable de tester ces composants dans une installation plus isolée, sans avoir à polluer le constructeur global `Vue`. On peut utiliser la méthode [`createLocalVue`](../api/createLocalVue.md) pour faire cela : ``` js import { createLocalVue } from '@vue/test-utils' // créer un constructeur local de `Vue` const localVue = createLocalVue() // installer des plugins comme d'habitude localVue.use(MyPlugin) // passe la `localVue` aux options de montage mount(Component, { localVue }) ``` ## Simuler des injections Une stratégie alternative pour injecter des propriétés est de simplement les simuler. Vous pouvez faire cela avec l'option `mocks` : ```js import { mount } from '@vue/test-utils' const $route = { path: '/', hash: '', params: { id: '123' }, query: { q: 'bonjour' } } mount(Component, { mocks: { $route // ajoute l'objet `$route` simulé à l'instance de Vue avant de monter le composant } }) ``` ## Gérer le routage Depuis que le routage, par définition, porte sur la structure générale de l'application et implique plusieurs composants. Il est mieux testé via des tests d'intégration ou point à point (end-to-end). Pour des composants individuels qui se fie aux fonctionnalités de `vue-router`, vous pouvez les simuler en utilisant les techniques mentionnées plus haut.
41.345865
492
0.759956
fra_Latn
0.996082
9ac39c63fb2d4941fc96e65bea5858302cf676f8
4,285
md
Markdown
articles/cognitive-services/Labs/Anomaly-Finder/includes/request.md
YutongTie-MSFT/azure-docs.de-de
f7922d4a0ebfb2cbb31d7004d4f726202f39716b
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/cognitive-services/Labs/Anomaly-Finder/includes/request.md
YutongTie-MSFT/azure-docs.de-de
f7922d4a0ebfb2cbb31d7004d4f726202f39716b
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/cognitive-services/Labs/Anomaly-Finder/includes/request.md
YutongTie-MSFT/azure-docs.de-de
f7922d4a0ebfb2cbb31d7004d4f726202f39716b
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: Includedatei description: Includedatei services: cognitive-services author: chliang manager: bix ms.service: cognitive-services ms.subservice: anomaly-finder ms.topic: include ms.date: 04/13/2018 ms.author: chliang ms.custom: include file ms.openlocfilehash: d3e93af87f30d5d39451f90acd2cb0e97350e13f ms.sourcegitcommit: 95822822bfe8da01ffb061fe229fbcc3ef7c2c19 ms.translationtype: HT ms.contentlocale: de-DE ms.lasthandoff: 01/29/2019 ms.locfileid: "55228951" --- ``` json { "Period": 7, "Points": [ { "Timestamp": "2018-03-01T00:00:00Z", "Value": 32858923 }, { "Timestamp": "2018-03-02T00:00:00Z", "Value": 29615278 }, { "Timestamp": "2018-03-03T00:00:00Z", "Value": 22839355 }, { "Timestamp": "2018-03-04T00:00:00Z", "Value": 25948736 }, { "Timestamp": "2018-03-05T00:00:00Z", "Value": 34139159 }, { "Timestamp": "2018-03-06T00:00:00Z", "Value": 33843985 }, { "Timestamp": "2018-03-07T00:00:00Z", "Value": 33637661 }, { "Timestamp": "2018-03-08T00:00:00Z", "Value": 32627350 }, { "Timestamp": "2018-03-09T00:00:00Z", "Value": 29881076 }, { "Timestamp": "2018-03-10T00:00:00Z", "Value": 22681575 }, { "Timestamp": "2018-03-11T00:00:00Z", "Value": 24629393 }, { "Timestamp": "2018-03-12T00:00:00Z", "Value": 34010679 }, { "Timestamp": "2018-03-13T00:00:00Z", "Value": 33893888 }, { "Timestamp": "2018-03-14T00:00:00Z", "Value": 33760076 }, { "Timestamp": "2018-03-15T00:00:00Z", "Value": 33093515 }, { "Timestamp": "2018-03-16T00:00:00Z", "Value": 29945555 }, { "Timestamp": "2018-03-17T00:00:00Z", "Value": 22676212 }, { "Timestamp": "2018-03-18T00:00:00Z", "Value": 25262514 }, { "Timestamp": "2018-03-19T00:00:00Z", "Value": 33631649 }, { "Timestamp": "2018-03-20T00:00:00Z", "Value": 34468310 }, { "Timestamp": "2018-03-21T00:00:00Z", "Value": 34212281 }, { "Timestamp": "2018-03-22T00:00:00Z", "Value": 38144434 }, { "Timestamp": "2018-03-23T00:00:00Z", "Value": 34662949 }, { "Timestamp": "2018-03-24T00:00:00Z", "Value": 24623684 }, { "Timestamp": "2018-03-25T00:00:00Z", "Value": 26530491 }, { "Timestamp": "2018-03-26T00:00:00Z", "Value": 35445003 }, { "Timestamp": "2018-03-27T00:00:00Z", "Value": 34250789 }, { "Timestamp": "2018-03-28T00:00:00Z", "Value": 33423012 }, { "Timestamp": "2018-03-29T00:00:00Z", "Value": 30744783 }, { "Timestamp": "2018-03-30T00:00:00Z", "Value": 25825128 }, { "Timestamp": "2018-03-31T00:00:00Z", "Value": 21244209 }, { "Timestamp": "2018-04-01T00:00:00Z", "Value": 22576956 }, { "Timestamp": "2018-04-02T00:00:00Z", "Value": 31957221 }, { "Timestamp": "2018-04-03T00:00:00Z", "Value": 33841228 }, { "Timestamp": "2018-04-04T00:00:00Z", "Value": 33554483 }, { "Timestamp": "2018-04-05T00:00:00Z", "Value": 32383350 }, { "Timestamp": "2018-04-06T00:00:00Z", "Value": 29494850 }, { "Timestamp": "2018-04-07T00:00:00Z", "Value": 22815534 }, { "Timestamp": "2018-04-08T00:00:00Z", "Value": 25557267 }, { "Timestamp": "2018-04-09T00:00:00Z", "Value": 34858252 }, { "Timestamp": "2018-04-10T00:00:00Z", "Value": 34750597 }, { "Timestamp": "2018-04-11T00:00:00Z", "Value": 34717956 }, { "Timestamp": "2018-04-12T00:00:00Z", "Value": 34132534 }, { "Timestamp": "2018-04-13T00:00:00Z", "Value": 30762236 }, { "Timestamp": "2018-04-14T00:00:00Z", "Value": 22504059 }, { "Timestamp": "2018-04-15T00:00:00Z", "Value": 26149060 }, { "Timestamp": "2018-04-16T00:00:00Z", "Value": 35250105 } ] } ```
19.837963
60
0.512485
yue_Hant
0.28993
9ac54f4c3203f4a3456e8d1f5bc7e827fee5e015
892
md
Markdown
README.md
Khan/git-workflow
31d58ed54c04d54b53dd082e7bf68ecf0d405b37
[ "MIT" ]
16
2015-05-26T18:44:53.000Z
2022-03-21T01:43:10.000Z
README.md
Khan/git-workflow
31d58ed54c04d54b53dd082e7bf68ecf0d405b37
[ "MIT" ]
null
null
null
README.md
Khan/git-workflow
31d58ed54c04d54b53dd082e7bf68ecf0d405b37
[ "MIT" ]
1
2018-01-07T19:58:48.000Z
2018-01-07T19:58:48.000Z
# khan academy git-workflow > Collection of scripts used to enable the [git workflow][git-at-ka] at Khan Academy. (see also: [arcanist]) [git-at-ka]: https://khanacademy.org/r/git-at-ka [arcanist]: https://github.com/khan/arcanist #### git deploy-branch Creates a remote _deploy branch_ for use with GitHub-style deploys. For GitHub-style deploys, all work must branch off a deploy branch. #### git review-branch Creates a new local branch ensuring that it's not based off master. Such a branch is called a _review branch_. All review branches should be based off a deploy branch. #### git recursive-grep Runs git grep recursively through submodules, showing file paths relative to cwd. #### git find-reviewers Find the best reviewer(s) for a given changeset. The idea is that if one user has modified all the lines you are editing, they are a good candidate to review your change.
30.758621
79
0.76009
eng_Latn
0.99224
9ac5fd68c77153355f56fa16c35790d92d8fe7b4
6,732
md
Markdown
README.md
mmnaseri/tuples4j
5e812c6993c2a525c650bfe97c0a0a6323c2664b
[ "MIT" ]
1
2022-02-04T17:44:41.000Z
2022-02-04T17:44:41.000Z
README.md
mmnaseri/tuples4j
5e812c6993c2a525c650bfe97c0a0a6323c2664b
[ "MIT" ]
10
2020-05-06T22:11:12.000Z
2021-01-17T04:47:55.000Z
README.md
mmnaseri/tuples4j
5e812c6993c2a525c650bfe97c0a0a6323c2664b
[ "MIT" ]
1
2020-11-30T11:47:33.000Z
2020-11-30T11:47:33.000Z
# tuples4j [![Donae](https://img.shields.io/badge/paypal-donate-yellow.svg)](https://paypal.me/mmnaseri) [![MIT license](http://img.shields.io/badge/license-MIT-brightgreen.svg)](http://opensource.org/licenses/MIT) [![Maven Central](https://maven-badges.herokuapp.com/maven-central/com.mmnaseri.utils/tuples4j/badge.svg)](https://maven-badges.herokuapp.com/maven-central/com.mmnaseri.utils/tuples4j) [![Build Status](https://travis-ci.org/mmnaseri/tuples4j.svg?branch=master)](https://travis-ci.org/mmnaseri/tuples4j) [![Codacy Badge](https://api.codacy.com/project/badge/Grade/1d667baee2084c42bf3c4b1db9c8a30e)](https://www.codacy.com/manual/mmnaseri/tuples4j?utm_source=github.com&amp;utm_medium=referral&amp;utm_content=mmnaseri/tuples4j&amp;utm_campaign=Badge_Grade)[![CodeFactor](https://www.codefactor.io/repository/github/mmnaseri/tuples4j/badge)](https://www.codefactor.io/repository/github/mmnaseri/tuples4j) [![Coverage Status](https://coveralls.io/repos/github/mmnaseri/tuples4j/badge.svg)](https://coveralls.io/github/mmnaseri/tuples4j) --- A tiny library for working with [tuples](https://en.wikipedia.org/wiki/Tuple) as first-class citizens, in Java. This library has been designed to play well with Java 8+'s collections and streams APIs and fits nicely into the functional programming paradigm familiar to Java 8 developers. This code is well tested, and is easy to parse. ## Getting Starting To download the code and use it in your project, you can either clone this project and start using it: $ git clone https://github.com/mmnaseri/tuples4j.git or you can add a maven dependency since it is now available in Maven central: <dependency> <groupId>com.mmnaseri.utils</groupId> <artifactId>tuples4j</artifactId> <version>${tuples4j.version}</version> </dependency> To get started, use one of the many utility methods under the `Tuple` interface as the entrypoint: ```java ThreeTuple<Object, Integer, String, Boolean> t = Tuple.three(1, "x", true); // Or, if you prefer to get rid of the type-safety mechanism: Tuple<?> t = Tuple.three(1, "x", true); ``` Tuples are immutable, so, feel free to pass them around: ```java // Create a bunch of one-tuples. Set<Tuple<?>> tuples = Stream.of(Tuple.of(1), Tuple.of(2)) // Extend them each to two-tuples. .map(Tuple.extendOne("Hello")) // this is the same as: .map(t -> t.extend("Hello")) // Change the first element. .map(t -> t.first(Objects::toString)) // Sort by the second element. .sorted(Comparator.comparing(HasSecond::second)) // Collect them all as a set. .collect(toSet()); ``` ## Features ### JDK 8-14 Compatible This project is regular built against and tested with OpenJDK 8 - 14. Click on the build badge above to find out more. ### Tuple Sizes This library offers a wide set of tuples, from size 0 (the *empty* tuple), to size 12 and beyond. The build-in tuple sizes are: * `EmptyTuple`, * `OneTuple`, * `TwoTuple`, also available as `Pair` and `KeyValue`, * `ThreeTuple`, * `FourTuple`, * `FiveTuple`, * `SixTuple`, * `SevenTuple`, * `EightTuple`, * `NineTuple`, * `TenTuple`, * `ElevenTuple`, * `TwelveTuple`. Beyond these, there is the `ThirteenOrMoreTuple` class which holds data for thirteen or more elements. ### Type-safe Elements All elements are bound to an individual type argument, meaning that you can carry that type information and will not need to cast the items to the actual types. Moreover, each tuple type has a tightly bound, generically defined constructor and factory method, that can be used if you want to enforce an upperbound on the datatypes. For instance, to create a number-only three-tuple, you can write: ```java ThreeTuple<Number, Integer, Double, Long> tuple = ThreeTuple.of(1, 1.2D, 3L); ``` ### `equals`, `hashCode()`, and `toString()` All tuples have their own implementation of these three basic methods. These are also well-tested and you can consult the tests for examples. For instance: ```java Tuple.of(1, 2, 3).equals(Tuple.of(1, 2, 3, "a").dropFourth()) // => true ``` ### Labeled Tuples Some applications might require access to datum by labels. In this case, you can easily label your tuples like so: ```java LabeledTuple<?> t = Tuple.of("USA", 3.2).withLabels("country", "population"); System.out.println(t.get("population")); // => 3.2 ``` ### Named Accessors All tuples which have a fixed dimension, have facades that allow named interactions. For instance, a four-tuple, would implement all the facades for `HasFirst`, `HasSecond`, `HasThird`, and `HasFourth`. This means that to get the second element in the tuple, instead of writing `tuple.get(1)`, you can write `tuple.second()`. This also applies to changing values on the tuples. Instead of calling `tuple.change(2, "a")`, you can write `tuple.third("a")`. Not only is this syntax semantically stronger, but also it can help preserve type arguments on the tuple itself. ## Reflective Access The companion module `tuples4j-reflection` can be used to get tuples represented as classes: ```java ReflectiveTuple<?> reflectiveTuple = ReflectiveTuple.of(tuple); Customer customer = reflectiveTuple.as(Customer.class); String name = customer.name(); BigDecimal income = customer.income(); ``` You can do all sorts of customizations in how a value is read from the tuple and conveyed as a method's return value: ```java public interface Customer { String name(); BigDecimal income(); State state(); @Provided(by = IncomeBracketProvider.class) IncomeBracket incomeBracket(); double raise(); default nextYearIncome() { return income().multiply(raise()); } enum State { TX, WA, CA, OR } } ``` Look at [tuples4j-reflection/~/test/~/model/Country.java](https://github.com/mmnaseri/tuples4j/blob/master/tuples4j-reflection/src/test/java/com/mmnaseri/utils/tuples/model/Country.java) for a better example of the sort of things you can do with the reflective code. ### Using the Reflection Module To use the reflection module, get yourself a copy by either cloning this project or using Maven: <dependency> <groupId>com.mmnaseri.utils</groupId> <artifactId>tuples4j-reflection</artifactId> <version>${tuples4j.version}</version> </dependency> ## Building and Contributing Contributions are more than welcome :) To build the code and/or to contribute, you can use the provided `Dockerfile` image. This image sets up an Ubuntu Bionic (18.04 LTS) with OpenJDK 8. This is the minimum required environment for the project to work. You can get a working Docker image by running: ```bash docker build -t tuples4j:jdk8 . docker run -it tuples4j:jdk8 ```
35.246073
399
0.728758
eng_Latn
0.928044
9ac79b3415f3c32909d40bd470251a5a2e5e8346
2,956
md
Markdown
desktop-src/dlgbox/colorokstring.md
nbassett-MSFT/win32
13f1bdd126606b889e607594f7e8f7cfe58eb008
[ "CC-BY-4.0", "MIT" ]
null
null
null
desktop-src/dlgbox/colorokstring.md
nbassett-MSFT/win32
13f1bdd126606b889e607594f7e8f7cfe58eb008
[ "CC-BY-4.0", "MIT" ]
null
null
null
desktop-src/dlgbox/colorokstring.md
nbassett-MSFT/win32
13f1bdd126606b889e607594f7e8f7cfe58eb008
[ "CC-BY-4.0", "MIT" ]
1
2021-05-13T23:10:12.000Z
2021-05-13T23:10:12.000Z
--- title: COLOROKSTRING message (Commdlg.h) description: A Color dialog box sends the COLOROKSTRING registered message to your hook procedure, CCHookProc, when the user selects a color and clicks the OK button. ms.assetid: 18b28558-1262-4c88-becf-76ce799b7542 keywords: - COLOROKSTRING message Dialog Boxes topic_type: - apiref api_name: - COLOROKSTRING - COLOROKSTRINGA - COLOROKSTRINGW api_location: - Commdlg.h api_type: - HeaderDef ms.topic: reference ms.date: 05/31/2018 --- # COLOROKSTRING message A **Color** dialog box sends the **COLOROKSTRING** registered message to your hook procedure, [*CCHookProc*](https://msdn.microsoft.com/en-us/library/ms646908(v=VS.85).aspx), when the user selects a color and clicks the **OK** button. The hook procedure can accept the color and allow the dialog box to close, or reject the color and force the dialog box to remain open. ```C++ #define COLOROKSTRING TEXT("commdlg_ColorOK") ``` ## Parameters <dl> <dt> *wParam* </dt> <dd> This parameter is not used. </dd> <dt> *lParam* </dt> <dd> A pointer to a [**CHOOSECOLOR**](/windows/win32/api/commdlg/ns-commdlg-choosecolora~r1) structure. The **rgbResult** member of this structure contains the RGB color value of the selected color. </dd> </dl> ## Return value If the hook procedure returns zero, the **Color** dialog box accepts the selected color and closes. If the hook procedure returns a nonzero value, the **Color** dialog box rejects the selected color and remains open. ## Remarks The hook procedure must specify the **COLOROKSTRING** constant in a call to the [**RegisterWindowMessage**](https://docs.microsoft.com/windows/desktop/api/winuser/nf-winuser-registerwindowmessagea) function to get the identifier of the message sent by the dialog box. ## Requirements | | | |-------------------------------------|----------------------------------------------------------------------------------------------------------| | Minimum supported client<br/> | Windows 2000 Professional \[desktop apps only\]<br/> | | Minimum supported server<br/> | Windows 2000 Server \[desktop apps only\]<br/> | | Header<br/> | <dl> <dt>Commdlg.h (include Windows.h)</dt> </dl> | | Unicode and ANSI names<br/> | **COLOROKSTRINGW** (Unicode) and **COLOROKSTRINGA** (ANSI)<br/> | ## See also <dl> <dt> **Reference** </dt> <dt> [**CHOOSECOLOR**](/windows/win32/api/commdlg/ns-commdlg-choosecolora~r1) </dt> <dt> [**RegisterWindowMessage**](https://docs.microsoft.com/windows/desktop/api/winuser/nf-winuser-registerwindowmessagea) </dt> <dt> **Conceptual** </dt> <dt> [Common Dialog Box Library](common-dialog-box-library.md) </dt> </dl>
29.56
370
0.61908
eng_Latn
0.525773
9ac7cea5890f6f529aa023d8016388ad5219f1e6
445
md
Markdown
content/links/2020-12-01-08-58-how-to-write-an-essay-well.md
ys/brain
8e6c696dfef5eaa2cbce781c70547a1ade493a36
[ "MIT" ]
null
null
null
content/links/2020-12-01-08-58-how-to-write-an-essay-well.md
ys/brain
8e6c696dfef5eaa2cbce781c70547a1ade493a36
[ "MIT" ]
null
null
null
content/links/2020-12-01-08-58-how-to-write-an-essay-well.md
ys/brain
8e6c696dfef5eaa2cbce781c70547a1ade493a36
[ "MIT" ]
null
null
null
--- tags: "\U0001F5A5, article" source: link: https://www.julian.com/guide/write/intro title: How to write an essay well date: '2020-12-01T08:58:00+02:00' headImage: https://assets.website-files.com/54a5a40be53a05f34703dd18/5d3612c1918b28e348b1b374_writing%20opengraph.jpg uuid: 1d675f50-994b-4c85-ab0f-540552722164 --- # How to write an essay well https://www.julian.com/guide/write/intro ![[5d3612c1918b28e348b1b374_writing opengraph.jpeg]]
29.666667
117
0.786517
yue_Hant
0.305948
9ac7db8345cf492125d2e6e07ad6a5a169749fe8
600
md
Markdown
README.md
rpcx-ecosystem/rpcx-examples3
9278442468e64c1469631975c7080259163e5887
[ "Apache-2.0" ]
184
2017-10-23T00:34:10.000Z
2020-05-03T12:43:03.000Z
README.md
rpcx-ecosystem/rpcx-examples3
9278442468e64c1469631975c7080259163e5887
[ "Apache-2.0" ]
15
2017-12-07T15:22:40.000Z
2020-02-04T14:00:45.000Z
README.md
rpcx-ecosystem/rpcx-examples3
9278442468e64c1469631975c7080259163e5887
[ "Apache-2.0" ]
70
2017-10-28T06:31:30.000Z
2020-04-23T01:11:32.000Z
# Examples for the latest rpcx A lot of examples for [rpcx](https://github.com/smallnest/rpcx) ## How to run you should build rpcx with necessary tags, otherwise only need to install rpcx: ```sh go get -u -v github.com/smallnest/rpcx/... ``` If you install succeefullly, you can run examples in this repository. Enter one sub directory in this repository, `go run server.go` in one terminal and `cd client; go run client.go` in another ternimal, and you can watch the run result. For example, ```sh cd 101basic/server go run server.go ``` And ```sh cd 101basic/client go run client.go ```
20
168
0.728333
eng_Latn
0.993734
9ac854cd0370b43a76423f1384f56d918f22ee53
1,329
md
Markdown
api/Excel.visible.md
skucab/VBA-Docs
2912fe0343ddeef19007524ac662d3fcb8c0df09
[ "CC-BY-4.0", "MIT" ]
null
null
null
api/Excel.visible.md
skucab/VBA-Docs
2912fe0343ddeef19007524ac662d3fcb8c0df09
[ "CC-BY-4.0", "MIT" ]
1
2021-09-28T07:52:15.000Z
2021-09-28T07:52:15.000Z
api/Excel.visible.md
skucab/VBA-Docs
2912fe0343ddeef19007524ac662d3fcb8c0df09
[ "CC-BY-4.0", "MIT" ]
1
2021-09-28T07:45:29.000Z
2021-09-28T07:45:29.000Z
--- title: Visible Property (Graph) keywords: vbagr10.chm66094 f1_keywords: - vbagr10.chm66094 ms.prod: excel ms.assetid: 8a2b1b7a-b880-0e43-ca9f-c5d2207f7cfd ms.date: 06/08/2017 localization_priority: Normal --- # Visible Property (Graph) Visible property as it applies to the **Application** object. Determines whether the object is visible. Read/write Boolean. _expression_.**Visible** _expression_ Required. An expression that returns an [Application](./Excel.Application-graph-property.md) object. Visible property as it applies to the **ChartFillFormat** object. Determines whether the application is visible. Read/write MsoTriState . |MsoTriState can be one of these MsoTriState constants.| | **msoCTrue**| | **msoFalse**| | **msoTriStateMixed**| | **msoTriStateToggle**| | **msoTrue** The object is visible.| _expression_. **Visible** _expression_ Required. An expression that returns a [ChartFillFormat](./Excel.ChartFillFormat.md) object. ## Example As it applies to the **ChartFillFormat** object. This example formats the chart's fill with a preset gradient and then makes the fill visible. ```vb With myChart.ChartArea.Fill .Visible = msoTrue .PresetGradient msoGradientDiagonalDown, _ 3, msoGradientBrass End With ``` [!include[Support and feedback](~/includes/feedback-boilerplate.md)]
25.075472
114
0.761475
eng_Latn
0.874485
9ac86b38471109faa7e2d9a813b36a91c60098b4
173
md
Markdown
README.md
marhi/i2cat
ae0c3abc52f5a7ab598b2e54a82f5eb3f8cc2304
[ "Apache-2.0" ]
3
2018-04-19T19:09:53.000Z
2018-07-17T19:38:46.000Z
README.md
marhi/i2cat
ae0c3abc52f5a7ab598b2e54a82f5eb3f8cc2304
[ "Apache-2.0" ]
null
null
null
README.md
marhi/i2cat
ae0c3abc52f5a7ab598b2e54a82f5eb3f8cc2304
[ "Apache-2.0" ]
null
null
null
i2cat ===== iTerm image cat utility that displays JPEG/GIF/PNG images in iTerm2. License: Apache 2.0 ![Display as arguments](example.png) ![Display from stdin](stdin.png)
19.222222
68
0.739884
eng_Latn
0.599656
9ac8bc2f9c6789d43ac762e33370729af99aadaa
814
md
Markdown
_posts/2020-03-07-(Baekjoon)1357_reverse_add.md
TakeaimK/TakeaimK.github.io
13ef7dd7093fed5f60b16599b6b6d76190a2aaf8
[ "MIT" ]
null
null
null
_posts/2020-03-07-(Baekjoon)1357_reverse_add.md
TakeaimK/TakeaimK.github.io
13ef7dd7093fed5f60b16599b6b6d76190a2aaf8
[ "MIT" ]
null
null
null
_posts/2020-03-07-(Baekjoon)1357_reverse_add.md
TakeaimK/TakeaimK.github.io
13ef7dd7093fed5f60b16599b6b6d76190a2aaf8
[ "MIT" ]
null
null
null
--- layout: post title: 29. 뒤집힌 덧셈 categories: - Baekjoon --- ## 문제 원문 : [Baekjoon NO.1357 : 뒤집힌 덧셈](https://www.acmicpc.net/problem/1357){: target="\_blank"} ### 문제 난이도 (solved.ac 기준) : Bronze I ### 문제 내용 ![1357_reverse_add](/assets/images/Baekjoon/1357_reverse_add.PNG) ### 입력 1 ``` 123 100 ``` ### 출력 1 ``` 223 ``` ### 문제 이해 정수를 문자열로 받아 뒤집어준 뒤 다시 정수로 변환하여 더하고 문자열로 바꿔서 뒤집어 준 다음 다시 정수로 바꾸어 출력한다. ### 소스 코드 (Python) ```python def reverse(string): temp = "" for i in range(len(string)): temp += string[len(string)-i-1] return temp if __name__ == "__main__": num1, num2 = map(str, input().strip().split()) rev1 = int(reverse(num1)) rev2 = int(reverse(num2)) print(int(reverse(str(rev1+rev2)))) ``` ### 소스 코드 (Java) ```java ``` ### 소스 코드 (C++) ```cpp ```
13.129032
96
0.578624
kor_Hang
0.988723
9ac8ef185ecb22daaebd17c2d8f627bb7de93ba1
48
md
Markdown
README.md
allancher/portfolio
44b3aea551f33367e481147ee0a3ac52b213ba8c
[ "CC-BY-3.0" ]
null
null
null
README.md
allancher/portfolio
44b3aea551f33367e481147ee0a3ac52b213ba8c
[ "CC-BY-3.0" ]
null
null
null
README.md
allancher/portfolio
44b3aea551f33367e481147ee0a3ac52b213ba8c
[ "CC-BY-3.0" ]
null
null
null
# portfolio Portfolio to showcase my past works
16
35
0.8125
eng_Latn
0.999331
9ac9081d82dff7127567e1663476987f97cf3e5e
12,725
md
Markdown
README.md
elp4nsho/ngx-time-scheduler
f390045bb063d1f477cccfb39e07035ba99f4499
[ "MIT" ]
28
2019-11-04T16:40:57.000Z
2022-03-08T02:04:20.000Z
README.md
elp4nsho/ngx-time-scheduler
f390045bb063d1f477cccfb39e07035ba99f4499
[ "MIT" ]
21
2020-02-13T02:34:57.000Z
2022-03-06T16:36:00.000Z
README.md
elp4nsho/ngx-time-scheduler
f390045bb063d1f477cccfb39e07035ba99f4499
[ "MIT" ]
41
2019-11-21T04:30:55.000Z
2022-03-23T11:24:16.000Z
# Angular Time Scheduler [![GitHub issues](https://img.shields.io/github/issues/abhishekjain12/ngx-time-scheduler.svg)](https://github.com/abhishekjain12/ngx-time-scheduler/issues) [![GitHub forks](https://img.shields.io/github/forks/abhishekjain12/ngx-time-scheduler.svg)](https://github.com/abhishekjain12/ngx-time-scheduler/network) [![GitHub stars](https://img.shields.io/github/stars/abhishekjain12/ngx-time-scheduler.svg)](https://github.com/abhishekjain12/ngx-time-scheduler/stargazers) [![GitHub license](https://img.shields.io/github/license/abhishekjain12/ngx-time-scheduler.svg)](https://github.com/abhishekjain12/ngx-time-scheduler/blob/master/LICENSE) [![latest](https://img.shields.io/npm/v/ngx-time-scheduler/latest.svg)](http://www.npmjs.com/package/ngx-time-scheduler) [![npm](https://img.shields.io/npm/dt/ngx-time-scheduler.svg)](https://www.npmjs.com/packagengx-time-scheduler) A simple Angular Timeline Scheduler library # Installation Install via [NPM](https://npmjs.com) ``` npm i ngx-time-scheduler ``` # Getting Started Import the `NgxTimeSchedulerModule` in your app module. ```typescript import {NgxTimeSchedulerModule} from 'ngx-time-scheduler'; @NgModule({ imports: [ BrowserModule, NgxTimeSchedulerModule, ... ], ... }) export class AppModule { } ``` Use `ngx-ts` in your `app-component.html` template. ```html <ngx-ts [items]="items" [periods]="periods" [sections]="sections" [events]="events" [showBusinessDayOnly]="false" [allowDragging]="true" ></ngx-ts> ``` And in your `app.component.ts` component class: ```typescript import {Component, OnInit} from '@angular/core'; import {Item, Period, Section, Events, NgxTimeSchedulerService} from 'ngx-time-scheduler'; import * as moment from 'moment'; @Component({ selector: 'app-root', templateUrl: './app.component.html' }) export class AppComponent implements OnInit { events: Events = new Events(); periods: Period[]; sections: Section[]; items: Item[]; constructor(private service: NgxTimeSchedulerService) {} ngOnInit() { this.events.SectionClickEvent = (section) => { console.log(section); }; this.events.ItemClicked = (item) => { console.log(item); }; this.events.ItemDropped = (item) => { console.log(item); }; this.periods = [ { name: '3 days', timeFramePeriod: (60 * 3), timeFrameOverall: (60 * 24 * 3), timeFrameHeaders: [ 'Do MMM', 'HH' ], classes: 'period-3day' }, { name: '1 week', timeFrameHeaders: ['MMM YYYY', 'DD(ddd)'], classes: '', timeFrameOverall: 1440 * 7, timeFramePeriod: 1440, }, { name: '2 week', timeFrameHeaders: ['MMM YYYY', 'DD(ddd)'], classes: '', timeFrameOverall: 1440 * 14, timeFramePeriod: 1440, }]; this.sections = [{ name: 'A', id: 1 }, { name: 'B', id: 2 }, { name: 'C', id: 3 }, { name: 'D', id: 4 }, { name: 'E', id: 5 }]; this.items = [{ id: 1, sectionID: 1, name: 'Item 1', start: moment().startOf('day'), end: moment().add(5, 'days').endOf('day'), classes: '' }, { id: 2, sectionID: 3, name: 'Item 2', start: moment().startOf('day'), end: moment().add(4, 'days').endOf('day'), classes: '' }, { id: 3, sectionID: 1, name: 'Item 3', start: moment().add(1, 'days').startOf('day'), end: moment().add(3, 'days').endOf('day'), classes: '' }]; } addItem() { this.service.itemPush({ id: 4, sectionID: 5, name: 'Item 4', start: moment().startOf('day'), end: moment().add(3, 'days').endOf('day'), classes: '' }); } popItem() { this.service.itemPop(); } removeItem() { this.service.itemRemove(4); } } ``` # Inputs | Name | Required | Type | Default | Description | | --- | :---: | --- | --- | --- | | periods | Yes | Period[] | `null` | An array of `Period` denoting what periods to display and use to traverse the calendar. | | sections | Yes | Section[] | `null` | An array of `Section` to fill up the sections of the scheduler. | | items | Yes | Item[] | `null` | An array of `Item` to fill up the items of the scheduler. | | events | No | Events | `new Events()` | The events that can be hooked into. | | currentTimeFormat | No | string | `'DD-MMM-YYYY HH:mm'` | The momentjs format to use for concise areas, such as tooltips. | | showCurrentTime | No | boolean | `true` | Whether the current time should be marked on the scheduler. | | showGoto | No | boolean | `true` | Whether the Goto button should be displayed. | | showToday | No | boolean | `true` | Whether the Today button should be displayed. | | showBusinessDayOnly | No | boolean | `false` | Whether business days only displayed (Sat-Sun). | | allowDragging | No | boolean | `false` | Whether or not dragging should be allowed. | | headerFormat | No | string | `'Do MMM YYYY'` | The momentjs format to use for the date range displayed as a header. | | minRowHeight | No | number | `40` | The minimum height, in pixels, that a section should be. | | maxHeight | No | number | `null` | The maximum height of the scheduler. | | text | No | Text | `new Text()` | An object containing the text use in the scheduler, to be easily customized. | | start | No | moment | `moment().startOf('day')` | The start time of the scheduler as a moment object. It's recommend to use `.startOf('day')` on the moment for a clear starting point. | | locale | No | string | `` (empty === 'en') | To load a locale, pass the key and the string values to `moment.locale`. By default, Moment.js uses English (United States) locale strings. | **NOTE:** Date locale is currently not available for Goto(button) datepicker. It will apply a date locale as per the user's system setting. Feel free to provide suggestions. # Methods Object with properties which create periods that can be used to traverse the calendar. | Name | Parameter | Return Type | Description | | --- | --- | --- | --- | | itemPush | item: Item | `void` | Push the new item object into the existing one. | | itemPop | `None` | `void` | Pop the last item from the existing one. | | itemRemove | id: number | `void` | Remove the item with defined item id from the existing one. | | sectionPush | section: Section | `void` | Push the new section object into the existing one. | | sectionPop | `None` | `void` | Pop the last section from the existing one. | | sectionRemove | id: number | `void` | Remove the section with defined section id from the existing one. | | refresh | `None` | `void` | Refresh the scheduler view. | # Models #### Period Object with properties which create periods that can be used to traverse the calendar. | Name | Type | Required | Default | Description | | --- | --- | --- | --- | --- | | name | string | Yes | `null` | The name is use to select the period and should be unique. | | classes | string | Yes | `null` | Any css classes you wish to add to this item. | | timeFramePeriod | number | Yes | `null` | The number of minutes between each "Timeframe" of the period. | | timeFrameOverall | number | Yes | `null` | The total number of minutes that the period shows. | | timeFrameHeaders | string[] | Yes | `null` | An array of [momentjs formats](http://momentjs.com/docs/#/displaying/format/) which is use to display the header rows at the top of the scheduler. Rather than repeating formats, the scheduler will merge all cells which are followed by a cell which shows the same date. For example, instead of seeing "Tuesday, Tuesday, Tuesday" with "3pm, 6pm, 9pm" below it, you'll instead see "Tuesday" a single time. | | timeFrameHeadersTooltip | string[] | No | `null` | An array of [momentjs formats](http://momentjs.com/docs/#/displaying/format/) which is use to display the tooltip of the header rows at the top of the scheduler. Rather than repeating formats, the scheduler will merge all cells which are followed by a cell which shows the same date. For example, instead of seeing "Tuesday, Tuesday, Tuesday" with "3pm, 6pm, 9pm" below it, you'll instead see "Tuesday" a single time. | | tooltip | string | No | `null` | It is use to display tooltip on period button. | #### Section Sections used to fill the scheduler. | Name | Type | Required | Default | Description | | --- | --- | --- | --- | --- | | id | number | Yes | `null` | A unique identifier for the section. | | name | string | Yes | `null` | The name to display for the section. | | tooltip | string | No | `null` | It is use to display tooltip for the section. | #### Item Items used to fill the scheduler. | Name | Type | Required | Default | Description | | --- | --- | --- | --- | --- | | id | number | Yes | `null` | An identifier for the item (doesn't have to be unique, but may help you identify which item was interacted with). | | name | string | Yes | `null` | The name to display for the item. | | start | any | Yes | `null` | A Moment object denoting where this object starts. | | end | any | Yes | `null` | A Moment object denoting where this object ends. | | classes | string | Yes | `null` | Any css classes you wish to add to this item. | | sectionID | number | Yes | `null` | The ID of the section that this item belongs to. | | tooltip | string | No | `null` | It is use to display tooltip for the section. | #### Text An object containing the text use in the scheduler, to be easily customized. | Name | Type | Default | | --- | --- | --- | | NextButton | string | `'Next'` | | PrevButton | string | `'Prev'` | | TodayButton | string | `'Today'` | | GotoButton | string | `'Go to'` | | SectionTitle | string | `'Section'` | #### Events A selection of events are provided to hook into when creating the scheduler, and are triggered with most interactions with items. | Name | Parameters | Return type | Description | | --- | --- | --- | --- | | ItemClicked | item: Item | void | Triggered when an item is clicked. | | ItemContextMenu | item: Item, event: MouseEvent | void | Triggered when an item is righted click (Context Menu). | | SectionClickEvent | section: Section | void | Triggered when a section is clicked. | | SectionContextMenuEvent | section: Section, event: MouseEvent | void | Triggered when a section is righted click (Context Menu). | | ItemDropped | item: Item | void | Triggered when an item is dropped onto a section. `item` is the new data after the action. | | PeriodChange | start: moment.Moment, end: moment.Moment | void | Triggered when an period is change. | **NOTE:** To prevent the default context menu of the browser, use event.preventDefault() in an event.ItemContextMenu() or event.SectionContextMenuEvent() function. # Demo [Demo](https://abhishekjain12.github.io/ngx-time-scheduler/) # Credits This time scheduler is based on the work done by [Zallist](https://github.com/Zallist/TimeScheduler). # License [MIT license](http://en.m.wikipedia.org/wiki/MIT_License)
47.481343
482
0.571316
eng_Latn
0.943541
9ac9f52ce2b6f109fe189b192edb6f525f8368ca
8,025
md
Markdown
archive/papers/2020-05/src/structure.md
git-sgmoore/ibc
ee71d0640c23ec4e05e924f52f557b5e06c1d82f
[ "Apache-2.0" ]
301
2019-02-09T00:28:12.000Z
2021-03-15T14:45:03.000Z
archive/papers/2020-05/src/structure.md
git-sgmoore/ibc
ee71d0640c23ec4e05e924f52f557b5e06c1d82f
[ "Apache-2.0" ]
401
2019-02-11T10:54:56.000Z
2021-03-16T15:25:15.000Z
archive/papers/2020-05/src/structure.md
git-sgmoore/ibc
ee71d0640c23ec4e05e924f52f557b5e06c1d82f
[ "Apache-2.0" ]
107
2019-03-01T17:26:49.000Z
2021-03-15T04:38:16.000Z
## Scope IBC handles authentication, transport, and ordering of opaque data packets relayed between modules on separate ledgers — ledgers can be run on solo machines, replicated by many nodes running a consensus algorithm, or constructed by any process whose state can be verified. The protocol is defined between modules on two ledgers, but designed for safe simultaneous use between any number of modules on any number of ledgers connected in arbitrary topologies. ## Interfaces IBC sits between modules — smart contracts, other ledger components, or otherwise independently executed pieces of application logic on ledgers — on one side, and underlying consensus protocols, blockchains, and network infrastructure (e.g. TCP/IP), on the other side. IBC provides to modules a set of functions much like the functions which might be provided to a module for interacting with another module on the same ledger: sending data packets and receiving data packets on an established connection and channel, in addition to calls to manage the protocol state: opening and closing connections and channels, choosing connection, channel, and packet delivery options, and inspecting connection and channel status. IBC requires certain functionalities and properties of the underlying ledgers, primarily finality (or thresholding finality gadgets), cheaply-verifiable consensus transcripts (such that a light client algorithm can verify the results of the consensus process with much less computation & storage than a full node), and simple key/value store functionality. On the network side, IBC requires only eventual data delivery — no authentication, synchrony, or ordering properties are assumed. ## Operation The primary purpose of IBC is to provide reliable, authenticated, ordered communication between modules running on independent host ledgers. This requires protocol logic in the areas of data relay, data confidentiality and legibility, reliability, flow control, authentication, statefulness, and multiplexing. \vspace{3mm} ### Data relay \vspace{3mm} In the IBC architecture, modules are not directly sending messages to each other over networking infrastructure, but rather are creating messages to be sent which are then physically relayed from one ledger to another by monitoring "relayer processes". IBC assumes the existence of a set of relayer processes with access to an underlying network protocol stack (likely TCP/IP, UDP/IP, or QUIC/IP) and physical interconnect infrastructure. These relayer processes monitor a set of ledgers implementing the IBC protocol, continuously scanning the state of each ledger and requesting transaction execution on another ledger when outgoing packets have been committed. For correct operation and progress in a connection between two ledgers, IBC requires only that at least one correct and live relayer process exists which can relay between the ledgers. \vspace{3mm} ### Data confidentiality and legibility \vspace{3mm} The IBC protocol requires only that the minimum data necessary for correct operation of the IBC protocol be made available and legible (serialised in a standardised format) to relayer processes, and the ledger may elect to make that data available only to specific relayers. This data consists of consensus state, client, connection, channel, and packet information, and any auxiliary state structure necessary to construct proofs of inclusion or exclusion of particular key/value pairs in state. All data which must be proved to another ledger must also be legible; i.e., it must be serialised in a standardised format agreed upon by the two ledgers. \vspace{3mm} ### Reliability \vspace{3mm} The network layer and relayer processes may behave in arbitrary ways, dropping, reordering, or duplicating packets, purposely attempting to send invalid transactions, or otherwise acting in a Byzantine fashion, without compromising the safety or liveness of IBC. This is achieved by assigning a sequence number to each packet sent over an IBC channel, which is checked by the IBC handler (the part of the ledger implementing the IBC protocol) on the receiving ledger, and providing a method for the sending ledger to check that the receiving ledger has in fact received and handled a packet before sending more packets or taking further action. Cryptographic commitments are used to prevent datagram forgery: the sending ledger commits to outgoing packets, and the receiving ledger checks these commitments, so datagrams altered in transit by a relayer will be rejected. IBC also supports unordered channels, which do not enforce ordering of packet receives relative to sends but still enforce exactly-once delivery. \vspace{3mm} ### Flow control \vspace{3mm} IBC does not provide specific protocol-level provisions for compute-level or economic-level flow control. The underlying ledgers are expected to have compute throughput limiting devices and flow control mechanisms of their own such as gas markets. Application-level economic flow control — limiting the rate of particular packets according to their content — may be useful to ensure security properties and contain damage from Byzantine faults. For example, an application transferring value over an IBC channel might want to limit the rate of value transfer per block to limit damage from potential Byzantine behaviour. IBC provides facilities for modules to reject packets and leaves particulars up to the higher-level application protocols. \vspace{3mm} ### Authentication \vspace{3mm} All data sent over IBC are authenticated: a block finalised by the consensus algorithm of the sending ledger must commit to the outgoing packet via a cryptographic commitment, and the receiving ledger's IBC handler must verify both the consensus transcript and the cryptographic commitment proof that the datagram was sent before acting upon it. \vspace{3mm} ### Statefulness \vspace{3mm} Reliability, flow control, and authentication as described above require that IBC initialises and maintains certain status information for each datastream. This information is split between three abstractions: clients, connections, and channels. Each client object contains information about the consensus state of the counterparty ledger. Each connection object contains a specific pair of named identifiers agreed to by both ledgers in a handshake protocol, which uniquely identifies a connection between the two ledgers. Each channel, specific to a pair of modules, contains information concerning negotiated encoding and multiplexing options and state and sequence numbers. When two modules wish to communicate, they must locate an existing connection and channel between their two ledgers, or initialise a new connection and channel(s) if none yet exist. Initialising connections and channels requires a multi-step handshake which, once complete, ensures that only the two intended ledgers are connected, in the case of connections, and ensures that two modules are connected and that future datagrams relayed will be authenticated, encoded, and sequenced as desired, in the case of channels. \vspace{3mm} ### Multiplexing \vspace{3mm} To allow for many modules within a single host ledger to use an IBC connection simultaneously, IBC allows any number of channels to be associated with a single connection. Each channel uniquely identifies a datastream over which packets can be sent in order (in the case of an ordered channel), and always exactly once, to a destination module on the receiving ledger. Channels are usually expected to be associated with a single module on each ledger, but one-to-many and many-to-one channels are also possible. The number of channels per connection is unbounded, facilitating concurrent throughput limited only by the throughput of the underlying ledgers with only a single connection and pair of clients necessary to track consensus information (and consensus transcript verification cost thus amortised across all channels using the connection).
111.458333
1,197
0.819315
eng_Latn
0.999422
9aca45831bab0be05069a7c98a27f5491f897ea2
8,198
md
Markdown
vscodespaces/how-to/self-hosting-vscode.md
jongio/vscodespaces
80cd3b587f125551fd1e54cb49ce45f843fb4e7f
[ "CC-BY-4.0", "MIT" ]
null
null
null
vscodespaces/how-to/self-hosting-vscode.md
jongio/vscodespaces
80cd3b587f125551fd1e54cb49ce45f843fb4e7f
[ "CC-BY-4.0", "MIT" ]
null
null
null
vscodespaces/how-to/self-hosting-vscode.md
jongio/vscodespaces
80cd3b587f125551fd1e54cb49ce45f843fb4e7f
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- author: 2percentsilk ms.author: allisb ms.prod: visual-studio-family ms.technology: visual-studio-codespaces title: Register a self-hosted Codespaces environment with Visual Studio Code description: Register a self-hosted environment with Visual Studio Codespaces using Visual Studio Code. ms.topic: how-to ms.date: 04/06/2020 --- # Register a self-hosted Codespaces environment with Visual Studio Code > [!IMPORTANT] > Visual Studio Codespaces is being consolidated into [GitHub Codespaces](https://github.com/features/codespaces). Support for self-hosted codespaces in the Visual Studio service has ended and is not currently included in the GitHub service. You can join the conversation by visiting the GitHub Community issue regarding [self-hosted support](https://github.community/t/self-hosted-codespaces/131361). You can host Codespaces on your own environment using Visual Studio Code. This article describes how to register and connect to a self-hosted Codespaces environment. If you want to register a machine where interacting with Visual Studio Code UI isn't possible (for example, a server or headless OS), see [Register a headless, self-hosted Codespaces environment](self-hosting-cli.md). > [!NOTE] > A Microsoft Account and Azure Subscription are required to use Visual Studio Codespaces. Sign up for a Microsoft Account and Azure Subscription at [https://azure.microsoft.com/free/](https://azure.microsoft.com/free/). ## Install You'll need to have [Visual Studio Code](https://code.visualstudio.com/) and the [Codespaces extension](https://aka.ms/vso-dl) installed on the machine you wish to register. To install Visual Studio Code, see [Download Visual Studio Code](https://code.visualstudio.com/download). Install the [Codespaces extension](https://aka.ms/vso-dl) from the [VS Code Marketplace](https://marketplace.visualstudio.com/VSCode) by clicking on the green install button near the top of the page and following the prompts or from within VS Code by searching for "Visual Studio Codespaces" within the **Extensions** view, selecting the extension from the list, and pressing the **Install** button. When successfully installed, the **Codespaces** panel will be available in the **Remote Explorer** pane. If you have other VS Code Remote Development extensions installed, you may need to select **Codespaces** from the **Remote Explorer** dropdown. ![Screenshot of Visual Studio Codespaces Remote Explorer](../images/install-vsc-03.png) The panel shown in the previous screenshot provides a management interface for interacting with Codespaces environments, and is covered in full detail in the following sections. In addition to the panel, Visual Studio Code will also show the remote indicator on the Status bar when the Codespaces extension is installed. The remote indicator signals your connection status, and provides a list of available Codespaces commands when clicked. ![Visual Studio Codespaces remote indicator](../images/install-vsc-04.png) ## Sign In To sign into Visual Studio Codespaces, you can either press **F1** and select the **Codespaces: Sign In** command in the [Command Palette](https://code.visualstudio.com/docs/getstarted/userinterface#_command-palette), or select **Sign in to view Codespaces...** in **Codespaces** panel of the **Remote Explorer** side bar. ![Sign In to Visual Studio Codespaces](../images/sign-in-vsc-01.png) Follow the prompts in your browser to complete the sign in. <!-- TODO: Add content for: - Filtering Azure Subscription --> ## Create a plan Once you've signed up and created an Azure subscription, you can access Codespaces by creating a Codespaces Plan. You can create more than one plan, and plans can be used to group related environments together. To sign up, see [Create an Azure free account](https://azure.microsoft.com/free/). To create a new plan, you can either use the **Codespaces: Create Plan** command in the [Command Palette](https://code.visualstudio.com/docs/getstarted/userinterface#_command-palette), or by clicking the **Select Plan** button on the **Codespaces** title bar in the **Remote Explorer** side bar, then selecting **Create new plan...** from the quick pick list. ![Create Visual Studio Codespaces plan](../images/create-plan-vsc-01.png) Follow the prompts to select an Azure subscription to associate the plan with, an Azure region to create the plan in, a name for the Azure resource group to create the plan in, and a name for the plan itself. - **Azure subscription**: You can choose from any Azure subscriptions that was previously selected. To add or remove options from the list, use the **Azure: Select Subscriptions** command in the Command Palette. - **Azure resource group name**: Your Codespaces plan will be created in a new Azure resource group with the name provided in this step. - **Azure region**: Choose an [Azure region](https://azure.microsoft.com/global-infrastructure/regions/) to create the Codespaces plan in. All environments created within this plan will be provisioned in the selected region. Supported regions are: - East US - Southeast Asia - West Europe - West US 2 - **Codespaces plan name**: The name of the created Codespaces plan. This name is displayed in the **Remote Explorer** for organization purposes. - **Default instance type**: Choose the default Codespaces Instance Type, such as Standard (Linux). Once a plan is created, it will be the selected plan in the **Remote Explorer**. ![Selected Visual Studio Codespaces plan](../images/create-plan-vsc-02.png) Only environments contained within the selected plan will be displayed. To select a different plan, you can either use the **Codespaces: Select Plan** command in the Command Palette, or by clicking the **Select Plan** button on the **Codespaces** title bar. ## Register your environment Select the **Codespaces: Register Self-hosted Codespace** command in the [Command Palette](https://code.visualstudio.com/docs/getstarted/userinterface#_command-palette) or select **Register self-hosted Codespace...** under the **Self-hosted Codespaces** node in the **Codespaces** panel of the **Remote Explorer** side bar: - If no folders are currently open in VS Code, you will be prompted to select one. This folder will be opened every time you connect to this codespace from another machine. However, you can open any folder after connecting. ![Select a folder prompt while registering a self-hosted environment in Visual Studio Code](../images/register-local-env-vsc-01.png) - If no plan is selected, you will be prompted to select or create a plan. No charge is incurred for self-hosted environments. - You'll be asked to select how the self-hosted agent is run: - **Run as a process:** A temporary registration that terminates upon machine shutdown/restart. - **Register a service:** Allows the agent to be registered as a system service and persists after environment restarts. > [!NOTE] > You must run VS Code as an administrator to be able to select this option. You will be prompted to provide an admin username and password to complete registration. After registering, your self-hosted environment will appear under the **Self-hosted environments** node in the **Codespaces** panel of the **Remote Explorer** side bar. ![Local environment in Visual Studio Code Remote Explorer](../images/register-local-env-vsc-02.png) ## Access your environment You can now connect from any machine with the Codespaces extension installed or from the [Codespaces management portal](https://online.visualstudio.com/environments) in the browser. The first time you connect may take longer than usual. If your self-hosted environment becomes unavailable for any reason, see our [troubleshooting](../resources/troubleshooting.md#self-hosted-environments) reference documentation. ## Unregister an environment You can unregister an self-hosted environment with Codespaces through the [Codespaces management portal](https://online.visualstudio.com/environments). Click the **More** button to bring up the context menu and select **Unregister**. ![Unregister command on the Codespaces management portal](../images/unregister-self-hosted.png)
74.527273
401
0.779702
eng_Latn
0.992735
9aca60c8f3ea8c75682c5fed4bf0b117bd51c8d4
75
md
Markdown
README.md
pablorrp1/MyS1-DIC2019
59f5232716528c30efd4143bb5c783767a26833c
[ "MIT" ]
null
null
null
README.md
pablorrp1/MyS1-DIC2019
59f5232716528c30efd4143bb5c783767a26833c
[ "MIT" ]
null
null
null
README.md
pablorrp1/MyS1-DIC2019
59f5232716528c30efd4143bb5c783767a26833c
[ "MIT" ]
null
null
null
# MyS1-DIC2019 Se subirán los ejercicios o ejemplos que se hagan en clase.
25
59
0.786667
spa_Latn
0.999963