| column | dtype | values |
|---|---|---|
| hexsha | stringlengths | 40 - 40 |
| size | int64 | 5 - 1.04M |
| ext | stringclasses | 6 values |
| lang | stringclasses | 1 value |
| max_stars_repo_path | stringlengths | 3 - 344 |
| max_stars_repo_name | stringlengths | 5 - 125 |
| max_stars_repo_head_hexsha | stringlengths | 40 - 78 |
| max_stars_repo_licenses | listlengths | 1 - 11 |
| max_stars_count | int64 | 1 - 368k |
| max_stars_repo_stars_event_min_datetime | stringlengths | 24 - 24 |
| max_stars_repo_stars_event_max_datetime | stringlengths | 24 - 24 |
| max_issues_repo_path | stringlengths | 3 - 344 |
| max_issues_repo_name | stringlengths | 5 - 125 |
| max_issues_repo_head_hexsha | stringlengths | 40 - 78 |
| max_issues_repo_licenses | listlengths | 1 - 11 |
| max_issues_count | int64 | 1 - 116k |
| max_issues_repo_issues_event_min_datetime | stringlengths | 24 - 24 |
| max_issues_repo_issues_event_max_datetime | stringlengths | 24 - 24 |
| max_forks_repo_path | stringlengths | 3 - 344 |
| max_forks_repo_name | stringlengths | 5 - 125 |
| max_forks_repo_head_hexsha | stringlengths | 40 - 78 |
| max_forks_repo_licenses | listlengths | 1 - 11 |
| max_forks_count | int64 | 1 - 105k |
| max_forks_repo_forks_event_min_datetime | stringlengths | 24 - 24 |
| max_forks_repo_forks_event_max_datetime | stringlengths | 24 - 24 |
| content | stringlengths | 5 - 1.04M |
| avg_line_length | float64 | 1.14 - 851k |
| max_line_length | int64 | 1 - 1.03M |
| alphanum_fraction | float64 | 0 - 1 |
| lid | stringclasses | 191 values |
| lid_prob | float64 | 0.01 - 1 |
6154f0993ef4cd260fb9240f3be39531b99e29fe
1,839
md
Markdown
CHANGELOG.md
snjypl/airflow-provider-great-expectations
00219e18b2219ee7d22a7902cbe0258dfc45f8c2
[ "Apache-2.0" ]
91
2020-11-16T23:35:42.000Z
2022-03-29T15:59:04.000Z
CHANGELOG.md
snjypl/airflow-provider-great-expectations
00219e18b2219ee7d22a7902cbe0258dfc45f8c2
[ "Apache-2.0" ]
35
2020-11-23T20:59:54.000Z
2022-03-28T19:50:11.000Z
CHANGELOG.md
snjypl/airflow-provider-great-expectations
00219e18b2219ee7d22a7902cbe0258dfc45f8c2
[ "Apache-2.0" ]
31
2020-11-23T15:53:47.000Z
2022-03-20T11:43:38.000Z
# Apache Airflow Provider for Great Expectations

## Upcoming

* (please add here)

## 0.0.8

* [FEATURE] Add explicit Airflow version handling - thanks, @jeffkpayne!
* [FEATURE] Add example DAG setup conveniences - thanks, @jeffkpayne!
* [FEATURE] Defer initialization of DataContext and other resources - thanks, @jeffkpayne!
* [DOCS] Update README with virtual env and testing process - thanks, @jeffkpayne!
* [MAINTENANCE] Mark additional vars templated in BigQuery operator - thanks, @jeffkpayne!
* [MAINTENANCE] Add type hints and update docstrings - thanks, @jeffkpayne!
* [MAINTENANCE] Add unit and integration test coverage - thanks, @jeffkpayne!

## 0.0.7

* [BUGFIX] Addressed a bug whereby the fail_task_on_validation_failure in the BigQuery operator was being shadowed by the parent class.
* [ENHANCEMENT] Add support for validation operators when running LegacyCheckpoints with the GreatExpectationsOperator
* [DOCS] Fixed typos in documentation - thanks @armandduijn and @sunkickr!
* [MAINTENANCE] Make some improvements to the package by updating setup.py dependencies, exposing the example_dags within that, and adding an __init__.py to the example DAG directory - thanks @petedejoy and @pgzmnk!

## 0.0.6

* [BUGFIX] Update setup.py with appropriate versions

## 0.0.5

* [BREAKING] Updated GreatExpectations operator to work with class-based Checkpoints (Great Expectations >= 0.13.9)
* [ENHANCEMENT] Restructured project, provider metadata, examples, added "how to run" steps to README, and added tests

## 0.0.4

* [ENHANCEMENT] Adding BigQuery operator for easier use with BigQuery and Google Cloud Storage

## 0.0.3

* [ENHANCEMENT] Adding template fields
* [ENHANCEMENT] Adding fail callback function parameter

## 0.0.2

* [ENHANCEMENT] Updated Great Expectations examples

## 0.0.1

* Initial release of the provider
45.975
216
0.771615
eng_Latn
0.939772
61554ef385108c4c608474304af1001ea98990e9
1,533
md
Markdown
index.md
nik1806/nik1806.github.io
52d9692432feabeca2293d99394c411fa67a5603
[ "Unlicense" ]
null
null
null
index.md
nik1806/nik1806.github.io
52d9692432feabeca2293d99394c411fa67a5603
[ "Unlicense" ]
null
null
null
index.md
nik1806/nik1806.github.io
52d9692432feabeca2293d99394c411fa67a5603
[ "Unlicense" ]
1
2020-03-05T11:10:54.000Z
2020-03-05T11:10:54.000Z
## Portfolio

---

### Computer Vision and Robotics projects

---

[Multiple object tracking](https://github.com/nik1806/multi_obj_track_multiprocess)
<img src="images/multiple.jpeg?raw=true"/>

---

[Motion-Analysis](https://github.com/nik1806/Motion-Analysis)
<img src="images/motionanal.jpeg?raw=true"/>

---

[sober_drunk_classification](https://github.com/nik1806/sober_drunk_classification)
<img src="images/sober.jpeg?raw=true"/>

---

[Human-Pose-Detection](https://github.com/nik1806/Human-Pose-Detection)
<img src="images/pose.png?raw=true"/>

---

[Plant-leaf-infection-detection](https://github.com/nik1806/Plant-leaf-infection-detection)
<img src="images/leaf.png?raw=true"/>

---

[Multi-UAV-collision-avoidance](https://github.com/nik1806/Multi-UAV-collision-avoidance)
<img src="images/uav.png?raw=true"/>

---

### Other projects

- [Multiple object tracking](https://github.com/nik1806/multi_obj_track_multiprocess)
- [Motion-Analysis](https://github.com/nik1806/Motion-Analysis)
- [sober_drunk_classification](https://github.com/nik1806/sober_drunk_classification)
- [Human-Pose-Detection](https://github.com/nik1806/Human-Pose-Detection)
- [Plant-leaf-infection-detection](https://github.com/nik1806/Plant-leaf-infection-detection)
- [Multi-UAV-collision-avoidance](https://github.com/nik1806/Multi-UAV-collision-avoidance)

---
---

<p style="font-size:11px">Page template forked from <a href="https://github.com/evanca/quick-portfolio">evanca</a></p>
<!-- Remove the above link if you don't want to attribute -->
29.480769
118
0.746902
yue_Hant
0.303105
6156a9c3b6a93952c9b1e41d8ae8668e50e3ef1b
418
md
Markdown
wiki/translations/zh-tw/Category:PartDesign.md
dwhr-pi/FreeCAD-documentation
0c889672d80e7969dcabe83f5ddf503e72a4f5bb
[ "CC0-1.0" ]
null
null
null
wiki/translations/zh-tw/Category:PartDesign.md
dwhr-pi/FreeCAD-documentation
0c889672d80e7969dcabe83f5ddf503e72a4f5bb
[ "CC0-1.0" ]
null
null
null
wiki/translations/zh-tw/Category:PartDesign.md
dwhr-pi/FreeCAD-documentation
0c889672d80e7969dcabe83f5ddf503e72a4f5bb
[ "CC0-1.0" ]
null
null
null
# Category:PartDesign/zh-tw

This category lists pages related to the [PartDesign Workbench/zh-tw](PartDesign_Workbench/zh-tw.md).

### Contents:

[Template:PartDesign Tools navi/zh-tw](Template:PartDesign_Tools_navi/zh-tw.md), [PartDesign Workbench/zh-tw](PartDesign_Workbench/zh-tw.md)

[Category:Workbenches/zh-tw](Category:Workbenches/zh-tw.md)

---

[documentation index](../README.md) > Category:PartDesign/zh-tw
34.833333
141
0.77512
yue_Hant
0.870403
6156be5044cb424b2898574f5de0300fef316714
13,276
md
Markdown
docs/spark/tutorials/hdinsight-deployment.md
Jteve-Sobs/docs.de-de
06092136f031dda8715cfe6928a4fdc0ec6c0899
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/spark/tutorials/hdinsight-deployment.md
Jteve-Sobs/docs.de-de
06092136f031dda8715cfe6928a4fdc0ec6c0899
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/spark/tutorials/hdinsight-deployment.md
Jteve-Sobs/docs.de-de
06092136f031dda8715cfe6928a4fdc0ec6c0899
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Deploy a .NET for Apache Spark application to Azure HDInsight
description: Learn how to deploy a .NET for Apache Spark application to HDInsight.
ms.date: 01/23/2020
ms.topic: tutorial
ms.custom: mvc
ms.openlocfilehash: 77b57463375c36444532bdd383ec4b3bfe3ab056
ms.sourcegitcommit: 7588136e355e10cbc2582f389c90c127363c02a5
ms.translationtype: HT
ms.contentlocale: de-DE
ms.lasthandoff: 03/15/2020
ms.locfileid: "77504167"
---
# <a name="tutorial-deploy-a-net-for-apache-spark-application-to-azure-hdinsight"></a>Tutorial: Deploy a .NET for Apache Spark application to Azure HDInsight

In this tutorial, you learn how to deploy your .NET for Apache Spark app to the cloud through an Azure HDInsight cluster. HDInsight makes it easy to create and configure a Spark cluster in Azure, since Spark clusters in HDInsight are compatible with Azure Storage and Azure Data Lake Storage.

In this tutorial, you learn how to:

> [!div class="checklist"]
>
> * Access your storage accounts using Azure Storage Explorer.
> * Create an Azure HDInsight cluster.
> * Publish your .NET for Apache Spark app.
> * Create and run an HDInsight script action.
> * Run a .NET for Apache Spark app on an HDInsight cluster.

## <a name="prerequisites"></a>Prerequisites

Before you start, do the following:

* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/).
* Sign in to the [Azure portal](https://portal.azure.com/).
* Install Azure Storage Explorer on your [Windows](https://go.microsoft.com/fwlink/?LinkId=708343&clcid=0x409), [Linux](https://go.microsoft.com/fwlink/?LinkId=722418&clcid=0x409), or [MacOS](https://go.microsoft.com/fwlink/?LinkId=708342&clcid=0x409) computer.
* Complete the [.NET for Apache Spark - Get Started in 10 Minutes](https://dotnet.microsoft.com/learn/data/spark-tutorial/intro) tutorial.

## <a name="access-your-storage-accounts"></a>Access your storage accounts

1. Open Azure Storage Explorer.

2. In the left menu, select **Add Account** and sign in to your Azure account.

   ![Sign in to an Azure account from Storage Explorer](./media/hdinsight-deployment/signin-azure-storage-explorer.png)

   Once you've signed in, you should see all the storage accounts you own and any resources you've uploaded to them.

## <a name="create-an-hdinsight-cluster"></a>Create an HDInsight cluster

> [!IMPORTANT]
> Billing for HDInsight clusters is prorated per minute, whether you use them or not. Be sure to delete your cluster after you finish using it. For more information, see the [Clean up resources](#clean-up-resources) section of this tutorial.

1. Visit the [Azure portal](https://portal.azure.com).

2. Select **+ Create a resource**. Then, select **HDInsight** from the **Analytics** category.

   ![Create an HDInsight resource in the Azure portal](./media/hdinsight-deployment/create-hdinsight-resource.png)

3. Under **Basics**, provide the following values:

   |Property |Description |
   |---------|---------|
   |Subscription | From the drop-down list, choose one of your active Azure subscriptions. |
   |Resource group | Specify whether you want to create a new resource group or use an existing one. A resource group is a container that holds related resources for an Azure solution. |
   |Cluster name | Give a name to your HDInsight Spark cluster.|
   |Location | Select a location for the resource group. The template uses this location both for creating the cluster and for the default cluster storage. |
   |Cluster type| Select **Spark** as the cluster type.|
   |Cluster version|This field autopopulates with the default version once the cluster type has been selected. Select a 2.3 or 2.4 version of Spark.|
   |Cluster login username| Enter the cluster login username. The default name is *admin*. |
   |Cluster login password| Enter the login password. |
   |Secure Shell (SSH) username| Enter the SSH username. By default, this account shares the same password as the *Cluster login username* account. |

4. Select **Next: Storage >>** to continue to the **Storage** page. Under **Storage**, provide the following values:

   |Property |Description |
   |---------|---------|
   |Primary storage type|Use the default value **Azure Storage**.|
   |Selection method|Use the default value **Select from list**.|
   |Primary storage account|Select your subscription and one of your active storage accounts within that subscription.|
   |Container|The container is the specific blob container in your storage account where your cluster looks for files to run your app in the cloud. You can give it any available name.|

5. Under **Review + create**, select **Create**. It takes about 20 minutes to create the cluster. The cluster must be created before you can continue to the next step.

## <a name="publish-your-app"></a>Publish your app

Next, you publish the *mySparkApp* created in the [.NET for Apache Spark - Get Started in 10 Minutes](https://dotnet.microsoft.com/learn/data/spark-tutorial/intro) tutorial, which gives your Spark cluster access to all the files it needs to run your app.

1. Run the following commands to publish the *mySparkApp*:

   **On Windows:**

   ```dotnetcli
   cd mySparkApp
   dotnet publish -c Release -f netcoreapp3.0 -r ubuntu.16.04-x64
   ```

   **On Linux:**

   ```bash
   cd mySparkApp
   foo@bar:~/path/to/app$ dotnet publish -c Release -f netcoreapp3.0 -r ubuntu.16.04-x64
   ```

2. Do the following tasks to zip your published app files so that you can easily upload them to your HDInsight cluster.

   **On Windows:** Navigate to *mySparkApp/bin/Release/netcoreapp3.0/ubuntu.16.04-x64*. Then, right-click the **Publish** folder and select **Send to > Compressed (zipped) folder**. Name the new folder **publish.zip**.

   **On Linux, run the following command:**

   ```bash
   zip -r publish.zip
   ```

## <a name="upload-files-to-azure"></a>Upload files to Azure

Next, you use Azure Storage Explorer to upload the following five files to the blob container you chose as your cluster's storage:

* Microsoft.Spark.Worker
* install-worker.sh
* publish.zip
* microsoft-spark-2.3.x-0.3.0.jar
* input.txt

1. Open Azure Storage Explorer and navigate to your storage account from the left menu. Drill down to the blob container for your cluster under **Blob Containers** in your storage account.

2. *Microsoft.Spark.Worker* helps Apache Spark execute your app, such as any user-defined functions (UDFs) you may have written. Download [Microsoft.Spark.Worker](https://github.com/dotnet/spark/releases/download/v0.3.0/Microsoft.Spark.Worker.netcoreapp2.1.linux-x64-0.3.0.tar.gz). Then, select **Upload** in Azure Storage Explorer to upload the worker.

   ![Upload files to Azure Storage Explorer](./media/hdinsight-deployment/upload-files-to-storage.png)

3. *install-worker.sh* is a script that lets you copy .NET for Apache Spark dependent files into the nodes of your cluster. Create a new file named **install-worker.sh** on your local computer and paste in the [install-worker.sh contents](https://raw.githubusercontent.com/dotnet/spark/master/deployment/install-worker.sh) located on GitHub. Then, upload *install-worker.sh* to your blob container.

4. Your cluster needs the publish.zip file that contains your app's published files. Navigate to your published folder **mySparkApp/bin/Release/netcoreapp3.0/ubuntu.16.04-x64** and locate **publish.zip**. Then, upload *publish.zip* to your blob container.

5. Your cluster needs the application code packaged as a jar file. Navigate to your published folder **mySparkApp/bin/Release/netcoreapp3.0/ubuntu.16.04-x64** and locate **microsoft-spark-2.3.x-0.3.0.jar**. Then, upload the jar file to your blob container. There may be several jar files (for versions 2.3.x and 2.4.x of Spark). You need to choose the jar file that matches the version of Spark you chose during cluster creation. For example, choose *microsoft-spark-2.3.x-0.3.0.jar* if you chose Spark 2.3.2 during cluster creation.

6. Your cluster needs the input to your app. Navigate to your **mySparkApp** directory and locate **input.txt**. Upload the input file to the **user/sshuser** directory in your blob container. You'll be connecting to your cluster through SSH, and this folder is where your cluster looks for its input. The *input.txt* file is the only file uploaded to a specific directory.

## <a name="run-the-hdinsight-script-action"></a>Run the HDInsight script action

Once your cluster is running and you've uploaded your files to Azure, you run the **install-worker.sh** script on the cluster.

1. Navigate to your HDInsight Spark cluster in the Azure portal, and then select **Script actions**.

2. Select **+ Submit new** and provide the following values:

   |Property |Description |
   |---------|---------|
   | Script type |Custom|
   | Name | Install Worker|
   | Bash script URI |https://mystorageaccount.blob.core.windows.net/mycontainer/install-worker.sh </br> To confirm this URI, right-click install-worker.sh in Azure Storage Explorer and select Properties. |
   | Node type(s)| Worker|
   | Parameters | azure </br> wasbs://mycontainer@myStorageAccount.blob.core.windows.net/Microsoft.Spark.Worker.netcoreapp2.1.linux-x64-0.6.0.tar.gz </br> /usr/local/bin |

3. Select **Create** to submit your script.

## <a name="run-your-app"></a>Run your app

1. Navigate to your HDInsight Spark cluster in the Azure portal, and then select **SSH + Cluster login**.

2. Copy the SSH login information and paste it into a terminal. Sign in to your cluster using the password you set during cluster creation. You should see messages welcoming you to Ubuntu and Spark.

3. Use the **spark-submit** command to run your app on your HDInsight cluster. Remember to replace **mycontainer** and **mystorageaccount** in the example script with the actual names of your blob container and storage account.

   ```bash
   $SPARK_HOME/bin/spark-submit \
   --master yarn \
   --class org.apache.spark.deploy.dotnet.DotnetRunner \
   wasbs://mycontainer@mystorageaccount.blob.core.windows.net/microsoft-spark-2.3.x-0.6.0.jar \
   wasbs://mycontainer@mystorageaccount.blob.core.windows.net/publish.zip mySparkApp
   ```

   When your app runs, you see the same word count table from the Getting Started local run written to your console. Congratulations, you've run your first .NET for Apache Spark application in the cloud!

## <a name="clean-up-resources"></a>Clean up resources

HDInsight saves your data in Azure Storage, so you can safely delete a cluster when it isn't in use. You're also charged for an HDInsight cluster even when it isn't in use. Since the charges for the cluster are many times more than the charges for storage, it makes sense to delete clusters when they aren't in use.

You can also select the resource group name to open the resource group page, and then select **Delete resource group**. By deleting the resource group, you delete both the HDInsight Spark cluster and the default storage account.

## <a name="next-steps"></a>Next steps

In this tutorial, you deployed your .NET for Apache Spark application to Azure HDInsight. To learn more, see the Azure HDInsight documentation.

> [!div class="nextstepaction"]
> [Azure HDInsight documentation](https://docs.microsoft.com/azure/hdinsight/)
69.507853
442
0.77433
deu_Latn
0.990295
61576892ac6d242afe5213f2c869ad5eae264652
1,613
md
Markdown
exercises/alphametics/README.md
Hilife-Miller/elixir
d4fd59a42d8829fe8a46e813a6d01a85a3f1fa31
[ "MIT" ]
null
null
null
exercises/alphametics/README.md
Hilife-Miller/elixir
d4fd59a42d8829fe8a46e813a6d01a85a3f1fa31
[ "MIT" ]
null
null
null
exercises/alphametics/README.md
Hilife-Miller/elixir
d4fd59a42d8829fe8a46e813a6d01a85a3f1fa31
[ "MIT" ]
1
2018-07-19T23:43:56.000Z
2018-07-19T23:43:56.000Z
# Alphametics

Write a function to solve alphametics puzzles.

[Alphametics](https://en.wikipedia.org/wiki/Alphametics) is a puzzle where letters in words are replaced with numbers.

For example `SEND + MORE = MONEY`:

```text
  S E N D
  M O R E +
-----------
M O N E Y
```

Replacing these with valid numbers gives:

```text
  9 5 6 7
  1 0 8 5 +
-----------
1 0 6 5 2
```

This is correct because every letter is replaced by a different number and the words, translated into numbers, then make a valid sum.

Each letter must represent a different digit, and the leading digit of a multi-digit number must not be zero.

Write a function to solve alphametics puzzles.

## Running tests

Execute the tests with:

```bash
$ elixir alphametics_test.exs
```

### Pending tests

In the test suites, all but the first test have been skipped.

Once you get a test passing, you can unskip the next one by commenting out the relevant `@tag :pending` with a `#` symbol.

For example:

```elixir
# @tag :pending
test "shouting" do
  assert Bob.hey("WATCH OUT!") == "Whoa, chill out!"
end
```

Or, you can enable all the tests by commenting out the `ExUnit.configure` line in the test suite.

```elixir
# ExUnit.configure exclude: :pending, trace: true
```

For more detailed information about the Elixir track, please see the [help page](http://exercism.io/languages/elixir).

## Source

Wikipedia [https://en.wikipedia.org/wiki/Alphametics](https://en.wikipedia.org/wiki/Alphametics)

## Submitting Incomplete Solutions

It's possible to submit an incomplete solution so you can see how others have completed the exercise.
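The exercise asks for an Elixir solution, but the approach can be sketched in a few lines of Python: try digit assignments until one satisfies the sum, rejecting any that give a multi-digit word a leading zero. This is a minimal brute-force sketch (the function name `solve` and the `"A + B == C"` input format are illustrative assumptions, not part of the exercise spec):

```python
from itertools import permutations

def solve(puzzle):
    """Brute-force an alphametic like 'SEND + MORE == MONEY'.

    Returns a dict mapping each letter to its digit, or None if the
    puzzle has no solution.
    """
    # Normalize '==' / '=' to '+' so the last word is the expected sum.
    words = [w.strip() for w in puzzle.replace('==', '+').replace('=', '+').split('+')]
    *addends, total = words
    letters = sorted(set(''.join(words)))
    # Leading digits of multi-digit words must not be zero.
    leading = {w[0] for w in words if len(w) > 1}
    for perm in permutations('0123456789', len(letters)):
        table = dict(zip(letters, perm))
        if any(table[l] == '0' for l in leading):
            continue
        nums = [int(''.join(table[c] for c in w)) for w in addends]
        if sum(nums) == int(''.join(table[c] for c in total)):
            return {l: int(d) for l, d in table.items()}
    return None
```

For example, `solve("I + BB == ILL")` finds the unique assignment I=1, B=9, L=0 (1 + 99 = 100). The `permutations` search is exponential in the number of distinct letters, so a real solution would prune partial assignments column by column; this sketch only illustrates the constraints.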
21.797297
101
0.723497
eng_Latn
0.990073
61584e0760cad492e2ad78310188fead4635f37c
553
md
Markdown
README.md
carlostojal/BarnabyChatbot
7a09f256ffdf1393e9b3a023afc1e33a74e5db73
[ "MIT" ]
1
2020-03-30T22:38:05.000Z
2020-03-30T22:38:05.000Z
README.md
carlostojal/BarnabyChatbot
7a09f256ffdf1393e9b3a023afc1e33a74e5db73
[ "MIT" ]
null
null
null
README.md
carlostojal/BarnabyChatbot
7a09f256ffdf1393e9b3a023afc1e33a74e5db73
[ "MIT" ]
null
null
null
# BarnabyChatbot

BarnabyChatbot is a simple chatbot used as a demo for [Barnaby API](https://github.com/carlostojal/Barnaby). Its goal is to show how easy it is to use the Barnaby API.

## How to use

* Run ```setup.sh``` to install all required dependencies.
* Start the Barnaby API. See the Barnaby API documentation. Check the address and port (default: 127.0.0.1:5000).
* Start the Barnaby Chatbot (```barnaby.py```).

![Barnaby Chatbot](https://raw.githubusercontent.com/carlostojal/BarnabyChatbot/master/img/screenshot.png?token=AIWB3W5KIG5GA7GLCOEZ3NC6QJGXQ)
46.083333
158
0.768535
eng_Latn
0.638043
61588ff2deed54f7ce5697f9b582dbc4e7b87904
86
md
Markdown
docs/src/index.md
kklot/Epi.jl
13498e5d98d028b318c1bd0ce7540afef250521c
[ "MIT" ]
1
2021-03-16T17:33:19.000Z
2021-03-16T17:33:19.000Z
docs/src/index.md
kklot/Epi.jl
13498e5d98d028b318c1bd0ce7540afef250521c
[ "MIT" ]
null
null
null
docs/src/index.md
kklot/Epi.jl
13498e5d98d028b318c1bd0ce7540afef250521c
[ "MIT" ]
null
null
null
# Epi.jl Documentation

```@autodocs
Modules = [Epi]
Order = [:function, :type]
```
12.285714
28
0.616279
yue_Hant
0.752256
61589fbc68e783f1105c99b846ea78183ad1be10
644
md
Markdown
README.md
Nixon120/go-api-docker-compose
b2ec99864cc07e91cc27033dd6aa5a9d572e6806
[ "MIT" ]
1
2022-01-22T15:49:03.000Z
2022-01-22T15:49:03.000Z
README.md
Nixon120/go-api-docker-compose
b2ec99864cc07e91cc27033dd6aa5a9d572e6806
[ "MIT" ]
2
2021-06-02T23:33:56.000Z
2021-06-02T23:51:56.000Z
README.md
abdulazeez-okteto/go-api-docker-compose
4fdce652d678be27d0845d36c7040f9023b0bc2e
[ "MIT" ]
1
2021-12-13T05:24:48.000Z
2021-12-13T05:24:48.000Z
# Get Started with Docker Compose on Okteto

[![Develop on Okteto](https://okteto.com/develop-okteto.svg)](https://cloud.okteto.com/deploy?repository=https://github.com/okteto/go-api-docker-compose)

This example shows how to deploy a docker-compose application to Okteto. Read the detailed guide on [how you can deploy your docker-compose applications to Okteto](https://okteto.com/blog/a-step-by-step-guide-on-deploying-your-docker-compose-application-to-okteto/) and [how you can develop your docker-compose application remotely on Okteto Cloud](https://okteto.com/blog/how-to-develop-docker-compose-applications-remotely-in-okteto-cloud/).
107.333333
443
0.795031
eng_Latn
0.425346
6158e7c64966c97b1dd6d16b320c7bd4f0b2e08c
5,037
md
Markdown
treebanks/yo_ytb/yo_ytb-dep-compound.md
emmettstr/docs
2d0376d6e07f3ffa828f6152d12cf260a530c64d
[ "Apache-2.0" ]
204
2015-01-20T16:36:39.000Z
2022-03-28T00:49:51.000Z
treebanks/yo_ytb/yo_ytb-dep-compound.md
emmettstr/docs
2d0376d6e07f3ffa828f6152d12cf260a530c64d
[ "Apache-2.0" ]
654
2015-01-02T17:06:29.000Z
2022-03-31T18:23:34.000Z
treebanks/yo_ytb/yo_ytb-dep-compound.md
emmettstr/docs
2d0376d6e07f3ffa828f6152d12cf260a530c64d
[ "Apache-2.0" ]
200
2015-01-16T22:07:02.000Z
2022-03-25T11:35:28.000Z
---
layout: base
title: 'Statistics of compound in UD_Yoruba-YTB'
udver: '2'
---

## Treebank Statistics: UD_Yoruba-YTB: Relations: `compound`

This relation is universal. There are 2 language-specific subtypes of `compound`: <tt><a href="yo_ytb-dep-compound-prt.html">compound:prt</a></tt>, <tt><a href="yo_ytb-dep-compound-svc.html">compound:svc</a></tt>.

77 nodes (1%) are attached to their parents as `compound`.

74 instances of `compound` (96%) are left-to-right (parent precedes child). Average distance between parent and child is 1.48051948051948.

The following 11 pairs of parts of speech are connected with `compound`: <tt><a href="yo_ytb-pos-VERB.html">VERB</a></tt>-<tt><a href="yo_ytb-pos-SCONJ.html">SCONJ</a></tt> (30; 39% instances), <tt><a href="yo_ytb-pos-NOUN.html">NOUN</a></tt>-<tt><a href="yo_ytb-pos-NOUN.html">NOUN</a></tt> (16; 21% instances), <tt><a href="yo_ytb-pos-ADJ.html">ADJ</a></tt>-<tt><a href="yo_ytb-pos-ADP.html">ADP</a></tt> (8; 10% instances), <tt><a href="yo_ytb-pos-PRON.html">PRON</a></tt>-<tt><a href="yo_ytb-pos-PART.html">PART</a></tt> (6; 8% instances), <tt><a href="yo_ytb-pos-VERB.html">VERB</a></tt>-<tt><a href="yo_ytb-pos-NOUN.html">NOUN</a></tt> (6; 8% instances), <tt><a href="yo_ytb-pos-ADJ.html">ADJ</a></tt>-<tt><a href="yo_ytb-pos-ADJ.html">ADJ</a></tt> (3; 4% instances), <tt><a href="yo_ytb-pos-NOUN.html">NOUN</a></tt>-<tt><a href="yo_ytb-pos-ADJ.html">ADJ</a></tt> (2; 3% instances), <tt><a href="yo_ytb-pos-VERB.html">VERB</a></tt>-<tt><a href="yo_ytb-pos-ADV.html">ADV</a></tt> (2; 3% instances), <tt><a href="yo_ytb-pos-VERB.html">VERB</a></tt>-<tt><a href="yo_ytb-pos-PART.html">PART</a></tt> (2; 3% instances), <tt><a href="yo_ytb-pos-ADJ.html">ADJ</a></tt>-<tt><a href="yo_ytb-pos-ADV.html">ADV</a></tt> (1; 1% instances), <tt><a href="yo_ytb-pos-ADV.html">ADV</a></tt>-<tt><a href="yo_ytb-pos-ADV.html">ADV</a></tt> (1; 1% instances).

~~~ conllu
# visual-style 4 bgColor:blue
# visual-style 4 fgColor:white
# visual-style 3 bgColor:blue
# visual-style 3 fgColor:white
# visual-style 3 4 compound color:blue
1 Ọlọ́run ọlọ́run NOUN _ _ 3 nsubj _ Gloss=god|Ref=GEN_1.3
2 sì sì CCONJ _ _ 3 cc _ Gloss=then|Ref=GEN_1.3
3 wí wí VERB _ _ 0 root _ Gloss=said|Ref=GEN_1.3
4 pé pé SCONJ _ _ 3 compound _ Gloss=that|Ref=GEN_1.3|SpaceAfter=No
5 , , PUNCT _ _ 12 punct _ Gloss=,|Ref=GEN_1.3
6 “ “ PUNCT _ _ 12 punct _ Gloss=“|Ref=GEN_1.3|SpaceAfter=No
7 Jẹ́ jẹ́ VERB _ _ 12 ccomp _ Gloss=let|Ref=GEN_1.3
8 kí kí AUX _ _ 7 aux _ Gloss=|Ref=GEN_1.3
9 ìmọ́lẹ̀ ìmọ́lẹ̀ NOUN _ _ 12 nsubj _ Gloss=light|Ref=GEN_1.3
10 kí kí AUX _ _ 12 aux _ Gloss=let|Ref=GEN_1.3
11 ó ó PRON _ Case=Nom|Number=Sing|Person=3|PronType=Prs 12 expl _ Gloss=he|Ref=GEN_1.3
12 wà wà VERB _ _ 3 ccomp _ Gloss=was|Ref=GEN_1.3|SpaceAfter=No
13 , , PUNCT _ _ 12 punct _ Gloss=,|Ref=GEN_1.3|SpaceAfter=No
14 ” ” PUNCT _ _ 12 punct _ Gloss=”|Ref=GEN_1.3
15 ìmọ́lẹ̀ ìmọ́lẹ̀ NOUN _ _ 17 nsubj _ Gloss=light|Ref=GEN_1.3
16 sì sì CCONJ _ _ 17 cc _ Gloss=and|Ref=GEN_1.3
17 wà wà VERB _ _ 3 conj _ Gloss=was|Ref=GEN_1.3|SpaceAfter=No
18 . . PUNCT _ _ 3 punct _ Gloss=.|Ref=GEN_1.3
~~~

~~~ conllu
# visual-style 11 bgColor:blue
# visual-style 11 fgColor:white
# visual-style 9 bgColor:blue
# visual-style 9 fgColor:white
# visual-style 9 11 compound color:blue
1 Wọ́n Wọ́n PRON _ Case=Acc|Number=Plur|Person=3|PronType=Prs 7 nsubj _ _
2 bí bí SCONJ _ _ 7 mark _ _
3 Ẹ̀bùn Ẹ̀bùn NOUN _ _ 7 nsubj _ _
4 Olóyèdé Olóyèdé NOUN _ _ 3 nmod _ _
5 ní ní ADP _ _ 6 case _ _
6 ilú ilú NOUN _ _ 3 nmod _ _
7 Kẹ́nta Kẹ́nta ADV _ _ 0 root _ SpaceAfter=No
8 , , PUNCT _ _ 9 punct _ _
9 Òkè òkè NOUN _ _ 7 conj _ SpaceAfter=No
10 - - PUNCT _ _ 11 punct _ SpaceAfter=No
11 Èjìgbò Èjìgbò NOUN _ _ 9 compound _ _
12 Abẹ́òkúta Abẹ́òkúta NOUN _ _ 9 nmod _ SpaceAfter=No
13 . . PUNCT _ _ 7 punct _ SpacesAfter=\n\n
~~~

~~~ conllu
# visual-style 3 bgColor:blue
# visual-style 3 fgColor:white
# visual-style 1 bgColor:blue
# visual-style 1 fgColor:white
# visual-style 1 3 compound color:blue
1 Alábùkún Alábùkún ADJ _ _ 8 nsubj _ Ref=MATT_5.4|SpaceAfter=No|Gloss=blessed
2 - - PUNCT _ _ 3 punct _ Ref=MATT_5.4|SpaceAfter=No|Gloss=the
3 fún fún ADP _ _ 1 compound _ Ref=MATT_5.4|Gloss=unto
4 ni ni PART _ _ 6 case _ Ref=MATT_5.4|Gloss=is
5 àwọn àwọn PRON _ Case=Nom|Number=Plur|Person=3|PronType=Prs 6 nmod _ Ref=MATT_5.4|Gloss=they
6 tí tí PRON _ PronType=Rel 8 nmod _ Ref=MATT_5.4|Gloss=that
7 ń ń AUX _ _ 8 aux _ Ref=MATT_5.4|Gloss=that
8 ṣọ̀fọ̀ ṣọ̀fọ̀ VERB _ _ 0 root _ Ref=MATT_5.4|SpaceAfter=No|Gloss=mourn
9 , , PUNCT _ _ 13 punct _ Ref=MATT_5.4|Gloss=,
10 nítorí nítorí SCONJ _ _ 13 mark _ Ref=MATT_5.4|Gloss=for
11 a a PRON _ Case=Nom|Number=Plur|Person=1|PronType=Prs 13 nsubj _ Ref=MATT_5.4|Gloss=we
12 ó ó AUX _ _ 13 aux _ Ref=MATT_5.4|Gloss=will
13 tù tù VERB _ _ 8 advcl _ Ref=MATT_5.4|Gloss=comfort
14 wọ́n wọ́n PRON _ Case=Acc|Number=Plur|Person=3|PronType=Prs 13 obj _ Ref=MATT_5.4|Gloss=them
15 nínú nínú ADP _ _ 13 obl _ Ref=MATT_5.4|SpaceAfter=No|Gloss=inside
16 . . PUNCT _ _ 8 punct _ Ref=MATT_5.4|Gloss=.
~~~
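Statistics like the ones on this page (how many `compound` attachments, and how many are left-to-right) can be recomputed from raw treebank data by walking the HEAD and DEPREL columns. A minimal sketch, assuming standard tab-separated 10-column CoNLL-U input (the function name `compound_stats` is illustrative, not part of the UD tooling):

```python
def compound_stats(conllu_text):
    """Count `compound` relations in a CoNLL-U document and how many
    are left-to-right (parent precedes child)."""
    total = left_to_right = 0
    for line in conllu_text.splitlines():
        if not line or line.startswith('#'):
            continue  # skip blank lines and comment lines
        cols = line.split('\t')
        # Skip multiword-token (e.g. "1-2") and empty-node (e.g. "1.1") lines.
        if len(cols) != 10 or not cols[0].isdigit():
            continue
        child, head, deprel = int(cols[0]), cols[6], cols[7]
        if deprel == 'compound' and head.isdigit():
            total += 1
            if int(head) < child:
                left_to_right += 1
    return total, left_to_right
```

Run over the full UD_Yoruba-YTB treebank, this kind of count is what produces figures such as "77 nodes, 74 left-to-right" above; the sketch counts only the bare `compound` relation, so subtypes like `compound:svc` would need a prefix match instead of an exact comparison.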
51.927835
1,346
0.698233
yue_Hant
0.583257
615943c01026e549aa9248eedfdd9cf64fb5d3cf
3,312
md
Markdown
microsoft-365/bookings/schedule-closures-time-off-vacation.md
MicrosoftDocs/microsoft-365-docs-pr.es-ES
90e5004934c592bb15f72edd88bec954c94ad710
[ "CC-BY-4.0", "MIT" ]
19
2020-05-18T20:10:47.000Z
2022-03-09T07:27:47.000Z
microsoft-365/bookings/schedule-closures-time-off-vacation.md
MicrosoftDocs/microsoft-365-docs-pr.es-ES
90e5004934c592bb15f72edd88bec954c94ad710
[ "CC-BY-4.0", "MIT" ]
2
2022-02-09T06:48:57.000Z
2022-02-09T06:49:35.000Z
microsoft-365/bookings/schedule-closures-time-off-vacation.md
MicrosoftDocs/microsoft-365-docs-pr.es-ES
90e5004934c592bb15f72edd88bec954c94ad710
[ "CC-BY-4.0", "MIT" ]
2
2019-05-25T04:43:49.000Z
2021-05-26T19:33:23.000Z
---
title: Schedule business closures, time off, and vacation time
ms.author: kwekua
author: kwekuako
manager: scotv
audience: Admin
ms.topic: article
ms.service: bookings
ms.localizationpriority: medium
ms.assetid: e3c0a4ee-e3d8-4fbe-bd8f-16d1c712d1f4
description: Schedule office closures and employee time off from the Bookings calendar so that employees are shown as unavailable to be booked during the specified times.
ms.openlocfilehash: 30f43af083da6a41ab8458e377bdc9fc99690678
ms.sourcegitcommit: d4b867e37bf741528ded7fb289e4f6847228d2c5
ms.translationtype: MT
ms.contentlocale: es-ES
ms.lasthandoff: 10/06/2021
ms.locfileid: "60164073"
---
# <a name="schedule-business-closures-time-off-and-vacation-time"></a>Schedule business closures, time off, and vacation time

Occasionally, you'll want to close your business for holidays or team events, or your employees will need time off when they're sick, on vacation, or unavailable for other reasons. You can schedule time off from the Microsoft Bookings calendar, and the employee will be unavailable for bookings during the specified time. Once the business reopens or employees return to work, everyone will be listed on the booking page according to their established work hours.

Watch this video or follow the steps below to schedule business closures or employee time off.

> [!VIDEO https://www.microsoft.com/videoplayer/embed/RE2TxDC]

## <a name="schedule-ad-hoc-business-closures"></a>Schedule ad hoc business closures

1. In Microsoft 365, select the app launcher, and then select Bookings.

1. In the navigation pane, select **Calendar** \> **Time off**.

   ![Image of Bookings calendar view and time off button.](../media/bookings-calendar-timeoff.png)

1. Fill in the details, including a title, start and end dates and times, location, and additional notes.

1. Select **All day event**.

1. Select all staff members.

1. Select **Save**.

When a customer tries to schedule service on a day the office is closed, they'll see a message on the booking page.

![Image of example message the customer sees when trying to book during time off.](../media/bookings-timeoff-message.png)

## <a name="schedule-employee-time-off"></a>Schedule employee time off

1. In Microsoft 365, select the app launcher, and then select **Bookings**.

   ![Image of app launcher.](../media/bookings-applauncher.png)

1. In the navigation pane, select **Calendar** \> **Time off**.

   ![Image of Bookings calendar view and time off button.](../media/bookings-calendar-timeoff.png)

1. Fill in the details, including a title, start and end dates and times, location, and additional notes. If the employee will be gone for a full day or for several days, select **All day event**.

1. Select the staff member or members who are taking the time off.

1. Select **Save**.
53.419355
520
0.785326
spa_Latn
0.99046
61599921520fc853e8f48d0e083bab6b55de0b4b
1,766
md
Markdown
sails-docs/reference/websockets/sails.io.js/socket.post.md
tindNan/sails
a29cab7ece19d744e98b39ef82e537ae67f59875
[ "MIT" ]
null
null
null
sails-docs/reference/websockets/sails.io.js/socket.post.md
tindNan/sails
a29cab7ece19d744e98b39ef82e537ae67f59875
[ "MIT" ]
1
2021-02-23T17:52:34.000Z
2021-02-23T17:52:34.000Z
sails-docs/reference/websockets/sails.io.js/socket.post.md
tindNan/sails
a29cab7ece19d744e98b39ef82e537ae67f59875
[ "MIT" ]
null
null
null
# `io.socket.post()`

Send a socket request (virtual POST) to a Sails server using Socket.IO.

```js
io.socket.post(url, data, function (resData, jwres){
  // ...
});
```

### Usage

|   | Argument   | Type          | Details |
|---|------------|:-------------:|---------|
| 1 | url        | ((string))    | The destination URL path, e.g. "/checkout". |
| 2 | _data_     | ((json?))     | Optional request data. If provided, it will be JSON-encoded and included as the virtual HTTP body. |
| 3 | _callback_ | ((function?)) | Optional callback. If provided, it will be called when the server responds. |

##### Callback

|   | Argument | Type           | Details |
|---|----------|:--------------:|---------|
| 1 | resData  | ((json))       | Data received in the response from the Sails server (=== `jwres.body`, and also equivalent to the HTTP response body). |
| 2 | jwres    | ((dictionary)) | A [JSON WebSocket Response](https://github.com/balderdashy/sails/blob/master/sails-docs/PAGE_NEEDED.md) object. Has `headers`, a `body`, and a `statusCode`. |

### Example

```html
<script>
io.socket.post('/users', {
  name: 'Timmy Mendez'
}, function (resData, jwRes) {
  jwRes.statusCode; // => 200
});
</script>
```

### Notes
> + Remember that you can communicate with _any of your routes_ using socket requests.
> + Need to customize request headers? Check out the slightly lower-level [`io.socket.request()`](https://sailsjs.com/documentation/reference/web-sockets/socket-client/io-socket-request) method. To set custom headers for _all_ outgoing requests, check out [`io.sails.headers`](https://sailsjs.com/documentation/reference/web-sockets/socket-client/io-sails).

<docmeta name="displayName" value="io.socket.post()">
<docmeta name="pageType" value="method">
37.574468
358
0.638165
eng_Latn
0.767344
6159fd6d58f0fb4115fa0229d6899cc41a34309e
4,543
md
Markdown
README.md
DanIulian/PeerToPeer-Torrent-Client
80fe0eb76fa33c9a7746020d5ddd2042b1c84569
[ "MIT" ]
1
2018-07-23T09:01:07.000Z
2018-07-23T09:01:07.000Z
README.md
DanIulian/PeerToPeer-Torrent-Client
80fe0eb76fa33c9a7746020d5ddd2042b1c84569
[ "MIT" ]
null
null
null
README.md
DanIulian/PeerToPeer-Torrent-Client
80fe0eb76fa33c9a7746020d5ddd2042b1c84569
[ "MIT" ]
null
null
null
# PeerToPeer-Torrent-Client

The purpose of this project is to design and implement a scalable and fully functional peer-to-peer torrent application for file transfer.

## Table of Contents
1. [Project overview](#project-overview)
2. [Requirements](#requirements)
3. [Architecture](#architecture)
    - [Communication Protocol](#communication-protocol)
    - [Main Node](#main-node)
    - [Client](#client)

## Project Overview

The purpose of this project is to implement in Java a scalable and fully functional peer-to-peer torrent application for file transfer. The application consists of a Central Node, which stores all the information regarding the shared files, and a variable number of clients (peers), which connect to the Central Node, search for information about a file, and download it from the peers that have the requested file.

The clients connect to and leave the network asynchronously and can choose to publish a file or to download one. Information about the published files is available on the Main Node. However, the actual file resides only at the client that published it, or at the clients that have downloaded it. Each file is uniquely identified by its name, and when it is first published it is split into fragments of equal size.

The port and the IP for each client are variable and depend on the time at which the client connected to the network. However, the Main Node has a fixed public IP and port, known in advance by each client that connects to it.

Every operation completed by the clients and the Main Node is logged using the log4j API.

## Requirements

* A machine with 2 GB of RAM
* Ubuntu 14.04 or higher
* Java version 7 or higher and the log4j JAR
* Apache Ant for deployment

In order to run the application:

    ./ ant build
    ./ ant run-server
    ./ ant run-client1 [run-client2, run-client3]
    ./ ant clean

## Architecture

#### Communication Protocol

The peers and the Main Node communicate with each other using a custom protocol described by the comm_proto package, which contains the classes FileDescription, FragmentDescription, and FragmentFile.

Each message exchanged between two entities in the network uses a 12-byte HEADER. The first 4 bytes contain the type of the message (Publish a Fragment, Request Information about a File, Publish a File). The next 4 bytes specify the length of the data message, and the final 4 bytes contain the port on which the client listens.

The DATA field of the message contains the serial representation of one of the three classes mentioned earlier, or the serial representation of the name of the file that a client requests information about. Every time an entity in the network receives a message, it will first read 12 bytes, which represent the HEADER, and then it will read the DATA.

#### Main Node

The Main Node is implemented using the central_nod_pkg package, which has three classes. The MainServer class starts the Main Node, the CentralNode class contains the implementation of the Main Node, and the PublishedFile class is used to encapsulate all the information required about a published file.

The server has a list of all published files. Each element of the list has information about the file name, the file length, and the first fragment length. There is also a list of FragmentFile objects that holds information about each fragment and all the clients that have the fragment in question.

The Main Node implementation uses asynchronous sockets and a selector. The server runs in a different thread.

#### Client

The client uses a thread pool for uploading and downloading files. Each thread manages a single fragment of the file. There is also a special thread that exchanges messages with the Main Node; the ClientAsServer class implements the behaviour of this thread.

For each fragment downloaded, a new socket connection is established with the other peer; the connection is closed after the fragment is successfully received. For the thread pool I used the Executor class and set the maximum number of connections to 10. The ClientRequest class is in charge of processing a request for a fragment, and the AskForFragment class is in charge of downloading a fragment from another client.

The fragment size depends on the length of the file and can vary from 2KB to 512KB. Each client stores its files in a different directory.

## Copyright and License

This application is provided under the [MIT-license](https://github.com/DanIulian/PeerToPeer-Torrent-Client/blob/master/LICENSE).
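As a sketch of the header layout described above — note that the exact byte order and the numeric codes for the message types are assumptions not stated in the README — the 12-byte header (4-byte message type, 4-byte data length, 4-byte listener port) could be packed and parsed like this:

```python
import struct

# Assumed layout: three 4-byte big-endian signed integers, matching how
# Java's DataOutputStream would write three ints back to back.
HEADER_FORMAT = ">iii"
HEADER_SIZE = struct.calcsize(HEADER_FORMAT)  # 12 bytes

def pack_header(msg_type: int, data_length: int, listen_port: int) -> bytes:
    """Build the 12-byte HEADER that precedes every DATA payload."""
    return struct.pack(HEADER_FORMAT, msg_type, data_length, listen_port)

def unpack_header(header: bytes) -> tuple:
    """Read back (type, length, port) from the first 12 bytes of a message."""
    return struct.unpack(HEADER_FORMAT, header[:HEADER_SIZE])
```

A receiver would first read exactly `HEADER_SIZE` bytes from the socket, call `unpack_header`, and then read `data_length` more bytes of DATA, mirroring the two-step read described in the protocol section.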
57.506329
347
0.794849
eng_Latn
0.999624
615b8894fb47cf4c9878aa72f724b5e40a77cd6d
1,694
md
Markdown
_publications/2_tuple.md
oguztoragay/oguztoragay.github.io
853e2f5fa9a90ed2dee94b9f6c1c4b762b8566b0
[ "MIT" ]
1
2021-08-30T15:20:02.000Z
2021-08-30T15:20:02.000Z
_publications/2_tuple.md
oguztoragay/academicpages.github.io
8fabe272e174577473807868f270014122736430
[ "MIT" ]
null
null
null
_publications/2_tuple.md
oguztoragay/academicpages.github.io
8fabe272e174577473807868f270014122736430
[ "MIT" ]
null
null
null
--- title: "Performance Evaluation of Faculty Departments by a Delphi Method Based on 2-Tuple fuzzy Linguistic Representation Model and TOPSIS" collection: publications permalink: /publication/2010-10-01-paper-title-number-2 excerpt: '' date: 2015-10-10 venue: 'International Journal of Basic and Applied Sciences IJBAS-IJENS, Vol: 15, No: 05' paperurl: 'http://ijens.org/Vol_15_I_05/150205-7373-IJBAS-IJENS.pdf' citation: 'TORAGAY, Oguz, and Murat ARIKAN. "Performance Evaluation of Faculty Departments by a Delphi Method Based on 2-Tuple fuzzy Linguistic Representation Model and TOPSIS." International Journal of Basic & Applied Sciences 15 (2015): 1-10' --- The development of and competition in educational facilities gradually increase the service quality’s importance. To accommodate this rapid process, educational organisations attempt to measure their performance and to enhance their standards. In general, an organisation’s performance does not depend solely on one criterion; instead, it should be evaluated based on multiple criteria. In this study, the academic performances of the departments within the Engineering Faculty of one of the largest universities in Turkey, Gazi University, have been compared using a multi-attribute decision making (MADM) method, TOPSIS. The criteria weights for the TOPSIS method are determined dependent upon linguistically expressed expert opinions. Therefore, a Delphi method based on the 2-tuple fuzzy linguistic representation model for computing with words is proposed. A sensitivity analysis study is also performed to determine the most critical criterion. [Download paper here](http://ijens.org/Vol_15_I_05/150205-7373-IJBAS-IJENS.pdf)
112.933333
775
0.815821
eng_Latn
0.980801
615be5e0199627ab6e7e7f627aa1f82c309d232b
548
md
Markdown
integrations/adlearn-open-platform/README.md
carrotquest/analytics.js-integrations
c129f4647bab6ff846618cc2192f33b8952d34e6
[ "MIT" ]
116
2015-01-13T01:38:37.000Z
2022-03-25T09:21:16.000Z
integrations/adlearn-open-platform/README.md
carrotquest/analytics.js-integrations
c129f4647bab6ff846618cc2192f33b8952d34e6
[ "MIT" ]
495
2015-01-01T06:51:12.000Z
2022-03-29T21:06:59.000Z
integrations/adlearn-open-platform/README.md
carrotquest/analytics.js-integrations
c129f4647bab6ff846618cc2192f33b8952d34e6
[ "MIT" ]
205
2015-01-05T16:00:28.000Z
2022-03-23T11:32:41.000Z
# analytics.js-integration-adlearn-open-platform [![Build Status][ci-badge]][ci-link] Adlearn Open Platform integration for [Analytics.js][]. ## License Released under the [MIT license](License.md). [Analytics.js]: https://segment.com/docs/libraries/analytics.js/ [ci-link]: https://ci.segment.com/gh/segment-integrations/analytics.js-integration-adlearn-open-platform [ci-badge]: https://ci.segment.com/gh/segment-integrations/analytics.js-integration-adlearn-open-platform.svg?style=svg&circle-token=b8a9d9117d4dd82c72334dcfd98e02b3a810b783
42.153846
173
0.791971
hun_Latn
0.193095
615e18b40c5831986e01981de4424370f1cb209e
512
md
Markdown
packages/sudoku-solver/README.md
christianjuth/lerna-monorepo
c004d8ae9a1888c9107b049c8ddb058340d926ab
[ "MIT" ]
10
2021-12-11T03:13:44.000Z
2021-12-12T16:15:18.000Z
packages/sudoku-solver/README.md
christianjuth/lerna-monorepo
c004d8ae9a1888c9107b049c8ddb058340d926ab
[ "MIT" ]
1
2022-02-01T12:21:21.000Z
2022-02-16T21:43:42.000Z
packages/sudoku-solver/README.md
christianjuth/monorepo
c004d8ae9a1888c9107b049c8ddb058340d926ab
[ "MIT" ]
null
null
null
# `@christianjuth/sudoku-solver`

Depth-first search Sudoku solver.

[Demo](https://npm.christianjuth.com/sudoku-solver)

## Usage

```javascript
import { solve } from '../src/sudoku-solver'

const sudoku = [
  0, 3, 0, 4, 9, 0, 0, 1, 0,
  7, 4, 0, 0, 1, 8, 0, 0, 0,
  1, 9, 6, 7, 0, 0, 0, 2, 4,
  0, 0, 0, 5, 0, 1, 7, 6, 2,
  0, 0, 3, 0, 2, 7, 0, 5, 9,
  0, 0, 0, 0, 4, 0, 3, 0, 0,
  0, 7, 8, 9, 0, 0, 0, 0, 0,
  4, 2, 9, 0, 0, 0, 0, 7, 3,
  0, 0, 0, 3, 7, 0, 0, 9, 8,
]

const { solution } = solve(sudoku)
```
19.692308
51
0.496094
krc_Cyrl
0.998723
615e893c4e1d640c1422c39d59d69c36366fa063
1,153
markdown
Markdown
_posts/2012-02-22-file-upload.markdown
jackoliver/wallymathieu.github.io
50ae9b9854f9d9be659c5e3fedc633f6fd848ae0
[ "MIT" ]
1
2019-01-21T03:46:33.000Z
2019-01-21T03:46:33.000Z
_posts/2012-02-22-file-upload.markdown
jackoliver/wallymathieu.github.io
50ae9b9854f9d9be659c5e3fedc633f6fd848ae0
[ "MIT" ]
81
2019-01-06T21:52:15.000Z
2022-02-26T12:39:38.000Z
_posts/2012-02-22-file-upload.markdown
wallymathieu/assertfail
0a7e85a24084e4e767224e6dd14af46d443aae5e
[ "MIT" ]
1
2015-08-11T16:05:56.000Z
2015-08-11T16:05:56.000Z
---
layout: post
title: "File upload"
date: 2012-02-22T21:00:00+01:00
tags: javascript
---
The alternatives right now (2012-02-22):<br><br><ol> <li>Standard html: File select together in a form that is submitted to a page. (firefox, chrome, IE)</li> <li>Iframe hack: File select together with a hidden iframe used to upload the file. (firefox, chrome, IE)</li> <li>Flash (all flash enabled browsers) </li> <li>FileReader (firefox, chrome)</li> <ol> <li>Drag and drop together with FileReader and ajax post. </li> <li>File select together with FileReader and ajax post.</li> </ol> </ol> <div> When building ordinary web applications, the first alternative is definitely the best. It's simple and works well. If you are building a more javascript-intense application, you are in a different mess. If you are supporting IE 7-9, then alternatives 2 and 3 are what you are looking for. To enable the best experience, you could sniff whether the browser handles FileReader (since IE10 might support it) and enable a better experience in that case.</div> <div> <br> </div> <div> Note: I've only considered firefox, chrome and IE. </div> <div style="clear: both;"></div>
42.703704
449
0.740676
eng_Latn
0.990724
615eba68bd106b5a95cbf67e30b920f191640738
14,786
md
Markdown
articles/remote-rendering/how-tos/frontend-apis.md
ZetaPR/azure-docs.es-es
0e2bf787d1d9ab12065fcb1091a7f13b96c6f8a2
[ "CC-BY-4.0", "MIT" ]
1
2021-03-12T23:37:16.000Z
2021-03-12T23:37:16.000Z
articles/remote-rendering/how-tos/frontend-apis.md
ZetaPR/azure-docs.es-es
0e2bf787d1d9ab12065fcb1091a7f13b96c6f8a2
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/remote-rendering/how-tos/frontend-apis.md
ZetaPR/azure-docs.es-es
0e2bf787d1d9ab12065fcb1091a7f13b96c6f8a2
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Azure frontend APIs for authentication
description: Explains how to use the frontend API in C# for authentication
author: florianborn71
ms.author: flborn
ms.date: 02/12/2010
ms.topic: how-to
ms.custom: devx-track-csharp
ms.openlocfilehash: 5f0519b60d3b02c8312e15861441060ca89ab002
ms.sourcegitcommit: 692382974e1ac868a2672b67af2d33e593c91d60
ms.translationtype: HT
ms.contentlocale: es-ES
ms.lasthandoff: 10/22/2021
ms.locfileid: "130234139"
---

# <a name="use-the-azure-frontend-apis-for-authentication"></a>Use the Azure frontend APIs for authentication

This section describes how to use the C# API for authentication and session management.

> [!CAUTION]
> The functions described in this chapter internally issue REST calls on the server. As with all REST calls, sending these commands too frequently will cause the server to throttle and eventually return an error. In that case, the value of the `SessionGeneralContext.HttpResponseCode` member is 429 ("too many requests"). As a rule of thumb, there should be a delay of **5-10 seconds between subsequent calls**. Some functions also return information about when it is safe to retry. For example, `RenderingSessionPropertiesResult.MinimumRetryDelay` specifies how many seconds to wait before attempting another check. When available, it is best to use this returned value, since it lets you poll as often as possible without being throttled.

## <a name="sessionconfiguration"></a>SessionConfiguration

SessionConfiguration is used to set up the authentication information for a ```RemoteRenderingClient``` instance in the SDK. The important fields are:

```cs
public class SessionConfiguration
{
    // Domain that will be used for account authentication for the Azure Remote Rendering service, in the form [region].mixedreality.azure.com.
    // [region] should be set to the domain of the Azure Remote Rendering account.
    public string AccountDomain;

    // Domain that will be used to generate sessions for the Azure Remote Rendering service, in the form [region].mixedreality.azure.com.
    // [region] should be selected based on the region closest to the user. For example, westus2.mixedreality.azure.com or westeurope.mixedreality.azure.com.
    public string RemoteRenderingDomain;

    // Can use one of:
    // 1) ID and Key.
    // 2) ID and AuthenticationToken.
    // 3) ID and AccessToken.
    public string AccountId = Guid.Empty.ToString();
    public string AccountKey = string.Empty;
    public string AuthenticationToken = string.Empty;
    public string AccessToken = string.Empty;
}
```

The C++ counterpart looks like this:

```cpp
struct SessionConfiguration
{
    std::string AccountDomain{};
    std::string RemoteRenderingDomain{};
    std::string AccountId{};
    std::string AccountKey{};
    std::string AuthenticationToken{};
    std::string AccessToken{};
};
```

For the _region_ part of the domain, use a [region close to you](../reference/regions.md).

The account information can be obtained from the portal, as described in the [Retrieve the account information](create-an-account.md#retrieve-the-account-information) paragraph.

## <a name="azure-frontend"></a>Azure frontend

The relevant classes are ```RemoteRenderingClient``` and ```RenderingSession```. ```RemoteRenderingClient``` is used for account-level and account management functionality, which includes creating rendering sessions and asset conversions. ```RenderingSession``` is used for session-level functionality, which includes: session update, queries, renewal, and decommissioning.

Each ```RenderingSession``` that is opened or created keeps a reference to the frontend that created it. To shut down cleanly, all sessions must be deallocated before the frontend. Deallocating a session will not stop the server in Azure; `RenderingSession.StopAsync` must be invoked explicitly.

Once a session has been created and its state has been marked as ready, you can connect to the remote rendering runtime with `RenderingSession.ConnectAsync`.

### <a name="threading"></a>Threading

All asynchronous calls on RenderingSession and RemoteRenderingClient complete on a background thread, not on the main application thread.

### <a name="conversion-apis"></a>Conversion APIs

For more information about the conversion service, see [the model conversion REST API](conversion/conversion-rest-api.md).

#### <a name="start-asset-conversion"></a>Start asset conversion

```cs
async void StartAssetConversion(RemoteRenderingClient client, string storageContainer, string blobinputpath, string bloboutpath, string modelName, string outputName)
{
    var result = await client.StartAssetConversionAsync(
        new AssetConversionInputOptions(storageContainer, blobinputpath, "", modelName),
        new AssetConversionOutputOptions(storageContainer, bloboutpath, "", outputName)
    );
}
```

```cpp
void StartAssetConversion(ApiHandle<RemoteRenderingClient> client, std::string storageContainer, std::string blobinputpath, std::string bloboutpath, std::string modelName, std::string outputName)
{
    AssetConversionInputOptions input;
    input.BlobContainerInformation.BlobContainerName = blobinputpath;
    input.BlobContainerInformation.StorageAccountName = storageContainer;
    input.BlobContainerInformation.FolderPath = "";
    input.InputAssetPath = modelName;

    AssetConversionOutputOptions output;
    output.BlobContainerInformation.BlobContainerName = blobinputpath;
    output.BlobContainerInformation.StorageAccountName = storageContainer;
    output.BlobContainerInformation.FolderPath = "";
    output.OutputAssetPath = outputName;

    client->StartAssetConversionAsync(input, output, [](Status status, ApiHandle<AssetConversionResult> result) {
        if (status == Status::OK)
        {
            //use result
        }
        else
        {
            printf("Failed to start asset conversion!");
        }
    });
}
```

#### <a name="get-conversion-status"></a>Get conversion status

```cs
async void GetConversionStatus(RemoteRenderingClient client, string assetId)
{
    AssetConversionStatusResult status = await client.GetAssetConversionStatusAsync(assetId);
    // do something with status (e.g. check current status etc.)
}
```

```cpp
void GetConversionStatus(ApiHandle<RemoteRenderingClient> client, std::string assetId)
{
    client->GetAssetConversionStatusAsync(assetId, [](Status status, ApiHandle<AssetConversionStatusResult> result) {
        if (status == Status::OK)
        {
            // do something with result (e.g. check current status etc.)
        }
        else
        {
            printf("Failed to get status of asset conversion!");
        }
    });
}
```

### <a name="rendering-apis"></a>Rendering APIs

See [the session management REST API](session-rest-api.md) for more information about session management.

A rendering session can either be created dynamically on the service, or an already existing session ID can be "opened" into a RenderingSession object.

#### <a name="create-rendering-session"></a>Create rendering session

```cs
async void CreateRenderingSession(RemoteRenderingClient client, RenderingSessionVmSize vmSize, int maxLeaseInMinutes)
{
    CreateRenderingSessionResult result = await client.CreateNewRenderingSessionAsync(
        new RenderingSessionCreationOptions(vmSize, maxLeaseInMinutes / 60, maxLeaseInMinutes % 60));

    // if the call was successful, result.Session holds a valid session reference, otherwise check result.Context for error information
}
```

```cpp
void CreateRenderingSession(ApiHandle<RemoteRenderingClient> client, RenderingSessionVmSize vmSize, int maxLeaseInMinutes)
{
    RenderingSessionCreationOptions params;
    params.MaxLeaseInMinutes = maxLeaseInMinutes;
    params.Size = vmSize;
    client->CreateNewRenderingSessionAsync(params, [](Status status, ApiHandle<CreateRenderingSessionResult> result) {
        if (status == Status::OK && result->GetErrorCode() == Result::Success)
        {
            result->GetSession(); //use res->Result
        }
        else
        {
            printf("Failed to create session!");
        }
    });
}
```

#### <a name="open-an-existing-rendering-session"></a>Open an existing rendering session

Opening an existing session is a synchronous call.

```cs
async void CreateRenderingSession(RemoteRenderingClient client, string sessionId)
{
    CreateRenderingSessionResult result = await client.OpenRenderingSessionAsync(sessionId);
    if (result.ErrorCode == Result.Success)
    {
        RenderingSession session = result.Session;
        // Query session status, etc.
    }
}
```

```cpp
void CreateRenderingSession(ApiHandle<RemoteRenderingClient> client, std::string sessionId)
{
    client->OpenRenderingSessionAsync(sessionId, [](Status status, ApiHandle<CreateRenderingSessionResult> result) {
        if (status == Status::OK && result->GetErrorCode()==Result::Success)
        {
            ApiHandle<RenderingSession> session = result->GetSession();
            // Query session status, etc.
        }
    });
}
```

#### <a name="get-current-rendering-sessions"></a>Get current rendering sessions

```cs
async void GetCurrentRenderingSessions(RemoteRenderingClient client)
{
    RenderingSessionPropertiesArrayResult result = await client.GetCurrentRenderingSessionsAsync();
    if (result.ErrorCode == Result.Success)
    {
        RenderingSessionProperties[] properties = result.SessionProperties;
        // Query session status, etc.
    }
}
```

```cpp
void GetCurrentRenderingSessions(ApiHandle<RemoteRenderingClient> client)
{
    client->GetCurrentRenderingSessionsAsync([](Status status, ApiHandle<RenderingSessionPropertiesArrayResult> result) {
        if (status == Status::OK && result->GetErrorCode() == Result::Success)
        {
            std::vector<RenderingSessionProperties> properties;
            result->GetSessionProperties(properties);
        }
        else
        {
            printf("Failed to get current rendering sessions!");
        }
    });
}
```

### <a name="session-apis"></a>Session APIs

#### <a name="get-rendering-session-properties"></a>Get rendering session properties

```cs
async void GetRenderingSessionProperties(RenderingSession session)
{
    RenderingSessionPropertiesResult result = await session.GetPropertiesAsync();
    if (result.ErrorCode == Result.Success)
    {
        RenderingSessionProperties properties = result.SessionProperties;
    }
    else
    {
        Console.WriteLine("Failed to get properties of session!");
    }
}
```

```cpp
void GetRenderingSessionProperties(ApiHandle<RenderingSession> session)
{
    session->GetPropertiesAsync([](Status status, ApiHandle<RenderingSessionPropertiesResult> result) {
        if (status == Status::OK && result->GetErrorCode() == Result::Success)
        {
            RenderingSessionProperties properties = result->GetSessionProperties();
        }
        else
        {
            printf("Failed to get properties of session!");
        }
    });
}
```

#### <a name="update-rendering-session"></a>Update rendering session

```cs
async void UpdateRenderingSession(RenderingSession session, int updatedLeaseInMinutes)
{
    SessionContextResult result = await session.RenewAsync(
        new RenderingSessionUpdateOptions(updatedLeaseInMinutes / 60, updatedLeaseInMinutes % 60));
    if (result.ErrorCode == Result.Success)
    {
        Console.WriteLine("Rendering session renewed succeeded!");
    }
    else
    {
        Console.WriteLine("Failed to renew rendering session!");
    }
}
```

```cpp
void UpdateRenderingSession(ApiHandle<RenderingSession> session, int updatedLeaseInMinutes)
{
    RenderingSessionUpdateOptions params;
    params.MaxLeaseInMinutes = updatedLeaseInMinutes;
    session->RenewAsync(params, [](Status status, ApiHandle<SessionContextResult> result) {
        if (status == Status::OK && result->GetErrorCode() == Result::Success)
        {
            printf("Rendering session renewed succeeded!");
        }
        else
        {
            printf("Failed to renew rendering session!");
        }
    });
}
```

#### <a name="stop-rendering-session"></a>Stop rendering session

```cs
async void StopRenderingSession(RenderingSession session)
{
    SessionContextResult result = await session.StopAsync();
    if (result.ErrorCode == Result.Success)
    {
        Console.WriteLine("Rendering session stopped successfully!");
    }
    else
    {
        Console.WriteLine("Failed to stop rendering session!");
    }
}
```

```cpp
void StopRenderingSession(ApiHandle<RenderingSession> session)
{
    session->StopAsync([](Status status, ApiHandle<SessionContextResult> result) {
        if (status == Status::OK && result->GetErrorCode() == Result::Success)
        {
            printf("Rendering session stopped successfully!");
        }
        else
        {
            printf("Failed to stop rendering session!");
        }
    });
}
```

#### <a name="connect-to-arr-inspector"></a>Connect to ARR inspector

```cs
async void ConnectToArrInspector(RenderingSession session)
{
    string htmlPath = await session.ConnectToArrInspectorAsync();
#if WINDOWS_UWP
    UnityEngine.WSA.Application.InvokeOnUIThread(async () =>
    {
        var file = await Windows.Storage.StorageFile.GetFileFromPathAsync(htmlPath);
        await Windows.System.Launcher.LaunchFileAsync(file);
    }, true);
#else
    InvokeOnAppThreadAsync(() =>
    {
        System.Diagnostics.Process.Start("file:///" + htmlPath);
    });
#endif
}
```

```cpp
void ConnectToArrInspector(ApiHandle<RenderingSession> session)
{
    session->ConnectToArrInspectorAsync([](Status status, std::string result) {
        if (status == Status::OK)
        {
            // Launch the html file with default browser
            std::string htmlPath = "file:///" + result;
            ShellExecuteA(NULL, "open", htmlPath.c_str(), NULL, NULL, SW_SHOWDEFAULT);
        }
        else
        {
            printf("Failed to connect to ARR inspector!");
        }
    });
}
```

## <a name="next-steps"></a>Next steps

* [Create an account](create-an-account.md)
* [Example PowerShell scripts](../samples/powershell-example-scripts.md)
36.781095
468
0.718382
yue_Hant
0.234082
615f52b38332f5d0ba6735809345505b049f3cc0
480
md
Markdown
JavaScript/codewars/032_the_coupon_code.md
Gyubin/TIL
9d855ba5376a0d3d58891bcfcc532e2c43496210
[ "MIT" ]
133
2016-01-05T14:40:48.000Z
2021-11-16T16:23:54.000Z
JavaScript/codewars/032_the_coupon_code.md
Gyubin/TIL
9d855ba5376a0d3d58891bcfcc532e2c43496210
[ "MIT" ]
1
2016-01-22T11:53:16.000Z
2016-01-22T12:15:07.000Z
JavaScript/codewars/032_the_coupon_code.md
Gyubin/TIL
9d855ba5376a0d3d58891bcfcc532e2c43496210
[ "MIT" ]
70
2016-07-03T02:01:22.000Z
2021-12-20T03:47:58.000Z
# #32 The Coupon Code

The problem is to check whether a coupon is valid or not: verify that the entered code matches and that the current date is not past the expiration date.

## 1. Code

```js
function checkCoupon(enteredCode, correctCode, currentDate, expirationDate){
  return enteredCode === correctCode && Date.parse(expirationDate) >= Date.parse(currentDate)
}
```

- The coupon code is checked for equality simply with `===`.
- For the dates, it turns out there is a `Date.parse(dateString)` function. Given a date-formatted string, it returns a timestamp (milliseconds) that can be compared with the inequality operators.
- Looking at other solutions, you can also just create objects with `new Date(datestring)` and compare those instead.
30
105
0.7125
kor_Hang
1.000008
61601af8b021b7143ffbc572510bff0bd90a8ab0
3,431
md
Markdown
docs/md/MultiPeril.md
oejwing/ktools
3daf2c3ae9983900f6fed655bc15d03e2fdda0b2
[ "BSD-3-Clause" ]
null
null
null
docs/md/MultiPeril.md
oejwing/ktools
3daf2c3ae9983900f6fed655bc15d03e2fdda0b2
[ "BSD-3-Clause" ]
null
null
null
docs/md/MultiPeril.md
oejwing/ktools
3daf2c3ae9983900f6fed655bc15d03e2fdda0b2
[ "BSD-3-Clause" ]
null
null
null
![alt text](../img/banner.jpg "banner") # Appendix C: Multi-peril model support <a id="multiperil"></a> ktools now supports multi-peril models through the introduction of the coverage_id in the data structures. Ground up losses apply at the “Item” level in the Kernel which corresponds to “interest coverage” in business terms, which is the element of financial loss that can be associated with a particular asset. In ktools, item_id represents the element of financial loss and coverage_id represents the asset with its associated total insured value. If there is more than one item per coverage (as defined in the items data) then each item represents an element of financial loss from a particular peril contributing to the total loss for the asset. For each item, the identification of the peril is held in the areaperil_id, which is a unique key representing a combination of the location (area) and peril. #### Multi-peril damage calculation Ground up losses are calculated by multiplying the damage ratio for an item by the total insured value of its associated coverage (defined in the coverages data). The questions are then; how are these losses combined across items, and how are they correlated? There are a few ways in which losses can be combined and the first example in ktools uses a simple rule, which is to sum the losses for each coverage and cap the overall loss to the total insured value. This is what you get when you use the -c parameter in gulcalc to output losses by 'coverage'. In v3.1.0 the method of combining losses became function-driven using the gulcalc command line parameter -a as a few standard approaches have emerged. These are; | Allocation option | Description | |:------------------|:------------------------------------------------------------------------------------------------------| | 0 | Do nothing (suitable for single sub-peril models with one item per coverage) | | 1 | Sum damage ratios and cap to 1. 
Back-allocate in proportion to contributing subperil loss | | 2 | Multiplicative method for combining damage. Back-allocate in proportion to contributing subperil loss | | 3 | Total damage = maximum subperil damage. Back-allocate all to the maximum contributing subperil loss | Allocation option 1 has been implemented in v3.1.0. Correlation of item damage is generic in ktools, as damage can either be 100% correlated or independent (see [Appendix A Random Numbers](RandomNumbers.md)). This is no different in the multi-peril case when items represent different elements of financial loss to the same asset, rather than different assets. More sophisticated methods of multi-peril correlation have been implemented for particular models, but as yet no standard approach has been implemented in ktools. Note that ground up losses by item can be passed into the financial module unallocated (allocation method 0) using the gulcalc option -i, or allocated using the gulcalc option -a1 -i. If the item ground up losses are passed though unallocated then the limit of total insured value must be applied as part of the financial module calculations, to prevent the ground up loss exceeding the coverage TIV. [Return to top](#multiperil) [Back to Contents](Contents.md)
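The sum-and-cap rule of allocation option 1 can be sketched in a few lines of Python. This is a minimal illustration of the logic described above, not ktools code; the function and variable names are hypothetical:

```python
def allocate_option1(item_damage, tiv):
    """Combine per-item (sub-peril) damage ratios for one coverage.

    item_damage: list of ground up damage ratios, one per item.
    tiv: total insured value of the coverage.
    Returns (coverage_loss, back_allocated_item_losses).
    """
    total = sum(item_damage)
    capped = min(total, 1.0)  # cap combined damage at 100% of TIV
    coverage_loss = capped * tiv
    if total > 0:
        # back-allocate in proportion to each sub-peril's contribution
        item_losses = [d / total * coverage_loss for d in item_damage]
    else:
        item_losses = [0.0 for _ in item_damage]
    return coverage_loss, item_losses
```

For example, two items with damage ratios 0.7 and 0.6 on a coverage with TIV 100 sum to 1.3, are capped at 1.0 (a coverage loss of 100), and the 100 is then shared back to the items in the ratio 0.7 : 0.6.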
107.21875
700
0.718449
eng_Latn
0.999442
61602995f27107150d2df93600bc0af7bd809ff8
1,022
md
Markdown
README.md
NiallBunting/broker-calculator
f564940a02dcf6ac3d8204f56c327afb92e54918
[ "MIT" ]
null
null
null
README.md
NiallBunting/broker-calculator
f564940a02dcf6ac3d8204f56c327afb92e54918
[ "MIT" ]
null
null
null
README.md
NiallBunting/broker-calculator
f564940a02dcf6ac3d8204f56c327afb92e54918
[ "MIT" ]
null
null
null
# Broker Calculator ## Original Notes Planning to create a small broker calculator that will allow you to see the most relevant broker for you in terms of cost. --------------------------------------- Current Balance How often do you trade? - x/<period> Do you trade outside the daily fixed time? Trading ISA SIPP Scale of self management nice looking website App ---------------------------------------- Name Notes on the service Self management ranking: 10 - mainly self managed, 0 - diy Platform { type: web, app easeofuse: design: link: note: } Fee { type: percentage amount: 0.9 min: 0 max: 123123 trigger: annual, funds, etfs, regular, entry, exit, monthly notes: waived: -1 (one free a month), 3 (waived after 3) } ---------------------------------------- ## The plan Hoping to write it all in JS so it can run in people's browsers, with the ability to be put into an iframe so people can embed it on their website. Hopefully in that case they can add the data in return for embedding it.
19.653846
219
0.646771
eng_Latn
0.994857
616088d7fe5a31804cce3c071cd1a94b65ffed6b
116
md
Markdown
README.md
joyously/conic-gradient
68581386a48f84572befee347b8be19279878220
[ "MIT" ]
513
2015-06-18T18:05:45.000Z
2022-01-27T20:10:30.000Z
README.md
joyously/conic-gradient
68581386a48f84572befee347b8be19279878220
[ "MIT" ]
33
2015-06-19T15:48:27.000Z
2020-07-06T20:41:41.000Z
README.md
joyously/conic-gradient
68581386a48f84572befee347b8be19279878220
[ "MIT" ]
78
2015-06-18T18:05:46.000Z
2021-10-04T07:09:40.000Z
# Conic Gradient Polyfill Please visit http://leaverou.github.io/conic-gradient/ for usage instructions and demos.
29
88
0.801724
eng_Latn
0.760522
6160d483872899b71993249d7d4b0b505250c68a
129
md
Markdown
README.md
xinwuyun/note-in-xidian
d7d5f1f4fcdb7e9a51d22ab7196de6ed2422c010
[ "MIT" ]
571
2015-09-25T15:33:28.000Z
2022-03-19T08:29:14.000Z
README.md
xinwuyun/note-in-xidian
d7d5f1f4fcdb7e9a51d22ab7196de6ed2422c010
[ "MIT" ]
null
null
null
README.md
xinwuyun/note-in-xidian
d7d5f1f4fcdb7e9a51d22ab7196de6ed2422c010
[ "MIT" ]
180
2015-11-10T07:30:06.000Z
2021-09-16T00:29:48.000Z
# Post ## [网站收集](website.md) ## [工具](tools.md) ## [开源库](open\_source\_lib.md) ## [合集](broken-reference) ## [书单](shu-dan.md)
10.75
30
0.565891
yue_Hant
0.115529
61615f176050c2abacb9207d3d958fb5d11f706c
682
md
Markdown
README.md
coldspire/last-candle-flame
bc62c58965b491035187a8e95f5f9c3620af51d8
[ "MIT" ]
null
null
null
README.md
coldspire/last-candle-flame
bc62c58965b491035187a8e95f5f9c3620af51d8
[ "MIT" ]
null
null
null
README.md
coldspire/last-candle-flame
bc62c58965b491035187a8e95f5f9c3620af51d8
[ "MIT" ]
null
null
null
# Flame of the Last Candle 🕯&#xFE0F; Just another blog from a guy. ## Workflow Posts are written and pushed from [Forestry.io](https://forestry.io/). New posts are sent from Forestry to the `staging` branch. When it's time to publish one or more posts to the web, `staging` is merged to `main`. Finally, Netlify sees the `main` branch update, and then builds and deploys the site to [https://fotlc.netlify.app/](https://fotlc.netlify.app/). ### CL commands ``` npx eleventy ``` Or build and serve locally for local development: ``` npx eleventy --serve ``` Or build automatically when a template changes: ``` npx eleventy --watch ``` Or in debug mode: ``` DEBUG=* npx eleventy ```
20.666667
145
0.711144
eng_Latn
0.995366
6161781adb74d805a0ac7f47e53cce8d52d67d69
3,221
md
Markdown
docs/authorization/groups-sync.md
dendisuhubdy/kowl
9efdbac6bb5959b97bf5566c010e5e7df94c9ddb
[ "Apache-2.0" ]
1
2021-04-19T06:45:39.000Z
2021-04-19T06:45:39.000Z
docs/authorization/groups-sync.md
gitchong/kowl
52dc2b46ab9cea8d6af6f2e2cf838df1e7c1706f
[ "Apache-2.0" ]
null
null
null
docs/authorization/groups-sync.md
gitchong/kowl
52dc2b46ab9cea8d6af6f2e2cf838df1e7c1706f
[ "Apache-2.0" ]
null
null
null
# Groups Sync <p align="center"> <b>:loudspeaker: This page documents Kowl Business exclusive features.</b> </p> If you want to bind Roles to a set of users (e.g. GitHub teams or Google Groups) you need to grant Kowl a few additional permissions, so that it can resolve the memberships of these user sets. This page guides you through the required steps for each supported provider. ## Google ### Introduction Google Groups is a Google service which belongs to GSuite. Businesses may use it to organize their employees in groups so that they can manage permissions on a group level. A group can be as simple as this: ``` Group: dev-team-checkout@mycompany.com Members: - employee.ab@mycompany.com - employee.cd@mycompany.com - employee.ef@mycompany.com ``` But it could also be a nested group like this: ``` Group: software-engineers@mycompany.com Members: - bi-reports@other-company.com (External Group, managed within a different organization) - dev-team-checkout@mycompany.com (Group) - dev-team-landing@mycompany.com (Group) - security-officer@mycompany.com (User) ``` Kowl supports either case (externally managed groups being a bit tricky). ### Configuration To configure the Google groups sync you need to create a service account in Google Cloud and later on make this account available in Kowl. [This guide](../provider-setup/google.md#4-google-groups-sync-optional) describes the process how to create the service account so that you'll end up with a JSON file which we'll need in the next section. Once you have created the service account and granted the permissions as shown in the guide, you need to make the service account (JSON file) accessible for Kowl. If you are going to run it in Kubernetes you could use volumes and volume mounts. 
```yaml login: # jwtSecret: set via --login.jwt-secret flag google: enabled: true clientId: xy-hash.apps.googleusercontent.com # clientSecret: set via --login.google.client-secret flag directory: serviceAccountFilepath: ./configs/sa-google-admin.json targetPrincipal: admin@mycompany.com ``` Only the following three properties are new and therefore relevant for the RBAC Sync on Groups: `serviceAccountFilepath` : Path to the JSON file which represents the service account `targetPrincipal` : An administrative email address on your organization's domain. This is the identity which will be impersonated by Kowl (e.g. `admin@mycompany.com`). ## GitHub ### Configuration To configure the GitHub groups sync you need to create a personal access token. The GitHub account you create the token for requires the permissions to resolve the memberships of the teams and organizations you want to use in your role bindings. Once you have created the personal access token, you need to add it to the Kowl config. You can either pass it via the arguments or simply put it into your YAML config: ```yaml login: # jwtSecret: set via --login.jwt-secret flag github: enabled: false clientId: clientSecret: # This can be set via the --login.github.client-secret flag as well directory: personalAccessToken: # This can be set via the --login.github.directory.personal-access-token flag as well ```
41.294872
343
0.764359
eng_Latn
0.992134
61619522f0d9ad288288dd55b2f734c84fef599c
429
md
Markdown
docs/zitadel-api/index.md
caos/zitadel-csharp
a9ce7a207e4eeec0cfedce253b0b63721fd2cb48
[ "Apache-2.0" ]
12
2020-12-03T20:53:20.000Z
2021-08-20T13:54:58.000Z
docs/zitadel-api/index.md
caos/zitadel-csharp
a9ce7a207e4eeec0cfedce253b0b63721fd2cb48
[ "Apache-2.0" ]
110
2020-12-18T13:20:29.000Z
2022-03-22T21:35:49.000Z
docs/zitadel-api/index.md
caos/zitadel-csharp
a9ce7a207e4eeec0cfedce253b0b63721fd2cb48
[ "Apache-2.0" ]
1
2021-11-16T10:09:32.000Z
2021-11-16T10:09:32.000Z
# Zitadel gRPC Api Here you'll find the code documentation of the `Zitadel.Api` package. This is the official API of Zitadel. With this package, you are able to access the API methods of Zitadel directly via code. Authenticate yourself with a user or a service account and call methods on Zitadel itself via gRPC. Such methods include (non-exhaustive): - Fetch roles - Lock User - Fetch Projects - Create User - And so on...
26.8125
79
0.7669
eng_Latn
0.997989
6161b1b0e22fb293811dceaffbf87196558f1a71
863
md
Markdown
README.md
yangzhiquan/yangzhiquan.github.io
d743e5b2585926f0043b2c7ab8841c2114f09c2f
[ "CC-BY-4.0" ]
null
null
null
README.md
yangzhiquan/yangzhiquan.github.io
d743e5b2585926f0043b2c7ab8841c2114f09c2f
[ "CC-BY-4.0" ]
null
null
null
README.md
yangzhiquan/yangzhiquan.github.io
d743e5b2585926f0043b2c7ab8841c2114f09c2f
[ "CC-BY-4.0" ]
null
null
null
### 我的 [GitBook](https://yangzhiquan.github.io/) 最近更新: * [Objective-C的闭包与泛型](./iOS/ClosureAndGeneric.md) * [iOS单元测试](./CleanCoder/iOS-Unit-Testing.md) ## 目录 | 年份 | 文章 | |:-------:|:------| | 2021 | [Objective-C的闭包与泛型](./iOS/ClosureAndGeneric.md) <br> [iOS单元测试](./CleanCoder/iOS-Unit-Testing.md) | 学而不思则惘,思而不学则殆 <!-- ## 微信公众号 ![](https://github.com/xxx.png) --> <!-- #### Sharing <a rel="license" href="http://creativecommons.org/licenses/by/4.0/"><img alt="知识共享许可协议" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/88x31.png" /></a><br />以上文档由 <a xmlns:cc="http://creativecommons.org/ns#" href="https://github.com/yangzhiquan/yangzhiquan.github.io" property="cc:attributionName" rel="cc:attributionURL">yvan</a> 编写,转载请遵循<a rel="license" href="http://creativecommons.org/licenses/by/4.0/">知识共享许可协议</a> --> ### 关于我 移动端开发,会点前端.
30.821429
444
0.659328
yue_Hant
0.532934
6163be523b58b753d204510c7568d042eaa26da5
245
md
Markdown
changelog/unreleased/refactor-structure.md
promhippie/hetzner_exporter
56cbcdddf1df8bd1a681e6ce56e0d62588e9f7e3
[ "Apache-2.0" ]
13
2018-09-24T14:59:41.000Z
2022-01-05T22:36:49.000Z
changelog/unreleased/refactor-structure.md
webhippie/hetzner_exporter
2af78f8e2d1891e2b44a7bd9199a2ab9d6c0aacd
[ "Apache-2.0" ]
11
2018-10-04T15:43:42.000Z
2021-05-18T19:38:47.000Z
changelog/unreleased/refactor-structure.md
webhippie/hetzner_exporter
2af78f8e2d1891e2b44a7bd9199a2ab9d6c0aacd
[ "Apache-2.0" ]
2
2018-10-05T11:57:31.000Z
2021-08-10T02:11:56.000Z
Change: Refactor build tools and project structure To have a unified project structure and build tooling, we have integrated the same structure we already use within our GitHub exporter. https://github.com/promhippie/hetzner_exporter/issues/16
35
76
0.828571
eng_Latn
0.982363
6164bf108a3c408388a2a18125844362bed98cc6
268
md
Markdown
content/parts/footnote.md
SasakiPeter/wild-boar
d3b37248e7dd3331a628b001605a4c2112f28ef0
[ "MIT" ]
1
2020-01-11T04:21:30.000Z
2020-01-11T04:21:30.000Z
content/parts/footnote.md
SasakiPeter/wild-boar
d3b37248e7dd3331a628b001605a4c2112f28ef0
[ "MIT" ]
null
null
null
content/parts/footnote.md
SasakiPeter/wild-boar
d3b37248e7dd3331a628b001605a4c2112f28ef0
[ "MIT" ]
null
null
null
--- title: footnote --- * copyright 2020 [Sasaki Peter](https://github.com/sasakipeter) * built with [Gatsby](https://www.gatsbyjs.org/) & [React](https://reactjs.org/) * delivered by [Netlify](https://www.netlify.com/) * photos by [pixabay.com](https://pixabay.com)
29.777778
80
0.690299
yue_Hant
0.546925
6165326e8fb992266554c2836dcb9519c1ff5af7
155
md
Markdown
_pages/ClinicalTrial.md
minddingstat/minddingstat.github.io
f79f08eaef22fbd63e292aed16518a5764f87874
[ "MIT" ]
null
null
null
_pages/ClinicalTrial.md
minddingstat/minddingstat.github.io
f79f08eaef22fbd63e292aed16518a5764f87874
[ "MIT" ]
null
null
null
_pages/ClinicalTrial.md
minddingstat/minddingstat.github.io
f79f08eaef22fbd63e292aed16518a5764f87874
[ "MIT" ]
null
null
null
--- title: "Clinical Trial" author_profile: true layout: category permalink: /categories/ClinicalTrial/ taxonomy: ClinicalTrial sidebar: nav: "foo" ---
15.5
37
0.748387
yue_Hant
0.202879
6165a9806ae2e511a1784e42a389c5f2a5e8038c
3,434
md
Markdown
docs/profiling/counter.md
rfakhouri/visualstudio-docs.cs-cz
3d540a168c09a23b855f746696062fd9954b8dd5
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/profiling/counter.md
rfakhouri/visualstudio-docs.cs-cz
3d540a168c09a23b855f746696062fd9954b8dd5
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/profiling/counter.md
rfakhouri/visualstudio-docs.cs-cz
3d540a168c09a23b855f746696062fd9954b8dd5
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: Čítač | Dokumentace Microsoftu ms.date: 11/04/2016 ms.topic: conceptual ms.assetid: aa4b4cdb-e6ea-433a-9579-56f3785e1385 author: mikejo5000 ms.author: mikejo manager: jillfra ms.workload: - multiple ms.openlocfilehash: e21e364d05641089fb7400fbbfa9873510037d62 ms.sourcegitcommit: 94b3a052fb1229c7e7f8804b09c1d403385c7630 ms.translationtype: MT ms.contentlocale: cs-CZ ms.lasthandoff: 04/23/2019 ms.locfileid: "62553081" --- # <a name="counter"></a>Čítač **Čítač** možnost shromažďuje data z čítače výkonu procesoru (hardwaru). - Při použití metoda profilování vzorkování **čítač** určuje čítač výkonu na čipu a počet událostí čítače pro použití jako interval vzorkování. Při použití vzorkování, můžete zadat pouze jeden čítač. - Při použití metoda profilace instrumentace, číslo události čítače, ke kterým došlo v intervalu mezi událostmi předchozích a aktuálních kolekce jsou uvedené jako samostatná pole v sestavách profileru. Více **čítač** možnosti lze zadat, pokud používáte instrumentace. Každý typ procesoru má svou vlastní sadu čítačů výkonu hardwaru. Profiler definuje sadu čítačů obecný výkonu, které jsou společné pro téměř všechny procesory. Chcete-li seznam čítačů obecný a specifické pro procesor v počítači, použijte příkazu vsperfcmd proveďte **QueryCounters** příkazu. ## <a name="syntax"></a>Syntaxe ```cmd VSPerfCmd.exe {/Launch:AppName | /Attach PID} /Counter:Name[,Reload[,FriendlyName]][Options] ``` ```cmd VSPerfCmd.exe /Start:Method /Counter:Name[,Reload[,FriendlyName]][/Counter:Name[,Reload[,FriendlyName]]][Options] ``` #### <a name="parameters"></a>Parametry `Name` Název čítače. Použít VSPerfCmd.exe **/querycounters** možnost Zobrazit jména dostupné čítače v počítači. `Reload` Počet událostí čítače v intervalu vzorkování. Nepoužívejte pomocí metody instrumentace. `FriendlyName` (Volitelné) Řetězec, který má použít místo `Name` v záhlaví sloupce sestavy profileru a zobrazení. 
## <a name="required-options"></a>Požadované možnosti Možnost čítačů jde použít jenom s jedním z následujících možností: **Spusťte:** `Trace` Inicializuje možnost profileru pomocí metody instrumentace. **Spuštění:** `AppName` Spustí se zadanou aplikaci a profiler. Profiler musí být inicializovaný na použití metody vzorkování. **Připojení:** `PID` Spuštění profileru a připojí ho k proces zadaný pomocí ID procesu. Profiler musí být inicializovaný na použití metody vzorkování. ## <a name="example"></a>Příklad Ukázka metody vzorkování ukazuje, jak ukázková aplikace na každých 1000 výskyty NonHaltedCycles čítače obecný profileru. Ukázka metody instrumentace ukazuje, jak inicializovat profileru začnete shromažďovat události L2InstructionFetches čítače. Název čítače L2InstructionFetches je specifické pro procesor. ```cmd ; Sample Method Example VSPerfCmd.exe /Start:Sample /Output:TestApp.exe.vsp VSPerfCmd.exe /Launch:TestApp.exe /Counter:NonHaltedCycles,1000,"Non-Halted Cycles" ;INSTRUMENTATION METHOD EXAMPLE VSPerfCmd.exe /Start:Trace /Output:TestApp.exe.vsp /Counter:L2InstructionFetches,,"L2 Cache Instruction Fetches" ``` ## <a name="see-also"></a>Viz také: - [VSPerfCmd](../profiling/vsperfcmd.md) - [Samostatné aplikace profilu](../profiling/command-line-profiling-of-stand-alone-applications.md) - [Webové aplikace ASP.NET profilu](../profiling/command-line-profiling-of-aspnet-web-applications.md) - [Profil služby](../profiling/command-line-profiling-of-services.md)
48.366197
292
0.790914
ces_Latn
0.991192
616649efb491b916a8d6d3587ce33aaacb0e61ba
651
md
Markdown
app/_posts/2014-10-15-shousen.md
chrhsmt/osechi
16804981a66c3e28ecb92674774e8b4448bb1672
[ "MIT" ]
null
null
null
app/_posts/2014-10-15-shousen.md
chrhsmt/osechi
16804981a66c3e28ecb92674774e8b4448bb1672
[ "MIT" ]
21
2018-12-14T08:35:53.000Z
2018-12-14T08:43:13.000Z
app/_posts/2014-10-15-shousen.md
chrhsmt/osechi
16804981a66c3e28ecb92674774e8b4448bb1672
[ "MIT" ]
null
null
null
--- layout: post title: "ネットもデパートも熱い商戦" header_image: "http://mainichi.jp/graph/2014/10/15/20141015k0000e020188000c/image/002.jpg" header_image_alt: "おせち商戦" date: 2014-10-15 23:20:00 +0900 updatedDate: 2014-10-15 23:20:00 +0900 categories: recommend keywords: "おせち商戦" --- どうも。管理人です。 本日のおせちニュースはこちら。 <!-- more --> ## [おせち:ネットもデパートも熱い商戦「味おためし」もあり / 毎日新聞](http://mainichi.jp/select/news/20141015k0000e020188000c.html) ネットでもリアルなデパートでも、おせち商戦が熱くなってきているようです。 ネット上だけでは味などが分からない点に対応するために、「おためしおせち」として、販売をしているものも出てきているようですね。 スーパーからは、今年はダイエー(東京)とイオン九州(福岡市)が初めて参入。 また、人気アイドルグループ「AKB48」が献立作りに加わったという商品(1万800円、2人用)などが販売されるなど、デパート各社も工夫を凝らしているようです。 ではでは。
22.448276
101
0.781874
jpn_Jpan
0.593077
6166a7ae33b8867d5dbf637703e4fd0fdbdbc670
840
md
Markdown
exampleSite/content/teaching.md
maibennett/website_github
2359bfa0ad046c7f084017b168209b76dcb2ecae
[ "MIT" ]
2
2019-04-03T04:02:59.000Z
2020-02-09T18:12:36.000Z
exampleSite/content/teaching.md
maibennett/website_github
2359bfa0ad046c7f084017b168209b76dcb2ecae
[ "MIT" ]
null
null
null
exampleSite/content/teaching.md
maibennett/website_github
2359bfa0ad046c7f084017b168209b76dcb2ecae
[ "MIT" ]
2
2020-02-08T21:04:42.000Z
2020-06-24T18:53:20.000Z
+++ title = "Teaching" slug = "teaching" +++ Here is a list of the courses I've taught, and some useful material I've created for them. *Note: All these materials were created thanks to the awesome contributions of many researchers and professors that have generously put their courses and material out there, and they are all referenced accordingly at the end of my slides in each class.* # Fall 2021 **STA 235 - Data Science for Business Applications (Honors)** :computer: You can check out all the course material, including the syllabus, slides, and assignments at [sta235.netlify.app](https://sta235.netlify.app) <iframe src="https://sta235.netlify.app" width="100%" height="900px"></iframe> # Spring 2021 **STA 235 - Data Science for Business Applications** :computer: For previous version of the course, please email me.
38.181818
253
0.753571
eng_Latn
0.996482
6166fd43699540c352d2baf6e5c1eff88542a41b
3,075
md
Markdown
docs/apis/open-api/subscribe-message/requestSubscribeMessage.md
ksti/taro
46ecfe27c0965cca6256a08a8462f83c71e16f04
[ "MIT" ]
2
2020-06-28T06:22:11.000Z
2020-07-08T03:49:38.000Z
docs/apis/open-api/subscribe-message/requestSubscribeMessage.md
ksti/taro
46ecfe27c0965cca6256a08a8462f83c71e16f04
[ "MIT" ]
1
2021-01-11T09:34:55.000Z
2021-01-11T09:34:55.000Z
docs/apis/open-api/subscribe-message/requestSubscribeMessage.md
ksti/taro
46ecfe27c0965cca6256a08a8462f83c71e16f04
[ "MIT" ]
1
2020-10-20T07:46:56.000Z
2020-10-20T07:46:56.000Z
--- title: Taro.requestSubscribeMessage(option) sidebar_label: requestSubscribeMessage --- 请求订阅消息 注意:[2.8.2](https://developers.weixin.qq.com/miniprogram/dev/framework/compatibility.html) 版本开始,用户发生点击行为或者发起支付回调后,才可以调起订阅消息界面。 > [参考文档](https://developers.weixin.qq.com/miniprogram/dev/api/open-api/subscribe-message/wx.requestSubscribeMessage.html) ## 类型 ```tsx (option: Option) => Promise<SuccessCallbackResult | FailCallbackResult> ``` ## 参数 ### Option <table> <thead> <tr> <th>参数</th> <th>类型</th> <th style="text-align:center">必填</th> <th>说明</th> </tr> </thead> <tbody> <tr> <td>tmplIds</td> <td><code>any[]</code></td> <td style="text-align:center">是</td> <td>需要订阅的消息模板的id的集合(注意:iOS客户端7.0.6版本、Android客户端7.0.7版本之后的一次性订阅/长期订阅才支持多个模板消息,iOS客户端7.0.5版本、Android客户端7.0.6版本之前的一次订阅只支持一个模板消息)消息模板id在[微信公众平台(mp.weixin.qq.com)-功能-订阅消息]中配置</td> </tr> <tr> <td>complete</td> <td><code>(res: CallbackResult) =&gt; void</code></td> <td style="text-align:center">否</td> <td>接口调用结束的回调函数(调用成功、失败都会执行)</td> </tr> <tr> <td>fail</td> <td><code>(result: FailCallbackResult) =&gt; void</code></td> <td style="text-align:center">否</td> <td>接口调用失败的回调函数</td> </tr> <tr> <td>success</td> <td><code>(result: SuccessCallbackResult) =&gt; void</code></td> <td style="text-align:center">否</td> <td>接口调用成功的回调函数</td> </tr> </tbody> </table> ### FailCallbackResult <table> <thead> <tr> <th>参数</th> <th>类型</th> <th>说明</th> </tr> </thead> <tbody> <tr> <td>errCode</td> <td><code>number</code></td> <td>接口调用失败错误码</td> </tr> <tr> <td>errMsg</td> <td><code>string</code></td> <td>接口调用失败错误信息</td> </tr> </tbody> </table> ### SuccessCallbackResult <table> <thead> <tr> <th>参数</th> <th>类型</th> <th>说明</th> </tr> </thead> <tbody> <tr> <td>[TEMPLATE_ID]</td> <td><code>&quot;accept&quot; | &quot;reject&quot; | &quot;ban&quot;</code></td> <td>动态的键,即模板id</td> </tr> <tr> <td>errMsg</td> <td><code>string</code></td> <td>接口调用成功时errMsg值为'requestSubscribeMessage:ok'</td> </tr> </tbody> </table> #### 示例代码 表示用户同意订阅 
zun-LzcQyW-edafCVvzPkK4de2Rllr1fFpw2A_x0oXE 这条消息 ```json { "errMsg": "requestSubscribeMessage:ok", "zun-LzcQyW-edafCVvzPkK4de2Rllr1fFpw2A_x0oXE": "accept" } ``` ### template_reflex 模版消息订阅类型 <table> <thead> <tr> <th>参数</th> <th>说明</th> </tr> </thead> <tbody> <tr> <td>accept</td> <td>表示用户同意订阅该条id对应的模板消息</td> </tr> <tr> <td>reject</td> <td>表示用户拒绝订阅该条id对应的模板消息</td> </tr> <tr> <td>ban</td> <td>表示已被后台封禁</td> </tr> </tbody> </table> ## 示例代码 ```tsx Taro.requestSubscribeMessage({ tmplIds: [''], success: function (res) { } }) ``` ## API 支持度 | API | 微信小程序 | H5 | React Native | | :---: | :---: | :---: | :---: | | Taro.requestSubscribeMessage | ✔️ | | |
19.339623
180
0.565203
yue_Hant
0.318271
61677e37ea3b5e5cb50042bf149d0f8890b13ea1
5,958
md
Markdown
README.md
marnunez/auto-marker-label
0d42b2b7a1d45c79b42baa3bd650299b8f16a4e0
[ "Apache-2.0" ]
null
null
null
README.md
marnunez/auto-marker-label
0d42b2b7a1d45c79b42baa3bd650299b8f16a4e0
[ "Apache-2.0" ]
null
null
null
README.md
marnunez/auto-marker-label
0d42b2b7a1d45c79b42baa3bd650299b8f16a4e0
[ "Apache-2.0" ]
null
null
null
# An Open Source Algorithm for the Automatic Labelling of Motion Capture Markers using Deep Learning An algorithm that uses machine learning to automatically label optical motion capture markers. The algorithm can be trained on existing data or simulated marker trajectories. Data and code is provided to generate the simulated trajectories for custom marker sets. ![Marker Labelling GUI](/images/auto-marker-label-GUI.jpg) ## Installation This code has been tested using Python 3.7. The following packages are required and can be installed using pip. The version used in testing is listed. See requirements.txt for the full list of versions tested. ``` dash==1.14.0 dash-bootstrap-components==0.10.3 hdf5==1.10.4 ipywidgets==7.5.1 mydcc==0.1.22 numpy==1.19.1 pandas==1.0.5 plotly==4.9.0 scikit-learn==0.22.1 scipy==1.5.2 torch==1.5.1 ``` [ezc3d](https://github.com/pyomeca/ezc3d) is also required and can be installed using conda `conda install -c conda-forge ezc3d` ## Use ### Generate Simulated Trajectories If you do not have existing labelled motion capture data to train the algorithm, simulated trajectories can be generated. If you have your own training data, skip this step. - Download the body segment kinematics used to generate the simulated trajectories here: [![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.4293999.svg)](https://doi.org/10.5281/zenodo.4293999). Extract the .hdf5 file from the zip file and place it in the data folder. - Next, the marker set to be used must be defined using an OpenSim marker set .xml file. - Install [OpenSim](https://simtk.org/frs/index.php?group_id=91) - In OpenSim, Open Model Rajagopal2015_mod.osim included in /data folder - Right click on Markers in the Navigator to add new marker. Marker can be selected and moved in the visualization. Set the marker's name and parent_frame (ie body segment it is attached to) in the Properties window. - Save the marker set by right clicking Markers in the Navigator and choosing Save to File. 
- Set parameters in **generateSimTrajectories.py** - Set the path to the marker set .xml file. - Set the markers used to align the data. Ideally, these will be a marker on the left and right side of the torso or head (eg. right and left acromions). These are used to rotate the data about the vertical axis so that the person faces the +x direction. - Run **generateSimTrajectories.py** ### Train the Algorithm - Set parameters in **trainAlgorithm.py**. - If using simulated trajectories to train, set the path to the marker set and to the pickle file generated by generateSimTrajectories.py. - If using existing labelled data to train, set the path to the folder containing the c3d files. Set the markers used to align the data. Ideally, these will be a marker on the left and right side of the torso or head (eg. right and left acromions). - Define an OpenSim marker set, if not done already (see above). - Run **trainAlgorithm.py**. The training will be performed on a GPU, if one is available. Note that this may take a long time to run (ie. hours - days). Training time can be reduced by using less training data (set num_participants in generateSimTrajectories.py). ### Setup the GUI - Set the paths to trained model, trained values pickle file, and marker set in **markerLabelGUI.py**. - Run **markerLabelGUI.py**, this will open the GUI in your browser. ### Using the GUI - Enter the folder path where the c3d files to be labelled are located. - Select the desired file from the dropdown menu. - Click *Load Data*, the loading is complete when the plot of markers appears. - If the person is not facing the +x direction, enter the angle to rotate the data about the z-axis (vertical) and click *Submit*. This angle should be chosen such that the person faces +x for the majority of the trial. - Click *Label Markers*. - Examine the results. - Clicking and dragging rotates, the scroll wheel zooms, and right clicking translates the plot.
Hovering over a marker displays the marker number, label, and coordinates. - The slider at the bottom can be used to view different time frames. After clicking the slider, the left and right arrow keys can be used to adjust the timeframe as well. - The type of marker visualization can be selected from the *Visualization* dropdown menu. *Confidence* colours the markers based on the confidence in the predicted label. *Unlabelled* highlights unlabelled markers in red. *Segments* displays markers attached to the same body segment in the same colour. - The *Error Detection* box lists unlabelled markers and duplicated labels. The time frames where the error was detected are displayed. Note that the listed marker is guaranteed to be visible for the first and last frames, but may be missing from the intermediate frames of the listed range. - Correct incorrect labels using the *Marker Label Modifier*. Type the number for the marker to change the label and select the desired label from the dropdown then click the *Submit* button. To leave a marker unlabelled, leave the dropdown menu blank (this can be cleared by clicking the 'x'). - Export a labelled version of the .c3d file by clicking the *Export to C3D* button. This will rotate the data back to the original orientation and fill small gaps through cubic spline interpolation. Unlabelled markers will not be exported. - Before loading a new c3d file, click the *Refresh Settings* button. ### Transfer Learning - As data is collected, labelled, and corrected, it can be added to the training set through transfer learning to improve the algorithm - In **transferLearning.py**, set the path for the trained model and training values to build on (.ckpt and .pickle file) and the folder containing the .c3d files to add to the training set. Set the markers used to align the data. Ideally, these will be a marker on the left and right side of the torso or head (eg. right and left acromions). - Run **transferLearning.py**
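The z-axis rotation the GUI applies before labelling (so that the subject faces the +x direction) can be illustrated with a small stdlib-only Python sketch. This is a hypothetical helper for illustration, not code from this repository:

```python
import math

def rotate_about_z(points, angle_deg):
    """Rotate (x, y, z) marker positions about the vertical z-axis.

    A positive angle rotates counter-clockwise when viewed from above;
    choosing the right angle turns the subject to face +x. The z
    (vertical) coordinate is unchanged.
    """
    a = math.radians(angle_deg)
    c, s = math.cos(a), math.sin(a)
    return [(c * x - s * y, s * x + c * y, z) for x, y, z in points]
```

For example, a marker at (1, 0, 0.5) rotated by 90 degrees lands at (0, 1, 0.5); exporting applies the inverse rotation to restore the original orientation.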
81.616438
342
0.772575
eng_Latn
0.997713
6167c6077d86419900ad1da746fe7bb353e99b2c
3,575
md
Markdown
src/mavlink/pymavlink/README.md
Mukund-MTX/SkyLark
6587a4f5bb81deccaf6e4a577c8db03a31025632
[ "MIT" ]
10
2021-03-15T03:58:06.000Z
2021-12-30T15:33:38.000Z
src/mavlink/pymavlink/README.md
Mukund-MTX/SkyLark
6587a4f5bb81deccaf6e4a577c8db03a31025632
[ "MIT" ]
4
2021-05-03T16:58:53.000Z
2021-12-21T21:01:02.000Z
src/mavlink/pymavlink/README.md
mars-tu/SkyLark
aa658376d00a700594154197b81aaf11639948f0
[ "MIT" ]
9
2021-04-28T15:26:34.000Z
2021-12-21T20:41:30.000Z
[![Build Status](https://travis-ci.org/ArduPilot/pymavlink.svg?branch=master)](https://travis-ci.org/ArduPilot/pymavlink) # Pymavlink This is a Python implementation of the MAVLink protocol. It includes a source code generator (generator/mavgen.py) to create MAVLink protocol implementations for other programming languages as well. Also contains tools for analyzing flight logs. # Documentation Please see http://ardupilot.org/dev/docs/mavlink-commands.html for mavlink command reference. For realtime discussion please see the pymavlink [Gitter channel](https://gitter.im/ArduPilot/pymavlink) Examples can be found [in the repository](examples/) or in the [ArduSub book](https://www.ardusub.com/developers/pymavlink.html) # Installation Pymavlink supports both Python 2 and Python 3. The following instructions assume you are using Python 3 and a Debian-based (like Ubuntu) installation. .. note:: pymavlink assumes the command "python" is in your path. Your distribution may provide a package such as "python-is-python3" to ensure that "python" is in your path. ## Dependencies Pymavlink has several dependencies : - [future](http://python-future.org/) : for Python 2 and Python 3 interoperability - [lxml](http://lxml.de/installation.html) : for checking and parsing xml file - python3-dev : for mavnative (or python-dev for Python 2) - a C compiler : for mavnative Optional : - numpy : for FFT - pytest : for tests ### On Linux lxml has some additional dependencies that can be installed with your package manager (here with `apt-get`) : .. note:: If you continue to use Python 2 you may need to change package names here (e.g. 
python3-dev => python-dev) ```bash sudo apt-get install gcc python3-dev libxml2-dev libxslt-dev ``` Optional for FFT scripts and tests: ```bash sudo apt-get install python3-numpy python3-pytest ``` Using pip you can install the required dependencies for pymavlink : ```bash sudo python -m pip install --upgrade future lxml ``` ### On Windows Use pip to install future as for Linux. Lxml can be installed with a Windows installer from here : https://pypi.org/project/lxml ## Installation ### For users It is recommended to install pymavlink from PyPI with pip, that way dependencies should be auto installed by pip. ```bash sudo python -m pip install --upgrade pymavlink ``` #### Mavnative By default, pymavlink will try to compile and install mavnative which is a C extension for parsing mavlink. Mavnative only supports mavlink1. To skip mavnative installation and reduce dependencies like `gcc` and `python-dev`, you can pass `DISABLE_MAVNATIVE=True` environment variable to the installation command: ```bash sudo DISABLE_MAVNATIVE=True python -m pip install --upgrade pymavlink ``` ### For developers From the pymavlink directory, you can use : ```bash sudo MDEF=PATH_TO_message_definitions python -m pip install . -v ``` Since pip installation is executed from /tmp, it is necessary to point to the directory containing message definitions with MDEF. MDEF should not be set to any particular message version directory but the parent folder instead. If you have cloned from mavlink/mavlink then this is ```/mavlink/message_definitions``` . Using pip should auto install dependencies and allow you to keep them up-to-date. Or: ```bash sudo python setup.py install ``` # License --------- pymavlink is released under the GNU Lesser General Public License v3 or later. The source code generated by generator/mavgen.py is available under the permissive MIT License.
32.207207
400
0.760559
eng_Latn
0.986055
6167fbe67ba3295de04e3bc6c334ead479832be0
681
md
Markdown
tournaments/reverseVowelsOfString/reverseVowelsOfString.md
gurfinkel/codeSignal
114817947ac6311bd53a48f0f0e17c0614bf7911
[ "MIT" ]
5
2020-02-06T09:51:22.000Z
2021-03-19T00:18:44.000Z
tournaments/reverseVowelsOfString/reverseVowelsOfString.md
gurfinkel/codeSignal
114817947ac6311bd53a48f0f0e17c0614bf7911
[ "MIT" ]
null
null
null
tournaments/reverseVowelsOfString/reverseVowelsOfString.md
gurfinkel/codeSignal
114817947ac6311bd53a48f0f0e17c0614bf7911
[ "MIT" ]
3
2019-09-27T13:06:21.000Z
2021-04-20T23:13:17.000Z
Write a function that takes a string as input and returns the string with only the vowels reversed. **Note**: The letters "a", "e", "i", "o", and "u" are vowels. The letter "y" is not a vowel. Example - For `s = "hello, world"`, the output should be `reverseVowelsOfString(s) = "hollo, werld"`; - For `s = "codesignal"`, the output should be `reverseVowelsOfString(s) = "cadisegnol"`; - For `s = "eIaOyU"`, the output should be `reverseVowelsOfString(s) = "UOaIye"`. Input/Output - **[execution time limit] 3 seconds (cs)** - **[input] string s** _Guaranteed constraints:_ `0 ≤ s.length ≤ 50`. - **[output] string**
28.375
101
0.612335
eng_Latn
0.945231
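The reverse-vowels problem in the record above has a standard two-pointer solution: walk inward from both ends of the string, skip any character that is not a vowel, and swap whenever both pointers rest on vowels. A minimal Python sketch (the snake_case function name is my own choice; the statement names the function `reverseVowelsOfString`):

```python
def reverse_vowels_of_string(s):
    """Return s with only its vowels reversed; 'y' does not count as a vowel."""
    vowels = set("aeiouAEIOU")
    chars = list(s)
    i, j = 0, len(chars) - 1
    while i < j:
        if chars[i] not in vowels:
            i += 1            # left pointer is on a consonant; keep it in place
        elif chars[j] not in vowels:
            j -= 1            # right pointer is on a consonant; keep it in place
        else:
            # both pointers rest on vowels: swap them and move both inward
            chars[i], chars[j] = chars[j], chars[i]
            i += 1
            j -= 1
    return "".join(chars)
```

This reproduces the three examples from the statement, e.g. `reverse_vowels_of_string("hello, world")` yields `"hollo, werld"`, and runs in O(n) time with a single pass.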
6169417f9ab61cf6fd14672335eb03a75fb32564
41,796
md
Markdown
security-updates/SecurityBulletins/2013/ms13-089.md
MicrosoftDocs/security-updates.zh-tw
a7899f202a462bc504303f28e9c2a8d41cfa99a0
[ "CC-BY-4.0", "MIT" ]
3
2019-01-24T02:18:29.000Z
2020-05-19T20:17:25.000Z
security-updates/SecurityBulletins/2013/ms13-089.md
MicrosoftDocs/security-updates.zh-tw
a7899f202a462bc504303f28e9c2a8d41cfa99a0
[ "CC-BY-4.0", "MIT" ]
257
2017-12-11T09:12:37.000Z
2019-12-06T23:07:01.000Z
security-updates/SecurityBulletins/2013/ms13-089.md
MicrosoftDocs/security-updates.zh-tw
a7899f202a462bc504303f28e9c2a8d41cfa99a0
[ "CC-BY-4.0", "MIT" ]
5
2018-10-12T21:08:18.000Z
2021-11-15T11:25:34.000Z
--- TOCTitle: 'MS13-089' Title: 'Microsoft Security Bulletin MS13-089 - 重大' ms:assetid: 'ms13-089' ms:contentKeyID: 61238904 ms:date: '04/18/2014' ms:mtpsurl: 'https://technet.microsoft.com/zh-TW/library/ms13-089(v=Security.10)' --- Security Bulletin Microsoft Security Bulletin MS13-089 - 重大 =========================================== Windows 圖形裝置介面中的資訊安全風險可能會允許遠端執行程式碼 (2876331) ---------------------------------------------------------------------- 發行: 2013年11月13日 **版本:** 1.0 ### 一般資訊 #### 提要 此資訊安全更新可解決 Microsoft Windows 中一項未公開報告的資訊安全風險。如果使用者透過 WordPad 檢視或開啟蓄意製作的 Windows Write 檔案,此資訊安全風險可能會允許遠端執行程式碼。成功利用此資訊安全風險的攻擊者可以取得與目前使用者相同的使用者權限。系統上帳戶使用者權限較低的使用者,其受影響的程度比擁有系統管理權限的使用者要小。 對於所有受支援版本的 Microsoft Windows,此資訊安全更新的等級為「重大」。如需更多資訊,請參閱本節中的<受影響及不受影響的軟體>小節。 此資訊安全更新可修改圖形裝置介面處理影像檔案時處理整數計算的方式,進而解決此資訊安全風險。如需更多有關此資訊安全風險的資訊,請參閱下節**<資訊安全風險資訊>**下的特定資訊安全風險項目的<常見問題集 (FAQ)>小節。 **建議。** 大部分客戶都已啟用自動更新,並且不必須採取任何行動,因為資訊安全更新將自動下載和安裝。沒有啟用自動更新的客戶則必須檢查更新,並手動安裝更新。如需有關自動更新中特定設定選項的資訊,請參閱 [Microsoft 知識庫文件編號 294871](https://support.microsoft.com/kb/294871?ln=zh-tw)。 若是系統管理員和企業安裝,或是想要手動安裝此資訊安全更新的使用者,Microsoft 建議客戶使用更新管理軟體,立即套用更新,或使用 [Microsoft Update](https://go.microsoft.com/fwlink/?linkid=40747) 服務檢查更新。 另請參閱本公告下文的<偵測與部署工具及指南>一節。 #### 知識庫文件 | 知識庫文件 | [2876331](https://support.microsoft.com/kb/2876331) | |----------------|-----------------------------------------------------| | 檔案資訊 | 是 | | SHA1/SHA2 雜湊 | 是 | | 已知問題 | 無 | #### 受影響及不受影響的軟體 下列軟體已經過測試判斷哪些版號或版本會受到影響。其他版本超過它們的支援週期或不受影響。若要瞭解您的軟體版本的支援週期,請參閱 [Microsoft 產品技術支援週期網站](https://support.microsoft.com/default.aspx?scid=fh;%5bln%5d;lifecycle)。 **受影響的軟體** <p></p> <table style="border:1px solid black;"> <tr class="thead"> <th style="border:1px solid black;" > 作業系統 </th> <th style="border:1px solid black;" > 最大資訊安全影響 </th> <th style="border:1px solid black;" > 彙總嚴重性等級 </th> <th style="border:1px solid black;" > 已取代更新 </th> </tr> <tr> <th colspan="4"> Windows XP </th> </tr> <tr> <td style="border:1px solid black;"> [Windows XP Service Pack 
3](https://www.microsoft.com/download/details.aspx?familyid=766cb024-b595-451f-8ae5-4b5e7838d478) (2876331) </td> <td style="border:1px solid black;"> 遠端執行程式碼 </td> <td style="border:1px solid black;"> 重大 </td> <td style="border:1px solid black;"> [MS08-071](https://technet.microsoft.com/zh-tw/security/bulletin/ms08-071) 中的 956802 </td> </tr> <tr class="alternateRow"> <td style="border:1px solid black;"> [Windows XP Professional x64 Edition Service Pack 2](https://www.microsoft.com/download/details.aspx?familyid=81db278c-1966-42fd-9b00-c7f24a0bbed6) (2876331) </td> <td style="border:1px solid black;"> 遠端執行程式碼 </td> <td style="border:1px solid black;"> 重大 </td> <td style="border:1px solid black;"> [MS08-071](https://technet.microsoft.com/zh-tw/security/bulletin/ms08-071) 中的 956802 </td> </tr> <tr> <th colspan="4"> Windows Server 2003 </th> </tr> <tr> <td style="border:1px solid black;"> [Windows Server 2003 Service Pack 2](https://www.microsoft.com/download/details.aspx?familyid=3282dda5-5b43-4ae1-9854-6480178b8d8d) (2876331) </td> <td style="border:1px solid black;"> 遠端執行程式碼 </td> <td style="border:1px solid black;"> 重大 </td> <td style="border:1px solid black;"> [MS08-071](https://technet.microsoft.com/zh-tw/security/bulletin/ms08-071) 中的 956802 </td> </tr> <tr class="alternateRow"> <td style="border:1px solid black;"> [Windows Server 2003 x64 Edition Service Pack 2](https://www.microsoft.com/download/details.aspx?familyid=1e77aa03-2c59-46ca-9ff9-5b0ff454b3ce) (2876331) </td> <td style="border:1px solid black;"> 遠端執行程式碼 </td> <td style="border:1px solid black;"> 重大 </td> <td style="border:1px solid black;"> [MS08-071](https://technet.microsoft.com/zh-tw/security/bulletin/ms08-071) 中的 956802 </td> </tr> <tr> <td style="border:1px solid black;"> [適用於 Itanium 型系統的 Windows Server 2003 SP2](https://www.microsoft.com/download/details.aspx?familyid=f9ce6da3-4b68-4a30-baac-ae47ba793c95) (2876331) </td> <td style="border:1px solid black;"> 遠端執行程式碼 </td> <td 
style="border:1px solid black;"> 重大 </td> <td style="border:1px solid black;"> [MS08-071](https://technet.microsoft.com/zh-tw/security/bulletin/ms08-071) 中的 956802 </td> </tr> <tr> <th colspan="4"> Windows Vista </th> </tr> <tr class="alternateRow"> <td style="border:1px solid black;"> [Windows Vista Service Pack 2](https://www.microsoft.com/download/details.aspx?familyid=76ee7e22-d64a-41e7-b8f7-a1d377782053) (2876331) </td> <td style="border:1px solid black;"> 遠端執行程式碼 </td> <td style="border:1px solid black;"> 重大 </td> <td style="border:1px solid black;"> [MS08-071](https://technet.microsoft.com/zh-tw/security/bulletin/ms08-071) 中的 956802 </td> </tr> <tr> <td style="border:1px solid black;"> [Windows Vista x64 Edition Service Pack 2](https://www.microsoft.com/download/details.aspx?familyid=a5122746-5ffb-4915-8864-f9dd413ec0d0) (2876331) </td> <td style="border:1px solid black;"> 遠端執行程式碼 </td> <td style="border:1px solid black;"> 重大 </td> <td style="border:1px solid black;"> [MS08-071](https://technet.microsoft.com/zh-tw/security/bulletin/ms08-071) 中的 956802 </td> </tr> <tr> <th colspan="4"> Windows Server 2008 </th> </tr> <tr class="alternateRow"> <td style="border:1px solid black;"> [適用於 32 位元系統的 Windows Server 2008 Service Pack 2](https://www.microsoft.com/download/details.aspx?familyid=a6d0ebc5-321f-46ec-b7a3-61624fa6e6e2) (2876331) </td> <td style="border:1px solid black;"> 遠端執行程式碼 </td> <td style="border:1px solid black;"> 重大 </td> <td style="border:1px solid black;"> [MS08-071](https://technet.microsoft.com/zh-tw/security/bulletin/ms08-071) 中的 956802 </td> </tr> <tr> <td style="border:1px solid black;"> [適用於 x64 型系統的 Windows Server 2008 Service Pack 2](https://www.microsoft.com/download/details.aspx?familyid=bb36c042-fa2a-4d0d-a8d4-cac08b28cc63) (2876331) </td> <td style="border:1px solid black;"> 遠端執行程式碼 </td> <td style="border:1px solid black;"> 重大 </td> <td style="border:1px solid black;"> 
[MS08-071](https://technet.microsoft.com/zh-tw/security/bulletin/ms08-071) 中的 956802 </td> </tr> <tr class="alternateRow"> <td style="border:1px solid black;"> [適用於 Itanium 型系統的 Windows Server 2008 Service Pack 2](https://www.microsoft.com/download/details.aspx?familyid=ba8bcd2d-1c7b-4052-8129-bb1f8d0018f1) (2876331) </td> <td style="border:1px solid black;"> 遠端執行程式碼 </td> <td style="border:1px solid black;"> 重大 </td> <td style="border:1px solid black;"> [MS08-071](https://technet.microsoft.com/zh-tw/security/bulletin/ms08-071) 中的 956802 </td> </tr> <tr> <th colspan="4"> Windows 7 </th> </tr> <tr> <td style="border:1px solid black;"> [適用於 32 位元系統的 Windows 7 Service Pack 1](https://www.microsoft.com/download/details.aspx?familyid=108699b1-717f-4075-b087-d2740274fe9d) (2876331) </td> <td style="border:1px solid black;"> 遠端執行程式碼 </td> <td style="border:1px solid black;"> 重大 </td> <td style="border:1px solid black;"> 無 </td> </tr> <tr class="alternateRow"> <td style="border:1px solid black;"> [適用於 x64 型系統的 Windows 7 Service Pack 1](https://www.microsoft.com/download/details.aspx?familyid=508091ff-3dff-4ac4-b056-e1f77d4449af) (2876331) </td> <td style="border:1px solid black;"> 遠端執行程式碼 </td> <td style="border:1px solid black;"> 重大 </td> <td style="border:1px solid black;"> 無 </td> </tr> <tr> <th colspan="4"> Windows Server 2008 R2 </th> </tr> <tr> <td style="border:1px solid black;"> [適用於 x64 型系統的 Windows Server 2008 R2 Service Pack 1](https://www.microsoft.com/download/details.aspx?familyid=af1d1d5d-9109-45e7-88be-7ea7f0357a8a) (2876331) </td> <td style="border:1px solid black;"> 遠端執行程式碼 </td> <td style="border:1px solid black;"> 重大 </td> <td style="border:1px solid black;"> 無 </td> </tr> <tr class="alternateRow"> <td style="border:1px solid black;"> [適用於 Itanium 型系統的 Windows Server 2008 R2 Service Pack 1](https://www.microsoft.com/download/details.aspx?familyid=4f46ebc9-9ae1-4bea-928b-62ce902de386) (2876331) </td> <td style="border:1px solid black;"> 遠端執行程式碼 </td> <td 
style="border:1px solid black;"> 重大 </td> <td style="border:1px solid black;"> 無 </td> </tr> <tr> <th colspan="4"> Windows 8 和 Windows 8.1 </th> </tr> <tr> <td style="border:1px solid black;"> [適用於 32 位元系統的 Windows 8](https://www.microsoft.com/download/details.aspx?familyid=fbbe5bf9-0dad-4316-898c-debebc37dc88) (2876331) </td> <td style="border:1px solid black;"> 遠端執行程式碼 </td> <td style="border:1px solid black;"> 重大 </td> <td style="border:1px solid black;"> 無 </td> </tr> <tr class="alternateRow"> <td style="border:1px solid black;"> [適用於 x64 型系統的 Windows 8](https://www.microsoft.com/download/details.aspx?familyid=0c9446ff-f127-4a5b-8834-1dc15ada0c0f) (2876331) </td> <td style="border:1px solid black;"> 遠端執行程式碼 </td> <td style="border:1px solid black;"> 重大 </td> <td style="border:1px solid black;"> 無 </td> </tr> <tr> <td style="border:1px solid black;"> [適用於 32 位元系統的 Windows 8.1](https://www.microsoft.com/download/details.aspx?familyid=f6c17a8b-d89c-44fe-a706-9d23e98046a3) (2876331) </td> <td style="border:1px solid black;"> 遠端執行程式碼 </td> <td style="border:1px solid black;"> 重大 </td> <td style="border:1px solid black;"> 無 </td> </tr> <tr class="alternateRow"> <td style="border:1px solid black;"> [適用於 x64 型系統的 Windows 8.1](https://www.microsoft.com/download/details.aspx?familyid=03372c3e-c2d3-43d0-9968-9615ef0fa662) (2876331) </td> <td style="border:1px solid black;"> 遠端執行程式碼 </td> <td style="border:1px solid black;"> 重大 </td> <td style="border:1px solid black;"> 無 </td> </tr> <tr> <th colspan="4"> Windows Server 2012 和 Windows Server 2012 R2 </th> </tr> <tr> <td style="border:1px solid black;"> [Windows Server 2012](https://www.microsoft.com/download/details.aspx?familyid=2743efdf-4334-49d2-b7c3-7222d47db0fc) (2876331) </td> <td style="border:1px solid black;"> 遠端執行程式碼 </td> <td style="border:1px solid black;"> 重大 </td> <td style="border:1px solid black;"> 無 </td> </tr> <tr class="alternateRow"> <td style="border:1px solid black;"> [Windows Server 2012 
R2](https://www.microsoft.com/download/details.aspx?familyid=a4da30e8-3255-4c11-a33c-7c7ab705c738) (2876331) </td> <td style="border:1px solid black;"> 遠端執行程式碼 </td> <td style="border:1px solid black;"> 重大 </td> <td style="border:1px solid black;"> 無 </td> </tr> <tr> <th colspan="4"> Windows RT 和 Windows RT 8.1 </th> </tr> <tr> <td style="border:1px solid black;"> Windows RT<sup>[1]</sup> (2876331) </td> <td style="border:1px solid black;"> 遠端執行程式碼 </td> <td style="border:1px solid black;"> 重大 </td> <td style="border:1px solid black;"> 無 </td> </tr> <tr class="alternateRow"> <td style="border:1px solid black;"> Windows RT 8.1<sup>[1]</sup> (2876331) </td> <td style="border:1px solid black;"> 遠端執行程式碼 </td> <td style="border:1px solid black;"> 重大 </td> <td style="border:1px solid black;"> 無 </td> </tr> <tr> <th colspan="4"> Server Core 安裝選項 </th> </tr> <tr> <td style="border:1px solid black;"> [適用於 32 位元系統的 Windows Server 2008 Service Pack 2](https://www.microsoft.com/download/details.aspx?familyid=a6d0ebc5-321f-46ec-b7a3-61624fa6e6e2) (Server Core 安裝) (2876331) </td> <td style="border:1px solid black;"> 遠端執行程式碼 </td> <td style="border:1px solid black;"> 重大 </td> <td style="border:1px solid black;"> [MS08-071](https://technet.microsoft.com/zh-tw/security/bulletin/ms08-071) 中的 956802 </td> </tr> <tr class="alternateRow"> <td style="border:1px solid black;"> [適用於 x64 型系統的 Windows Server 2008 Service Pack 2](https://www.microsoft.com/download/details.aspx?familyid=bb36c042-fa2a-4d0d-a8d4-cac08b28cc63) (Server Core 安裝) (2876331) </td> <td style="border:1px solid black;"> 遠端執行程式碼 </td> <td style="border:1px solid black;"> 重大 </td> <td style="border:1px solid black;"> [MS08-071](https://technet.microsoft.com/zh-tw/security/bulletin/ms08-071) 中的 956802 </td> </tr> <tr> <td style="border:1px solid black;"> [適用於 x64 型系統的 Windows Server 2008 R2 Service Pack 1](https://www.microsoft.com/download/details.aspx?familyid=af1d1d5d-9109-45e7-88be-7ea7f0357a8a) (Server Core 安裝) 
(2876331) </td> <td style="border:1px solid black;"> 遠端執行程式碼 </td> <td style="border:1px solid black;"> 重大 </td> <td style="border:1px solid black;"> 無 </td> </tr> <tr class="alternateRow"> <td style="border:1px solid black;"> [Windows Server 2012](https://www.microsoft.com/download/details.aspx?familyid=2743efdf-4334-49d2-b7c3-7222d47db0fc) (Server Core 安裝) (2876331) </td> <td style="border:1px solid black;"> 遠端執行程式碼 </td> <td style="border:1px solid black;"> 重大 </td> <td style="border:1px solid black;"> 無 </td> </tr> <tr> <td style="border:1px solid black;"> [Windows Server 2012 R2](https://www.microsoft.com/download/details.aspx?familyid=a4da30e8-3255-4c11-a33c-7c7ab705c738) (Server Core 安裝) (2876331) </td> <td style="border:1px solid black;"> 遠端執行程式碼 </td> <td style="border:1px solid black;"> 重大 </td> <td style="border:1px solid black;"> 無 </td> </tr> </table> <sup>[1]</sup>更新是透過 [Windows Update](https://update.microsoft.com/windowsupdate/v6/default.aspx) 提供。 更新常見問題集 -------------- **Windows 8.1 Preview、Windows RT 8.1 Preview 和 Windows Server 2012 R2 Preview 是否受本公告所述之任何資訊安全風險所影響?** 是。2876331 更新可供 Windows 8.1 Preview、Windows RT 8.1 Preview 和 Windows Server 2012 R2 Preview 使用。建議執行這些作業系統的客戶為其系統套用更新。該更新是在 Windows Update 提供。 **我所使用的軟體是此資訊安全公告中討論的軟體之舊版。該怎麼辦?**   本公告所列出的受影響軟體版本已經過測試判斷哪些版本會受到影響。其他版本超出它們的支援週期。如需瞭解產品生命週期的更多資訊,請參閱 [Microsoft 支援週期](https://support.microsoft.com/default.aspx?scid=fh;%5bln%5d;lifecycle)網站。 使用此軟體舊版的客戶應優先考慮移轉至支援的版本,以避免因潛在的資訊安全風險而遭受攻擊。若要瞭解您的軟體版本的支援週期,請參閱[選擇一個產品檢視其支援週期資訊](https://support.microsoft.com/gp/lifeselect)。如需更多關於上述軟體版本的 Service Pack 的資訊,請參閱 [Service Pack 週期支援政策](https://support.microsoft.com/?ln=zh-tw&scid=gp%3b%5bln%5d%3blifecycle&x=13&y=15)。 需要舊版軟體額外支援服務的客戶,請連絡 Microsoft 客戶小組人員、技術支援經理或適當的 Microsoft 協力廠商,以取得所需的額外支援。尚未簽訂聯盟、優先或授權合約的客戶,可以連絡當地的 Microsoft 銷售辦公室。如需連絡資訊,請參閱 [Microsoft 全球資訊](https://www.microsoft.com/worldwide/)網站,在 \[Contact Information\] (連絡資訊) 清單中選擇國家,然後按一下 \[Go\] (前往) 查看各地的連絡電話號碼。連絡時,請指明要連絡當地優先支援服務行銷經理。如需更多資訊,請參閱 
[Microsoft 技術支援週期準則常見問答集](https://support.microsoft.com/gp/lifepolicy)。 ### **資訊安全風險資訊** 嚴重性等級和資訊安全風險識別碼 ------------------------------ 下列嚴重性等級是假設資訊安全風險可能造成的最嚴重影響而評定。在本資訊安全公告發行的 30 天內,如需資訊安全風險之易遭利用性與嚴重性等級和資訊安全影響之間對應關係的資訊,請參閱 [11 月份公告摘要](https://technet.microsoft.com/security/bulletin/ms13-nov)中的<資訊安全風險入侵指數>。如需更多資訊,請參閱 [Microsoft 資訊安全風險入侵指數](https://technet.microsoft.com/security/cc998259)。 <table class="dataTable"> <caption> 依受影響軟體列出的資訊安全風險嚴重性等級和最大資訊安全影響 </caption> <tr class="thead"> <th style="border:1px solid black;" > 受影響的軟體 </th> <th style="border:1px solid black;" > 圖形裝置介面整數溢位資訊安全風險 - CVE-2013-3940 </th> <th style="border:1px solid black;" > 彙總嚴重性等級 </th> </tr> <tr> <th colspan="3"> Windows XP </th> </tr> <tr> <td style="border:1px solid black;"> Windows XP Service Pack 3 </td> <td style="border:1px solid black;"> **重大**  遠端執行程式碼 </td> <td style="border:1px solid black;"> **重大** </td> </tr> <tr class="alternateRow"> <td style="border:1px solid black;"> Windows XP Professional x64 Edition Service Pack 2 </td> <td style="border:1px solid black;"> **重大**  遠端執行程式碼 </td> <td style="border:1px solid black;"> **重大** </td> </tr> <tr> <th colspan="3"> Windows Server 2003 </th> </tr> <tr> <td style="border:1px solid black;"> Windows Server 2003 Service Pack 2 </td> <td style="border:1px solid black;"> **重大**  遠端執行程式碼 </td> <td style="border:1px solid black;"> **重大** </td> </tr> <tr class="alternateRow"> <td style="border:1px solid black;"> Windows Server 2003 x64 Edition Service Pack 2 </td> <td style="border:1px solid black;"> **重大**  遠端執行程式碼 </td> <td style="border:1px solid black;"> **重大** </td> </tr> <tr> <td style="border:1px solid black;"> 適用於 Itanium 型系統的 Windows Server 2003 SP2 </td> <td style="border:1px solid black;"> **重大**  遠端執行程式碼 </td> <td style="border:1px solid black;"> **重大** </td> </tr> <tr> <th colspan="3"> Windows Vista </th> </tr> <tr class="alternateRow"> <td style="border:1px solid black;"> Windows Vista Service Pack 2 </td> <td style="border:1px 
solid black;"> **重大**  遠端執行程式碼 </td> <td style="border:1px solid black;"> **重大** </td> </tr> <tr> <td style="border:1px solid black;"> Windows Vista x64 Edition Service Pack 2 </td> <td style="border:1px solid black;"> **重大**  遠端執行程式碼 </td> <td style="border:1px solid black;"> **重大** </td> </tr> <tr> <th colspan="3"> Windows Server 2008 </th> </tr> <tr class="alternateRow"> <td style="border:1px solid black;"> 適用於 32 位元系統的 Windows Server 2008 Service Pack 2 </td> <td style="border:1px solid black;"> **重大**  遠端執行程式碼 </td> <td style="border:1px solid black;"> **重大** </td> </tr> <tr> <td style="border:1px solid black;"> 適用於 x64 型系統的 Windows Server 2008 Service Pack 2 </td> <td style="border:1px solid black;"> **重大**  遠端執行程式碼 </td> <td style="border:1px solid black;"> **重大** </td> </tr> <tr class="alternateRow"> <td style="border:1px solid black;"> 適用於 Itanium 型系統的 Windows Server 2008 Service Pack 2 </td> <td style="border:1px solid black;"> **重大**  遠端執行程式碼 </td> <td style="border:1px solid black;"> **重大** </td> </tr> <tr> <th colspan="3"> Windows 7 </th> </tr> <tr> <td style="border:1px solid black;"> 適用於 32 位元系統的 Windows 7 Service Pack 1 </td> <td style="border:1px solid black;"> **重大**  遠端執行程式碼 </td> <td style="border:1px solid black;"> **重大** </td> </tr> <tr class="alternateRow"> <td style="border:1px solid black;"> 適用於 x64 型系統的 Windows 7 Service Pack 1 </td> <td style="border:1px solid black;"> **重大**  遠端執行程式碼 </td> <td style="border:1px solid black;"> **重大** </td> </tr> <tr> <th colspan="3"> Windows Server 2008 R2 </th> </tr> <tr> <td style="border:1px solid black;"> 適用於 x64 型系統的 Windows Server 2008 R2 Service Pack 1 </td> <td style="border:1px solid black;"> **重大**  遠端執行程式碼 </td> <td style="border:1px solid black;"> **重大** </td> </tr> <tr class="alternateRow"> <td style="border:1px solid black;"> 適用於 Itanium 型系統的 Windows Server 2008 R2 Service Pack 1 </td> <td style="border:1px solid black;"> **重大**  遠端執行程式碼 </td> <td style="border:1px solid black;"> **重大** 
</td> </tr> <tr> <th colspan="3"> Windows 8 和 Windows 8.1 </th> </tr> <tr> <td style="border:1px solid black;"> 適用於 32 位元系統的 Windows 8 </td> <td style="border:1px solid black;"> **重大**  遠端執行程式碼 </td> <td style="border:1px solid black;"> **重大** </td> </tr> <tr class="alternateRow"> <td style="border:1px solid black;"> 適用於 x64 型系統的 Windows 8 </td> <td style="border:1px solid black;"> **重大**  遠端執行程式碼 </td> <td style="border:1px solid black;"> **重大** </td> </tr> <tr> <td style="border:1px solid black;"> 適用於 32 位元系統的 Windows 8.1 </td> <td style="border:1px solid black;"> **重大**  遠端執行程式碼 </td> <td style="border:1px solid black;"> **重大** </td> </tr> <tr class="alternateRow"> <td style="border:1px solid black;"> 適用於 x64 型系統的 Windows 8.1 </td> <td style="border:1px solid black;"> **重大**  遠端執行程式碼 </td> <td style="border:1px solid black;"> **重大** </td> </tr> <tr> <th colspan="3"> Windows Server 2012 和 Windows Server 2012 R2 </th> </tr> <tr> <td style="border:1px solid black;"> Windows Server 2012 </td> <td style="border:1px solid black;"> **重大**  遠端執行程式碼 </td> <td style="border:1px solid black;"> **重大** </td> </tr> <tr class="alternateRow"> <td style="border:1px solid black;"> Windows Server 2012 R2 </td> <td style="border:1px solid black;"> **重大**  遠端執行程式碼 </td> <td style="border:1px solid black;"> **重大** </td> </tr> <tr> <th colspan="3"> Windows RT 和 Windows RT 8.1 </th> </tr> <tr> <td style="border:1px solid black;"> Windows RT<sup>[1]</sup> </td> <td style="border:1px solid black;"> **重大**  遠端執行程式碼 </td> <td style="border:1px solid black;"> **重大** </td> </tr> <tr class="alternateRow"> <td style="border:1px solid black;"> Windows RT 8.1<sup>[1]</sup> </td> <td style="border:1px solid black;"> **重大**  遠端執行程式碼 </td> <td style="border:1px solid black;"> **重大** </td> </tr> <tr> <th colspan="3"> Server Core 安裝選項 </th> </tr> <tr> <td style="border:1px solid black;"> 適用於 32 位元系統的 Windows Server 2008 Service Pack 2 (Server Core 安裝) </td> <td style="border:1px solid black;"> **重大** 
 遠端執行程式碼 </td> <td style="border:1px solid black;"> **重大** </td> </tr> <tr class="alternateRow"> <td style="border:1px solid black;"> 適用於 x64 型系統的 Windows Server 2008 Service Pack 2 (Server Core 安裝) </td> <td style="border:1px solid black;"> **重大**  遠端執行程式碼 </td> <td style="border:1px solid black;"> **重大** </td> </tr> <tr> <td style="border:1px solid black;"> 適用於 x64 型系統的 Windows Server 2008 R2 Service Pack 1 (Server Core 安裝) </td> <td style="border:1px solid black;"> **重大**  遠端執行程式碼 </td> <td style="border:1px solid black;"> **重大** </td> </tr> <tr class="alternateRow"> <td style="border:1px solid black;"> Windows Server 2012 (Server Core 安裝) </td> <td style="border:1px solid black;"> **重大**  遠端執行程式碼 </td> <td style="border:1px solid black;"> **重大** </td> </tr> <tr> <td style="border:1px solid black;"> Windows Server 2012 R2 (Server Core 安裝) </td> <td style="border:1px solid black;"> **重大**  遠端執行程式碼 </td> <td style="border:1px solid black;"> **重大** </td> </tr> </table> 圖形裝置介面整數溢位資訊安全風險 - CVE-2013-3940 ------------------------------------------------ 在 Windows 圖形裝置介面 (GDI) 透過 WordPad 處理蓄意製作的 Windows Write 檔案的方式中,存在遠端執行程式碼的資訊安全風險。成功利用此資訊安全風險的攻擊者可以取得受影響系統的完整控制權。攻擊者接下來將能安裝程式,檢視、變更或刪除資料,或建立具有完整使用者權限的新帳戶。系統上帳戶使用者權限較低的使用者,其受影響的程度比擁有系統管理權限的使用者要小。 若要以一般性資訊安全風險清單中的標準項目來檢視此資訊安全風險,請參閱 [CVE-2013-3940](https://www.cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2013-3940) (英文)。 #### 緩和因素 緩和因素是指存在於預設狀態中的設定、共用設定或一般最佳作法,可能會減少資訊安全風險影響的嚴重性。下列緩和因素可能對您的狀況有所助益: - 此資訊安全風險無法透過電子郵件自動受到攻擊。使用者必須開啟電子郵件訊息中傳送的附件,攻擊才可順利進行。 - 在網頁式攻擊案例中,攻擊者可以架設網站,並在網站上加入將用於利用此資訊安全風險而蓄意製作的 Windows Write 檔案。此外,受侵害的網站以及接受或存放使用者提供之內容的網站裡,也可能包含蓄意製作以利用本資訊安全風險的內容。攻擊者不能強迫使用者檢視攻擊者控制的內容,或開啟蓄意製作的檔案。而是會引誘使用者採取行動。一般的做法是設法讓使用者按一下連往攻擊者網站的連結,然後引誘使用者開啟蓄意製作的 Write 檔案。 - 成功利用此資訊安全風險的攻擊者可以取得與目前使用者相同的使用者權限。系統上帳戶使用者權限較低的使用者,其受影響的程度比擁有系統管理權限的使用者要小。 #### 因應措施 因應措施指的是無法徹底修正資訊安全風險,但有助於在套用更新之前封鎖已知攻擊模式的設定變更。Microsoft 測試了下列因應措施和狀態,討論因應措施是否會降低功能: - **藉由限制存取 mswrd8.wpc 來停用 Word 6 轉換程式** 系統管理員可以套用存取控制清單至受影響的轉換程式,以確保 WordPad 不再載入該程式。如此可有效預防此資訊安全風險被利用。 
**警告**您在安裝資訊安全更新之前,必須先復原此因應措施。 要套用存取清單,請在命令提示字元執行下列命令。請注意,有些命令可能造成錯誤訊息;這是預期中的情況。 `echo y| cacls "%ProgramFiles%\Common Files\Microsoft Shared\TextConv\mswrd832.cnv" /E /P everyone:N` `echo y| cacls "%ProgramFiles(x86)%\Common Files\Microsoft Shared\TextConv\mswrd832.cnv" /E /P everyone:N` `echo y| cacls "%ProgramFiles%\Windows NT\Accesso``ries\mswrd8.wpc" /E /P everyone:N` `echo y| cacls "%ProgramFiles%\Windows NT\Accessories\mswrd864.wpc" /E /P everyone:N` `echo y| cacls "%ProgramFiles(x86)%\Windows NT\Accessories\mswrd8.wpc" /E /P everyone:N` **因應措施的影響。** 採用此因應措施,您就不能再將 Word 6 文件轉換為 WordPad RTF 或 Word 2003 文件。Microsoft Office Word 將傳回一個錯誤表示「檔案似乎已毀損」。 **如何復原因應措施。** `echo y| cacls "%ProgramFiles%\Common Files\Microsoft Shared\TextConv\mswrd832.cnv" /E /R everyone` `echo y| cacls "%ProgramFiles(x86)%\Common Files\Microsoft Shared\TextConv\mswrd832.cnv" /E /R`` everyone` `echo y| cacls "%ProgramFiles%\Windows NT\Accessories\mswrd8.wpc" /E /R everyone` `echo y| cacls "%ProgramFiles%\Windows NT\Accessories\mswrd864.wpc" /E /R everyone` `echo y| cacls "%ProgramFiles(x86)%\Windows NT\Accessories\mswrd8.wpc" /E /R everyone` - **對於來自於不受信任的來源或在非預期情況下從信任來源收到的 Windows Write (.wri) 文件,請勿隨意開啟。** 對於來自於不受信任的來源或在非預期情況下從信任來源收到的 Windows Write (.wri) 檔案,請勿隨意開啟。當使用者開啟蓄意製作的檔案時,即可能遭受利用此資訊安全風險的攻擊。 #### 常見問題集 **此資訊安全風險的範圍為何?**   這是遠端執行程式碼的資訊安全風險。 **造成這項資訊安全風險的原因為何?**   此資訊安全風險是由於透過 WordPad 開啟包含蓄意製作的影像的 Windows Write (.wri) 檔案所導致。當 WordPad 剖析 Windows Write 檔案時,Windows 圖形裝置介面不當處理蓄意製作的影像,導致記憶體損毀。 **什麼是圖形裝置介面 (GDI)?**  。 GDI 是 Microsoft Windows 圖形裝置介面 (GDI),可讓應用程式使用視訊顯示和印表機上的圖形和格式化文字。Windows 應用程式不會直接存取圖形硬體。而是由 GDI 代表應用程式與裝置驅動程式互動。如需更多關於 GDI 的資訊,請參閱 [Windows GDI](https://msdn.microsoft.com/zh-tw/library/dd145203(v=vs.85).aspx)。 **攻擊者可能會利用此資訊安全風險採取什麼行動?**   成功利用此資訊安全風險的攻擊者可以取得與目前使用者相同的使用者權限。如果目前使用者以系統管理的使用者權限登入,則攻擊者即可取得受影響系統的完整控制權。攻擊者接下來將能安裝程式,檢視、變更或刪除資料,或建立具有完整使用者權限的新帳戶。系統上帳戶使用者權限較低的使用者,其受影響的程度比擁有系統管理權限的使用者要小。 **攻擊者如何利用此資訊安全風險?**   在網頁式攻擊案例中,攻擊者可以架設網站,並在網站上加入將用於利用此資訊安全風險而蓄意製作的 Windows Write 
檔案。此外,受侵害的網站以及接受或存放使用者提供之內容的網站裡,也可能包含蓄意製作以利用本資訊安全風險的內容。攻擊者並不能強迫使用者造訪蓄意製作的網站或開啟蓄意製作的檔案。而是必須引誘使用者採取行動。例如,攻擊者可以誘騙使用者按一下通往攻擊者網站的連結,然後引誘他們開啟蓄意製作的 Windows Write 檔案。 在電子郵件攻擊案例中,攻擊者可能會傳送蓄意製作的檔案給使用者,然後引誘使用者開啟該檔案,來利用此資訊安全風險。 **因為此資訊安全風險而承受風險的主要系統有哪些?**   工作站和終端伺服器的風險最高。 **更新的作用何在?**   此更新可修正 GDI 處理透過 WordPad 開啟 Windows Write 檔案中所含蓄意製作的影像的方式,進而解決此資訊安全風險。 **本資訊安全公告發行時,此資訊安全風險是否已揭發出來?**   否。Microsoft 是經由協同合作的來源接獲有關此資訊安全風險的訊息。 **當本資訊安全公告發行時,Microsoft 是否已接獲任何消息,指出此資訊安全風險已遭有心人士利用?**   否。本資訊安全公告初次發行時,Microsoft 尚未接到任何有關本資訊安全風險已公開用來攻擊客戶的消息。 ### 更新資訊 偵測與部署工具及指南 -------------------- 有幾項資源可協助系統管理員部署資訊安全更新。  - Microsoft Baseline Security Analyzer (MBSA) 能讓系統管理員掃描本機和遠端系統,查看是否遺漏資訊安全更新及一般資訊安全設定錯誤的狀況。  - Windows Server Update Services (WSUS)、Systems Management Server (SMS) 和 System Center Configuration Manager 可協助系統管理員散佈資訊安全更新。  - 應用程式相容性工具組隨附的 Update Compatibility Evaluator 元件可針對所安裝的應用程式簡化其測試和驗證 Windows 更新的過程。  如需上述工具以及其他可使用工具的詳細資訊,請參閱 [IT專業人員的資訊安全工具](https://technet.microsoft.com/security/cc297183)。 資訊安全更新部署 ---------------- **受影響的軟體** 如需有關您使用系統的特定資訊安全更新資訊,請按下適當的連結: #### Windows XP (所有版本) **參考表** 下表包含此軟體的資訊安全更新資訊。 <p></p> <table style="border:1px solid black;"> <tbody> <tr class="odd"> <td style="border:1px solid black;"><strong>資訊安全更新檔案名稱</strong></td> <td style="border:1px solid black;">適用於 Windows XP Service Pack 3:<br /> <strong>WindowsXP-KB2876331-x86-ENU.exe</strong></td> </tr> <tr class="even"> <td style="border:1px solid black;"></td> <td style="border:1px solid black;">Windows XP Professional x64 Edition Service Pack 2:<br /> <strong>WindowsServer2003.WindowsXP-KB2876331-x64-ENU.exe</strong></td> </tr> <tr class="odd"> <td style="border:1px solid black;"><strong>安裝參數</strong></td> <td style="border:1px solid black;">請參閱 <a href="https://support.microsoft.com/kb/262841?ln=zh-tw">Microsoft 知識庫文件編號 262841</a></td> </tr> <tr class="even"> <td style="border:1px solid black;"><strong>更新記錄檔</strong></td> <td style="border:1px solid black;">KB2876331.log</td> </tr> <tr class="odd"> <td 
style="border:1px solid black;"><strong>重新開機需求</strong></td> <td style="border:1px solid black;">是,套用此資訊安全更新之後,您必須重新啟動系統。</td> </tr> <tr class="even"> <td style="border:1px solid black;"><strong>移除資訊</strong></td> <td style="border:1px solid black;">使用 [控制台] 中的 [新增或移除程式] 項目,或 %Windir%\$NTUninstallKB2876331$\Spuninst 資料夾中的 Spuninst.exe 公用程式</td> </tr> <tr class="odd"> <td style="border:1px solid black;"><strong>檔案資訊</strong></td> <td style="border:1px solid black;">請參閱 <a href="https://support.microsoft.com/kb/2876331?ln=zh-tw">Microsoft 知識庫文件編號 2876331</a></td> </tr> <tr class="even"> <td style="border:1px solid black;"><strong>登錄機碼驗證</strong></td> <td style="border:1px solid black;">所有受支援 32 位元版本的 Windows XP:<br /> HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Updates\Windows XP\SP4\KB2876331\Filelist</td> </tr> <tr class="odd"> <td style="border:1px solid black;"></td> <td style="border:1px solid black;">所有受支援 x64 型版本的 Windows XP:<br /> HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Updates\Windows XP Version 2003\SP3\KB2876331\Filelist</td> </tr> </tbody> </table> **注意:**對於受支援版本的 Windows XP Professional x64 Edition 的更新也適用於受支援版本的 Windows Server 2003 x64 Edition。 #### Windows Server 2003 (所有版本) **參考表** 下表包含此軟體的資訊安全更新資訊。 <p></p> <table style="border:1px solid black;"> <tbody> <tr class="odd"> <td style="border:1px solid black;"><strong>資訊安全更新檔案名稱</strong></td> <td style="border:1px solid black;">對於所有受支援 32 位元版本的 Windows Server 2003:<br /> <strong>WindowsServer2003-KB2876331-x86-ENU.exe</strong></td> </tr> <tr class="even"> <td style="border:1px solid black;"></td> <td style="border:1px solid black;">適用於所有受支援 x64 版本的 Windows Server 2003:<br /> <strong>WindowsServer2003.WindowsXP-KB2876331-x64-ENU.exe</strong></td> </tr> <tr class="odd"> <td style="border:1px solid black;"></td> <td style="border:1px solid black;">對於所有受支援 Itanium 版的 Windows Server 2003:<strong>WindowsServer2003-KB2876331-ia64-ENU.exe</strong></td> </tr> <tr class="even"> <td style="border:1px solid 
black;"><strong>安裝參數</strong></td> <td style="border:1px solid black;">請參閱 <a href="https://support.microsoft.com/kb/262841?ln=zh-tw">Microsoft 知識庫文件編號 262841</a></td> </tr> <tr class="odd"> <td style="border:1px solid black;"><strong>更新記錄檔</strong></td> <td style="border:1px solid black;">KB2876331.log</td> </tr> <tr class="even"> <td style="border:1px solid black;"><strong>重新開機需求</strong></td> <td style="border:1px solid black;">是,套用此資訊安全更新之後,您必須重新啟動系統。</td> </tr> <tr class="odd"> <td style="border:1px solid black;"><strong>移除資訊</strong></td> <td style="border:1px solid black;">使用 [控制台] 中的 [新增或移除程式] 項目,或 %Windir%\$NTUninstallKB2876331$\Spuninst 資料夾中的 Spuninst.exe 公用程式</td> </tr> <tr class="even"> <td style="border:1px solid black;"><strong>檔案資訊</strong></td> <td style="border:1px solid black;">請參閱 <a href="https://support.microsoft.com/kb/2876331?ln=zh-tw">Microsoft 知識庫文件編號 2876331</a></td> </tr> <tr class="odd"> <td style="border:1px solid black;"><strong>登錄機碼驗證</strong></td> <td style="border:1px solid black;">HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Updates\Windows Server 2003\SP3\KB2876331\Filelist</td> </tr> </tbody> </table> **注意:**適用於受支援版本的 Windows Server 2003 x64 Edition 的更新也適用於受支援版本的 Windows XP Professional x64 Edition。 #### Windows Vista (所有版本) **參考表** 下表包含此軟體的資訊安全更新資訊。 <p></p> <table style="border:1px solid black;"> <tbody> <tr class="odd"> <td style="border:1px solid black;"><strong>資訊安全更新檔案名稱</strong></td> <td style="border:1px solid black;">所有受支援 32 位元版本的 Windows Vista:<br /> <strong>Windows6.0-KB2876331-x86.msu</strong></td> </tr> <tr class="even"> <td style="border:1px solid black;"></td> <td style="border:1px solid black;">所有受支援 x64 型版本的 Windows Vista:<br /> <strong>Windows6.0-KB2876331-x64.msu</strong></td> </tr> <tr class="odd"> <td style="border:1px solid black;"><strong>安裝參數</strong></td> <td style="border:1px solid black;">請參閱 <a href="https://support.microsoft.com/kb/934307?ln=zh-tw">Microsoft 知識庫文件編號 934307</a></td> </tr> <tr class="even"> 
<td style="border:1px solid black;"><strong>重新開機需求</strong></td> <td style="border:1px solid black;">是,套用此資訊安全更新之後,您必須重新啟動系統。</td> </tr> <tr class="odd"> <td style="border:1px solid black;"><strong>移除資訊</strong></td> <td style="border:1px solid black;">WUSA.exe 不支援更新的解除安裝。如要解除安裝 WUSA 所安裝的更新程式,請按一下 [控制台],然後按一下 [安全性]。在 Windows Update 下,按一下 [檢視安裝的更新] 並從更新清單中選取。</td> </tr> <tr class="even"> <td style="border:1px solid black;"><strong>檔案資訊</strong></td> <td style="border:1px solid black;">請參閱 <a href="https://support.microsoft.com/kb/2876331?ln=zh-tw">Microsoft 知識庫文件編號 2876331</a></td> </tr> <tr class="odd"> <td style="border:1px solid black;"><strong>登錄機碼驗證</strong></td> <td style="border:1px solid black;"><strong>注意:</strong>登錄機碼不存在,無法驗證此更新是否存在。</td> </tr> </tbody> </table> #### Windows Server 2008 (所有版本) **參考表** 下表包含此軟體的資訊安全更新資訊。 <p></p> <table style="border:1px solid black;"> <tbody> <tr class="odd"> <td style="border:1px solid black;"><strong>資訊安全更新檔案名稱</strong></td> <td style="border:1px solid black;">適用於所有受支援之 32 位元版本的 Windows Server 2008:<br /> <strong>Windows6.0-KB2876331-x86.msu</strong></td> </tr> <tr class="even"> <td style="border:1px solid black;"></td> <td style="border:1px solid black;">適用於所有受支援之 x64 版本的 Windows Server 2008:<br /> <strong>Windows6.0-KB2876331-x64.msu</strong></td> </tr> <tr class="odd"> <td style="border:1px solid black;"></td> <td style="border:1px solid black;">適用於所有受支援之 Itanium 版本的 Windows Server 2008:<br /> <strong>Windows6.0-KB2876331-ia64.msu</strong></td> </tr> <tr class="even"> <td style="border:1px solid black;"><strong>安裝參數</strong></td> <td style="border:1px solid black;">請參閱 <a href="https://support.microsoft.com/kb/934307?ln=zh-tw">Microsoft 知識庫文件編號 934307</a></td> </tr> <tr class="odd"> <td style="border:1px solid black;"><strong>重新開機需求</strong></td> <td style="border:1px solid black;">是,套用此資訊安全更新之後,您必須重新啟動系統。</td> </tr> <tr class="even"> <td style="border:1px solid black;"><strong>移除資訊</strong></td> <td style="border:1px 
solid black;">WUSA.exe 不支援更新的解除安裝。如要解除安裝 WUSA 所安裝的更新程式,請按一下 [控制台],然後按一下 [安全性]。在 Windows Update 下,按一下 [檢視安裝的更新] 並從更新清單中選取。</td> </tr> <tr class="odd"> <td style="border:1px solid black;"><strong>檔案資訊</strong></td> <td style="border:1px solid black;">請參閱 <a href="https://support.microsoft.com/kb/2876331?ln=zh-tw">Microsoft 知識庫文件編號 2876331</a></td> </tr> <tr class="even"> <td style="border:1px solid black;"><strong>登錄機碼驗證</strong></td> <td style="border:1px solid black;"><strong>注意:</strong>登錄機碼不存在,無法驗證此更新是否存在。</td> </tr> </tbody> </table> #### Windows 7 (所有版本) **參考表** 下表包含此軟體的資訊安全更新資訊。 <p></p> <table style="border:1px solid black;"> <tbody> <tr class="odd"> <td style="border:1px solid black;"><strong>資訊安全更新檔案名稱</strong></td> <td style="border:1px solid black;">適用於所有受支援 32 位元版本的 Windows 7:<br /> <strong>Windows6.1-KB2876331-x86.msu</strong></td> </tr> <tr class="even"> <td style="border:1px solid black;"></td> <td style="border:1px solid black;">適用於所有受支援 x64 版本的 Windows 7:<br /> <strong>Windows6.1-KB2876331-x64.msu</strong></td> </tr> <tr class="odd"> <td style="border:1px solid black;"><strong>安裝參數</strong></td> <td style="border:1px solid black;">請參閱 <a href="https://support.microsoft.com/kb/934307?ln=zh-tw">Microsoft 知識庫文件編號 934307</a></td> </tr> <tr class="even"> <td style="border:1px solid black;"><strong>重新開機需求</strong></td> <td style="border:1px solid black;">是,套用此資訊安全更新之後,您必須重新啟動系統。</td> </tr> <tr class="odd"> <td style="border:1px solid black;"><strong>移除資訊</strong></td> <td style="border:1px solid black;">若要解除安裝由 WUSA 所安裝的更新程式,請使用 /Uninstall 安裝參數,或按一下 [控制台] 和 [系統及安全性],然後在 Windows Update 項下,按一下 [檢視安裝的更新] 並從更新清單中選取。</td> </tr> <tr class="even"> <td style="border:1px solid black;"><strong>檔案資訊</strong></td> <td style="border:1px solid black;">請參閱 <a href="https://support.microsoft.com/kb/2876331?ln=zh-tw">Microsoft 知識庫文件編號 2876331</a></td> </tr> <tr class="odd"> <td style="border:1px solid black;"><strong>登錄機碼驗證</strong></td> <td style="border:1px solid 
black;"><strong>注意:</strong>登錄機碼不存在,無法驗證此更新是否存在。</td> </tr> </tbody> </table> #### Windows Server 2008 R2 (所有版本) **參考表** 下表包含此軟體的資訊安全更新資訊。 <p></p> <table style="border:1px solid black;"> <tbody> <tr class="odd"> <td style="border:1px solid black;"><strong>資訊安全更新檔案名稱</strong></td> <td style="border:1px solid black;">適用於所有受支援之 x64 版本的 Windows Server 2008 R2:<br /> <strong>Windows6.1-KB2876331-x64.msu</strong></td> </tr> <tr class="even"> <td style="border:1px solid black;"></td> <td style="border:1px solid black;">適用於所有受支援之 Itanium 版本的 Windows Server 2008 R2:<br /> <strong>Windows6.1-KB2876331-ia64.msu</strong></td> </tr> <tr class="odd"> <td style="border:1px solid black;"><strong>安裝參數</strong></td> <td style="border:1px solid black;">請參閱 <a href="https://support.microsoft.com/kb/934307?ln=zh-tw">Microsoft 知識庫文件編號 934307</a></td> </tr> <tr class="even"> <td style="border:1px solid black;"><strong>重新開機需求</strong></td> <td style="border:1px solid black;">是,套用此資訊安全更新之後,您必須重新啟動系統。</td> </tr> <tr class="odd"> <td style="border:1px solid black;"><strong>移除資訊</strong></td> <td style="border:1px solid black;">若要解除安裝由 WUSA 所安裝的更新程式,請使用 /Uninstall 安裝參數,或按一下 [控制台] 和 [系統及安全性],然後在 Windows Update 項下,按一下 [檢視安裝的更新] 並從更新清單中選取。</td> </tr> <tr class="even"> <td style="border:1px solid black;"><strong>檔案資訊</strong></td> <td style="border:1px solid black;">請參閱 <a href="https://support.microsoft.com/kb/2876331?ln=zh-tw">Microsoft 知識庫文件編號 2876331</a></td> </tr> <tr class="odd"> <td style="border:1px solid black;"><strong>登錄機碼驗證</strong></td> <td style="border:1px solid black;"><strong>注意:</strong>登錄機碼不存在,無法驗證此更新是否存在。</td> </tr> </tbody> </table> #### Windows 8 和 Windows 8.1 (所有版本) **參考表** 下表包含此軟體的資訊安全更新資訊。 <p></p> <table style="border:1px solid black;"> <tbody> <tr class="odd"> <td style="border:1px solid black;"><strong>資訊安全更新檔案名稱</strong></td> <td style="border:1px solid black;">所有受支援 32 位元版本的 Windows 8:<br /> <strong>Windows8-RT-KB2876331-x86.msu</strong></td> </tr> <tr class="even"> 
<td style="border:1px solid black;"></td> <td style="border:1px solid black;">所有受支援 x64 型版本的 Windows 8:<br /> <strong>Windows8-RT-KB2876331-x64.msu</strong></td> </tr> <tr class="odd"> <td style="border:1px solid black;"></td> <td style="border:1px solid black;">所有受支援 32 位元版本的 Windows 8.1:<br /> <strong>Windows8.1-KB2876331-x86.msu</strong></td> </tr> <tr class="even"> <td style="border:1px solid black;"></td> <td style="border:1px solid black;">所有受支援 x64 型版本的 Windows 8.1:<br /> <strong>Windows8.1-KB2876331-x64.msu</strong></td> </tr> <tr class="odd"> <td style="border:1px solid black;"><strong>安裝參數</strong></td> <td style="border:1px solid black;">請參閱 <a href="https://support.microsoft.com/kb/934307?ln=zh-tw">Microsoft 知識庫文件編號 934307</a></td> </tr> <tr class="even"> <td style="border:1px solid black;"><strong>重新開機需求</strong></td> <td style="border:1px solid black;">是,套用此資訊安全更新之後,您必須重新啟動系統。</td> </tr> <tr class="odd"> <td style="border:1px solid black;"><strong>移除資訊</strong></td> <td style="border:1px solid black;">若要解除安裝由 WUSA 所安裝的更新,請使用 /Uninstall 安裝參數,或依序按一下 [控制台]、[系統及安全性]、[Windows Update],然後按一下 [另請參閱] 下的 [已安裝的更新],然後從更新清單中加以選取。</td> </tr> <tr class="even"> <td style="border:1px solid black;"><strong>檔案資訊</strong></td> <td style="border:1px solid black;">請參閱 <a href="https://support.microsoft.com/kb/2876331?ln=zh-tw">Microsoft 知識庫文件編號 2876331</a></td> </tr> <tr class="odd"> <td style="border:1px solid black;"><strong>登錄機碼驗證</strong></td> <td style="border:1px solid black;"><strong>注意:</strong>登錄機碼不存在,無法驗證此更新是否存在。</td> </tr> </tbody> </table> #### Windows Server 2012 和 Windows 2012 R2 (所有版本) **參考表** 下表包含此軟體的資訊安全更新資訊。 <p></p> <table style="border:1px solid black;"> <tbody> <tr class="odd"> <td style="border:1px solid black;"><strong>資訊安全更新檔案名稱</strong></td> <td style="border:1px solid black;">所有受支援版本的 Windows Server 2012:<br /> <strong>Windows8-RT-KB2876331-x64.msu</strong></td> </tr> <tr class="even"> <td style="border:1px solid black;"></td> <td style="border:1px 
solid black;">所有受支援版本的 Windows Server 2012 R2:<br /> <strong>Windows8.1-KB2876331-x64.msu</strong></td> </tr> <tr class="odd"> <td style="border:1px solid black;"><strong>安裝參數</strong></td> <td style="border:1px solid black;">請參閱 <a href="https://support.microsoft.com/kb/934307?ln=zh-tw">Microsoft 知識庫文件編號 934307</a></td> </tr> <tr class="even"> <td style="border:1px solid black;"><strong>重新開機需求</strong></td> <td style="border:1px solid black;">是,套用此資訊安全更新之後,您必須重新啟動系統。</td> </tr> <tr class="odd"> <td style="border:1px solid black;"><strong>移除資訊</strong></td> <td style="border:1px solid black;">若要解除安裝由 WUSA 所安裝的更新,請使用 /Uninstall 安裝參數,或依序按一下 [控制台]、[系統及安全性]、[Windows Update],然後按一下 [另請參閱] 下的 [已安裝的更新],然後從更新清單中加以選取。</td> </tr> <tr class="even"> <td style="border:1px solid black;"><strong>檔案資訊</strong></td> <td style="border:1px solid black;">請參閱 <a href="https://support.microsoft.com/kb/2876331?ln=zh-tw">Microsoft 知識庫文件編號 2876331</a></td> </tr> <tr class="odd"> <td style="border:1px solid black;"><strong>登錄機碼驗證</strong></td> <td style="border:1px solid black;"><strong>注意:</strong>登錄機碼不存在,無法驗證此更新是否存在。</td> </tr> </tbody> </table> #### Windows RT 和 Windows RT 8.1 (所有版本) 下表包含此軟體的資訊安全更新資訊。 | | | |------------------|------------------------------------------------------------------------------------------------------------------------------------------| | **部署** | 更新僅透過 [Windows Update](https://update.microsoft.com/windowsupdate/v6/default.aspx) 提供。 | | **重新開機需求** | 是,套用此資訊安全更新之後,您必須重新啟動系統。 | | **移除資訊** | 請依序按一下 \[控制台\]、\[系統及安全性\] 以及 \[Windows Update\],然後按一下 \[另請參閱\] 下的 \[已安裝的更新\],然後從更新清單中選取。 | | **檔案資訊** | 請參閱 [Microsoft 知識庫文件編號 2876331](https://support.microsoft.com/kb/2876331?ln=zh-tw) | ### 其他資訊 #### 感謝 Microsoft [感謝](https://technet.microsoft.com/zh-tw/security/gg309157.aspx)下列人士協助我們一同保護我們的客戶: - 感謝 [Secunia Research](https://secunia.com/) 的 Hossein Lotfi 回報圖形裝置介面整數溢位資訊安全風險 (CVE-2013-3940) #### Microsoft 主動保護計畫 (MAPP) 為了增強客戶的資訊安全保護,Microsoft 
將在每月發行資訊安全更新之前,提前向重要資訊安全軟體提供者提供資訊安全風險資訊。資訊安全軟體提供者可利用此資訊安全風險資訊,透過其資訊安全軟體或裝置 (如防毒軟體、網路入侵偵測系統、或主機入侵預防系統),為客戶提供更新的保護措施。如果要判斷是否有資訊安全軟體提供者的主動保護可用,請造訪由 [Microsoft 主動保護計畫 (MAPP) 合作夥伴](https://technet.microsoft.com/zh-tw/security/dn467918) (英文) 上列出的計畫合作夥伴所提供的主動保護計畫網站。 #### 支援 **如何取得此資訊安全更新的說明及支援** - 協助安裝更新: [Microsoft Update 支援](https://support.microsoft.com/ph/6527?ln=zh-tw) - IT 專業人員的資訊安全解決方案: [TechNet 資訊安全疑難排解與支援](https://technet.microsoft.com/security/bb980617.aspx) - 協助保護您的 Windows 電腦免於病毒和惡意軟體攻擊: [病毒解決方案與資訊安全中心](https://support.microsoft.com/contactus/cu_sc_virsec_master?ln=zh-tw) - 您所在國家/地區的當地支援: [國際支援](https://support.microsoft.com/common/international.aspx?ln=zh-tw) #### 免責聲明 Microsoft 知識庫 (Microsoft Knowledge Base) 中的資訊係以其「現狀」提供,並不提供任何形式之擔保。Microsoft 不做任何明示或默示的責任擔保,包括適售性以及適合某特定用途之擔保責任。無論任何情況下的損害,Microsoft Corporation 及其供應商皆不負任何法律責任,包括直接、間接、偶發、衍生性、所失業務利益或特殊損害。即使 Microsoft Corporation 及其供應商已被告知此類損害的可能性亦不負任何責任。某些地區不允許排除及限制衍生性或附隨損害賠償責任,因此前述限制不適用於這些地區。 #### 修訂 - V1.0 (2013 年 11 月 13 日): 公告發行。 *Built at 2014-04-18T01:50:00Z-07:00*
28.317073
360
0.693463
yue_Hant
0.699867
61697457e289a958ce7e27ad7ccb2156290e6a37
267
md
Markdown
tccli/examples/tdmq/v20200217/CreateEnvironmentRole.md
HS-Gray/tencentcloud-cli
3822fcfdfed570fb526fe49abe6793e2f9127f4a
[ "Apache-2.0" ]
47
2018-05-31T11:26:25.000Z
2022-03-08T02:12:45.000Z
tccli/examples/tdmq/v20200217/CreateEnvironmentRole.md
HS-Gray/tencentcloud-cli
3822fcfdfed570fb526fe49abe6793e2f9127f4a
[ "Apache-2.0" ]
23
2018-06-14T10:46:30.000Z
2022-02-28T02:53:09.000Z
tccli/examples/tdmq/v20200217/CreateEnvironmentRole.md
HS-Gray/tencentcloud-cli
3822fcfdfed570fb526fe49abe6793e2f9127f4a
[ "Apache-2.0" ]
22
2018-10-22T09:49:45.000Z
2022-03-30T08:06:04.000Z
**Example 1: Create an environment role mapping** Input: ``` tccli tdmq CreateEnvironmentRole --cli-unfold-argument \ --RoleName test_role_1 \ --EnvironmentId default \ --Permissions produce ``` Output: ``` { "Response": { "RequestId": "gggxxxx" } } ```
11.608696
57
0.588015
kor_Hang
0.116579
61698efa002050afbf3228c30cd426090f261949
2,096
md
Markdown
content/docs/getting-started/getting-started-with-j2store.md
j2store/docs
8d6a796fb2bf1f4cbd0c625adca82e13f962ddb7
[ "MIT" ]
null
null
null
content/docs/getting-started/getting-started-with-j2store.md
j2store/docs
8d6a796fb2bf1f4cbd0c625adca82e13f962ddb7
[ "MIT" ]
1
2019-07-22T12:12:10.000Z
2019-07-22T12:12:10.000Z
content/docs/getting-started/getting-started-with-j2store.md
j2store/docs
8d6a796fb2bf1f4cbd0c625adca82e13f962ddb7
[ "MIT" ]
1
2019-07-09T06:00:52.000Z
2019-07-09T06:00:52.000Z
--- path: "/docs/getting-started/getting-started-with-j2store" updated: "2019-05-30" title: "Getting started with J2store" description: "Get started with J2Store" author: "Sowbagya Lakshmi" category: "campaigns" --- ### Building an online store? It's easy as 1-2-3 J2Store turns your native Joomla articles into products beautifully. Just create articles, treat them as products and launch your online store. It is really that simple. We have put together a list of resources to get you started with your store quickly: ![Gettingstarted](https://raw.githubusercontent.com/j2store/doc-images/master/getting-started/Installation/Installation-configenterdetails.png) * Watch our <link-text url="https://www.j2store.org/support/video-tutorials/content/quick-start.html" target="_blank" rel="noopener">quick start video tutorials</link-text> * <link-text url="https://www.j2store.org/resources/usecases.html" target="_blank" rel="noopener">Checkout the use cases</link-text>. We have 100+ of them covering almost all popular online store requirements. * Learn from our complete <link-text url="https://www.j2store.org/support/user-guide.html" target="_blank" rel="noopener">user guide.</link-text> * Building a store in your language?<link-text url="https://www.j2store.org/translations.html" target="_blank" rel="noopener">Download translations</link-text> * Check all available <link-text url="https://www.j2store.org/extensions/payment-plugins.html" rel="noopener" target="_blank">payment gateways</link-text> for your store. * Looking to extend your store? Here are <link-text url="https://www.j2store.org/extensions/apps.html" rel="noopener" target="_blank">the apps</link-text> to help. * Got questions? 
Refer the <link-text url="https://www.j2store.org/support/community-forum.html" rel="noopener" target="_blank">community forum.</link-text> * Need priority support and PRO features?<link-text url="https://www.j2store.org/get-j2store.html" rel="noopener" target="_blank"> Purchase PRO plan.</link-text> Still need assistance ? Contact our support team.
46.577778
211
0.758588
eng_Latn
0.638602
6169d717f16310acb9b4a1a8a0430f4dd9b954c0
763
md
Markdown
_publications/QuadTrees.md
tiagosalvador/tiagosalvador.github.io
cc43c2ed0ef88d10c567d6a6dfc34c81d997304f
[ "MIT" ]
null
null
null
_publications/QuadTrees.md
tiagosalvador/tiagosalvador.github.io
cc43c2ed0ef88d10c567d6a6dfc34c81d997304f
[ "MIT" ]
null
null
null
_publications/QuadTrees.md
tiagosalvador/tiagosalvador.github.io
cc43c2ed0ef88d10c567d6a6dfc34c81d997304f
[ "MIT" ]
null
null
null
--- title: "Higher-order adaptive finite difference methods for fully nonlinear elliptic equations" collection: publications type: "Journal Article" permalink: /publications/QuadTrees authors: "Brittany Froese Hamfeldt and <b>Tiago Salvador</b>" excerpt: date: 2018-06-01 venue: 'Journal of Scientific Computing' doi: 'https://doi.org/10.1007/s10915-017-0586-5' arxivurl: 'https://arxiv.org/abs/1706.07741' giturl: 'https://github.com/tiagosalvador/quadtrees-mesh' citation: 'Brittany Froese Hamfeldt and <b>Tiago Salvador</b>. &quot;Higher-order adaptive finite difference methods for fully nonlinear elliptic equations.&quot; <i>Journal of Scientific Computing</i> 75(3): 1282-1306, 2018.' --- [Download paper here](https://doi.org/10.1007/s10915-017-0586-5)
44.882353
226
0.770642
eng_Latn
0.230455
616ae3374a1f38f70cbf78b26a46c14b7e1ced25
22
md
Markdown
README.md
gabriel-wolf/ArticleAnalyzer.py
cd25e16aa04fa2a7e71691f2f5058bd46034f225
[ "MIT" ]
1
2019-01-02T15:01:27.000Z
2019-01-02T15:01:27.000Z
README.md
gabriel-wolf/ArticleAnalyzer.py
cd25e16aa04fa2a7e71691f2f5058bd46034f225
[ "MIT" ]
null
null
null
README.md
gabriel-wolf/ArticleAnalyzer.py
cd25e16aa04fa2a7e71691f2f5058bd46034f225
[ "MIT" ]
null
null
null
# Article_Analysis.py
11
21
0.818182
kor_Hang
0.874493
616aee9a2761057b899890e3ebf19c56dfa43416
233
md
Markdown
README.md
hansonw/node-prebuilt-starter
9d75e4d221ca41280ea8d98699f5165c0a323c10
[ "MIT" ]
2
2016-03-01T21:49:52.000Z
2016-03-02T07:31:50.000Z
README.md
hansonw/node-prebuilt-starter
9d75e4d221ca41280ea8d98699f5165c0a323c10
[ "MIT" ]
null
null
null
README.md
hansonw/node-prebuilt-starter
9d75e4d221ca41280ea8d98699f5165c0a323c10
[ "MIT" ]
null
null
null
# node-prebuilt-starter Starter prebuilt Node-native package. Just run `npm install` to build. To rebuild: ``` npm run rebuild ``` Generate a compilation database (for Nuclide or YCM, etc): ``` tools/compilation_database.js ```
13.705882
58
0.729614
eng_Latn
0.832103
616b2e162bf015534ce34ae5aa62503154685f17
144
md
Markdown
source/_posts/dev/java/basic.md
travonf/blog
04f6fe4952d83b99faccf25afa8ea23c8a4d6ccf
[ "MIT" ]
null
null
null
source/_posts/dev/java/basic.md
travonf/blog
04f6fe4952d83b99faccf25afa8ea23c8a4d6ccf
[ "MIT" ]
6
2020-11-13T13:06:37.000Z
2020-11-14T13:34:51.000Z
source/_posts/dev/java/basic.md
travonf/blog
04f6fe4952d83b99faccf25afa8ea23c8a4d6ccf
[ "MIT" ]
null
null
null
--- title: Java Basics date: 2007-09-05 08:00:00 updated: 2007-09-05 08:00:00 tags: "typescript" categories: ["Develop", "Java"] --- Coming soon...
14.4
31
0.659722
ita_Latn
0.104028
616b5daa3f82052cd505d7ec6c45770cf5a393af
7,523
md
Markdown
articles/active-directory/hybrid/how-to-connect-sync-technical-concepts.md
pmsousa/azure-docs.pt-pt
bc487beff48df00493484663c200e44d4b24cb18
[ "CC-BY-4.0", "MIT" ]
15
2017-08-28T07:46:17.000Z
2022-02-03T12:49:15.000Z
articles/active-directory/hybrid/how-to-connect-sync-technical-concepts.md
pmsousa/azure-docs.pt-pt
bc487beff48df00493484663c200e44d4b24cb18
[ "CC-BY-4.0", "MIT" ]
407
2018-06-14T16:12:48.000Z
2021-06-02T16:08:13.000Z
articles/active-directory/hybrid/how-to-connect-sync-technical-concepts.md
pmsousa/azure-docs.pt-pt
bc487beff48df00493484663c200e44d4b24cb18
[ "CC-BY-4.0", "MIT" ]
17
2017-10-04T22:53:31.000Z
2022-03-10T16:41:59.000Z
--- title: 'Azure AD Connect sync: Technical concepts | Microsoft Docs' description: Explains the technical concepts of Azure AD Connect sync. services: active-directory documentationcenter: '' author: billmath manager: daveba editor: '' ms.assetid: 731cfeb3-beaf-4d02-aef4-b02a8f99fd11 ms.service: active-directory ms.workload: identity ms.tgt_pltfrm: na ms.devlang: na ms.topic: how-to ms.date: 01/15/2018 ms.subservice: hybrid ms.author: billmath ms.collection: M365-identity-device-management ms.openlocfilehash: 01e53b30a4c27296e30e031ffb771697afa8e1e9 ms.sourcegitcommit: f28ebb95ae9aaaff3f87d8388a09b41e0b3445b5 ms.translationtype: MT ms.contentlocale: pt-PT ms.lasthandoff: 03/29/2021 ms.locfileid: "87019680" --- # <a name="azure-ad-connect-sync-technical-concepts"></a>Azure AD Connect sync: Technical Concepts This article is a summary of the topic [Understanding the architecture](how-to-connect-sync-technical-concepts.md). Azure AD Connect sync builds on a solid metadirectory synchronization platform. The following sections introduce metadirectory synchronization concepts. Building on MIIS (Microsoft Identity Integration Server), ILM (Identity Lifecycle Manager), and FIM (Forefront Identity Manager), Azure Active Directory Sync Services provides the next platform for connecting to data sources, synchronizing data between data sources, and provisioning and deprovisioning identities. ![Technical Concepts](./media/how-to-connect-sync-technical-concepts/scenario.png) The following sections provide more detail on the following aspects of the FIM Synchronization Service: * Connector * Attribute flow * Connector space * Metaverse * Provisioning ## <a name="connector"></a>Connector The code modules used to communicate with a connected directory are called connectors (formerly known as management agents (MAs)). These are installed on the computer running Azure AD Connect sync.
Connectors provide the ability to communicate without agents by using remote system protocols, instead of relying on the deployment of specialized agents. This means decreased risk and deployment times, especially when dealing with critical applications and systems. In the picture above, the connector is synonymous with the connector space but encompasses all communication with the external system. The connector is responsible for all import and export functionality to the system, and it frees developers from needing to understand how to connect to each system natively when using declarative provisioning to customize data transformations. Imports and exports only occur when scheduled, allowing further isolation from changes occurring within the system, since changes do not automatically propagate to the connected data source. In addition, developers may also create their own connectors to connect to virtually any data source. ## <a name="attribute-flow"></a>Attribute flow The metaverse is the consolidated view of all joined identities from neighboring connector spaces. In the figure above, attribute flow is depicted by lines with arrowheads for both inbound and outbound flow. Attribute flow is the process of copying or transforming data from one system to another, and it applies to all attribute flows (inbound or outbound). Attribute flow occurs bi-directionally between the connector space and the metaverse when synchronization operations (full or delta) are scheduled. Attribute flow only occurs when these synchronizations are run. Attribute flows are defined in Synchronization Rules. These can be inbound (ISR in the picture above) or outbound (OSR in the picture above).
## <a name="connected-system"></a>Connected system The connected system (also called connected directory) refers to the remote system Azure AD Connect sync has connected to and reads and writes identity data from and to. ## <a name="connector-space"></a>Connector space Each connected data source is represented as a filtered subset of the objects and attributes in the connector space. This allows the synchronization service to operate locally without needing to contact the remote system when synchronizing the objects, and it restricts interaction to imports and exports only. When the data source and the connector are capable of providing a list of changes (a delta import), operational efficiency increases dramatically, since only changes since the last polling cycle are exchanged. The connector space isolates the connected data source from changes propagating automatically by requiring the connector to schedule imports and exports. This added insurance grants you peace of mind while testing, previewing, or confirming the next update. ## <a name="metaverse"></a>Metaverse The metaverse is the consolidated view of all joined identities from neighboring connector spaces. As identities are linked together and authority is assigned for various attributes through import flow mappings, the central metaverse object begins to aggregate information from multiple systems. From this object, attribute flow mappings carry information to outbound systems. Objects are created when an authoritative system projects them into the metaverse. As soon as all connections are removed, the metaverse object is deleted. Objects in the metaverse cannot be edited directly. All data in the object must be contributed through attribute flow. The metaverse maintains persistent connectors with each connector space.
These connectors do not require re-evaluation for each synchronization run. This means that Azure AD Connect sync does not have to locate the matching remote object each time. This avoids the need for costly agents to prevent changes to the attributes that would normally be responsible for correlating the objects. When discovering new data sources that may have pre-existing objects that need to be managed, Azure AD Connect sync uses a process called a join rule to evaluate potential candidates with which to establish a link. Once the link is established, this evaluation does not recur, and normal attribute flow can occur between the remote connected data source and the metaverse. ## <a name="provisioning"></a>Provisioning When an authoritative source projects a new object into the metaverse, a new connector space object can be created in another connector representing a downstream connected data source. This inherently establishes a link, and attribute flow can proceed bi-directionally. Whenever a rule determines that a new connector space object needs to be created, it is called provisioning. However, because this operation takes place only within the connector space, it is not carried over to the connected data source until an export is performed. ## <a name="additional-resources"></a>Additional Resources * [Azure AD Connect sync: Customize synchronization options](how-to-connect-sync-whatis.md) * [Integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md) <!--Image references--> [1]: ./media/active-directory-aadsync-technical-concepts/ic750598.png
79.189474
561
0.815499
por_Latn
0.999621
616b97a7de3522a8f0e25c31d90537e2bccb583c
1,191
md
Markdown
content/en/Documentation/basics/manuals_linux-basics_finding-things.md
apark064/hpcc_new_site
e3ead06386fa3e9ded4d3df45bdfdea1ddd5f056
[ "Apache-2.0" ]
1
2021-05-26T23:55:51.000Z
2021-05-26T23:55:51.000Z
content/en/Documentation/basics/manuals_linux-basics_finding-things.md
apark064/hpcc_new_site
e3ead06386fa3e9ded4d3df45bdfdea1ddd5f056
[ "Apache-2.0" ]
null
null
null
content/en/Documentation/basics/manuals_linux-basics_finding-things.md
apark064/hpcc_new_site
e3ead06386fa3e9ded4d3df45bdfdea1ddd5f056
[ "Apache-2.0" ]
2
2021-06-15T17:48:54.000Z
2021-06-25T18:18:39.000Z
--- layout: page title: Finding Things permalink: manuals_linux-basics_finding-things.html --- ## Find Files ```bash find ~ -name "*pattern*" # Searches for *pattern* in and below your home directory find ~ -iname "*pattern*" # Same as above, but case insensitive find ~ -type f -mtime -2 # Searches for files you have modified in the last two days ``` Useful `find` arguments: * `-user <userName>` * `-group <groupName>` * `-ctime <number of days ago changed>` * `-exec <command to run on each file> {} \;` ## Find Text ```bash grep "pattern" <FILENAME> # Provides lines in a file where "pattern" appears grep -H "pattern" # -H prints out file name in front of pattern find ~ -name "*.txt" -exec grep -H "pattern" {} \; # Search lines where "pattern" appears in files with names that end with ".txt" ``` ## Find Applications ```bash which <APPLICATION_NAME> # Location of application whereis <APPLICATION_NAME> # Searches for executables in set of directories rpm -qa | grep "pattern" # List all RPM packages and filter based on "pattern" ```
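The `find` arguments listed above compose with `-exec`; as a quick illustrative sketch (the scratch paths and file names here are hypothetical, not part of the guide), this chains `-name`, `-type`, `-mtime`, and `-exec grep`:

```bash
# Illustrative sketch: build a scratch directory, then combine find's filters
# with -exec (all /tmp/find_demo paths below are hypothetical examples)
mkdir -p /tmp/find_demo/sub
echo "hello pattern world" > /tmp/find_demo/notes.txt
echo "no match here" > /tmp/find_demo/sub/other.txt

# Regular .txt files modified in the last two days, grepping each for "pattern"
# (-H prints the matching file's name in front of the line)
find /tmp/find_demo -name "*.txt" -type f -mtime -2 -exec grep -H "pattern" {} \;
# → /tmp/find_demo/notes.txt:hello pattern world

rm -rf /tmp/find_demo   # clean up the scratch directory
```

Note that `grep` returning no match for a file does not stop `find`; each matching file is processed independently.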
32.189189
134
0.618808
eng_Latn
0.976275
616d58b3bd321d01716c2599852c0ce61c31c5d4
55
md
Markdown
README.md
studiome/flutter_webrtc_sample
fb11e28b9a3918f816a19b55be5c342e9dbf77b9
[ "MIT" ]
null
null
null
README.md
studiome/flutter_webrtc_sample
fb11e28b9a3918f816a19b55be5c342e9dbf77b9
[ "MIT" ]
null
null
null
README.md
studiome/flutter_webrtc_sample
fb11e28b9a3918f816a19b55be5c342e9dbf77b9
[ "MIT" ]
null
null
null
# flutter_webrtc_sample WebRTC Sample for Flutter Web
13.75
29
0.836364
eng_Latn
0.399365
616e6e7275db0b33a83223861a6597843070b020
2,158
md
Markdown
model/communication/README.md
HaydarAk/InformationModel
d8b38b2691972ad8003ce211f6fa83aeed636476
[ "Apache-2.0" ]
30
2020-01-17T08:18:50.000Z
2022-03-08T22:29:43.000Z
model/communication/README.md
HaydarAk/InformationModel
d8b38b2691972ad8003ce211f6fa83aeed636476
[ "Apache-2.0" ]
245
2020-04-03T16:06:21.000Z
2022-03-31T13:22:53.000Z
model/communication/README.md
HaydarAk/InformationModel
d8b38b2691972ad8003ce211f6fa83aeed636476
[ "Apache-2.0" ]
19
2020-04-05T07:00:30.000Z
2021-12-09T08:42:34.000Z
# Communication Module Elements of the *Communication* module allow for describing (non)interactive exchange of digital content and interactions with processing logic (Data Apps). The `Endpoint` represents an individual point of content exchange. It is defined in terms of the underlying communication protocol by its `Host` and a unique `path` within the address space of the `Host`. In the simplest case there is a static `Endpoint` exposed by a `Resource` that points to a content `Artifact`, making it available for download. <div align="center"><img alt="Concerns hexagon" src="../../images/Endpoint_Artifact.jpg" width="60%" /></div> Interactive communication depends on client input in order to generate the content. `Operation`(s) are the basic interaction primitives (atomic services). `Parameter`(s) are named slots defined by an `Operation` in order to communicate the client *input*, server *output* or a *fault*. The `Message Exchange Pattern` (MEP) of the `Operation` governs the order and cardinality of `Parameter` supply (e.g. `ROBUST_IN_ONLY`). Related `Operation`s are grouped into `Interface`s. `Resource`s define `Interface`s in order to make the interactions involved in content exchange explicit and perceivable by the client in an abstract, protocol-agnostic way. <div align="center"><img alt="Concerns hexagon" src="../../images/Interface_Model.jpg" height="500px" /></div> `Message`s are a means to invoke `Operation`s (`InvokeOperationMessage`) or mediate content (`ArtifactRequest`) in an abstract, protocol-agnostic way comparable to SOAP-based communication. They are delivered to and handled by the default, conventional handlers of the Connector runtime. `Operation Bindings`, on the other hand, express how the `Operation` signature, i.e. the name and its parameters, maps to structures of a communication protocol. This task is currently delegated to well-established API description documents (e.g. OpenAPI).
A `Resource` may expose interactive `Endpoint`s that link to such bindings (optional, greyed out part). <div align="center"><img alt="Concerns hexagon" src="../../images/InteractiveEndpoint.jpg" width="60%" /></div>
179.833333
757
0.776645
eng_Latn
0.988898
616e935f3c21aa7cb69b785c35e475048b27add2
1,228
md
Markdown
README.md
cliv/hubot-dynamodb-brain
a07bacd3badea7d09b7fcb96010e321f94982cb5
[ "MIT" ]
null
null
null
README.md
cliv/hubot-dynamodb-brain
a07bacd3badea7d09b7fcb96010e321f94982cb5
[ "MIT" ]
null
null
null
README.md
cliv/hubot-dynamodb-brain
a07bacd3badea7d09b7fcb96010e321f94982cb5
[ "MIT" ]
null
null
null
# hubot-dynamodb-brain A hubot brain built on DynamoDB See [`src/dynamodb-brain.coffee`](src/dynamodb-brain.coffee) for full documentation. ## Installation In your hubot project repo, run: `npm install hubot-dynamodb-brain --save` Then add **hubot-dynamodb-brain** to your `external-scripts.json`: ```json [ "hubot-dynamodb-brain" ] ``` ## Defaults This brain is written with the assumption that the hubot instance is running on an AWS instance, with an instance role that allows access to the 'hubotbrain' table in the us-east-2 region. Environment variables are available to override these defaults. The following CloudFormation YAML will create a suitable table in DynamoDB: ```yaml HubotBrainDynamo: Type: "AWS::DynamoDB::Table" Properties: TableName: hubotbrain AttributeDefinitions: - AttributeName: "botname" AttributeType: "S" KeySchema: - AttributeName: "botname" KeyType: "HASH" ProvisionedThroughput: ReadCapacityUnits: 1 WriteCapacityUnits: 1 ``` ## Sample Interaction ``` user1>> hubot brainscan hubot>> My brain is doing great! ``` ## Local Testing Run the following to test locally: `docker-compose up --abort-on-container-exit`
23.615385
239
0.707655
eng_Latn
0.902834
616eb83ea2d8d7c3f5a27ea4395ed0c7b8353580
3,667
md
Markdown
addons/README.md
KaiBirkenstock/ionic-text-mask
123e25f7f341ac900824076636fb79993a2d729a
[ "Unlicense" ]
1
2017-07-13T19:58:13.000Z
2017-07-13T19:58:13.000Z
addons/README.md
saulocastrolp/text-mask
0b0b57740c5881ab3cc58fc7135028d644853c66
[ "Unlicense" ]
null
null
null
addons/README.md
saulocastrolp/text-mask
0b0b57740c5881ab3cc58fc7135028d644853c66
[ "Unlicense" ]
3
2020-05-06T08:33:47.000Z
2021-01-16T23:50:18.000Z
# Text Mask Addons These addons are ready-to-use pipes and masks that can be used with Text Mask. ## Installation ```bash npm i text-mask-addons --save ``` ## Masks These can be passed as a [`mask`](https://github.com/text-mask/text-mask/blob/master/componentDocumentation.md#mask) to Text Mask. ### `createNumberMask` `createNumberMask` returns a `numberMask` function that will format user input as currency. `createNumberMask` accepts an object with the following keys: 1. `prefix` (string): what to display before the amount. Defaults to `'$'`. 1. `suffix` (string): what to display after the amount. Defaults to an empty string. 1. `includeThousandsSeparator` (boolean): whether or not to separate thousands. Defaults to `true`. 1. `thousandsSeparatorSymbol` (string): character with which to separate thousands. Defaults to `','`. 1. `allowDecimal` (boolean): whether or not to allow the user to enter a fraction with the amount. Defaults to `false`. 1. `decimalSymbol` (string): character that will act as a decimal point. Defaults to `'.'`. 1. `decimalLimit` (number): how many digits to allow after the decimal. Defaults to `2`. 1. `integerLimit` (number): limit the length of the integer number. Defaults to `null` (unlimited). 1. `requireDecimal` (boolean): whether or not to always include a decimal point and placeholder for decimal digits after the integer. Defaults to `false`. 1. `allowNegative` (boolean): whether or not to allow negative numbers. Defaults to `false`. 1. `allowLeadingZeroes` (boolean): whether or not to allow leading zeroes. Defaults to `false`. #### Usage ```js import createNumberMask from 'text-mask-addons/dist/createNumberMask' // First, you need to create the `numberMask` with your desired configurations const numberMask = createNumberMask({ prefix: '', suffix: ' $' // This will put the dollar sign at the end, with a space.
}) // ...then pass `numberMask` to the Text Mask component as the mask ``` ### `emailMask` `emailMask` formats user input as an email address. #### Usage ```js import emailMask from 'text-mask-addons/dist/emailMask' // ...then pass `emailMask` to the Text Mask component as the mask ``` *Technical side note*: even though `emailMask` is passed as a `mask`, it is actually made of both a `mask` and a `pipe` bundled together for convenience. The Text Mask component knows how to unwrap and separate the `pipe` and `mask` functions to use them. ## Pipes These functions here can be passed as a [`pipe`](https://github.com/text-mask/text-mask/blob/master/componentDocumentation.md#pipe) to Text Mask. ### `createAutoCorrectedDatePipe` The `createAutoCorrectedDatePipe` returns a `autoCorrectedDatePipe`, which can help the user in entering a date. For example, if the user enters a value larger than `1` in the 1st slot of month, it appends `0` to it. That is `4` => `04`. It does a similar thing for the day slots. It also blocks the user from entering invalid days or months such as `33/44`. For `createAutoCorrectedDatePipe` to work properly, the Text Mask component needs to be configured with [`keepCharPositions`](https://github.com/text-mask/text-mask/blob/master/componentDocumentation.md#keepcharpositions) set to `true`. #### Usage ```js import createAutoCorrectedDatePipe from 'text-mask-addons/dist/createAutoCorrectedDatePipe' const autoCorrectedDatePipe = createAutoCorrectedDatePipe('mm/dd/yyyy') // As you can see in the line above, you can pass a string argument to `createAutoCorrectedDatePipe` // to give it the order of day, month, and year in your `mask`. // ...now you can pass `autoCorrectedDatePipe` to the Text Mask component as the `pipe` ```
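As a closing illustration of the kind of formatted output a `createNumberMask`-style configuration produces, here is a minimal, dependency-free sketch of thousands-separator grouping. This is not the library's implementation (the real `numberMask` returns a mask array for Text Mask rather than a string); only the option names `prefix` and `thousandsSeparatorSymbol` are borrowed from the list above.

```javascript
// Minimal sketch: group the digits of a numeric string with a thousands
// separator, mimicking the visual effect of createNumberMask's defaults.
// Illustration only; not the text-mask-addons implementation.
function formatWithSeparators(digits, { prefix = '$', thousandsSeparatorSymbol = ',' } = {}) {
  const cleaned = digits.replace(/\D/g, '') // keep digits only
  // Insert the separator before every group of three digits (from the right)
  const grouped = cleaned.replace(/\B(?=(\d{3})+(?!\d))/g, thousandsSeparatorSymbol)
  return prefix + grouped
}

console.log(formatWithSeparators('1234567'))
console.log(formatWithSeparators('9000', { prefix: '', thousandsSeparatorSymbol: '.' }))
```

A sketch like this is only about display; the real addon additionally constrains which characters the user can type in the first place.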
38.197917
128
0.749114
eng_Latn
0.989563
61706c40a544d1fbae2b960ac8ea0369f59dc9c5
919
md
Markdown
README.md
QB4-dev/esp_nano_httpd_basic_example
3d5c206ed05614f148c7b90c6cb9113cc29257b8
[ "MIT" ]
2
2019-01-04T21:54:22.000Z
2019-12-16T19:23:36.000Z
README.md
QB4-dev/esp_nano_httpd_basic_example
3d5c206ed05614f148c7b90c6cb9113cc29257b8
[ "MIT" ]
1
2018-04-09T21:57:18.000Z
2019-02-14T21:00:05.000Z
README.md
QB4-dev/esp_nano_httpd_basic_example
3d5c206ed05614f148c7b90c6cb9113cc29257b8
[ "MIT" ]
null
null
null
# esp_nano_httpd_basic_example

Clone this repo with the `--recursive` option to download the __esp_nano_httpd__ submodule:

`git clone https://github.com/QB4-dev/esp_nano_httpd_basic_example --recursive`

Check the SDK and toolchain paths in the Makefile:

```Makefile
# base directory for the compiler
XTENSA_TOOLS_ROOT ?= /opt/esp-open-sdk/xtensa-lx106-elf/bin

# base directory of the ESP8266 SDK package, absolute
SDK_BASE ?= /opt/esp-open-sdk/sdk
```

Compile it using:

`make`

And flash it onto the ESP8266 device:

`make flash`

Then use your smartphone or computer to scan for WiFi networks. An `ESP-LED-XXXX` network with no password should appear. Connect to it, and type the `http://192.168.4.1` address into a web browser. Set your router SSID and password and try to access your device from your router's network.

Also check the LED demo: it's a simple example web interface to configure the LED blink frequency.

![ESP8266 connected to router](/img.png)
34.037037
107
0.772579
eng_Latn
0.865943
6171a83b8bf8a3adee8c4216d9b7b1165830169b
49
md
Markdown
README.md
Maxbro1922/SSSR
1871e9dd847f06713bba69ed048ff3e6bb1c0b76
[ "CC0-1.0" ]
null
null
null
README.md
Maxbro1922/SSSR
1871e9dd847f06713bba69ed048ff3e6bb1c0b76
[ "CC0-1.0" ]
null
null
null
README.md
Maxbro1922/SSSR
1871e9dd847f06713bba69ed048ff3e6bb1c0b76
[ "CC0-1.0" ]
null
null
null
# SSSR

Union of Soviet Socialist Republics
16.333333
41
0.857143
rus_Cyrl
0.677763
6171f9059630c41bff6a08f5a5d46bdbac53e808
2,139
md
Markdown
articles/connectors/connectors-create-api-sharepointonline.md
yhs666/mc-docs.zh-cn
a9ca0759e37f213ee8f2c8a3e792cf098ca7b6fd
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/connectors/connectors-create-api-sharepointonline.md
yhs666/mc-docs.zh-cn
a9ca0759e37f213ee8f2c8a3e792cf098ca7b6fd
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/connectors/connectors-create-api-sharepointonline.md
yhs666/mc-docs.zh-cn
a9ca0759e37f213ee8f2c8a3e792cf098ca7b6fd
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Learn how to use the SharePoint Online connector in logic apps
description: Create logic apps with the SharePoint Online connector to manage lists on SharePoint.
services: logic-apps
documentationcenter: .net,nodejs,java
author: MandiOhlinger
manager: anneta
editor: ''
tags: connectors
ms.assetid: e0ec3149-507a-409d-8e7b-d5fbded006ce
ms.service: logic-apps
ms.devlang: multiple
ms.topic: article
ms.tgt_pltfrm: na
ms.workload: integration
origin.date: 07/19/2016
ms.author: v-yiso
ms.date: 03/26/2018
ms.openlocfilehash: fc0967902b7ca85483b595b767ed430d3584f2db
ms.sourcegitcommit: 3a9c13eb4b4bcddd1eabca22507476fb34f89405
ms.translationtype: HT
ms.contentlocale: zh-CN
ms.lasthandoff: 11/26/2019
ms.locfileid: "74528415"
---
# <a name="get-started-with-the-sharepoint-online-connector"></a>Get started with the SharePoint Online connector

Use the SharePoint Online connector to manage SharePoint lists.

To use [any connector](apis-list.md), you first need to create a logic app. You can get started by [creating a logic app now](../logic-apps/quickstart-create-first-logic-app-workflow.md).

## <a name="connect-to-sharepoint-online"></a>Connect to SharePoint Online

Before a logic app can access any service, you must first create a *connection* to that service. A [connection](connectors-overview.md) provides connectivity between a logic app and another service.

<!-- ### Create a connection to SharePoint Online

> [!INCLUDE [Steps to create a connection to SharePoint](../../includes/connectors-create-api-sharepointonline.md)]

## Use a SharePoint Online trigger

A trigger is an event that can be used to start the workflow defined in a logic app. [Learn more about triggers](../logic-apps/logic-apps-overview.md#logic-app-concepts).

> [!INCLUDE [Steps to create a SharePoint Online trigger](../../includes/connectors-create-api-sharepointonline-trigger.md)]

## Use a SharePoint Online action

An action is an operation carried out by the workflow defined in a logic app. [Learn more about actions](../logic-apps/logic-apps-overview.md#logic-app-concepts).

> [!INCLUDE [Steps to create a SharePoint Online action](../../includes/connectors-create-api-sharepointonline-action.md)] -->

## <a name="connector-specific-details"></a>Connector-specific details

View any triggers and actions defined in the Swagger, and also review any limits, in the [connector details](/connectors/sharepoint/).

## <a name="next-steps"></a>Next steps

[Create a logic app](../logic-apps/quickstart-create-first-logic-app-workflow.md)
37.526316
172
0.774661
eng_Latn
0.526692
617231ceab524baaa6781baa18116401e79a995f
1,054
md
Markdown
docs/framework/wcf/diagnostics/tracing/system-servicemodel-channels-socketconnectioncreate.md
CharleyGui/docs.fr-fr
2563c94abf0d041d775f700b552d1dbe199f03d5
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/framework/wcf/diagnostics/tracing/system-servicemodel-channels-socketconnectioncreate.md
CharleyGui/docs.fr-fr
2563c94abf0d041d775f700b552d1dbe199f03d5
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/framework/wcf/diagnostics/tracing/system-servicemodel-channels-socketconnectioncreate.md
CharleyGui/docs.fr-fr
2563c94abf0d041d775f700b552d1dbe199f03d5
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: System.ServiceModel.Channels.SocketConnectionCreate
ms.date: 03/30/2017
ms.assetid: 707015f1-a41f-4a42-b74e-a19677e2517b
ms.openlocfilehash: 475301cc828d0996b97394cdd99d6b490abd22bb
ms.sourcegitcommit: bc293b14af795e0e999e3304dd40c0222cf2ffe4
ms.translationtype: MT
ms.contentlocale: fr-FR
ms.lasthandoff: 11/26/2020
ms.locfileid: "96291880"
---
# <a name="systemservicemodelchannelssocketconnectioncreate"></a>System.ServiceModel.Channels.SocketConnectionCreate

System.ServiceModel.Channels.SocketConnectionCreate

## <a name="description"></a>Description

This trace is emitted for the first process activity performed by the client and for the activity corresponding to receiving bytes from the service. It provides the local and remote IP addresses. It is emitted at the Information level.

## <a name="see-also"></a>See also

- [Tracing](index.md)
- [Using Tracing to Troubleshoot Your Application](using-tracing-to-troubleshoot-your-application.md)
- [Administration and Diagnostics](../index.md)
42.16
237
0.803605
fra_Latn
0.554821
6172b2b88dd4089106176b677a854e56696f8e14
3,688
md
Markdown
webapp/notes/chapter2/section2.2.md
seamong/myWebsite
d696e996f8df03b89f6d31249559aa7fb8ebb61a
[ "MIT" ]
null
null
null
webapp/notes/chapter2/section2.2.md
seamong/myWebsite
d696e996f8df03b89f6d31249559aa7fb8ebb61a
[ "MIT" ]
null
null
null
webapp/notes/chapter2/section2.2.md
seamong/myWebsite
d696e996f8df03b89f6d31249559aa7fb8ebb61a
[ "MIT" ]
null
null
null
# CSS: a circle in 12 equal parts

## Overview

The principle behind a circle divided into 12 equal parts is actually very simple: as long as you follow a consistent pattern, it is easy to build.

## Code

HTML part:

```html
<div class="circle">
  <!-- 1 -->
  <div class="pie">
    <div class="pie_div">
      <div>
        <img src="http://file.ituring.com.cn/SmallCover/1708febcfca2b7d1f8ad" alt="">
      </div>
    </div>
    <div class="pie_div">
      <div></div>
    </div>
    <div class="pie_div">
      <div></div>
    </div>
  </div>
  <!-- 2 -->
  <div class="pie">
    <div class="pie_div">
      <div></div>
    </div>
    <div class="pie_div">
      <div></div>
    </div>
    <div class="pie_div">
      <div></div>
    </div>
  </div>
  <!-- 3 -->
  <div class="pie">
    <div class="pie_div">
      <div>♈</div>
    </div>
    <div class="pie_div">
      <div></div>
    </div>
    <div class="pie_div">
      <div></div>
    </div>
  </div>
  <!-- 4 -->
  <div class="pie">
    <div class="pie_div">
      <div></div>
    </div>
    <div class="pie_div">
      <div></div>
    </div>
    <div class="pie_div">
      <div></div>
    </div>
  </div>
</div>
```

Less part:

```less
.circle {
  width: 100vw;
  height: 100vw;
  .pie {
    width: 50vw;
    height: 50vw;
    overflow: hidden;
    position: relative;
    float: left;
    .pie_div {
      width: 100%;
      height: 100%;
      position: absolute;
      &:nth-child(1) {
        transform: rotate(0deg);
        // background-color: rgb(209, 32, 32);
      }
      &:nth-child(2) {
        transform: rotate(30deg);
        // background-color: rgb(10, 250, 150);
        div {
          // background-color: rgb(63, 8, 97);
        }
      }
      &:nth-child(3) {
        transform: rotate(60deg);
        // background-color: rgb(11, 23, 197);
      }
      & > div {
        width: 50%;
        height: 50%;
        position: absolute;
        border: 5px solid #bb1414;
        // background-color: rgb(129, 20, 202);
      }
    }
    &:nth-child(1) {
      // background-color: aqua;
      .pie_div {
        border-top-left-radius: 100%;
        transform-origin: bottom right;
        & > div {
          bottom: 0;
          right: 0;
          border-right: 0;
          border-top: 0;
          border-top-left-radius: 100%;
          transform-origin: bottom right;
          img {
            width: 5vw;
            position: absolute;
            bottom: 1vw;
            right: 15vw;
          }
        }
      }
    }
    &:nth-child(2) {
      // background-color: rgb(255, 230, 0);
      .pie_div {
        border-top-right-radius: 100%;
        transform-origin: bottom left;
        & > div {
          bottom: 0;
          left: 0;
          border-right: 0;
          border-bottom: 0;
          border-top-right-radius: 100%;
          transform-origin: bottom left;
        }
      }
    }
    &:nth-child(3) {
      // background-color: rgb(197, 22, 168);
      .pie_div {
        border-bottom-left-radius: 100%;
        transform-origin: top right;
        & > div {
          top: 0;
          right: 0;
          border-left: 0;
          border-top: 0;
          border-bottom-left-radius: 100%;
          transform-origin: top right;
        }
      }
    }
    &:nth-child(4) {
      // background-color: rgb(37, 185, 136);
      .pie_div {
        border-bottom-right-radius: 100%;
        transform-origin: top left;
        & > div {
          top: 0;
          left: 0;
          border-left: 0;
          border-bottom: 0;
          border-bottom-right-radius: 100%;
          transform-origin: top left;
        }
      }
    }
  }
}
```

## References

[More than twenty basic shapes implemented with CSS3](https://www.cnblogs.com/miragele/p/5761099.html)
21.317919
87
0.454718
ita_Latn
0.087345
6173391215880f7c312ec5a57add8ab04a5349e0
17,684
md
Markdown
dox/GitProcedure.md
keipertk/DeveloperTools
049680aa67d3e4bd033ad091b98c0822d811443b
[ "Apache-2.0" ]
null
null
null
dox/GitProcedure.md
keipertk/DeveloperTools
049680aa67d3e4bd033ad091b98c0822d811443b
[ "Apache-2.0" ]
null
null
null
dox/GitProcedure.md
keipertk/DeveloperTools
049680aa67d3e4bd033ad091b98c0822d811443b
[ "Apache-2.0" ]
null
null
null
Git and GitHub Procedures
=========================

The purpose of this page is to document typical Git and GitHub workflows and to serve as a cheat sheet for remembering the commands. This page only covers the basic aspects of Git version control and how one uses GitHub to contribute code.

Contents
--------

1. [Git and GitHub Background](#git-and-github-background)
2. [Common Git Commands](#common-git-commands)
3. [GitHub Workflow](#github-workflow)
4. [Further Information](#further-information)

Git and GitHub Background
-------------------------

As a code base evolves it will have existed in many versions. Almost all modern codes use some form of what is termed "version control" (VC) to manage these versions. Rather than attempting to do VC manually, developers typically employ some form of VC software. Broadly speaking, VC software provides two main services:

1. Management of a code's history
   - Sophisticated "undo" feature
2. Merging disparate contributions into one code base
   - Manually merging contributions from different developers is error-prone
   - Good VC can merge code automatically by utilizing the common code history

Superficially, all VC software follows a similar workflow. When a developer wants to work on a version-controlled code base (called a repo), they:

1. Copy the repo
2. Modify the copy
3. Merge the copy back with the repo

Exactly how this process is done depends on the model adopted by the VC software managing the repo. Originally, most VC packages (like CVS and SVN) utilized what is called a centralized repo model. In a centralized model there is only one repo, and copies of that repo are intimately linked to the original repo. More modern VC packages, like Git and Mercurial, instead use what is called a distributed repo model. In this model, all copies of the original repo are perfectly legitimate repos in their own right. Conceptually this means that any notion of an "authoritative repo" is purely a social convention.
The distributed repo model has a number of advantages compared to the centralized repo model:

- Faster operations
  - No need to communicate with the original (typically remote) repo
  - Encourages the code base to be saved more often
- Easy collaboration
  - No need for messy three-way (original and two copies) synchronization
  - Only worry about copy-to-copy synchronization
- Copies have access to all VC commands
  - Makes it much easier to share changes without going through the original repo
- Each repo is essentially a "back-up"
  - The original repo's full history is in each copy

Given [GitHub](https://github.com/)'s current popularity as a social coding platform, it's easy to lose sight of GitHub's actual role in the VC process. GitHub itself is really nothing more than a website which hosts Git repos. Consequently, there's nothing special (from the perspective of Git) about the repo that lives on GitHub versus a copy of that repo living on any other computer. That said, given that GitHub is easily accessed by all developers and potential users, it's typical, by social convention, to treat the GitHub repo as the "official" repo. GitHub's popularity is largely fueled by the fact that, in addition to being a place to host Git repos, it also strives to encompass and simplify many other aspects of the development process (such as continuous integration and code review).

The following sections detail common Git commands and the typical GitHub workflow proposed for the NWChemEx project.

Common Git Commands
-------------------

For the purposes of this tutorial we'll assume that you're working with an existing Git repo (if you're not, the easiest way to make a repo is to do so on GitHub and then follow GitHub's prompts). Once you know which repo you want to work on, the first step is to get your own copy (the copy is termed a clone in Git lingo).
The command to clone a repo is:

~~~.git
git clone <path_to_repo> [<where_to_put_repo>]
~~~

This will check out a repo located at `path_to_repo` and optionally put it in a folder named `where_to_put_repo` (if you don't specify `where_to_put_repo`, the clone will be placed into a folder with the same name as the repo you are cloning). It's important to realize that `path_to_repo` can be either a file path, to say clone a repo on your internal network, or a website like GitHub. The remainder of the commands in this section assume you are inside the resulting directory (Git will try to access settings that are hidden in `.git/` folders and will complain if said folders don't exist).

Typically one thinks of the code itself as having a single state. This state evolves as features are added to the code. The timeline of the code's state is termed the "master branch" (history in Git is thought of as tree-like). By default, the clone you get only has the master branch. A widely adopted convention of the Git community (adherence to which will make your life easier long term) is that the master branch should always be deployable (*i.e.* work and be relatively bug-free). This convention is easy to meet if we always keep this branch clean (*i.e.* don't make your changes to it) and we keep it up to date. In an effort to keep the master branch clean, the first thing you should do is make a new branch. This is done by:

~~~.git
git checkout -b <branch_name>
~~~

where `branch_name` is the name of the branch you'll be working on. The `-b` flag tells Git to make the branch (Git will yell at you if the branch exists and you use the `-b` flag). The resulting branch starts a new timeline that diverges from the master branch's current state. All of your development will occur on this branch. Since the branch has diverged from the master branch, it is safe to routinely track your changes, even before they're ready to be merged back into master.
At this point you begin developing your great new feature on your new branch. As time goes by and you write more and more code, you'll reach a point where you'll want to save the branch's state with Git so that you can revert if something goes horribly wrong. To do this, you first have to tell Git what files you want to save:

~~~.git
git add <files_to_save>
~~~

where `files_to_save` is one or more files to save (Linux wild cards work; *e.g.* `git add *.cpp` will stage all C++ source files in the current directory). After this command, the state of the files is not saved yet (they are what is typically referred to as staged). The staging phase makes it easier for you to fine-tune what gets saved and what doesn't. You can run `git add` as many times as you want and keep amassing files to save. It's useful to note that you can get a wealth of information about the current repo's state via:

~~~.git
git status
~~~

Among other things, this command will tell you which files are not versioned, which versioned files are changed but not staged, and which versioned files are staged. Once you're happy with the set of staged files, you "commit" them via:

~~~.git
git commit -m "<message>"
~~~

This command will save all staged files to your branch and log the commit with some (hopefully descriptive) message (if you omit the `-m` flag and the message, it'll bring up your text editor of choice so that you can type one). After running this command your code's state is saved; however, the files are only saved to your current branch. They are not saved to any other branch (other branches notably including the master branch) or repo yet.

At some point you'll want to move your feature to another repo. Typically this other repo is the original repo you cloned. Because we are now attempting to partially synchronize two repos, there are many possibilities for how to do this.
In an effort to keep this simple, we note that 99.9% of the time, using the GitHub workflow laid out below, we want to synchronize a single branch of each repo. Moreover, we want to synchronize the same branch (that is, we typically will not be directly merging into master, as explained below). For simplicity we assume our current repo is on the branch we currently want to synchronize (if you're not, `git checkout <branch_to_synch>`) and all changed files have been committed. Before we can synchronize, we have to make sure we have all of the changes on the original repo's branch (if the original repo doesn't have this branch yet, *i.e.* your commit will create it, skip this step; as with most things, Git will yell at you if you attempt to synchronize with a non-existent branch or if that branch is ahead of yours). The command to "pull" the other branch's changes is:

~~~.git
git pull origin <branch_name>
~~~

`origin` is an alias Git automatically defines for you, which points to the original repo you cloned (obviously change origin if you're not synchronizing with the original repo). `branch_name` should be both the name of your current branch and the name of the other repo's branch.

It is possible for conflicts to occur at this point, so it's worth discussing them now. Git's pretty good about merging contributions from multiple developers automatically. Nevertheless, conflicts do occur. If a conflict occurs during a merge, you'll have to correct it manually. To do this, take note of the conflicting files (if you forget, you can get the list again by running `git status`). For each file you'll need to fix all conflicts contained within it. Within the file, Git will add three delimiters. The conflicting lines of code will start with a `<<<<<<< HEAD` delimiter and end with a `>>>>>>> branch_name` delimiter. In between these delimiters, `=======` will separate your changes (top half) from the other repo's changes (bottom half).
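For concreteness, a conflicted region in a file looks something like the following (the file contents and the branch name `feature-branch` are made up purely for illustration):

~~~
<<<<<<< HEAD
int max_iterations = 100;   // your change
=======
int max_iterations = 500;   // the other repo's change
>>>>>>> feature-branch
~~~

To resolve it, delete the three delimiter lines, keep (or hand-merge) the version you want, and then stage and commit the file as usual.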
To fix the conflict, you'll need to delete the delimiters and manually merge the changes. Once you've done that, you stage and then commit the file. Finally, once all conflicts are fixed (if any existed), you "push" your changes to the other repo:

~~~.git
git push origin <branch_name>
~~~

While it's essential to keep the master branch of your repo clean, it's also good practice to keep it synchronized with that of the repo you cloned (we'll get to why in a moment). Synchronization of the master branch is akin to the first half of the procedure we just outlined. First (assuming you're on your development branch and not the master branch), change to your master branch:

~~~.git
git checkout master
~~~

then pull the original repo's master branch via:

~~~.git
git pull origin master
~~~

Since you're following this tutorial, there'll be no problems with the merge and everything will go swimmingly. With your master branch up to date, you'll want to merge those changes into your active development branch. To do this, check out your development branch and run:

~~~.git
git merge master
~~~

This will merge the contents of your repo's master branch into your current branch. Depending on how master has changed, conflicts may occur; if they do, you simply deal with them as we did above.

GitHub Workflow
---------------

The commands from the previous section are complemented by several GitHub extensions, which we explain in this section. For the purposes of this tutorial, let's say you want to contribute to a very creatively named repo on GitHub called "GitHubRepo". Well, we've got two problems. First, the maintainers of "GitHubRepo" probably don't want you directly committing to their code base without them first looking at your contribution ("looking at" is typically automated to some extent). Hence, they'll need to pull your changes into a sandbox area and assess them before committing them.
This leads to the second problem: you probably don't want them accessing your computer. GitHub's solution is called forking. All it is is a fancy clone procedure. During forking, GitHub clones "GitHubRepo" to your account (thereby hosting the clone on GitHub itself). We'll call the resulting clone "GitHubFork". Basically, "GitHubFork" is a buffer repo that you both can access comfortably (in the spirit of Git itself, each fork is a legitimate GitHub repo and can itself be forked, which is great for allowing the workflow described here to be applied recursively for collaborations). As for how to fork: on "GitHubRepo"'s GitHub page, just click the fork button at the top.

After forking, the Git procedure continues like normal. You clone "GitHubFork" to your local machine and check out a new branch, preserving master. To save yourself some typing later, you'll want to define an alias for "GitHubRepo" (origin will be set to "GitHubFork"). Typically this alias is called "upstream". To make this alias, the command is:

~~~.git
git remote add upstream <path_to_original_repo>
~~~

It is polite at this point to notify the maintainers of "GitHubRepo" that you're going to work on this feature. To do this, you first push your development branch to "GitHubFork". Then on "GitHubFork"'s GitHub page you should see a box pop up with your branch's name and "compare and pull request" (if not, you can go to the branches tab and manually start a pull request). A pull request is just that: a request for the maintainers of "GitHubRepo" to pull the specified branch into their repository. Since you're opening this PR (that's short for pull request and is a very prevalent abbreviation on GitHub, so learn it) before finishing the code, it's customary to title the PR something like "[WIP] Descriptive Title". Here "WIP" stands for "work in progress" (again, a common abbreviation) and tells the maintainers that it's not ready yet.
You'll also need to provide a description of what your feature does (many repos will provide a template that you should fill out to the best of your ability). Starting the PR early is a good idea, as it provides you a means of getting feedback along the way, ranging from "don't bother doing this, we don't want your feature" to "that's great, let us know if there's any way we can help you get that implemented". It will also be the place where a code review (the maintainers of the repo look at your code and make comments on it) will occur. By starting early, the code review can be done in stages (assuming you regularly update "GitHubFork").

For the most part, the remainder of the development cycle is pretty standard. The big exception is staying synchronized with "GitHubRepo". Since other developers who contribute to "GitHubRepo" aren't going to be nice enough to push their changes to your fork of the repo, you can't just run the pull command from the last section. Hence, in order to stay up to date with "GitHubRepo", you'll need to pull changes from its master branch into your local master branch. The command is similar (and is the reason we defined the "upstream" alias):

~~~.git
git pull upstream master
~~~

With your local master branch synced, you'll then want to sync "GitHubFork"'s master branch. To do this, you push the local changes to "GitHubFork". The command is:

~~~.git
git push origin master
~~~

Although not strictly necessary, this step makes it easier for you to recover should anything go wrong. In particular, let's say you accidentally modify your local master branch. By ensuring your "GitHubFork" master branch is a clean copy of "GitHubRepo"'s at some point in its history, you can run (on your master branch):

~~~.git
git reset --hard origin/master
~~~

This command will delete all changes made to your current master branch and make it exactly equal to the state of "GitHubFork"'s master branch. YOU WILL ALMOST CERTAINLY LOSE WORK BY DOING THIS.
It's thus best to first check out a new branch that is a copy of the current master branch before executing this command.

Once you're done developing, you need to notify the "GitHubRepo" maintainers. This is typically done in two ways. First, the "[WIP]" tag is removed from the title of your PR. As this is easy to miss, you typically will also comment in the PR "r2g" (short for ready to go); comments are a lot harder to miss. At this point the ball's in the maintainers' court to accept your PR or provide additional feedback on things that need fixing (which, assuming you were pushing to "GitHubFork" regularly, will hopefully not be a long list). Once the PR is approved, either you or the maintainers will click on the "merge" button provided by GitHub and your code will be merged. That's it: your feature is merged, the PR is closed, and you can delete your branch. It is recommended that the contributor clicks merge in order to avoid premature merging (simply because the reviewer has accepted what's there doesn't mean that the contributor is done contributing via that PR).

The image below summarizes the discussion above. The left side of the red line is the GitHub "official" repo. On the right side of the red line are the repos that you (the developer) own. Above the black dotted line are the repos on GitHub, and below the black line are the repos that live locally on your own computer (or other computers you are using).

![alt text](github_workflow.png "GitHub Workflow")

Further Information
-------------------

There is much more to both GitHub and Git itself. The following is a collection of tutorials offering additional information on certain topics.

- [The Git Command](https://git-scm.com/docs/gittutorial)
- [GitHub Workflow](https://guides.github.com/introduction/flow/)
- [Forking a Repo on GitHub](https://guides.github.com/activities/forking/)
50.381766
83
0.763741
eng_Latn
0.999848
6173998ff9da8adae0b2d99c7ce320b5e96e03a7
6,000
md
Markdown
README.md
barryt2/data-accelerator
06e4a52ffde1ffc1ea01855f74bf7915dda825bc
[ "MIT" ]
null
null
null
README.md
barryt2/data-accelerator
06e4a52ffde1ffc1ea01855f74bf7915dda825bc
[ "MIT" ]
null
null
null
README.md
barryt2/data-accelerator
06e4a52ffde1ffc1ea01855f74bf7915dda825bc
[ "MIT" ]
null
null
null
### Data Accelerator for Apache Spark

|Flow| [![Build status](https://dev.azure.com/ms/data-accelerator/_apis/build/status/DataX.Flow?branchName=master)](https://dev.azure.com/ms/data-accelerator/_build/latest?definitionId=112&branchName=master) |Gateway| [![Build status](https://dev.azure.com/ms/data-accelerator/_apis/build/status/DataX.Gateway?branchName=master)](https://dev.azure.com/ms/data-accelerator/_build/latest?definitionId=113&branchName=master) |DataProcessing| [![Build status](https://dev.azure.com/ms/data-accelerator/_apis/build/status/DataX.Spark?branchName=master)](https://dev.azure.com/ms/data-accelerator/_build/latest?definitionId=116&branchName=master) |
|:---:|:-----:|:-----:|:-----:|:-----:|:-----:|
|**Metrics**| [![Build status](https://dev.azure.com/ms/data-accelerator/_apis/build/status/DataX.Metrics?branchName=master)](https://dev.azure.com/ms/data-accelerator/_build/latest?definitionId=114&branchName=master) |**SimulatedData**| [![Build status](https://dev.azure.com/ms/data-accelerator/_apis/build/status/DataX.SimulatedData?branchName=master)](https://dev.azure.com/ms/data-accelerator/_build/latest?definitionId=115&branchName=master) |**Website**| [![Build status](https://dev.azure.com/ms/data-accelerator/_apis/build/status/DataX.Web?branchName=master)](https://dev.azure.com/ms/data-accelerator/_build/latest?definitionId=117&branchName=master) |

[Data Accelerator](https://github.com/Microsoft/data-accelerator) for Apache Spark democratizes streaming big data using Spark by offering several key features, such as a no-code experience for setting up a data pipeline as well as a fast dev-test loop for creating complex logic. Our team has been using the project for two years within Microsoft for processing streamed data across many internal deployments, handling data volumes at Microsoft scale. It offers an easy-to-use platform to learn and evaluate streaming needs and requirements. We are thrilled to share this project with the wider community as open source!

<p align="center"><img style="float: center;" width="90%" src="https://github.com/Microsoft/data-accelerator/wiki/tutorials/images/readme3.PNG"></p>

[Data Accelerator](https://github.com/Microsoft/data-accelerator) offers three levels of experience:

- The first requires no code at all, using rules to create alerts on data content.
- The second allows you to quickly write a Spark SQL query with additions like LiveQuery, time windowing, an in-memory accumulator, and more.
- The third enables integrating custom code written in Scala or via Azure Functions.

You can get started locally on Windows, macOS, and Linux by following [these instructions](https://github.com/Microsoft/data-accelerator/wiki/Local-mode-with-Docker). <br/>
To deploy to Azure, you can use the ARM template; see the instructions to [deploy to Azure](https://github.com/Microsoft/data-accelerator/wiki/Cloud-deployment).<br/>

The [`data-accelerator`](https://github.com/Microsoft/data-accelerator/) repository contains everything needed to set up an end-to-end data pipeline. There are many ways you can participate in the project:

- [Submit bugs and requests](https://github.com/Microsoft/data-accelerator/issues)
- [Review code changes](https://github.com/microsoft/data-accelerator/pulls)
- [Review documentation](https://github.com/Microsoft/data-accelerator/wiki) and make updates ranging from typos to new content

# Getting Started

To unleash the full power of Data Accelerator, [deploy to Azure](https://github.com/Microsoft/data-accelerator/wiki/Cloud-deployment) and check the [cloud mode tutorials](https://github.com/Microsoft/data-accelerator/wiki/Tutorials#cloud-mode). We have also enabled a "hello world" experience that you can try out locally by running a Docker container. When running locally there are no dependencies on Azure; however, the functionality is very limited and is only there to give you a cursory overview of Data Accelerator. To run Data Accelerator locally, [deploy locally](https://github.com/Microsoft/data-accelerator/wiki/Local-mode-with-Docker) and then check out the [local mode tutorials](https://github.com/Microsoft/data-accelerator/wiki/Tutorials#local-mode).<br/>

Data Accelerator for Spark runs on the following:

- HDInsight with Spark 2.3
- Service Fabric (v6.4.637.9590) with
  - .NET Core 2.1
  - ASP.NET
- App Service with Node 10.6

See the [wiki](https://github.com/Microsoft/data-accelerator/wiki) pages for further information on how to build, diagnose, and maintain your data pipelines built using Data Accelerator for Spark.

# Contributing

If you are interested in fixing issues and contributing to the code base, we would love to partner with you. Try things out, join in the design conversations, and make pull requests.

* [Download the latest stable or in-development releases](https://github.com/Microsoft/data-accelerator/wiki)
* [Build and Debug Data Accelerator source code](CONTRIBUTING.md#build-and-run)

# Feedback

* Request new features on [GitHub](https://github.com/Microsoft/data-accelerator/blob/master/CONTRIBUTING.md)
* Open a new issue on [GitHub](https://github.com/Microsoft/data-accelerator/issues)
* Ask a question on [Stack Overflow](https://stackoverflow.com/questions/tagged/data-accelerator)
* Contact us: data-accelerator@microsoft.com
* Check out the [contributing page](CONTRIBUTING.md) to see the best places to log issues and start discussions.

Please also see our [Code of Conduct](CODE_OF_CONDUCT.md).

# Security issues

Security issues and bugs should be reported privately, via email, to the Microsoft Security Response Center (MSRC) at secure@microsoft.com. You should receive a response within 24 hours. If for some reason you do not, please follow up via email to ensure we received your original message. Further information, including the MSRC PGP key, can be found in the Security TechCenter.

# License

This repository is licensed under the [MIT](LICENSE) license.
101.694915
664
0.775833
eng_Latn
0.722156
617405e5ef354f8b002adaf5924d48feb2eb4616
526
md
Markdown
README.md
Rafael955/arcade-froggy
efc4f2fc54c1c7b6800058e322ea7b55b4855853
[ "MIT" ]
null
null
null
README.md
Rafael955/arcade-froggy
efc4f2fc54c1c7b6800058e322ea7b55b4855853
[ "MIT" ]
null
null
null
README.md
Rafael955/arcade-froggy
efc4f2fc54c1c7b6800058e322ea7b55b4855853
[ "MIT" ]
null
null
null
======================================================================

Arcade Game - Frogger Arcade

To play the game, download the repository and double-click frogger.html to open it in your browser, or visit: https://vigorous-mahavira-f0cd14.netlify.com/frogger

Game Instructions

1- Use the arrow keys to move the hero around the scene.
2- Dodge the enemies and try to reach the river at the end of the scene.
3- Have fun!

=======================================================================
27.684211
71
0.568441
por_Latn
0.906438
61742f55c56851c09b75f18c6aca1ebca5e27382
10,359
md
Markdown
articles/frontdoor/standard-premium/how-to-compression.md
ZetaPR/azure-docs.es-es
0e2bf787d1d9ab12065fcb1091a7f13b96c6f8a2
[ "CC-BY-4.0", "MIT" ]
66
2017-07-09T03:34:12.000Z
2022-03-05T21:27:20.000Z
articles/frontdoor/standard-premium/how-to-compression.md
ZetaPR/azure-docs.es-es
0e2bf787d1d9ab12065fcb1091a7f13b96c6f8a2
[ "CC-BY-4.0", "MIT" ]
671
2017-06-29T16:36:35.000Z
2021-12-03T16:34:03.000Z
articles/frontdoor/standard-premium/how-to-compression.md
ZetaPR/azure-docs.es-es
0e2bf787d1d9ab12065fcb1091a7f13b96c6f8a2
[ "CC-BY-4.0", "MIT" ]
171
2017-07-25T06:26:46.000Z
2022-03-23T09:07:10.000Z
---
title: Improve performance by compressing files in Azure Front Door Standard/Premium (Preview)
description: Learn how to improve file transfer speed and increase page-load performance by compressing your files in Azure Front Door.
services: front-door
author: duongau
ms.service: frontdoor
ms.topic: article
ms.date: 02/18/2021
ms.author: yuajia
ms.openlocfilehash: ca9c2e3b4e9873d4880385479b701d36c92238b0
ms.sourcegitcommit: 0770a7d91278043a83ccc597af25934854605e8b
ms.translationtype: HT
ms.contentlocale: es-ES
ms.lasthandoff: 09/13/2021
ms.locfileid: "124750191"
---

# <a name="improve-performance-by-compressing-files-in-azure-front-door-standardpremium-preview"></a>Improve performance by compressing files in Azure Front Door Standard/Premium (Preview)

> [!Note]
> This documentation is for Azure Front Door Standard/Premium (Preview). Looking for information on Azure Front Door? See [here](../front-door-overview.md).

File compression is an effective method for improving file transfer speed and increasing page-load performance. Compression reduces the size of the file before the server sends it. File compression reduces bandwidth costs and provides a better experience for your users.

> [!IMPORTANT]
> Azure Front Door Standard/Premium (Preview) is currently in public preview.
> This preview version is provided without a service-level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).

There are two ways to enable compression:

- You can enable compression on the origin server. In this case, Azure Front Door passes the compressed files through and delivers them to the clients that request them.
- You can enable compression directly on the Azure Front Door POP servers (*compression on the fly*). In this case, Azure Front Door compresses the files and sends them to the end users.

> [!NOTE]
> Range requests may be compressed into different sizes. Azure Front Door requires the content-length values to be the same for any HTTP GET request. If clients send byte-range requests with the `accept-encoding` header that cause the origin to respond with different content lengths, Azure Front Door returns a 503 error. You can disable compression on the origin/Azure Front Door, or create a Rules Set rule to remove `accept-encoding` from the request for byte-range requests.

> [!IMPORTANT]
> Azure Front Door configuration changes take up to 10 minutes to propagate throughout the network. If you're setting up compression for your CDN endpoint for the first time, consider waiting 1-2 hours before troubleshooting, to ensure the compression settings have propagated to all the POPs.

## <a name="enabling-compression"></a>Enabling compression

> [!Note]
> In Azure Front Door, compression is part of **Enable caching** on the route. You can use Azure Front Door compression only if you **Enable caching**.

You can enable compression in the following ways:

* During quick create: when you enable caching, you can enable compression.
* During custom create: enable caching and compression when adding a route.
* On the Endpoint Manager route.
* On the Optimization page.

### <a name="enable-compression-in-endpoint-manager"></a>Enable compression in Endpoint Manager

1. From the Azure Front Door Standard/Premium profile page, go to **Endpoint Manager** and select the endpoint you want to enable compression on.

1. Select **Edit endpoint**, and then select the **route** you want to enable compression on.

   :::image type="content" source="../media/how-to-compression/front-door-compression-endpoint-manager-1.png" alt-text="Screenshot of the Endpoint Manager landing page." lightbox="../media/how-to-compression/front-door-compression-endpoint-manager-1-expanded.png":::

1. Ensure **Enable caching** is checked, and then select the **Enable compression** checkbox.

   :::image type="content" source="../media/how-to-compression/front-door-compression-endpoint-manager-2.png" alt-text="Enable compression in Endpoint Manager.":::

1. To save the configuration, select **Update**.

### <a name="enable-compression-in-optimization"></a>Enable compression on the Optimization page

1. From the Azure Front Door Standard/Premium profile page, go to **Optimizations** under Settings. Expand the endpoint to see the list of routes.

1. Select the three dots next to the **route** that has compression *disabled*. Then select **Configure route**.

   :::image type="content" source="../media/how-to-compression/front-door-compression-optimization-1.png" alt-text="Screen to enable compression on the Optimization page." lightbox="../media/how-to-compression/front-door-compression-optimization-1-expanded.png":::

1. Ensure **Enable caching** is checked, and then select the **Enable compression** checkbox.

   :::image type="content" source="../media/how-to-compression/front-door-compression-endpoint-manager-2.png" alt-text="Screenshot of enabling compression in Endpoint Manager.":::

1. Select **Update**.

## <a name="modify-compression-content-type"></a>Modify compression content type

You can modify the default list of MIME types on the Optimizations page.

1. From the Azure Front Door Standard/Premium profile page, go to **Optimizations** under Settings. Then select the **route** that has compression *enabled*.

1. Select the three dots next to the **route** that has compression *enabled*. Then select **View Compressed file types**.

   :::image type="content" source="../media/how-to-compression/front-door-compression-edit-content-type.png" alt-text="Screenshot of the Optimization page." lightbox="../media/how-to-compression/front-door-compression-edit-content-type-expanded.png":::

1. Delete default formats or select **Add** to add new content types.

   :::image type="content" source="../media/how-to-compression/front-door-compression-edit-content-type-2.png" alt-text="Screenshot of the page to customize file compression.":::

1. Select **Save** to update the compression configuration.

## <a name="disabling-compression"></a>Disabling compression

You can disable compression in the following ways:

* On the Endpoint Manager route.
* On the Optimization page.

### <a name="disable-compression-in-endpoint-manager"></a>Disable compression in Endpoint Manager

1. From the Azure Front Door Standard/Premium profile page, go to **Endpoint Manager** under Settings. Select the endpoint you want to disable compression on.

1. Select **Edit endpoint**, and then select the **route** you want to disable compression on. Uncheck the **Enable compression** checkbox.

1. To save the configuration, select **Update**.

### <a name="disable-compression-in-optimizations"></a>Disable compression in Optimizations

1. From the Azure Front Door Standard/Premium profile page, go to **Optimizations** under Settings. Then select the **route** that has compression *enabled*.

1. Select the three dots next to the **route** that has compression *enabled*, and then select *Configure route*.

   :::image type="content" source="../media/how-to-compression/front-door-disable-compression-optimization.png" alt-text="Screenshot of the page to disable compression in Optimization.":::

1. Uncheck the **Enable compression** checkbox.

   :::image type="content" source="../media/how-to-compression/front-door-disable-compression-optimization-2.png" alt-text="Screenshot of the update route page to disable compression.":::

1. To save the configuration, select **Update**.

## <a name="compression-rules"></a>Compression rules

In Azure Front Door, only eligible files are compressed. To be eligible for compression, a file must:

* Be of a MIME type
* Be larger than 1 KB
* Be smaller than 8 MB

These profiles support the following compression encodings:

* gzip (GNU zip)
* brotli

If the request supports more than one compression type, brotli compression takes precedence.

When a request for an asset specifies gzip compression and the request results in a cache miss, Azure Front Door performs gzip compression of the asset directly on the POP server. Afterward, the compressed file is served from the cache.

If the origin uses Chunked Transfer Encoding (CTE) to send compressed data to the Azure Front Door POP, response sizes greater than 8 MB aren't supported.

## <a name="next-steps"></a>Next steps

- Learn how to configure your first [Rules Set](how-to-configure-rule-set.md).
- Learn more about [Rule Set match conditions](concept-rule-set-match-conditions.md).
- Learn more about [Azure Front Door Rule Set](concept-rule-set.md).
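The eligibility and precedence rules in the compression article above (a file must be of an eligible MIME type, larger than 1 KB and smaller than 8 MB; brotli wins over gzip when a request offers both) can be sketched as a small helper. This is an illustrative sketch only, not Azure Front Door's actual implementation; the function names and the sample MIME list are assumptions for demonstration.

```python
from typing import Optional

# Illustrative sketch of the compression rules described above -- NOT Azure
# Front Door's implementation. COMPRESSIBLE_MIME_TYPES is a small assumed
# subset; the real default content-type list is much longer.

COMPRESSIBLE_MIME_TYPES = {
    "text/html",
    "text/css",
    "text/plain",
    "application/javascript",
    "application/json",
}

MIN_SIZE = 1024             # file must be larger than 1 KB
MAX_SIZE = 8 * 1024 * 1024  # file must be smaller than 8 MB


def is_eligible(mime_type: str, size_bytes: int) -> bool:
    """A file is compressed only if its MIME type is on the list and its
    size falls strictly between 1 KB and 8 MB."""
    return mime_type in COMPRESSIBLE_MIME_TYPES and MIN_SIZE < size_bytes < MAX_SIZE


def pick_encoding(accept_encoding: str) -> Optional[str]:
    """Brotli takes precedence when the request supports both encodings."""
    offered = {token.split(";")[0].strip().lower()
               for token in accept_encoding.split(",")}
    if "br" in offered:
        return "br"      # brotli wins over gzip
    if "gzip" in offered:
        return "gzip"
    return None          # serve the asset uncompressed
```

For example, under these rules a 50 KB `text/css` response requested with `Accept-Encoding: gzip, br` would come back brotli-compressed, while a 512-byte response would be served as-is.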
71.9375
592
0.783763
spa_Latn
0.960612
61756ae5e08503a2cebb4320c7207ec9a0a81c40
36,360
md
Markdown
content/posts/2020-10-20---Hot-Papers.md
TatsuyaShirakawa/daily-arxiv-gatsby
4c1744c7f6f3eaa676310a5958ee71e126cf0c93
[ "MIT" ]
4
2020-09-02T16:13:06.000Z
2021-11-08T08:17:04.000Z
content/posts/2020-10-20---Hot-Papers.md
TatsuyaShirakawa/daily-arxiv-gatsby
4c1744c7f6f3eaa676310a5958ee71e126cf0c93
[ "MIT" ]
null
null
null
content/posts/2020-10-20---Hot-Papers.md
TatsuyaShirakawa/daily-arxiv-gatsby
4c1744c7f6f3eaa676310a5958ee71e126cf0c93
[ "MIT" ]
null
null
null
---
title: Hot Papers 2020-10-20
date: 2020-10-21T09:32:38Z
template: "post"
draft: false
slug: "hot-papers-2020-10-20"
category: "arXiv"
tags:
  - "arXiv"
  - "Twitter"
  - "Machine Learning"
  - "Computer Science"
description: "Hot papers 2020-10-20"
socialImage: "/media/flying-marine.jpg"
---

# 1. Interpretable Machine Learning -- A Brief History, State-of-the-Art and Challenges

Christoph Molnar, Giuseppe Casalicchio, Bernd Bischl

- retweets: 20484, favorites: 0 (10/21/2020 09:32:38)
- links: [abs](https://arxiv.org/abs/2010.09337) | [pdf](https://arxiv.org/pdf/2010.09337)
- [stat.ML](https://arxiv.org/list/stat.ML/recent) | [cs.LG](https://arxiv.org/list/cs.LG/recent)

We present a brief history of the field of interpretable machine learning (IML), give an overview of state-of-the-art interpretation methods, and discuss challenges. Research in IML has boomed in recent years. As young as the field is, it has 200-year-old roots in regression modeling and rule-based machine learning starting in the 1960s. Recently, many new IML methods have been proposed, many of them model-agnostic, but also interpretation techniques specific to deep learning and tree-based ensembles. IML methods either directly analyze model components, study sensitivity to input perturbations, or analyze local or global surrogate approximations of the ML model. The field approaches a state of readiness and stability, with many methods not only proposed in research, but also implemented in open-source software. But many important challenges remain for IML, such as dealing with dependent features, causal interpretation, and uncertainty estimation, which need to be resolved for its successful application to scientific problems. A further challenge is a missing rigorous definition of interpretability, which is accepted by the community. To address the challenges and advance the field, we urge to recall our roots of interpretable, data-driven modeling in statistics and (rule-based) ML, but also to consider other areas such as sensitivity analysis, causal inference, and the social sciences.

<blockquote class="twitter-tweet"><p lang="en" dir="ltr">We have a new paper on arxiv 🎉🎉<br>Interpretable Machine Learning - A Brief History, State-of-the-Art and Challenges.<br><br>Best to read in a comfortable chair with a cup of coffee/tea. It&#39;s an extended abstract to a keynote I gave at the ECML XKDD workshop. <a href="https://t.co/euATIkAetK">https://t.co/euATIkAetK</a></p>&mdash; Christoph Molnar (@ChristophMolnar) <a href="https://twitter.com/ChristophMolnar/status/1318445578943168512?ref_src=twsrc%5Etfw">October 20, 2020</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>

# 2. TensorFlow Lite Micro: Embedded Machine Learning on TinyML Systems

Robert David, Jared Duke, Advait Jain, Vijay Janapa Reddi, Nat Jeffries, Jian Li, Nick Kreeger, Ian Nappier, Meghna Natraj, Shlomi Regev, Rocky Rhodes, Tiezhen Wang, Pete Warden

- retweets: 2197, favorites: 180 (10/21/2020 09:32:38)
- links: [abs](https://arxiv.org/abs/2010.08678) | [pdf](https://arxiv.org/pdf/2010.08678)
- [cs.LG](https://arxiv.org/list/cs.LG/recent) | [cs.AI](https://arxiv.org/list/cs.AI/recent)

Deep learning inference on embedded devices is a burgeoning field with myriad applications because tiny embedded devices are omnipresent. But we must overcome major challenges before we can benefit from this opportunity. Embedded processors are severely resource constrained. Their nearest mobile counterparts exhibit at least a 100---1,000x difference in compute capability, memory availability, and power consumption. As a result, the machine-learning (ML) models and associated ML inference framework must not only execute efficiently but also operate in a few kilobytes of memory.
Also, the embedded devices' ecosystem is heavily fragmented. To maximize efficiency, system vendors often omit many features that commonly appear in mainstream systems, including dynamic memory allocation and virtual memory, that allow for cross-platform interoperability. The hardware comes in many flavors (e.g., instruction-set architecture and FPU support, or lack thereof). We introduce TensorFlow Lite Micro (TF Micro), an open-source ML inference framework for running deep-learning models on embedded systems. TF Micro tackles the efficiency requirements imposed by embedded-system resource constraints and the fragmentation challenges that make cross-platform interoperability nearly impossible. The framework adopts a unique interpreter-based approach that provides flexibility while overcoming these challenges. This paper explains the design decisions behind TF Micro and describes its implementation details. Also, we present an evaluation to demonstrate its low resource requirement and minimal run-time performance overhead.

<blockquote class="twitter-tweet"><p lang="en" dir="ltr">Thanks to an amazing team of authors, we&#39;ve just posted the official <a href="https://twitter.com/TensorFlow?ref_src=twsrc%5Etfw">@TensorFlow</a> Lite Micro paper on Arxiv: <a href="https://t.co/sqxoHPyd6q">https://t.co/sqxoHPyd6q</a><br>Lots of juicy details about the design and tradeoffs involved, what worked and what didn&#39;t!</p>&mdash; Pete Warden (@petewarden) <a href="https://twitter.com/petewarden/status/1318361349500461056?ref_src=twsrc%5Etfw">October 20, 2020</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>

# 3. Enabling Fast Differentially Private SGD via Just-in-Time Compilation and Vectorization

Pranav Subramani, Nicholas Vadivelu, Gautam Kamath

- retweets: 773, favorites: 147 (10/21/2020 09:32:38)
- links: [abs](https://arxiv.org/abs/2010.09063) | [pdf](https://arxiv.org/pdf/2010.09063)
- [cs.LG](https://arxiv.org/list/cs.LG/recent) | [cs.CR](https://arxiv.org/list/cs.CR/recent) | [cs.PF](https://arxiv.org/list/cs.PF/recent)

A common pain point in differentially private machine learning is the significant runtime overhead incurred when executing Differentially Private Stochastic Gradient Descent (DPSGD), which may be as large as two orders of magnitude. We thoroughly demonstrate that by exploiting powerful language primitives, including vectorization, just-in-time compilation, and static graph optimization, one can dramatically reduce these overheads, in many cases nearly matching the best non-private running times. These gains are realized in two frameworks: JAX and TensorFlow. JAX provides rich support for these primitives as core features of the language through the XLA compiler. We also rebuild core parts of TensorFlow Privacy, integrating features from TensorFlow 2 as well as XLA compilation, granting significant memory and runtime improvements over the current release version. These approaches allow us to achieve up to 50x speedups in comparison to the best alternatives. Our code is available at https://github.com/TheSalon/fast-dpsgd.

<blockquote class="twitter-tweet"><p lang="en" dir="ltr">Feeling blue that differentially private machine learning is slow?😢<br>Answer is simple: learn JAX! 😮<br>50x speedups vs TF Privacy &amp; Opacus! 🏎️<br><br>Project carried by <a href="https://twitter.com/PranavSubramani?ref_src=twsrc%5Etfw">@PranavSubramani</a> and <a href="https://twitter.com/nicvadivelu?ref_src=twsrc%5Etfw">@nicvadivelu</a>.<br>📝Paper: <a href="https://t.co/9Dxb8pdYRb">https://t.co/9Dxb8pdYRb</a><br>💻Code: <a href="https://t.co/P7dZgECVop">https://t.co/P7dZgECVop</a><br>🧵Thread ⬇️ 1/8 <a href="https://t.co/F65Hu9ssxZ">pic.twitter.com/F65Hu9ssxZ</a></p>&mdash; Gautam Kamath (@thegautamkamath) <a href="https://twitter.com/thegautamkamath/status/1318596827948535808?ref_src=twsrc%5Etfw">October 20, 2020</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>

<blockquote class="twitter-tweet"><p lang="en" dir="ltr">Happy to announce our work on enabling fast DPSGD is up on arxiv now <a href="https://t.co/pPEz5uvNfx">https://t.co/pPEz5uvNfx</a><br><br>Joint work with <a href="https://twitter.com/nicvadivelu?ref_src=twsrc%5Etfw">@nicvadivelu</a> and <a href="https://twitter.com/thegautamkamath?ref_src=twsrc%5Etfw">@thegautamkamath</a>.<br><br>The paper is focused on getting DPSGD to run fast for Machine Learning models in particular. 1/n</p>&mdash; Pranav Subramani (@PranavSubramani) <a href="https://twitter.com/PranavSubramani/status/1318356534225612801?ref_src=twsrc%5Etfw">October 20, 2020</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>

# 4. Against Scale: Provocations and Resistances to Scale Thinking

Alex Hanna, Tina M. Park

- retweets: 639, favorites: 168 (10/21/2020 09:32:38)
- links: [abs](https://arxiv.org/abs/2010.08850) | [pdf](https://arxiv.org/pdf/2010.08850)
- [cs.CY](https://arxiv.org/list/cs.CY/recent)

At the heart of what drives the bulk of innovation and activity in Silicon Valley and elsewhere is scalability. This unwavering commitment to scalability -- to identify strategies for efficient growth -- is at the heart of what we refer to as "scale thinking." Whether people are aware of it or not, scale thinking is all-encompassing. It is not just an attribute of one's product, service, or company, but frames how one thinks about the world (what constitutes it and how it can be observed and measured), its problems (what is a problem worth solving versus not), and the possible technological fixes for those problems. This paper examines different facets of scale thinking and its implication on how we view technology and collaborative work. We argue that technological solutions grounded in scale thinking are unlikely to be as liberatory or effective at deep, systemic change as their purveyors imagine. Rather, solutions which resist scale thinking are necessary to undo the social structures which lie at the heart of social inequality. We draw on recent work on mutual aid networks and propose questions to ask of collaborative work systems as a means to evaluate technological solutions and guide designers in identifying sites of resistance to scale thinking.

<blockquote class="twitter-tweet"><p lang="en" dir="ltr">New paper: <a href="https://twitter.com/teeepain?ref_src=twsrc%5Etfw">@teeepain</a> and I wrote a short piece for the recent <a href="https://twitter.com/hashtag/CSCW2020?src=hash&amp;ref_src=twsrc%5Etfw">#CSCW2020</a> workshop on Reconsidering Scale and Scaling, wherein we try to map the dimensions of &quot;scale thinking&quot; in Valley culture and map out resistances in mutual aid <a href="https://t.co/CetJ8WoM5C">https://t.co/CetJ8WoM5C</a></p>&mdash; Dr. Alex Hanna is just a witch, oh little old me (@alexhanna) <a href="https://twitter.com/alexhanna/status/1318356831878537216?ref_src=twsrc%5Etfw">October 20, 2020</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>

<blockquote class="twitter-tweet"><p lang="en" dir="ltr">Against scale: how to resist Silicon Valley scale thinking - &quot;solutions which resist scale thinking are necessary to undo the social structures which lie at the heart of social inequality&quot; <a href="https://t.co/xjYRlpypNM">https://t.co/xjYRlpypNM</a> ht <a href="https://twitter.com/gleemie?ref_src=twsrc%5Etfw">@gleemie</a> cc <a href="https://twitter.com/Gina_labs?ref_src=twsrc%5Etfw">@Gina_labs</a></p>&mdash; giulio quaggiotto (@gquaggiotto) <a href="https://twitter.com/gquaggiotto/status/1318452389586685953?ref_src=twsrc%5Etfw">October 20, 2020</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>

# 5. Reinforcement Learning for Efficient and Tuning-Free Link Adaptation

Vidit Saxena, Hugo Tullberg, Joakim Jaldén

- retweets: 512, favorites: 22 (10/21/2020 09:32:39)
- links: [abs](https://arxiv.org/abs/2010.08651) | [pdf](https://arxiv.org/pdf/2010.08651)
- [eess.SP](https://arxiv.org/list/eess.SP/recent) | [cs.IT](https://arxiv.org/list/cs.IT/recent) | [cs.LG](https://arxiv.org/list/cs.LG/recent)

Link adaptation (LA) optimizes the selection of modulation and coding schemes (MCS) for a stochastic wireless channel. The classical outer loop LA (OLLA) tracks the channel's signal-to-noise-and-interference ratio (SINR) based on the observed transmission outcomes. On the other hand, recent Reinforcement learning LA (RLLA) schemes sample the available MCSs to optimize the link performance objective. However, both OLLA and RLLA rely on tuning parameters that are challenging to configure.
Further, OLLA optimizes for a target block error rate (BLER) that only indirectly relates to the common throughput-maximization objective, while RLLA does not fully exploit the inter-dependence between the MCSs. In this paper, we propose latent Thompson Sampling for LA (LTSLA), a RLLA scheme that does not require configuration tuning, and which fully exploits MCS inter-dependence for efficient learning. LTSLA models an SINR probability distribution for MCS selection, and refines this distribution through Bayesian updates with the transmission outcomes. LTSLA also automatically adapts to different channel fading profiles by utilizing their respective Doppler estimates. We perform simulation studies of LTSLA along with OLLA and RLLA schemes for frequency selective fading channels. Numerical results demonstrate that LTSLA improves the instantaneous link throughout by up to 50% compared to existing schemes. <blockquote class="twitter-tweet"><p lang="en" dir="ltr">Reinforcement Learning for Efficient and Tuning-Free Link Adaptation. 
<a href="https://twitter.com/hashtag/MachineLearning?src=hash&amp;ref_src=twsrc%5Etfw">#MachineLearning</a> <a href="https://twitter.com/hashtag/AI?src=hash&amp;ref_src=twsrc%5Etfw">#AI</a> <a href="https://twitter.com/hashtag/BigData?src=hash&amp;ref_src=twsrc%5Etfw">#BigData</a> <a href="https://twitter.com/hashtag/Analytics?src=hash&amp;ref_src=twsrc%5Etfw">#Analytics</a> <a href="https://twitter.com/hashtag/RStats?src=hash&amp;ref_src=twsrc%5Etfw">#RStats</a> <a href="https://twitter.com/hashtag/Python?src=hash&amp;ref_src=twsrc%5Etfw">#Python</a> <a href="https://twitter.com/hashtag/Java?src=hash&amp;ref_src=twsrc%5Etfw">#Java</a> <a href="https://twitter.com/hashtag/JavaScript?src=hash&amp;ref_src=twsrc%5Etfw">#JavaScript</a> <a href="https://twitter.com/hashtag/ReactJS?src=hash&amp;ref_src=twsrc%5Etfw">#ReactJS</a> <a href="https://twitter.com/hashtag/Serverless?src=hash&amp;ref_src=twsrc%5Etfw">#Serverless</a> <a href="https://twitter.com/hashtag/IoT?src=hash&amp;ref_src=twsrc%5Etfw">#IoT</a> <a href="https://twitter.com/hashtag/Linux?src=hash&amp;ref_src=twsrc%5Etfw">#Linux</a> <a href="https://twitter.com/hashtag/Coding?src=hash&amp;ref_src=twsrc%5Etfw">#Coding</a> <a href="https://twitter.com/hashtag/100DaysOfCode?src=hash&amp;ref_src=twsrc%5Etfw">#100DaysOfCode</a> <a href="https://twitter.com/hashtag/Programming?src=hash&amp;ref_src=twsrc%5Etfw">#Programming</a> <a href="https://twitter.com/hashtag/Statistics?src=hash&amp;ref_src=twsrc%5Etfw">#Statistics</a> <a href="https://twitter.com/hashtag/DataScience?src=hash&amp;ref_src=twsrc%5Etfw">#DataScience</a> <a href="https://twitter.com/hashtag/DeepLearning?src=hash&amp;ref_src=twsrc%5Etfw">#DeepLearning</a> <a href="https://t.co/c5Qtf9UN9S">https://t.co/c5Qtf9UN9S</a> <a href="https://t.co/XrflsJshxz">pic.twitter.com/XrflsJshxz</a></p>&mdash; Marcus Borba (@marcusborba) <a href="https://twitter.com/marcusborba/status/1318692712233766914?ref_src=twsrc%5Etfw">October 20, 2020</a></blockquote> <script 
async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script> # 6. Poisoned classifiers are not only backdoored, they are fundamentally broken Mingjie Sun, Siddhant Agarwal, J. Zico Kolter - retweets: 140, favorites: 63 (10/21/2020 09:32:39) - links: [abs](https://arxiv.org/abs/2010.09080) | [pdf](https://arxiv.org/pdf/2010.09080) - [cs.LG](https://arxiv.org/list/cs.LG/recent) | [cs.CR](https://arxiv.org/list/cs.CR/recent) Under a commonly-studied "backdoor" poisoning attack against classification models, an attacker adds a small "trigger" to a subset of the training data, such that the presence of this trigger at test time causes the classifier to always predict some target class. It is often implicitly assumed that the poisoned classifier is vulnerable exclusively to the adversary who possesses the trigger. In this paper, we show empirically that this view of backdoored classifiers is fundamentally incorrect. We demonstrate that anyone with access to the classifier, even without access to any original training data or trigger, can construct several alternative triggers that are as effective or more so at eliciting the target class at test time. We construct these alternative triggers by first generating adversarial examples for a smoothed version of the classifier, created with a recent process called Denoised Smoothing, and then extracting colors or cropped portions of adversarial images. We demonstrate the effectiveness of our attack through extensive experiments on ImageNet and TrojAI datasets, including a user study which demonstrates that our method allows users to easily determine the existence of such backdoors in existing poisoned classifiers. Furthermore, we demonstrate that our alternative triggers can in fact look entirely different from the original trigger, highlighting that the backdoor actually learned by the classifier differs substantially from the trigger image itself. 
Thus, we argue that there is no such thing as a "secret" backdoor in poisoned classifiers: poisoning a classifier invites attacks not just by the party that possesses the trigger, but from anyone with access to the classifier. Code is available at https://github.com/locuslab/breaking-poisoned-classifier. <blockquote class="twitter-tweet"><p lang="en" dir="ltr">Is the backdoor secret? Checkout our new work on &#39;&#39;breaking&#39;&#39; poisoned classifiers, where we use neat ideas in adversarial robustness to analyze backdoored classifiers. Joint work with <a href="https://twitter.com/agsidd10?ref_src=twsrc%5Etfw">@agsidd10</a> &amp; <a href="https://twitter.com/zicokolter?ref_src=twsrc%5Etfw">@zicokolter</a>.<br><br>Paper: <a href="https://t.co/IRSS1q65Ky">https://t.co/IRSS1q65Ky</a><br>Code: <a href="https://t.co/KCDrek7FTP">https://t.co/KCDrek7FTP</a> <a href="https://t.co/yAyY8zsDD3">pic.twitter.com/yAyY8zsDD3</a></p>&mdash; Mingjie Sun (@Eric_jie_thu) <a href="https://twitter.com/Eric_jie_thu/status/1318567762407677952?ref_src=twsrc%5Etfw">October 20, 2020</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script> <blockquote class="twitter-tweet"><p lang="en" dir="ltr">Current backdoor poisoning attacks aren&#39;t just vulnerable to attackers with a &quot;secret trigger&quot;. They can easily be broken (creating a new trigger that is just as effective) given access to the classifier. 
New paper with <a href="https://twitter.com/Eric_jie_thu?ref_src=twsrc%5Etfw">@Eric_jie_thu</a> and <a href="https://twitter.com/agsidd10?ref_src=twsrc%5Etfw">@agsidd10</a>.<a href="https://t.co/afgPh8C2rE">https://t.co/afgPh8C2rE</a> <a href="https://t.co/EIY5alsXjx">https://t.co/EIY5alsXjx</a></p>&mdash; Zico Kolter (@zicokolter) <a href="https://twitter.com/zicokolter/status/1318576557808562178?ref_src=twsrc%5Etfw">October 20, 2020</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>

# 7. Light Stage Super-Resolution: Continuous High-Frequency Relighting

Tiancheng Sun, Zexiang Xu, Xiuming Zhang, Sean Fanello, Christoph Rhemann, Paul Debevec, Yun-Ta Tsai, Jonathan T. Barron, Ravi Ramamoorthi

- retweets: 110, favorites: 55 (10/21/2020 09:32:39)
- links: [abs](https://arxiv.org/abs/2010.08888) | [pdf](https://arxiv.org/pdf/2010.08888)
- [cs.GR](https://arxiv.org/list/cs.GR/recent) | [cs.CV](https://arxiv.org/list/cs.CV/recent)

The light stage has been widely used in computer graphics for the past two decades, primarily to enable the relighting of human faces. By capturing the appearance of the human subject under different light sources, one obtains the light transport matrix of that subject, which enables image-based relighting in novel environments. However, due to the finite number of lights in the stage, the light transport matrix only represents a sparse sampling on the entire sphere. As a consequence, relighting the subject with a point light or a directional source that does not coincide exactly with one of the lights in the stage requires interpolation and resampling the images corresponding to nearby lights, and this leads to ghosting shadows, aliased specularities, and other artifacts. To ameliorate these artifacts and produce better results under arbitrary high-frequency lighting, this paper proposes a learning-based solution for the "super-resolution" of scans of human faces taken from a light stage.
Given an arbitrary "query" light direction, our method aggregates the captured images corresponding to neighboring lights in the stage, and uses a neural network to synthesize a rendering of the face that appears to be illuminated by a "virtual" light source at the query location. This neural network must circumvent the inherent aliasing and regularity of the light stage data that was used for training, which we accomplish through the use of regularized traditional interpolation methods within our network. Our learned model is able to produce renderings for arbitrary light directions that exhibit realistic shadows and specular highlights, and is able to generalize across a wide variety of subjects.

<blockquote class="twitter-tweet"><p lang="en" dir="ltr">Light Stage Super-Resolution: Continuous High-Frequency Relighting<br>pdf: <a href="https://t.co/sQG9Hp98DH">https://t.co/sQG9Hp98DH</a><br>abs: <a href="https://t.co/pdvcGg9D5w">https://t.co/pdvcGg9D5w</a> <a href="https://t.co/2IoudaJE3n">pic.twitter.com/2IoudaJE3n</a></p>&mdash; AK (@ak92501) <a href="https://twitter.com/ak92501/status/1318393772661788673?ref_src=twsrc%5Etfw">October 20, 2020</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>

# 8. An individual-level ground truth dataset for home location detection

Luca Pappalardo, Leo Ferres, Manuel Sacasa, Ciro Cattuto, Loreto Bravo

- retweets: 134, favorites: 29 (10/21/2020 09:32:39)
- links: [abs](https://arxiv.org/abs/2010.08814) | [pdf](https://arxiv.org/pdf/2010.08814)
- [cs.CY](https://arxiv.org/list/cs.CY/recent) | [physics.soc-ph](https://arxiv.org/list/physics.soc-ph/recent)

Home detection, assigning a phone device to its home antenna, is a ubiquitous part of most studies in the literature on mobile phone data. Despite its widespread use, home detection relies on a few assumptions that are difficult to check without ground truth, i.e., where the individual that owns the device resides.
In this paper, we provide an unprecedented evaluation of the accuracy of home detection algorithms on a group of sixty-five participants for whom we know their exact home address and the antennas that might serve them. We analyze not only Call Detail Records (CDRs) but also two other mobile phone streams: eXtended Detail Records (XDRs, the "data" channel) and Control Plane Records (CPRs, the network stream). These data streams vary not only in their temporal granularity but also in their data-generation mechanism: e.g., CDRs are purely human-triggered while CPRs are purely machine-triggered events. Finally, we quantify the amount of data needed for each stream to carry out successful home detection. We find that the choice of stream and the algorithm heavily influences home detection, with an hour-of-day algorithm for the XDRs performing the best, and with CPRs performing best for the amount of data needed to perform home detection. Our work is useful for researchers and practitioners in order to minimize data requests and to maximize the accuracy of home antenna location.

<blockquote class="twitter-tweet"><p lang="en" dir="ltr">Good news, everyone! Our paper on home-tower identification for CDRs, XDRs, and CPRs using *ground truth data* is up on arxiv! <a href="https://t.co/HGjO69bSer">https://t.co/HGjO69bSer</a> 1/n <a href="https://t.co/Sla77bFrZb">pic.twitter.com/Sla77bFrZb</a></p>&mdash; Leo Ferres (@leoferres) <a href="https://twitter.com/leoferres/status/1318563326042013696?ref_src=twsrc%5Etfw">October 20, 2020</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>

# 9. Querent Intent in Multi-Sentence Questions

Laurie Burchell, Jie Chi, Tom Hosking, Nina Markl, Bonnie Webber

- retweets: 92, favorites: 32 (10/21/2020 09:32:39)
- links: [abs](https://arxiv.org/abs/2010.08980) | [pdf](https://arxiv.org/pdf/2010.08980)
- [cs.CL](https://arxiv.org/list/cs.CL/recent)

Multi-sentence questions (MSQs) are sequences of questions connected by relations which, unlike sequences of standalone questions, need to be answered as a unit. Following Rhetorical Structure Theory (RST), we recognise that different "question discourse relations" between the subparts of MSQs reflect different speaker intents, and consequently elicit different answering strategies. Correctly identifying these relations is therefore a crucial step in automatically answering MSQs. We identify five different types of MSQs in English, and define five novel relations to describe them. We extract over 162,000 MSQs from Stack Exchange to enable future research. Finally, we implement a high-precision baseline classifier based on surface features.

<blockquote class="twitter-tweet"><p lang="en" dir="ltr">New work! With <a href="https://twitter.com/Edin_CDT_NLP?ref_src=twsrc%5Etfw">@Edin_CDT_NLP</a> pals <a href="https://twitter.com/very_laurie?ref_src=twsrc%5Etfw">@very_laurie</a> <a href="https://twitter.com/sociofauxnetic?ref_src=twsrc%5Etfw">@sociofauxnetic</a> Jie Chi and Bonnie Webber: <br>&quot;Querent Intent in Multi-Sentence Questions&quot;<a href="https://t.co/ueuGkZTdD9">https://t.co/ueuGkZTdD9</a><br>To appear at LAW <a href="https://twitter.com/coling2020?ref_src=twsrc%5Etfw">@coling2020</a></p>&mdash; Tom Hosking (@tomhosking) <a href="https://twitter.com/tomhosking/status/1318498896344195073?ref_src=twsrc%5Etfw">October 20, 2020</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>

# 10. Layer-wise Characterization of Latent Information Leakage in Federated Learning

Fan Mo, Anastasia Borovykh, Mohammad Malekzadeh, Hamed Haddadi, Soteris Demetriou

- retweets: 62, favorites: 35 (10/21/2020 09:32:39)
- links: [abs](https://arxiv.org/abs/2010.08762) | [pdf](https://arxiv.org/pdf/2010.08762)
- [cs.CR](https://arxiv.org/list/cs.CR/recent) | [cs.AI](https://arxiv.org/list/cs.AI/recent)

Training a deep neural network (DNN) via federated learning allows participants to share model updates (gradients), instead of the data itself. However, recent studies show that unintended latent information (e.g. gender or race) carried by the gradients can be discovered by attackers, compromising the promised privacy guarantee of federated learning. Existing privacy-preserving techniques (e.g. differential privacy) either have limited defensive capacity against the potential attacks, or suffer from considerable model utility loss. Moreover, characterizing the latent information carried by the gradients and the consequent privacy leakage has been a major theoretical and practical challenge. In this paper, we propose two new metrics to address these challenges: the empirical $\mathcal{V}$-information, a theoretically grounded notion of information which measures the amount of gradient information that is usable for an attacker, and the sensitivity analysis that utilizes the Jacobian matrix to measure the amount of changes in the gradients with respect to latent information which further quantifies private risk. We show that these metrics can localize the private information in each layer of a DNN and quantify the leakage depending on how sensitive the gradients are with respect to the latent information. As a practical application, we design LatenTZ: a federated learning framework that lets the most sensitive layers run in the clients' Trusted Execution Environments (TEE).
The implementation evaluation of LatenTZ shows that TEE-based approaches are promising for defending against powerful property inference attacks without a significant overhead in the clients' computing resources or trading off the model's utility.

<blockquote class="twitter-tweet"><p lang="en" dir="ltr">Sharing model params/gradients in Federated Learning leaks sensitive information. We investigate the privacy threats and evaluate layer-wise leakages using information theory and sensitivity analysis. Then we use TEEs to minimize the risks on edge devices. <a href="https://t.co/Unvo715qvq">https://t.co/Unvo715qvq</a> <a href="https://t.co/xbi0o06q4N">pic.twitter.com/xbi0o06q4N</a></p>&mdash; Fan Vincent Mo (@VincentMo6) <a href="https://twitter.com/VincentMo6/status/1318474924810506242?ref_src=twsrc%5Etfw">October 20, 2020</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>

# 11. Permutationless Many-Jet Event Reconstruction with Symmetry Preserving Attention Networks

Michael James Fenton, Alexander Shmakov, Ta-Wei Ho, Shih-Chieh Hsu, Daniel Whiteson, Pierre Baldi

- retweets: 30, favorites: 40 (10/21/2020 09:32:40)
- links: [abs](https://arxiv.org/abs/2010.09206) | [pdf](https://arxiv.org/pdf/2010.09206)
- [hep-ex](https://arxiv.org/list/hep-ex/recent) | [cs.LG](https://arxiv.org/list/cs.LG/recent) | [hep-ph](https://arxiv.org/list/hep-ph/recent)

Top quarks are the most massive particle in the Standard Model and are produced in large numbers at the Large Hadron Collider. As the only quark to decay prior to hadronization, they have a complex detector signature and require special reconstruction techniques. The most common decay mode, the so-called "all-hadronic" channel, results in a 6-jet final state which is particularly difficult to reconstruct in $pp$ collisions due to the large number of permutations possible.
We present a novel approach to this class of problem, based on neural networks using a generalized attention mechanism, that we call Symmetry Preserving Attention Networks (\ProjectName). We train one such network to identify and assign the decay products of each top quark unambiguously and without combinatorial explosion as an example of the power of this technique. This approach significantly outperforms existing state-of-the-art methods, correctly assigning all jets in 93.0% of 6-jet events, 87.8% of events with 7 jets, and 82.6% of events with $\geq 8$ jets.

<blockquote class="twitter-tweet"><p lang="en" dir="ltr">New paper! <br><br> Permutationless Many-Jet Event Reconstruction with Symmetry Preserving Attention Networks<a href="https://t.co/8K1wFbxmFW">https://t.co/8K1wFbxmFW</a><br><br>Led by <a href="https://twitter.com/mfentonHEP?ref_src=twsrc%5Etfw">@mfentonHEP</a> and Alex Shmakov. <a href="https://t.co/V54irMV0zh">pic.twitter.com/V54irMV0zh</a></p>&mdash; Daniel Whiteson (@DanielWhiteson) <a href="https://twitter.com/DanielWhiteson/status/1318359589205676032?ref_src=twsrc%5Etfw">October 20, 2020</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>

# 12. Paid and hypothetical time preferences are the same: Lab, field and online evidence

Pablo Brañas-Garza, Diego Jorrat, Antonio M. Espín, Angel Sánchez

- retweets: 32, favorites: 23 (10/21/2020 09:32:40)
- links: [abs](https://arxiv.org/abs/2010.09262) | [pdf](https://arxiv.org/pdf/2010.09262)
- [physics.soc-ph](https://arxiv.org/list/physics.soc-ph/recent) | [cs.GT](https://arxiv.org/list/cs.GT/recent)

The use of hypothetical instead of real decision-making incentives remains under debate after decades of economic experiments. Standard incentivized experiments involve substantial monetary costs due to participants' earnings and often logistic costs as well.
In time-preference experiments, which involve future payments, real payments are particularly problematic. Since immediate rewards frequently have lower transaction costs than delayed rewards in experimental tasks, among other issues, (quasi)hyperbolic functional forms cannot be accurately estimated. What if hypothetical payments provide accurate data which, moreover, avoid transaction cost problems? In this paper, we test whether the use of hypothetical - versus real - payments affects the elicitation of short-term and long-term discounting in a standard multiple price list task. Probabilistic payment schemes, in which one out of ten participants is paid, are also considered. We analyze data from three studies: a lab experiment in Spain, a well-powered field experiment in Nigeria, and an online extension focused on probabilistic payments. Our results indicate that paid and hypothetical time preferences are mostly the same and, therefore, that hypothetical rewards are a good alternative to real rewards. However, our data suggest that probabilistic payments are not.

<blockquote class="twitter-tweet"><p lang="en" dir="ltr">New paper on hypothetical vs real payments.
“Paid and hypothetical time preferences are the same: Lab, field and online evidence<br><br>With ⁦<a href="https://twitter.com/dajorrat?ref_src=twsrc%5Etfw">@dajorrat</a>⁩ ⁦⁦<a href="https://twitter.com/A_M_Espin?ref_src=twsrc%5Etfw">@A_M_Espin</a>⁩ <a href="https://twitter.com/anxosan?ref_src=twsrc%5Etfw">@anxosan</a>⁩ <br><br>⁦<a href="https://twitter.com/LoyolaAnd?ref_src=twsrc%5Etfw">@LoyolaAnd</a>⁩ ⁦<a href="https://twitter.com/LoyolaBehLab?ref_src=twsrc%5Etfw">@LoyolaBehLab</a>⁩ ⁦<a href="https://twitter.com/EcScienceAssoc?ref_src=twsrc%5Etfw">@EcScienceAssoc</a>⁩ <a href="https://t.co/QOEgPFcD1L">https://t.co/QOEgPFcD1L</a></p>&mdash; BehaviouralSnapshots (@BehSnaps) <a href="https://twitter.com/BehSnaps/status/1318651943359795201?ref_src=twsrc%5Etfw">October 20, 2020</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>

# 13. Warrior1: A Performance Sanitizer for C++

Nadav Rotem, Lee Howes, David Goldblatt

- retweets: 4, favorites: 46 (10/21/2020 09:32:40)
- links: [abs](https://arxiv.org/abs/2010.09583) | [pdf](https://arxiv.org/pdf/2010.09583)
- [cs.SE](https://arxiv.org/list/cs.SE/recent)

This paper presents Warrior1, a tool that detects performance anti-patterns in C++ libraries. Many programs are slowed down by many small inefficiencies. Large-scale C++ applications are complex and developed by large groups of engineers over a long period of time, which makes the task of identifying inefficiencies difficult. Warrior1 was designed to detect the numerous small performance issues that are the result of inefficient use of C++ libraries. The tool detects performance anti-patterns such as map double-lookup, vector reallocation, short lived objects, and lambda object capture by value. Warrior1 is implemented as an instrumented C++ standard library and an off-line diagnostics tool. The tool is very effective in detecting issues.
We demonstrate that the tool is able to find a wide range of performance anti-patterns in a number of popular performance sensitive open source projects.

<blockquote class="twitter-tweet"><p lang="en" dir="ltr">Warrior1: A Performance Sanitizer for C++<br><br>W1 detects performance anti-patterns in C++. Things like: map double-lookup, vector reallocation, short lived objects, and lambda object capture by value. <br><br>Paper: <a href="https://t.co/LQMxccFpai">https://t.co/LQMxccFpai</a> w/ <a href="https://twitter.com/LeeWHowes?ref_src=twsrc%5Etfw">@LeeWHowes</a> + <a href="https://twitter.com/davidtgoldblatt?ref_src=twsrc%5Etfw">@davidtgoldblatt</a></p>&mdash; Nadav Rotem (@nadavrot) <a href="https://twitter.com/nadavrot/status/1318404023385432066?ref_src=twsrc%5Etfw">October 20, 2020</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
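To picture the "map double-lookup" anti-pattern the abstract names, here is an illustrative sketch (not taken from the paper; the function names are made up): `count()` followed by `at()` searches the map twice, while a single `find()` reuses the iterator.

```cpp
#include <cassert>
#include <map>
#include <string>

// Anti-pattern: the map is searched twice, once by count() and
// once by at(), for the same key.
int lookup_twice(const std::map<std::string, int>& m, const std::string& k) {
    if (m.count(k)) {      // first lookup
        return m.at(k);    // second lookup of the same key
    }
    return 0;
}

// Single-lookup rewrite: find() returns an iterator that is reused.
int lookup_once(const std::map<std::string, int>& m, const std::string& k) {
    auto it = m.find(k);   // one lookup
    return it != m.end() ? it->second : 0;
}
```

Both functions return the same results; the second simply avoids the redundant tree traversal that a tool like Warrior1 would flag.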
147.206478
2,113
0.780391
eng_Latn
0.930568
6175afb045c7d2c9dea0cb4ac682185d96875317
1,816
md
Markdown
about.md
whoisguardsite/test
62313665a9b673b5d5e5b1dbe9761c654751970c
[ "MIT" ]
19
2015-01-12T22:30:53.000Z
2020-07-29T19:10:20.000Z
about.md
whoisguardsite/test
62313665a9b673b5d5e5b1dbe9761c654751970c
[ "MIT" ]
8
2015-02-17T19:21:18.000Z
2021-08-21T14:48:25.000Z
about.md
whoisguardsite/test
62313665a9b673b5d5e5b1dbe9761c654751970c
[ "MIT" ]
17
2015-02-17T14:55:33.000Z
2022-02-16T06:08:14.000Z
---
layout: info
permalink: /about/
title: About
---

![ChatSecure Logo](/images/chatsecure-banner.png)

[ChatSecure](https://chatsecure.org) is a free and open source encrypted chat client for iPhone that supports [OMEMO encryption](https://en.wikipedia.org/wiki/OMEMO) and [OTR encryption](https://en.wikipedia.org/wiki/Off-the-Record_Messaging) over [XMPP](https://en.wikipedia.org/wiki/Xmpp). Unfortunately ChatSecure Android is no longer available ([more info](/faq)).

ChatSecure is available on the Apple App Store for free.

[![ChatSecure for iPhone](/images/appstore.svg)](https://itunes.apple.com/us/app/chatsecure/id464200063)

You can also follow ChatSecure on [Twitter](https://twitter.com/ChatSecure) or [Facebook](https://www.facebook.com/chatsecure).

# Support

* [GitHub Issues - iOS](https://github.com/chrisballinger/Off-the-Record-iOS/issues)
* [UserVoice - iOS](https://chatsecure.uservoice.com/forums/193939-chatsecure-ios)
* Please report iOS security-related issues directly to [chris@chatsecure.org](mailto:chris@chatsecure.org) ([GPG 50F7D255](/assets/pubkeys/50F7D255.asc)).

# Source Code

The source code for both projects is hosted on GitHub.

* [iOS](https://github.com/chrisballinger/Off-the-Record-iOS) / [License](https://github.com/chrisballinger/Off-the-Record-iOS/blob/master/LICENSE) (GPLv3+)

# Donate

We accept [PayPal](https://www.paypal.com/cgi-bin/webscr?cmd=_s-xclick&hosted_button_id=XRBHJ9AX5VWNA) and [Bitcoin](https://coinbase.com/checkouts/1cf35f00d722205726f50b940786c413) donations for the iOS version. Thank you!

[![bitcoin donation](/images/bitcoin_donate.png)](https://coinbase.com/checkouts/1cf35f00d722205726f50b940786c413)

[![paypal donation](/images/paypal_donate.png)](https://www.paypal.com/cgi-bin/webscr?cmd=_s-xclick&hosted_button_id=XRBHJ9AX5VWNA)
53.411765
368
0.773678
kor_Hang
0.346181
6176737a8c5966798e4ac79e8f3a41eb9ed414d6
357
md
Markdown
README.md
srikanthgedela/APEC_CSS-and-JS-Techniques
c5d914635dfcbaeb716816c91eb555a5131ac8e7
[ "MIT" ]
31
2019-06-25T00:02:19.000Z
2022-01-13T09:19:02.000Z
README.md
srikanthgedela/APEC_CSS-and-JS-Techniques
c5d914635dfcbaeb716816c91eb555a5131ac8e7
[ "MIT" ]
2
2021-06-27T08:55:32.000Z
2021-11-24T07:39:50.000Z
README.md
srikanthgedela/APEC_CSS-and-JS-Techniques
c5d914635dfcbaeb716816c91eb555a5131ac8e7
[ "MIT" ]
1
2020-08-11T20:25:40.000Z
2020-08-11T20:25:40.000Z
# Advanced CSS and JS Techniques to Tweak Your Application UI

[Demo Application](https://askmax.solutions/ords/f?p=10100)

[Presentation](./Advanced%20CSS%20and%20JS%20Techniques%20to%20Tweak%20Your%20Application%20UI.pdf)

---

**NOTE**

All CSS is Sass based and is available under the src directory. An export of the demo application is also available.
27.461538
99
0.77591
eng_Latn
0.703061
6176e4d83ca2d7d0085c4854afad51c66c10c43b
148
md
Markdown
data/resume/test10.md
RohitPatil555/resumegenerator
07cf72264d1a8cac880d6ff8294808702bddb1be
[ "MIT" ]
null
null
null
data/resume/test10.md
RohitPatil555/resumegenerator
07cf72264d1a8cac880d6ff8294808702bddb1be
[ "MIT" ]
null
null
null
data/resume/test10.md
RohitPatil555/resumegenerator
07cf72264d1a8cac880d6ff8294808702bddb1be
[ "MIT" ]
null
null
null
##### Person Name

###### Yours faithfully,

I hereby solemnly declare that all the information furnished above is true to the best of my knowledge.
24.666667
103
0.736486
eng_Latn
0.999876
6176f199b2e5e3e086a644be07a1fb7d4c9c1961
1,864
md
Markdown
README.md
KrzysztofRzecki/credo-detector-android
3fda259dd1e468167b9021e78f682432ff95381f
[ "MIT" ]
1
2020-02-22T09:07:13.000Z
2020-02-22T09:07:13.000Z
README.md
KrzysztofRzecki/credo-detector-android
3fda259dd1e468167b9021e78f682432ff95381f
[ "MIT" ]
null
null
null
README.md
KrzysztofRzecki/credo-detector-android
3fda259dd1e468167b9021e78f682432ff95381f
[ "MIT" ]
null
null
null
# CREDO Detector

The Android application for the [CREDO project](https://credo.science/).

The Cosmic-Ray Extremely Distributed Observatory (CREDO) project is hunting for particle cascades from deep space in an effort to answer some of the fundamental questions about our Universe. Your particle detections will feed directly into this pioneering new area of scientific research. You can also engage in the analysis of the detections that you, your fellow citizen scientists and professional observatories from around the world are making through the Dark Universe Welcome experiment.

## Build

You must download the [Android SDK](https://developer.android.com/studio/index.html) and set the path to it in the `ANDROID_HOME` environment variable or in `local.properties`. Then you can compile from the command line:

```bash
$ ./gradlew build
```

or

```batch
gradlew.bat build
```

You can then find the compiled *apk* file in `./app/build/outputs/apk/debug`.

### Contribution

You are welcome to import the project into Android Studio and contribute to development.

1. Open Android Studio.
2. `File->New->Project from Version Control->GitHub`
3. Enter the Git Repository URL and clone.

Public contribution rules are not defined yet. If you want to make changes now, you can fork the project and make a pull request.

#### Contribution rules for team

When you get an issue to resolve you should:

1. Create a new branch from master called `I{issue number}`, e.g. `I5`.
2. Make modifications (new feature or fix) on this branch.
3. Call the maintainer when the modifications are done.
4. Rebase the branch onto master.
5. Respond to the recommendations from the maintainer, push to the branch, and call him when finished.
6. When the maintainers fully accept, merge to master.

## Acknowledgments

The CREDO Detector code is funded by: International Visegrad Fund, Institute of Nuclear Physics PAN, Cracow University of Technology.
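For the `local.properties` route mentioned in the build section, the standard key recognized by the Android Gradle plugin is `sdk.dir`; the path below is only an example:

```properties
# local.properties — machine-specific, do not commit to version control.
sdk.dir=/home/user/Android/Sdk
```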
34.518519
133
0.781652
eng_Latn
0.99602
6176f427b419f53b8b891c5d337851ce3b4115b7
16,841
md
Markdown
README.md
qweasz687/mota-js
0fbee0a6f5493910e4e0ebe82264eb629620bfd8
[ "BSD-3-Clause" ]
null
null
null
README.md
qweasz687/mota-js
0fbee0a6f5493910e4e0ebe82264eb629620bfd8
[ "BSD-3-Clause" ]
null
null
null
README.md
qweasz687/mota-js
0fbee0a6f5493910e4e0ebe82264eb629620bfd8
[ "BSD-3-Clause" ]
null
null
null
# HTML5 Magic Tower (Mota) Template

## Introduction

A Magic Tower (mota) template built with the HTML5 canvas — playable on all platforms!

**Even users who cannot program at all can quickly build a Magic Tower game by following the template and its documentation!**

* [List / HTML5 Magic Tower game list](https://h5mota.com/)
* [Demo / template demo](https://ckcz123.com/games/template/)
* [Docs / usage documentation](https://ckcz123.github.io/mota-js/)
* [Video / video tutorial](https://www.bilibili.com/video/av32781473/)

![template](./docs/sample0.png)

## Directory structure

```bash
├── /_server/          # support files for the visual map editor
├── /docs/             # documentation
├── /extensions/       # extension tools; not loaded once published to the website
├── /libs/             # system libraries
│   ├─ /thirdparty/    # third-party libraries used by the game
│   ├─ actions.js      # handles user interaction
│   ├─ core.js         # system core (game entry point, interfaces & forwarding)
│   ├─ control.js      # game logic control
│   ├─ data.js         # records some initialization information
│   ├─ enemys.js       # enemy information: special attributes, damage formulas, critical-value calculation, etc.
│   ├─ events.js       # event handling; all custom events are processed here
│   ├─ extensions.js   # loads extension tools
│   ├─ icons.js        # icon information, forwarded to project/
│   ├─ items.js        # item information, forwarded to project/
│   ├─ loader.js       # dynamically loads JS code, images, sound effects, etc.
│   ├─ maps.js         # map information and map drawing
│   ├─ ui.js           # UI drawing; responsible for the various UI windows
│   └─ utils.js        # utility functions
├── /project/          # project directory where users build their own tower
│   ├─ /animates/      # animations
│   ├─ /autotiles/     # autotiles in use
│   ├─ /bgms/          # background music in use
│   ├─ /floors/        # floor scripts recording each map's data and events
│   ├─ /fonts/         # fonts
│   ├─ /images/        # images used in the game
│   ├─ /materials/     # system materials
│   ├─ /sounds/        # sound effects
│   ├─ /tilesets/      # extra materials
│   ├─ data.js         # global variables
│   ├─ enemys.js       # enemy attribute data
│   ├─ events.js       # common events
│   ├─ functions.js    # script code that may be modified
│   ├─ icons.js        # mapping between materials and IDs
│   ├─ items.js        # item definitions and the effects of obtaining them
│   ├─ maps.js         # mapping between maps and numbers
│   └─ plugins.js      # custom plugins
├── /常用工具/          # handy tools to assist tower building; see "related tools" below
├── editor.html        # visual map editor
├── editor-mobile.html # visual map editor (mobile version)
├── index.html         # main program, the game entry point
├── logo.png           # logo shown when the game starts
├── main.js            # JS entry point that dynamically loads the required JS
├── runtime.d.js       # type annotations for the template runtime
├── server.py          # launch service written in Python
├── style.css          # stylesheet used by the game
└── 启动服务.exe        # a local HTTP server that also handles some POST requests to extend JS I/O capabilities
```

## Update notes

### 2021.7.31 HTML5 Magic Tower Template V2.8

* [x] Unneeded materials can now be deleted in the editor.
* [x] Opacity and drawing effects can be set per tile and adjusted through events.
* [x] New events: modify equipment attributes; rotate images.
* [x] New event: set the enemy information of a given tile.
* [x] New event: change the playback speed and pitch of background music.
* [x] New equip/unequip events, pre-battle events, and tile-touch events.
* [x] "Wait for user input" can match against only its sub-blocks; if none match, the wait does not end.
* [x] Dialog boxes support fade-in and fade-out effects.
* [x] New weather type: sunny.
* [x] The hero and tiles support diagonal movement and variable-speed movement.
* [x] More system sound effects, configurable via file aliases.
* [x] Added a replay fault-tolerance script that partially handles replay errors in serialized towers.
* [x] Quick equipment switching on mobile; quick switching is recorded in replays.
* [x] Global shops support continuous purchasing by holding space or the screen.
* [x] The enemy book lists enemies that differ in a single attribute separately.
* [x] Quick appending and registration of 3x4 "va" materials.
* [x] Enemies can also bind walking-sprite directions.
* [x] The event and script editors remember their position from the last close.
* [x] The event editor stays disabled when reopened after being disabled.
* [x] Plugin: center the custom-drawn title screen.
* [x] Detailed documentation and tutorials for the register-series functions.
* [x] Updated and fixed the common tools.
* [x] Many detail optimizations; all known bugs fixed.

### 2021.1.29 HTML5 Magic Tower Template V2.7.3.1

* [x] Enemies can add a description to their detail page
* [x] Fixed all known bugs of V2.7.3

### 2020.11.8 HTML5 Magic Tower Template V2.7.3

* [x] Built-in HD UI for a sharper interface!
* [x] Full emoji support! Emoji can now be used directly in dialogs.
* [x] Animations support multiple sound effects, bound at registration.
* [x] Event-editor autocompletion shows tile icons.
* [x] terrains adds thin-wall tiles with one-way passage.
* [x] Replay playback shows percentage progress.
* [x] Batch after-battle events for enemies.
* [x] "Set viewport" supports increments and a specified animation time.
* [x] New event: disable saving.
* [x] New left-hand mode: WASD moves, IJKL saves/loads.
* [x] core.fillText supports \r color changes.
* [x] The QTE feature writes the remaining timeout into a flag.
* [x] Fixed the direction-binding issue when appending npc48.
* [x] Many detail optimizations; all known bugs fixed.

### 2020.7.15 HTML5 Magic Tower Template V2.7.2

* [x] Huge-map support, now up to 128x128 maps.
* [x] Map-drawing logic rewritten from the ground up; huge maps no longer lag even on mobile!
* [x] Drawing is now possible in the editor's large-map mode.
* [x] The event editor keeps fold states, so very long event pages stay responsive to edit.
* [x] Each option of a "show choices" event can have an enabling condition.
* [x] New story-text \g to change the drawing font.
* [x] New value blocks for tile numbers; unary-operation value blocks
* [x] Optimized some API calls; fixed known bugs

### 2020.6.20 HTML5 Magic Tower Template V2.7.1

* [x] Large maps can be edited directly in the editor
* [x] Dark mode for the editor
* [x] Multi-line text display in Blockly
* [x] Partition assignment for tall towers
* [x] Dragging a material file into the material area appends and registers it automatically.
* [x] Multi-point selection on maps (used to implement trap-door binding)
* [x] Replay folding
* [x] Jump events support dx and dy
* [x] Item shops support experience
* [x] Teleport can pass through stairs; short-distance teleport in replays
* [x] The editor can return to the previous floor
* [x] Save notes that follow the save file
* [x] More built-in plugins, including enemy realms and multiple characters
* [x] Statistics over a specified floor range
* [x] Rewrote the cloud/fog effect
* [x] Poison/weaken/curse no longer go through common events; auto-pathing handles poison nets while poisoned
* [x] Fixed the paste position of copied Blockly event blocks

### 2020.6.7 HTML5 Magic Tower Template V2.7

* [x] Greatly enhanced autocompletion and code hints in script editing
* [x] Right-click box selection for batch copy/cut/delete on maps; box drawing for extra materials
* [x] Semi-automatic registration of images, music, sound effects, and animations, with preview or audition
* [x] Table comments greatly simplified; enemy special attributes, tile entry/exit directions, status-bar items, etc. become checkboxes
* [x] Global shops refactored to support any event type
* [x] Door info, equipment attributes, difficulty branches, floor textures, CSS, etc. turned into events
* [x] Five-layer plugin: two background and two foreground layers
* [x] Many image and geometry events support flipping, rotation, and preview
* [x] Many new events: hero advance or bump, event turning, loops, UI drawing, etc.
* [x] Choices, confirm boxes, and "wait for input" support timeouts, enabling QTE effects
* [x] More powerful autocompletion in the event editor
* [x] Initialization and loading optimized for very tall towers
* [x] Many materials updated, including status-bar icons
* [x] Large-map browsing and point selection
* [x] Paint-bucket fill and dynamic map resizing
* [x] Almost all documentation rewritten to be friendlier to beginners
* [x] Many detail optimizations; many legacy issues resolved

### 2019.12.31
HTML5 Magic Tower Template V2.6.6

* [x] Editor adds a "recently used tiles" area
* [x] Editor stretches to fill the screen and zooms with Ctrl+wheel
* [x] Continuous undo with Ctrl+Z and redo with Ctrl+Y
* [x] Right-click on tilesets to bind width/height, replacing texture mode
* [x] Multiple autosaves; press A repeatedly to load them in sequence
* [x] Region support for tall towers
* [x] Click events for the custom-drawn status bar
* [x] Lock mode for drawing
* [x] "Wait for input" gains branching options
* [x] New compression mode that zip-compresses images etc.
* [x] Appending materials can auto-register them at the same time
* [x] Enemy or item attributes can be copied and pasted
* [x] Set the number of items per column when folding materials
* [x] Cursor follows the mouse on the title screen and in choices
* [x] All known bugs fixed; many detail optimizations

### 2019.12.1 HTML5 Magic Tower Template V2.6.5

* [x] Events: set enemy attributes; equip/unequip equipment
* [x] New value block: enemy:xxx:atk reads enemy data
* [x] New value block: blockId:x,y reads the tile ID at a point
* [x] Some events precompiled for faster execution
* [x] BGM volume adjustable in system settings
* [x] Ending events can avoid quitting the game
* [x] On defeat, the autosave can be loaded directly
* [x] NPC48 auto-registration can auto-bind faceIds
* [x] Editor: Alt+1-9 stores tiles, 1-9 recalls them
* [x] Tiles can now be copied and pasted across floors in the editor
* [x] Autocompletion for flags.
* [x] Some bug fixes; many detail optimizations

### 2019.10.29 HTML5 Magic Tower Template V2.6.4

* [x] Auto events; multiple event pages
* [x] Opening logo animation added
* [x] Extension: dynamically modify map and enemy data during the game
* [x] Plugin: item shop supporting buying and selling items
* [x] The editor can search where a variable appears
* [x] Chinese aliases for variables
* [x] Tiles can bind custom scripts triggered on touch
* [x] Right-click in the editor binds trap doors and spawn points
* [x] Multiple drawTips can appear at once
* [x] Multiple blinking cursors can exist simultaneously
* [x] Plugin: smooth camera movement, disabled by default
* [x] Quick appending of materials
* [x] Batch export of animations
* [x] Some bug fixes; many detail optimizations

### 2019.7.24 V2.6.3

* [x] Title screen greatly polished: blinking cursor added, keyboard can start the game
* [x] Event-editor autocompletion for flags, the API list, etc.
* [x] In story text, \\c changes font size; \\d and \\e toggle bold and italic
* [x] Events: set & move the viewport
* [x] Choice options can have display conditions and be generated dynamically
* [x] Flat teleport mode for the floor teleporter (fly back to where you left)
* [x] UI drawing events add filled circles and circle outlines
* [x] All UI drawing events can be previewed by double-click
* [x] "Play BGM" can keep playing until the next call
* [x] \f portraits support an alpha value
* [x] Custom variables usable directly as flags.xxx in script editing
* [x] A hint is shown the first time an item is obtained
* [x] "Wait for input" supports the wheel, treated as PgUp and PgDn
* [x] Syntax errors block saving in the script editor
* [x] Press B during replay to view statistics
* [x] All known bugs fixed; many detail optimizations

### 2019.6.7 V2.6.2

* [x] Tiles and events on maps can be dragged, copied, cut, and pasted across floors
* [x] New map point-picking for events: choose a target point on the map
* [x] The material area now supports folding and automatic column wrapping
* [x] New UI-drawing event series, with preview
* [x] Title parsing for "show text" events
* [x] New common tool: extra-material merger
* [x] Further improved 24x replay playback speed
* [x] Floor transitions add symmetric points
* [x] Editor shortcut-key help added, press H to view
* [x] Docs - events: event-editor screenshots added
* [x] Many detail optimizations; all known bugs fixed

### 2019.5.2 V2.6.1

* [x] Region-optimized replay playback, press R to use
* [x] Forced battles can specify enemy coordinates; the enemy auto-hides and its after-battle event runs
* [x] flag:xxx also supports Chinese names, e.g. flag:2楼机关门
* [x] Filename aliases: map a Chinese name to an image or BGM file and use it
* [x] Hero width can exceed 32 (e.g. 48x48 hero walking sprites)
* [x] floorId and tile IDs can now be changed (below the table)
* [x] New events: autosave, return to title screen; some event optimizations
* [x] Hold space in shops for continuous point allocation
* [x] global:xxx uses global storage and works with replays
* [x] \b[hero] and \b[null,x,y] auto-adjust the dialog's above/below position
* [x] \t[yellowKey] etc. can show only an icon without a title
* [x] In the editor, foreground-layer tiles with events render semi-transparent
* [x] Saves extended to 1000 pages; hold page up/down to flip quickly
* [x] Replays start paused by default; N steps one move
* [x] Local API documentation added; some API and event optimizations
* [x] All known bugs fixed; many detail optimizations

### 2019.4.13 V2.6

* [x] Whole project split up, code heavily refactored, many new APIs
* [x] Documentation rewritten, especially scripts and the API list
* [x] The structure of the editor's tables is now configurable
* [x] Saves can be favorited and highlighted
* [x] Standalone plugin authoring
* [x] New events: close door, show confirm box, post-loop handling
* [x] Story-text drawing can be centered
* [x] Choice boxes can include icons
* [x] Common-event version of the global shop added
* [x] Common events can now accept parameters
* [x] Ice-skating rewritten; ice is now on the background layer
* [x] Input boxes re-implemented in-house to avoid unsupported devices
* [x] Status-bar text auto-scales
* [x] Shown images and dialog portraits can be cropped
* [x] All known bugs fixed; many detail optimizations

### 2019.2.19 V2.5.5

* [x] Map edits in the editor now take effect directly on load; no more map reset or replay needed
* [x] Save format optimized, greatly shrinking each save's footprint
* [x] Code-formatting option added to the script editor
* [x] Ctrl+S saves in the event and script editors
* [x] Color control for choices
* [x] "Move hero" gains forward and backward operations
* [x] Scrollbars for event-editor dropdowns
* [x] Rating prompt after clearing the game
* [x] Failed replays can roll back to the previous node
* [x] All known bugs fixed; many detail optimizations

### 2019.2.4 V2.5.4

* [x] 15x15 edition released
* [x] Standalone common events
* [x] Multi-slot equipment (one item can be equipped in several slots)
* [x] Toolbar grows to 8 buttons; quick shop and virtual keyboard shown together
* [x] Clicking the gold icon in the status bar also opens the quick shop
* [x] Wait events provide flag:px and flag:py in the 0~415 range
* [x] Events: open the save/load screen; open the enemy book
* [x] Events: use an item, pause background music, pause all sound effects
* [x] type:trigger can fire system events
* [x] Independent switches
* [x] Textures now support frame animation too
* [x] Built-in color picker for tiles
* [x] Music button on the title screen
* [x] Wait events skippable with Ctrl (long press)
* [x] Some bug fixes, many detail optimizations, further performance gains

### 2018.12.22 V2.5.3

* [x] Title screen driven by events; it can now be handled with event flows
* [x] Dynamic canvas management; unlimited layers can be created and used freely
* [x] Canvas-based status bar that can be drawn by hand
* [x] New buttons 1-7 on mobile, toggled from the toolbar
* [x] Event editor shows recently used tiles and tile search
* [x] Color picker added to the event editor
* [x] `\f` in dialogs can carry a built-in portrait effect
* [x] All image events reimplemented on dynamic canvases
* [x] New event: scrolling credits
* [x] New event: wait for all async events to finish
* [x] New event: screen flash
* [x] BGM cache management; new event: preload BGM
* [x] New weather: fog
* [x] `eachArrive` event run on every floor arrival
* [x] Certain tiles can opt out of global animation
* [x] Background/foreground materials support global animation / dynamic autotiles
* [x] Per-equipment choice of proportional increase
* [x] Selected point highlighted in the map editor; double-click selects materials; WASD pans large maps
* [x] All bugs fixed, some code refactored, many detail optimizations

### 2018.11.30 V2.5.2

* [x] Walking sprites and facing for enemies and NPCs
* [x] WindowSkin can be used as the dialog background material
* [x] \t[标题,1.png] can draw large portrait images
* [x] Dialog width auto-adjusts to text length
* [x] \r[red] dynamically changes story-text color
* [x] Level-up events now built with the event editor
* [x] Per-floor parallel event handling added
* [x] New shortcuts: N to title, P to game homepage, O to open the project
* [x] New event: set global attributes or values
* [x] New event: hide/show the status bar
* [x] Items can choose whether replays draw the item bar or use them directly
* [x] The hero and multiple NPCs can move/jump asynchronously at the same time
* [x] Two or more images can now move asynchronously at once
* [x] Multiple materials can be appended in one go
* [x] Virtual-keyboard popup added to the menu
* [x] All known bugs fixed; some detail optimizations

### 2018.11.21 V2.5.1

* [x] New event type:insert to run events from another location (common events)
* [x] \r can control the color of parts of story text
* [x] New event type:switch for multi-way branching
* [x] Other layers fade while drawing fore/background layers
* [x] Auto-adjustment of appended materials (e.g. white backgrounds, nonstandard sizes)
* [x] While browsing maps: top-left corner / V toggles damage display; top-right corner / Z shows the large map of the current floor
* [x] Quick shops can be disabled after taking zone/pincer damage etc.
* [x] Deduction mode for leveling: hide XP and show only the remaining value needed to level up
* [x] Equip-condition checks for equipment
* [x] Choices selectable quickly with 1-9
* [x] Unopened quick shops shown in gray
* [x] Fixed the bug preventing drawing on back/foreground layers
* [x] The mobile map editor now shows error messages too
* [x] Other detail optimizations

### 2018.10.31 V2.5

* [x] Drawing-mode support: users can draw by hand and save
* [x] Built-in active skill "double slash", a model for building other active skills
* [x] Key handling moved into script editing
* [x] Alt+0~9 saves and loads the current equipment set
* [x] Tile attributes cannotOut and cannotIn control passable directions (for cliff effects)
* [x] Dynamic autotile support (event layer only)
* [x] Quick shops can share a common times counter
* [x] Disabled quick shops can be hidden or previewed
* [x] The opening story startText can run arbitrary events
* [x] Dialog position freely adjustable (top/middle/bottom, pixel offset from top/bottom)
* [x] Floor-transition screen supports background image, text color, etc.
* [x] Statistics split into sections; sword/shield show values
* [x] Comments can now be added in the event editor
* [x] Save/load screen shows the save's attributes
* [x] F7 toggles debug mode
* [x] R plays a local replay file from the beginning
* [x] Vampire damage display adds ^; hatred enemies show hatred damage
* [x] Key 4 defaults to using the pickaxe, freeze badge, earthquake scroll, or floor changer (checked in that order)
* [x] Option to make potions items; yellow gems gain a point-allocation option
* [x] Default sound effects for break/bomb/fly
* [x] Fixed drag-attacking enemies via click-teleport
* [x] Other detail optimizations

### 2018.10.27 V2.4.4

* [x] Tilesets can set tile attributes (e.g. passability)
* [x] Image hue can be changed when appending materials
* [x] Stronger anti-cheat: opening the console blocks score uploads
* [x] New tile attribute option "breakable by pick/quake"
* [x] Built-in critical calculator for mimic enemies
* [x] Other detail optimizations

### 2018.10.14 V2.4.3

* [x] Parallel event handling
* [x] Event: set floor attributes
* [x] Aura attribute added, including area-aura effects
* [x] Some code moved into script editing
* [x] Weather or tint changes from events persist after loading
* [x] Adding and registering autotiles
* [x] Status bar can show the character's name
* [x] Damage display while browsing maps
* [x] Image fades and moves can be kept afterwards
* [x] Double-click an item-bar icon to open the equipment screen directly
* [x] Story-text font size adjustable
* [x] Replays play at up to 24x speed
* [x] Keys 1-6 quickly set replay speed; wheel speeds up/down
* [x] Fixed the pincer bug on large maps
* [x] Other detail optimizations

### 2018.9.28 V2.4.2

* [x] Tilesets importable for direct use, no Photoshop work or registration needed
* [x] Tileset materials can be drawn as whole rectangles
* [x] Alt+0~9 stores materials; Ctrl+0~9 recalls them quickly
* [x] Transparent-block support added
* [x] Equipment can raise attributes by percentage
* [x] Multiple animations can play at once
* [x] Fixed flicker when opening the save/load screen
* [x] Fixed light-tap and teleport still working despite cannotMove
* [x] All known bugs fixed; some refactoring and polish

### 2018.9.18 V2.4.1

* [x] Background and foreground tile layers added; layers can be stacked
* [x] Show/hide/modify events for back/foreground tiles
* [x] Dedicated equipment screen (opened with Q); equipment system overhauled
* [x] Lighting and darkness-layer effects, provided via plugin functions
* [x] Status-bar updates and zone/laser/pincer damage calculation moved into script editing
增加控制免疫阻激夹域的flag:no_zone等 * [x] 打字机效果时点击显示全部文字 * [x] 修复更改画面色调的Bug * [x] 修复更改剧情文本属性后读档恢复原样的问题 * [x] 部分细节优化 ### 2018.8.28 V2.4 * [x] 大地图的支持 * [x] 突破了5M的存档空间大小限制 * [x] 事件:隐藏/显示贴图 * [x] 事件:接收用户文本输入 * [x] 同点多事件的颜色块绘制 * [x] 录像播放时可以按PgUp/PgDn浏览地图 * [x] 录像播放时对于瞬间移动绘制箭头 * [x] 增加激光属性 * [x] 可以在读档时E键直接指定编号 * [x] 破炸飞可以在状态栏显示个数 * [x] 部分细节优化,所有已知Bug修复 ### 2018.7.21 V2.3.3 * [x] 将怪物特殊属性定义和伤害计算函数移动到脚本编辑中 * [x] 地图编辑器可以使用矩形方式绘制地图 * [x] 瞬间移动可以指定存在事件的点(如怪物、门、楼梯等) * [x] 事件:画面震动 * [x] 事件:更新怪物数据 * [x] 移动事件和跳跃事件增加“不消失”选项 * [x] 获胜结局可以指定“不计入榜单” * [x] 贴图可自适应遮挡效果 * [x] 默认不能走到将死的领域上 * [x] RM动画导出器可以指定压缩比率 * [x] 修改默认bgm * [x] 修复读档开启战斗动画等Bug * [x] 大量细节优化 ### 2018.7.9 V2.3.2 * [x] 适配手机端的造塔页面 * [x] 启动服务的多开版本 * [x] 新增事件:跟随效果 * [x] 怪物数据导出器 * [x] RM动画导出器也能导出音效 * [x] gif播放可随着分辨率自动放缩 * [x] 状态栏可随文字长度自动调整放缩 * [x] 楼传器一次可以翻10层 * [x] 也可以用status:exp来代替经验值的写法 * [x] V键也可以打开快捷商店 * [x] 破炸在周围只有一个目标时无需转向面对它 * [x] 道具效果中,无需再将null改成""才能双击编辑了 * [x] 各个已知Bug的修复,部分细节优化 ### 2018.6.16 V2.3.1 * [x] 存档采用高比率压缩,单个大小是原来的1/10! * [x] 默认存档数改成100页500个 * [x] base64上传成绩,杜绝乱码 * [x] 道具栏翻页 * [x] 单击瞬移(菜单栏中开关) * [x] 重新补上E键打开光标 * [x] 楼层属性增添地下层选项,同层传送至上楼梯 * [x] core.debug()穿墙模式不能穿出地图 * [x] core.values和core.flags也存入存档 * [x] 修复所有已知bug ### 2018.5.27 V2.3 * [x] 启动服务和便捷PS工具(Mac版) * [x] 地图编辑器可以右键复制或移动图块 * [x] 事件:while循环处理 * [x] 事件:等待用户操作并获得按键或点击信息 * [x] 地图数据统计 * [x] 衰弱可以减少攻防的比例 * [x] 便捷PS工具可以批量导入素材 * [x] 楼层转换可以直接指定“上一楼”和“下一楼” * [x] 地图编辑器可以使用滚轮切换楼层 * [x] 除Autotile外均可自动注册 * [x] 支持status:x获得当前坐标 * [x] core.debug()改成调试模式,可以Ctrl穿墙 * [x] 新建地图可以保留楼层属性 * [x] 地图编辑器可用PageUp和PageDown切换楼层 * [x] 提供大量素材,可直接取用 * [x] 重写大部分教程,新增大量拓展描述 * [x] 大量细节进行优化,所有已知的bug进行了修复 ### 2018.5.6 V2.2 * [x] 事件坐标可用变量指定("loc": ["flag:x", "flag:y"]) * [x] 全局商店也可以使用图块编辑 * [x] 高亮显示有事件的格子 * [x] 对于道具和怪物自动注册该列所有未注册的素材 * [x] 便捷PS工具对于白底图片可自动调整为透明背景 * [x] 事件:等待用户点击(type:wait) * [x] 事件:图片移动事件(type:moveImage) * [x] 事件:设置BGM音量(type:setVolume) * [x] 提供core.rand()和core.rand2()两个随机数函数 * [x] 作弊处理(最大有效生命值、匿名则最低成绩上传) * [x] 自定义状态栏绘制 * [x] 最高六倍速播放 * [x] 播放录像时可以C键查看怪物手册 * [x] 修复标题文字太长导致无法开始游戏的问题 * [x] 
新增纯新手简易造塔流程 * [x] 部分效果和性能的优化 ### 2018.4.25 V2.1.1 * [x] 新增事件:改变勇士行走图 * [x] 楼传器落点设置 * [x] 录像回放从任意存档点开始 * [x] 录像过程中允许存档 * [x] 血网显伤 * [x] 怪物手册显示接下来的临界表 * [x] 重置当前楼层地图core.resetMap() * [x] 支持部分楼层不允许浏览地图 * [x] 修复部分浏览器无法进入游戏的Bug * [x] 其他细节优化 ### 2018.4.19 V2.1 * [x] 编辑器添加新建和删除按钮;地图自动保存 * [x] 录像支持倒退(录像播放中每20步存一个节点,最多30个) * [x] Gif支持:可以作为楼层背景图或者使用显示动图事件 * [x] 图片显示增加淡入淡出效果 * [x] APP端也能下载或读取文件 * [x] 地图临界显伤 * [x] 单个存档清理 * [x] 大数据魔塔的支持 * [x] 进一步对JS文件和图标进行压缩,大幅提高加载速度 * [x] 修复有时候无法输入ID的问题 * [x] 其他细节优化 ### 2018.3.17 V2.0.1 * [x] 道具使用效果的进一步分离 * [x] 支持插件编写,用户可以根据需求来写插件了 * [x] 文本编辑器支持自动补全和代码纠错 * [x] 部分UI界面发生变化,更加方便和美观 * [x] 所有已知Bug的修复 ### 2018.3.14 V2.0 * [x] 全GUI造塔,现在用户无需打开任何文件直接编辑JS代码了。 * [x] 整体改变目录架构,将数据和逻辑进行分离 * [x] 支持48x32的怪物和NPC素材 * [x] 加点改成系统开关进行处理,怪物手册会列出加点值 * [x] 支持带有血量上限的塔 * [x] 增加前景图片绘制 * [x] 便捷PS工具对于非标准的图片可以自动进行调整 * [x] 录像存储机制进行修改,对于道具记录全ID * [x] 其他细节的优化 ### 2018.2.9 V1.4.1 * [x] 改变图块(setBlock事件)。 * [x] 同一个点的多事件处理(做法详见文档)。 * [x] 增加新地图后可以接档而不用重新开始。 * [x] 增加可以接收用户输入的事件(type:input)。 * [x] 滚动字幕;自动剧情文本。 * [x] 可以同时show/hide多个事件。 * [x] 现在可以支持滑冰和推箱子事件了。 * [x] 地图中每个块的可通行方向控制(悬崖效果)。 * [x] 动画支持带旋转和翻转的帧。 * [x] 长按屏幕可跳过对话。 * [x] 现在可以允许用户丢弃道具了(例如不会再使用的装备)。 * [x] 修复行走时按键会发生动画抖动问题。 * [x] 修复无法打开战斗动画的Bug。 ### 2018.2.6 V1.4 * [x] 支持动画。 * [x] 瞬间移动。 * [x] 支持天气系统,可以在剧本中设置默认天气。 * [x] 新增自定义事件-图片显示。 * [x] 同时可以在剧本中设定多个背景素材。 * [x] 剧情文本特性控制,人物的对话框效果。 * [x] 单存档同步到服务器,下载到文件和读取。 * [x] 键盘支持自动寻路操作。 * [x] 浏览地图模式下可以查看怪物数据。 * [x] 未成功打怪和开门则不自动存档。 * [x] 重新支持楼梯穿透。 * [x] 支持多结局,成绩将分开统计。 * [x] 重构全局动画、行走动画和行走检测,大幅提升性能。 * [x] 修复所有已知Bug。 ### 2018.1.21 V1.3.2 * [x] 增加录像和回放功能。 * [x] 增加统计功能,现在能看到每部塔的游戏人数、通关人数和当前MAX了。 * [x] 增加浏览地图功能,玩家可以快速查看每层楼的地图。 * [x] 现在保存文件到本地,以及从本地文件读档了。 * [x] 可以在全局开关中设置剑盾是否作为装备存在。 * [x] 修复了部分已知Bug。 ### 2018.1.12 V1.3.1 * [x] 增加虚拟键盘 * [x] 增加自动存档(回退),A键可快速读档 * [x] 修复几处较为严重的Bug ### 2018.1.1 V1.3 * [x] 支持全键盘操作。 * [x] 支持将某个图片作为某层的背景素材。 * [x] 便捷PS工具支持更改图片色相。 * [x] 支持经验升级(进阶/境界塔)。 * [x] 打败怪物可以进行加点(加点塔)。 * [x] 增加阻击、N连击等属性;在怪物手册有属性显示。 * [x] 支持九宫格领域和大范围领域。 * [x] 增加负伤。 * [x] 支持各种BGM的播放。 * [x] 
支持不同层使用不同的地面素材;支持多个Autotile同时存在。 * [x] 许多细节进行了优化,一些已知的Bug进行了修复。 ### 2017.12.21 V1.2 * [x] 新增:本地HTTP服务器。 * [x] 新增:可视化地图编辑工具。 * [x] 新增:便捷PS工具。 * [x] 移除了meaning.txt,现在“地图生成器”将直接从js文件中读取数字和图块对应关系。 * [x] 新增:对Autotile图块的支持。 * [x] 新增:怪物支持多种属性;添加仇恨属性。 * [x] 移除了不再支持的checkBlock,现在对于领域和夹击无需再手动指定可能的点了。 * [x] 新增:单向箭头、感叹号(单次通行)的支持。 * [x] 新增:更多的默认素材,现在对于大多数地图风格无需P图,直接替换即可。 * [x] 添加部分自定义事件,部分细节优化,一些已知的Bug进行了修复。 ### 2017.12.16 V1.1 * [x] 新增:战斗过程显示,可以在设置中关闭 * [x] 新增:勇士支持48*32(大图)的行走图 * [x] 新增:更改画面色调 * [x] 新增:文字显示支持自动换行 * [x] 部分修改状态栏UI * [x] 增添Web的Markdown文档,移除原本的doc和pdf文档。 * [x] 修复若干Bug。 ### 2017.12.9 V1.0 * [x] 发布初版HTML5魔塔样板 ## 相关工具 - [启动服务](http://github.com/ckcz123/mota-js-server/): 一个本地的HTTP服务器,也能支撑前端的一些POST请求从而能拓展JS的IO功能。 - [RM动画导出器](http://github.com/ckcz123/animate_export/):能从RMXP中导出动画,以供H5使用。 - [JS代码压缩工具](http://github.com/ckcz123/JSCompressor/):能对Javascript代码进行压缩和整合,从而减少IO请求量。 - [便捷PS工具](http://github.com/ckcz123/ps/):能只用复制和粘贴来快速对素材进行PS操作。 - [地图生成器](http://github.com/ckcz123/map_generator/):能从一张截图识别出来具体的数字数组,方便复刻已有的塔。 - [怪物数据导出器](http://github.com/ckcz123/enemy_export/):能从RMXP中导出怪物数据,以供H5使用。 - [伤害和临界值计算器](http://github.com/ckcz123/magic-tower-calculator/):一个能帮助计算怪物的伤害和临界值的小工具。 ## 联系我们 本样板主要由 [`ckcz123`](https://github.com/ckcz123) (百度ID和QQ昵称 `艾之葵`,简称 `小艾`)编写。 HTML5魔塔交流群群号: `539113091` 如有其它意见或建议,也可以通过发[issues](https://github.com/ckcz123/mota-js/issues)、或邮件至[ckcz123@126.com](mailto:ckcz123@126.com)联系我。 ## 贡献者 感谢对本样板做出贡献的人员: [@ckcz123](https://github.com/ckcz123) 艾之葵(小艾):本样板的的主要编写者。样板的运行时的核心代码,所有常用小工具,以及安卓APP等,都是其所写。 [@Vinlic](https://github.com/Vinlic) v神:第一个HTML5魔塔[纪元魔塔前传](https://tieba.baidu.com/p/4545234500)([游戏地址](http://vinlic.gitee.io/mota/),[开发记录贴](https://tieba.baidu.com/p/4397526540),[源代码](https://gitee.com/Vinlic/Mota))的编写者。该塔的[第三版内核](https://tieba.baidu.com/p/4738973089)是现在HTML5魔塔样板的前身,现在样板的很多核心逻辑控制,以及UI界面等相关代码都是继承于该塔。 [@zhaouv](https://github.com/zhaouv) z触:V2.0的推动者,可视化地图编辑器和事件编辑器的制作者,手机端魔塔制作界面的编写者。现在我们能方便快捷的可视化地图编辑器,以及通过拖动图块来编写事件,能在手机端造塔等,都需要归功于zhaouv的贡献。 
[@i2Echo](https://github.com/i2Echo):V2.0的推动者,可视化地图编辑器的制作者,游戏界面自适应匹配的编写者。和zhaouv一起推动和开发了V2.0的制作。 [@fux4](https://github.com/fux4) 老黄鸡:打通了RM和H5之间的障壁(从而使RM动画导出器和怪物数据导出器成为可能),同时也是大地图和部分新功能(如跳跃、跟随、画面震动)等的编写者。 [@cafel176](https://github.com/cafel176):编写了多种辅助小工具如动画编辑器等。 [@tocque](https://github.com/tocque) 鹿神:装备栏、动态创建图层等多项功能的编写者。同时也参与了多种辅助小工具的开发。 [@AutumnOrange51](https://github.com/AutumnOrange51) 秋橙:样板的QA(质量监管人员),提出了大量需求,同时也编写了全新的文档。 以及H5魔塔交流群`539113091`,HTML5造塔技术交流群`959329661`的诸位魔塔爱好者们对本样板的大力支持!
23.19697
319
0.682798
yue_Hant
0.860593
6177a15827c07d73fadc115a816dce7f44ba0ffc
4,856
md
Markdown
src/pwa-studio/pwa-devdocs/src/technologies/basic-concepts/content-rendering/index.md
larsroettig/fallback-studio
6d2913a84e529eb241324033b8efbf2995bbff85
[ "MIT" ]
null
null
null
src/pwa-studio/pwa-devdocs/src/technologies/basic-concepts/content-rendering/index.md
larsroettig/fallback-studio
6d2913a84e529eb241324033b8efbf2995bbff85
[ "MIT" ]
null
null
null
src/pwa-studio/pwa-devdocs/src/technologies/basic-concepts/content-rendering/index.md
larsroettig/fallback-studio
6d2913a84e529eb241324033b8efbf2995bbff85
[ "MIT" ]
1
2021-03-18T21:05:17.000Z
2021-03-18T21:05:17.000Z
---
title: Content Rendering
---

Browsers require HTML to display page content. Server-side rendering and client-side rendering are two ways a browser can get rendered HTML content for a page. This topic goes over these two ways of rendering content supported by PWA Studio and UPWARD.

## Server-side rendering (SSR)

Server-side rendering (SSR) is a method of providing pre-generated HTML as a response to an HTTP request.

For example, the content of this website is pre-built from source files. These files are converted into HTML pages and uploaded to an HTTP hosting server. When a user visits the site, the server returns the pre-built HTML file for the browser to render.

Example:

```html
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <link rel="shortcut icon" href="/favicon.ico">
    <title>My Website</title>
  </head>
  <body>
    <header>Header content</header>
    <menu>Menu content</menu>
    <main>Main body content</main>
    <footer>Footer content</footer>
  </body>
</html>
```

Server-side languages, such as PHP and Java, can also render custom HTML per request to make the experience more dynamic. This is how Magento currently works.

## Client-side rendering (CSR)

Client-side rendering is another method of delivering HTML content to the browser. Instead of providing the entire HTML page content on a request, the server returns a page with minimal content. The page depends on a JavaScript file that finishes rendering the HTML on the page.

The following is an example of what a bare page response looks like:

```html
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <link rel="shortcut icon" href="/favicon.ico">
    <title>My Web App</title>
  </head>
  <body>
    <div id="root"></div>
    <script src="/app.js"></script>
  </body>
</html>
```

In this example, the `app.js` script runs after the page loads. A common behavior for this type of file is to generate an HTML DOM tree and insert it into a root element on the page. This pattern is often used for single-page applications, such as a PWA storefront.

## Content rendering and Search Engine Optimization (SEO)

When and how page content renders is an important part of Search Engine Optimization (SEO). When a search engine crawler processes a page, it indexes the initial HTML response from the server. Some crawlers, such as the [Googlebot][], have the ability to execute JavaScript to simulate client-side rendering.

The varying effectiveness of search engines at processing client-side rendered content is an important factor to keep in mind when developing your storefront's content rendering strategies. Boosting a site's SEO while providing a rich, dynamic experience is a balancing act between server-side rendering and client-side rendering.

## Content rendering in PWA Studio

### UPWARD and server-side rendering

The UPWARD specification supports server-side rendering through its [JavaScript][] and [PHP][] server implementations. The specification provides different resolvers that can return HTML content as a response to a request. Use the following resolvers in your application's UPWARD configuration file to enable server-side rendering.

#### FileResolver

The FileResolver configuration lets you use the contents of a static file in your response body. You can pre-build static HTML files for your application and map URLs to their content using the FileResolver. This is the fastest way to deliver a server-side rendered HTML response to a request.

#### TemplateResolver

The TemplateResolver configuration lets you use templates to create a server-side rendered response. Templates are more flexible than pre-built static HTML files because they let you use template variables to create the final HTML response. Server-side rendering performance with the TemplateResolver depends on the complexity of the templates it uses.

### Venia content rendering process

Venia uses both server-side and client-side rendering to display page content. The following is the sequence of events that occurs when a browser requests a page from the Venia storefront:

1. The application's UPWARD server receives the request and checks to see if it is a valid page request.
2. If the request is for a page, the UPWARD server returns a pre-built, server-side rendered HTML response that contains the PWA application shell.
3. After the browser loads the initial application shell, a JavaScript bundle renders the rest of the page content on the client side using React components. These React components may make additional calls to the UPWARD server to get the data they need to finish rendering.

[googlebot]: https://en.wikipedia.org/wiki/Googlebot
[javascript]: https://github.com/magento/pwa-studio/tree/develop/packages/upward-js
[php]: https://github.com/magento-research/upward-php
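To make the SSR/CSR contrast above concrete, the following is a minimal, illustrative sketch in plain Node.js — not actual PWA Studio or UPWARD code — of the two response shapes a server might produce. The function names and markup are assumptions made for illustration only.

```javascript
// Illustrative sketch only -- not PWA Studio / UPWARD code.
// Contrasts the two response shapes described above.

// SSR: the server returns fully rendered HTML; the browser can paint
// meaningful content immediately, and crawlers can index it directly.
function renderServerSide(title, body) {
  return `<!DOCTYPE html><html lang="en"><head><title>${title}</title></head>` +
         `<body><main>${body}</main></body></html>`;
}

// CSR: the server returns a minimal shell; a JavaScript bundle
// finishes rendering into the root element after the page loads.
function renderClientShell(title, bundle = "/app.js") {
  return `<!DOCTYPE html><html lang="en"><head><title>${title}</title></head>` +
         `<body><div id="root"></div><script src="${bundle}"></script></body></html>`;
}
```

A hybrid setup like Venia's effectively does both: the first response is an SSR application shell, and the bundle it references completes the page on the client.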
42.973451
186
0.773064
eng_Latn
0.995229
617824dde78309887c3296776ca1fb1fd08a5231
1,402
md
Markdown
full-stack-development/md/fabulous.md
chestercodes/gitpitch-talks
9cc3ec42f97a8e5beb9a1095e72cf98f7f309d1d
[ "Apache-2.0" ]
null
null
null
full-stack-development/md/fabulous.md
chestercodes/gitpitch-talks
9cc3ec42f97a8e5beb9a1095e72cf98f7f309d1d
[ "Apache-2.0" ]
21
2020-07-18T05:21:14.000Z
2022-02-26T17:28:29.000Z
full-stack-development/md/fabulous.md
chestercodes/gitpitch-talks
9cc3ec42f97a8e5beb9a1095e72cf98f7f309d1d
[ "Apache-2.0" ]
null
null
null
### Fabulous
### (formerly Elmish.XamarinForms)

---

### Xamarin

- Made by creators of mono, allows running .Net on other platforms
- Xamarin.Android - .Net wrapper around Android APIs
- Xamarin.iOS - .Net wrapper around iOS APIs
- Other platforms

---

![Xamarin](full-stack-development/assets/img/xamarinPlatform.png)

https://docs.microsoft.com/en-gb/xamarin/cross-platform/app-fundamentals/building-cross-platform-applications/overview

---

### Xamarin.Forms

- Cross-platform UI toolkit, abstracts components
- Define single UI in XAML
- Platform specific controls rendered

---

![Xamarin](full-stack-development/assets/img/xamarinNotes.png)

https://docs.microsoft.com/en-gb/xamarin/get-started/quickstarts/styling?pivots=windows

---

### Fabulous

- Elmish abstractions
- Mobile app development tools / APIs
- View library to create Xamarin.Forms components

---?code=full-stack-development/code/scripts/CreateFabApp.ps1&lang=ps

---

![Files](full-stack-development/assets/img/FabulousAppNewFilesAnnotated.jpg)

---

### Code demo

---

### Code sharing

| Module       | Server | Browser | iOS | Android |
| ------------ | ------ | ------- | --- | ------- |
| Domain       | yes    | yes     | yes | yes     |
| DataTransfer | yes    | yes     | yes | yes     |
| Validation   | yes    | yes     | yes | yes     |
| ModelUpdate  |        | yes     | yes | yes     |
19.746479
118
0.651926
eng_Latn
0.529192
617865eb2c85230e97a05fbe4ab8090349426819
380
md
Markdown
Gateway/Kafka/IaC/README.md
OpenTouryoProject/DataPipeline
068d067cfcb57fef0e84d67c388c105a546d4ed1
[ "MIT" ]
null
null
null
Gateway/Kafka/IaC/README.md
OpenTouryoProject/DataPipeline
068d067cfcb57fef0e84d67c388c105a546d4ed1
[ "MIT" ]
null
null
null
Gateway/Kafka/IaC/README.md
OpenTouryoProject/DataPipeline
068d067cfcb57fef0e84d67c388c105a546d4ed1
[ "MIT" ]
null
null
null
# Simplest

This IaC builds Apache Kafka, an open-source distributed event streaming platform.

### How to perform

Run the `docker-compose up` command in this directory.

### Reference

- Kafka Tutorial - Development Infrastructure Subcommittee Wiki > Details > Broker settings > With Docker
  https://dotnetdevelopmentinfrastructure.osscons.jp/index.php?Kafka%E3%83%81%E3%83%A5%E3%83%BC%E3%83%88%E3%83%AA%E3%82%A2%E3%83%AB#v34b4fc4
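The README refers to a `docker-compose up` command but does not show the compose file inline. A minimal sketch of the kind of `docker-compose.yml` such a setup typically assumes might look like the following; the image names, ports, and settings are illustrative assumptions and are not taken from this repository:

```yaml
# Hypothetical single-broker Kafka setup (illustrative sketch only).
version: "3"
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
  kafka:
    image: confluentinc/cp-kafka:latest
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
```

With a file like this in the current directory, `docker-compose up` starts a single broker reachable at `localhost:9092`.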
38
138
0.768421
eng_Latn
0.677837
61786d5de66413c00610c6c8dd665e6216381459
160
md
Markdown
content/www_speedshop_co.md
chrisportela/250kb-club
737c98d8dee407272ae5ae7abbc24d0ea69599a8
[ "MIT" ]
null
null
null
content/www_speedshop_co.md
chrisportela/250kb-club
737c98d8dee407272ae5ae7abbc24d0ea69599a8
[ "MIT" ]
null
null
null
content/www_speedshop_co.md
chrisportela/250kb-club
737c98d8dee407272ae5ae7abbc24d0ea69599a8
[ "MIT" ]
null
null
null
+++
title = "www.speedshop.co"
date = "2022-03-22"
updated = "2022-03-22"
weight = 85554

[extra]
source = "https://www.speedshop.co/"
ratio = 17
size = 84
+++
13.333333
36
0.625
eng_Latn
0.296148
6178a6431417d2d7100dc326e6463346cf744100
13,099
md
Markdown
articles/service-fabric/service-fabric-reliable-actors-reliabledictionarystateprovider-configuration.md
OpenLocalizationTestOrg/azure-docs-pr15_pl-PL
18fa7535e7cdf4b159e63a40776995fa95f1f314
[ "CC-BY-3.0", "CC-BY-4.0", "MIT" ]
null
null
null
articles/service-fabric/service-fabric-reliable-actors-reliabledictionarystateprovider-configuration.md
OpenLocalizationTestOrg/azure-docs-pr15_pl-PL
18fa7535e7cdf4b159e63a40776995fa95f1f314
[ "CC-BY-3.0", "CC-BY-4.0", "MIT" ]
null
null
null
articles/service-fabric/service-fabric-reliable-actors-reliabledictionarystateprovider-configuration.md
OpenLocalizationTestOrg/azure-docs-pr15_pl-PL
18fa7535e7cdf4b159e63a40776995fa95f1f314
[ "CC-BY-3.0", "CC-BY-4.0", "MIT" ]
null
null
null
<properties
    pageTitle="Configuring Azure Service Fabric Reliable Actors ReliableDictionaryActorStateProvider | Microsoft Azure"
    description="Learn about configuring stateful Azure Service Fabric Reliable Actors of type ReliableDictionaryActorStateProvider."
    services="Service-Fabric"
    documentationCenter=".net"
    authors="sumukhs"
    manager="timlt"
    editor=""/>

<tags
    ms.service="Service-Fabric"
    ms.devlang="dotnet"
    ms.topic="article"
    ms.tgt_pltfrm="NA"
    ms.workload="NA"
    ms.date="07/18/2016"
    ms.author="sumukhs"/>

# <a name="configuring-reliable-actors--reliabledictionaryactorstateprovider"></a>Configuring Reliable Actors--ReliableDictionaryActorStateProvider

The default configuration of ReliableDictionaryActorStateProvider can be modified by changing the settings.xml file that Visual Studio generates in the configuration folder under the package root of the specified actor.

The Azure Service Fabric runtime looks for predefined section names in the settings.xml file and consumes the configuration values while creating the runtime components.

>[AZURE.NOTE] Do **not** delete or modify the section names of the following configurations in the settings.xml file that is generated in the Visual Studio solution.

There are also global settings that affect the configuration of ReliableDictionaryActorStateProvider.

## <a name="global-configuration"></a>Global configuration

The global configuration is specified in the cluster manifest for the cluster under the KtlLogger section. It allows configuration of the shared log location and size, plus the global memory limits used by the logger. Note that changes in the cluster manifest affect all services that use ReliableDictionaryActorStateProvider and stateful Reliable Services.

The cluster manifest is a single XML file that holds settings and configurations that apply to all nodes and services in the cluster. The file is typically called ClusterManifest.xml. You can see the cluster manifest for your cluster by using the Get-ServiceFabricClusterManifest powershell command.

### <a name="configuration-names"></a>Configuration names

|Name|Unit|Default value|Remarks|
|----|----|-------------|-------|
|WriteBufferMemoryPoolMinimumInKB|Kilobytes|8388608|Minimum number of KB to allocate in kernel mode for the logger write buffer memory pool. This memory pool is used for caching state information before it is written to disk.|
|WriteBufferMemoryPoolMaximumInKB|Kilobytes|No limit|Maximum size to which the logger write buffer memory pool can grow.|
|SharedLogId|GUID|""|Specifies a unique GUID to use for identifying the default shared log file used by all Reliable services on all nodes in the cluster that do not specify SharedLogId in their service-specific configuration. If SharedLogId is specified, SharedLogPath must also be specified.|
|SharedLogPath|Fully qualified path name|""|Specifies the fully qualified path of the shared log file used by all Reliable services on all nodes in the cluster that do not specify SharedLogPath in their service-specific configuration. However, if SharedLogPath is specified, SharedLogId must also be specified.|
|SharedLogSizeInMB|Megabytes|8192|Specifies the number of MB of disk space to allocate for the shared log. The value must be 2048 or larger.|

### <a name="sample-cluster-manifest-section"></a>Sample cluster manifest section

```xml
<Section Name="KtlLogger">
    <Parameter Name="WriteBufferMemoryPoolMinimumInKB" Value="8192" />
    <Parameter Name="WriteBufferMemoryPoolMaximumInKB" Value="8192" />
    <Parameter Name="SharedLogId" Value="{7668BB54-FE9C-48ed-81AC-FF89E60ED2EF}"/>
    <Parameter Name="SharedLogPath" Value="f:\SharedLog.Log"/>
    <Parameter Name="SharedLogSizeInMB" Value="16383"/>
</Section>
```

### <a name="remarks"></a>Remarks

The logger has a global pool of memory, allocated from non-paged kernel memory, that is available to all Reliable services on a node for caching state data before it is written to the dedicated log associated with the Reliable service replica. The pool size is controlled by the WriteBufferMemoryPoolMinimumInKB and WriteBufferMemoryPoolMaximumInKB settings. WriteBufferMemoryPoolMinimumInKB specifies both the initial size of this memory pool and the lowest size to which the memory pool may shrink. WriteBufferMemoryPoolMaximumInKB is the highest size to which the memory pool may grow. Each Reliable service replica that is opened may increase the size of the memory pool by a system-determined amount, up to WriteBufferMemoryPoolMaximumInKB. If there is more demand for memory than is available from the memory pool, memory requests are delayed until memory becomes available. Therefore, if the write buffer memory pool is too small for a particular configuration, performance may suffer.

The SharedLogId and SharedLogPath settings are always used together to define the GUID and location of the default shared log for all nodes in the cluster. The default shared log is used for all Reliable services that do not specify these settings in the settings.xml for the specific service. For best performance, shared log files should be placed on disks that are used solely for the shared log file, to reduce contention.

SharedLogSizeInMB specifies the amount of disk space to allocate for the default shared log on all nodes. SharedLogId and SharedLogPath do not need to be specified in order for SharedLogSizeInMB to be specified.

## <a name="replicator-security-configuration"></a>Replicator security configuration

Replicator security configurations are used to secure the communication channel that is used during replication. This means that services cannot see each other's replication traffic, ensuring that the data that is made highly available is also secure. By default, an empty security configuration section prevents replication security.

### <a name="section-name"></a>Section name

&lt;ActorName&gt;ServiceReplicatorSecurityConfig

## <a name="replicator-configuration"></a>Replicator configuration

Replicator configurations are used to configure the replicator that is responsible for making the Actor State Provider state highly reliable by replicating and persisting the state locally. The default configuration is generated by the Visual Studio template and should suffice. This section talks about additional settings that are available to tune the replicator.

### <a name="section-name"></a>Section name

&lt;ActorName&gt;ServiceReplicatorConfig

### <a name="configuration-names"></a>Configuration names

|Name|Unit|Default value|Remarks|
|----|----|-------------|-------|
|BatchAcknowledgementInterval|Seconds|0.015|Time period for which the replicator at the secondary waits after receiving an operation before sending back an acknowledgement to the primary. Any other acknowledgements to be sent for operations processed within this interval are sent as one response.|
|ReplicatorEndpoint|N/A|No default--required parameter|IP address and port that the primary/secondary replicator uses to communicate with other replicators in the replica set. This should reference a TCP resource endpoint in the service manifest. Refer to [Service manifest resources](service-fabric-service-manifest-resources.md) to read more about defining endpoint resources in a service manifest.|
|MaxReplicationMessageSize|Bytes|50 MB|Maximum size of replication data that can be transferred in a single message.|
|MaxPrimaryReplicationQueueSize|Number of operations|8192|Maximum number of operations in the queue at the primary. An operation is freed up after the primary replicator receives an acknowledgement from all the secondary replicators. This value must be greater than 64 and a power of 2.|
|MaxSecondaryReplicationQueueSize|Number of operations|16384|Maximum number of operations in the queue at the secondary. An operation is freed up after its state is made highly available through persistence. This value must be greater than 64 and a power of 2.|
|CheckpointThresholdInMB|MB|200|Amount of log file space after which the state is checkpointed.|
|MaxRecordSizeInKB|KB|1024|Largest record size that the replicator may write in the log. This value must be a multiple of 4 and greater than 16.|
|OptimizeLogForLowerDiskUsage|Boolean|true|When true, the log is configured so that the replica's dedicated log file is created by using an NTFS sparse file. This lowers the actual disk space usage for the file. When false, the file is created with fixed allocations, which provide the best write performance.|
|SharedLogId|GUID|""|Specifies a unique GUID to use for identifying the shared log file used with this replica. Typically, services should not use this setting. However, if SharedLogId is specified, SharedLogPath must also be specified.|
|SharedLogPath|Fully qualified path name|""|Specifies the fully qualified path where the shared log file for this replica is created. Typically, services should not use this setting. However, if SharedLogPath is specified, SharedLogId must also be specified.|

## <a name="sample-configuration-file"></a>Sample configuration file

```xml
<?xml version="1.0" encoding="utf-8"?>
<Settings xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://schemas.microsoft.com/2011/01/fabric">
   <Section Name="MyActorServiceReplicatorConfig">
      <Parameter Name="ReplicatorEndpoint" Value="MyActorServiceReplicatorEndpoint" />
      <Parameter Name="BatchAcknowledgementInterval" Value="0.05"/>
      <Parameter Name="CheckpointThresholdInMB" Value="180" />
   </Section>
   <Section Name="MyActorServiceReplicatorSecurityConfig">
      <Parameter Name="CredentialType" Value="X509" />
      <Parameter Name="FindType" Value="FindByThumbprint" />
      <Parameter Name="FindValue" Value="9d c9 06 b1 69 dc 4f af fd 16 97 ac 78 1e 80 67 90 74 9d 2f" />
      <Parameter Name="StoreLocation" Value="LocalMachine" />
      <Parameter Name="StoreName" Value="My" />
      <Parameter Name="ProtectionLevel" Value="EncryptAndSign" />
      <Parameter Name="AllowedCommonNames" Value="My-Test-SAN1-Alice,My-Test-SAN1-Bob" />
   </Section>
</Settings>
```

## <a name="remarks"></a>Remarks

The BatchAcknowledgementInterval parameter controls replication latency. A value of "0" results in the lowest possible latency, at the cost of throughput (as more acknowledgement messages must be sent and processed, each containing fewer acknowledgements). The larger the value for BatchAcknowledgementInterval, the higher the overall replication throughput, at the cost of higher operation latency. This translates directly to the latency of transaction commits.

The CheckpointThresholdInMB parameter controls the amount of disk space that the replicator can use to store state information in the replica's dedicated log file. Increasing this to a value higher than the default could result in faster reconfiguration times when a new replica is added to the set. This is due to the partial state transfer that takes place because more history of operations is available in the log. It can potentially increase the recovery time of a replica after a crash.

If OptimizeForLowerDiskUsage is set to true, log file space is over-provisioned so that active replicas can store more state information in their log files, while inactive replicas use less disk space. This makes it possible to host more replicas on a node. If OptimizeForLowerDiskUsage is set to false, state information is written to the log files more quickly.

The MaxRecordSizeInKB setting defines the maximum size of a record that the replicator can write into the log file. In most cases, the default 1024-KB record size is optimal. However, if the service is causing larger data items to be part of the state information, this value might need to be increased. There is little benefit in making MaxRecordSizeInKB smaller than 1024, as smaller records use only the space needed for the smaller record. We expect that this value would need to be changed only occasionally.

The SharedLogId and SharedLogPath settings are always used together to make a service use a separate shared log instead of the default shared log for the node. For best efficiency, as many services as possible should specify the same shared log. Shared log files should be placed on disks that are used solely for the shared log file, to reduce head-movement contention. We expect that these values would need to be changed only occasionally.
104.792
1,010
0.822964
pol_Latn
0.999954
6179789ca5de9945debf8f4d9b5853eb7a298bd9
8,729
md
Markdown
tutorials/webide-extension-build/webide-extension-build.md
pbrodie/Tutorials
adb9084067e07d6dc0b52cad902288f742923a07
[ "Apache-2.0" ]
1
2019-03-22T18:20:35.000Z
2019-03-22T18:20:35.000Z
tutorials/webide-extension-build/webide-extension-build.md
TonyDag/Tutorials
d78102194c0985b931510a1d5a47ddc8b64d7241
[ "Apache-2.0" ]
null
null
null
tutorials/webide-extension-build/webide-extension-build.md
TonyDag/Tutorials
d78102194c0985b931510a1d5a47ddc8b64d7241
[ "Apache-2.0" ]
null
null
null
--- title: Build and Deploy an SAP Web IDE Extension (MTA Project) description: Build and deploy an SAP Web IDE extension (MTA project). auto_validation: true primary_tag: products>sap-web-ide tags: [ tutorial>intermediate, products>sap-web-ide, products>sap-web-ide-plug-ins ] time: 20 --- ## Prerequisites - You have created an SAP Web IDE extension (MTA project) as described in [Create a Basic SAP Web IDE Extension (MTA Project)](webide-extension-basic). ## Details ### You will learn - How to build and test your SAP Web IDE extension - How to deploy an MTA extension to Cloud Foundry - How to create a destination that points to your extension - How to activate an extension You use an extension to bundle and deliver plugins, since one extension may be composed of several plugins in order to provide a certain functionality. Each extension contains a `package.json` file with all the extension information and included plugins. You need to deploy your extension to the Cloud Foundry environment or Neo on SAP Cloud Platform in order to display it in the list of all the available external extensions. This operation also automatically activates your new extension on SAP Cloud Platform. --- [ACCORDION-BEGIN [Step 1: ](Build your extension)] > This step shows you how to deploy your extension to Cloud Foundry. If you want to deploy your extension to Neo, go to [How to Build and Deploy an MTA Extension to Neo](https://sdk-sapwebide.dispatcher.hana.ondemand.com/index.html#/topic/f3dba320a676410a91eec673531bde2c) in the SAP Web IDE SDK. 1. In the Workspace, right-click your project folder and choose **Project > Project Settings**. ![Project settings](step1-project-settings.png) > If prompted, log in to your account. 2. Click **Cloud Foundry** and select your defined Cloud Foundry configuration or create one with a Cloud Foundry API endpoint, organization, and space in the provided dropdown lists. You need to provide your Cloud Foundry credentials. 
![Cloud Foundry settings](step1-cf-settings.png) 3. If you have no builder installed, choose **Install Builder**. If the **Reinstall Builder** button appears, there is no need to install a builder. >It may take a few minutes for the builder to be installed. Choose **Save**. 4. Right-click your project folder and then in the context menus, choose **Build > Build**. ![Build the project](step1-build.png) > It is possible to deliver an SAP Web IDE extension that requires no authentication. In the application descriptor file, you should configure the behavior of your extension just like any other application. You open the `xs-app.json` file and set the `authenticationMethod` property to `none`. Also, add the `authenticationType` properties and set them to `none` as shown in the following example: ``` { "authenticationMethod": "none", "routes": [{ "source": "^/client/package.json", "localDir": ".", "authenticationType": "none", "cacheControl": "no-cache,must-revalidate" }, { "source": "^/client/(.*)", "localDir": ".", "authenticationType": "none", "cacheControl": "public,max-age=31536000" }] } ``` For more information, see [Application Router Configuration Syntax](https://help.sap.com/viewer/65de2977205c403bbc107264b8eccf4b/Cloud/en-US/c103fb414988447ead2023f768096dcc.html). After a few moments, the `mta_archives` folder is created. ![MTA archives folder](step1-mta-archives-folder.png) [DONE] [ACCORDION-END] [ACCORDION-BEGIN [Step 2: ](Deploy your extension to SAP Cloud Platform)] 1. In the newly created `mta_archives > myextension` folder, right-click the `myextension_0.0.1.mtar` file and choose **Deploy > Deploy to SAP Cloud Platform**. ![Deploy](step2-deploy-cf.png) 2. In the dialog box, select your Cloud Foundry API endpoint, organization, and space, and choose **Deploy**. You can see the progress of the deployment in the console at the bottom of the screen. In the **Deploy to Cloud Foundry** dialog box, choose **Deploy**. 
    ![Deploy dialog box](step2-deploy-button.png)

3. In the **Tools** menu, choose **SAP Cloud Platform Cockpit**, then go to your space, and in the **Applications** section, you can see that the `myproject` application is started.

    ![Started](step2-started.png)

4. Select `myproject` to view the application details, and under **Application Routes**, copy the URL. Save this URL for step 3, where you will use it to create a destination.

    ![URL to save](step2-link.png)

You have successfully deployed your project to the Cloud Foundry environment on SAP Cloud Platform. You can now create a destination for your extension.

[DONE]
[ACCORDION-END]

[ACCORDION-BEGIN [Step 3: ](Create a new destination)]

In order for SAP Web IDE to recognize and consume your new extension, you need to create a new destination in SAP Cloud Platform cockpit. This destination will point to the application URL of your extension application on SAP Cloud Platform.

1. In SAP Cloud Platform cockpit, choose **Connectivity** > **Destination** > **New Destination**.

    > You can access SAP Cloud Platform cockpit from the **Tools** menu in SAP Web IDE.

    ![New destination](step3-NewDestination.png)

2. Enter the following parameters for your destination.

    |Parameter | Value |
    |--------------------|----------------------------------------|
    |`Name` | `myextension` |
    |`Type` | `HTTP` |
    |`Description` | `This is my extension.` |
    |`URL` | The application URL for your extension (which we copied to the clipboard in the previous step) |
    |`Proxy Type` | `Internet` |
    |`Authentication` | `NoAuthentication` |

    The parameters for the destination look like this:

    ![New destination](step3-DestinationParameters.png)

3. Before you save, add the following two SAP Web IDE properties by choosing **New Property** and entering the values shown below.
    |Parameter | Value |
    |------------------|-------------------------------------|
    |`WebIDEEnabled` | `true` |
    |`WebIDEUsage` | `feature` |

    > If the **New Property** button is not enabled (for example, because you inadvertently already saved your work), choose **Edit** to enable it.

    The SAP Web IDE properties for the destination look like this:

    ![New destination](step3-SAPWebIDEProperties.png)

4. Choose **Save**.

    It may take a few minutes for the destination configuration to take effect. Choose **Check Connection** to check if the destination has been finalized.

    ![Check connection](step3-check-connection.png)

> **CAUTION!** An SAP Web IDE extension extends the functionality of SAP Web IDE and provides new capabilities to your IDE. Such extensions and plugins have full privileges to access your browser, your computer, and any data stored in your SAP Web IDE Workspace or on SAP Cloud Platform, including the ability to read and modify your private and organizational data. Extensions and plugins that are not provided by SAP are the responsibility of the extension author and may have different privacy policies, terms of use, or quality levels. You can enable extensions and plugins that are not provided by SAP and use them at your own risk. It is strongly recommended that you enable only extensions and plugins that you trust. At any time and without warning, SAP reserves the right to remove, disable, or uninstall extensions or plugins that are not provided by SAP from your environment.

[DONE]
[ACCORDION-END]

[ACCORDION-BEGIN [Step 4: ](Enable the extension)]

In this step, you enable your new extension on the SAP Web IDE **Extensions** page.

1. In the left sidebar, choose **Preferences**.

    ![Choose Preferences](step4-preferences.png)

2. Then, under **Workspace Preferences**, choose **Extensions**.

    ![Choose extensions](step4-choose-extensions.png)

3.
On the **Extensions** page, navigate to the `myextension` tile, click the toggle button to turn it on, and then choose **Save**.

    ![Enable my extension](step4-enable-myextension.png)

You can now see that the new extension functionality is implemented in your SAP Web IDE.

> If at any time you want to remove the new extension functionality, simply use this toggle button to turn it off.

[VALIDATE_4]
[ACCORDION-END]

---
49.88
529
0.695956
eng_Latn
0.974622
6179b3efca298a321d0882add6c971df60a0f9fc
7,525
md
Markdown
README.md
withabound/irs
fed99e9d89866b941460e95893d46171946d8e58
[ "Apache-2.0" ]
null
null
null
README.md
withabound/irs
fed99e9d89866b941460e95893d46171946d8e58
[ "Apache-2.0" ]
null
null
null
README.md
withabound/irs
fed99e9d89866b941460e95893d46171946d8e58
[ "Apache-2.0" ]
null
null
null
moov-io/irs
===

[![GoDoc](https://godoc.org/github.com/moov-io/irs?status.svg)](https://godoc.org/github.com/moov-io/irs)
[![Build Status](https://github.com/moov-io/irs/workflows/Go/badge.svg)](https://github.com/moov-io/irs/actions)
[![Coverage Status](https://codecov.io/gh/moov-io/irs/branch/master/graph/badge.svg)](https://codecov.io/gh/moov-io/irs)
[![Go Report Card](https://goreportcard.com/badge/github.com/moov-io/irs)](https://goreportcard.com/report/github.com/moov-io/irs)
[![Apache 2 licensed](https://img.shields.io/badge/license-Apache2-blue.svg)](https://raw.githubusercontent.com/moov-io/irs/master/LICENSE)

IRS implements a reader, writer, and HTTP server for the IRS electronic [Filing Information Returns Electronically](https://www.irs.gov/e-file-providers/filing-information-returns-electronically-fire) (FIRE) format. Our tools and library operate at a higher level (JSON), which makes them easier for developers to leverage than the raw bytes (ASCII).

| Input | Output |
|------------|------------|
| JSON | JSON |
| ASCII FIRE | ASCII FIRE |
| | PDF Form |
| | SQL |

Docs: [Project](https://moov-io.github.io/irs/) | [API Endpoints](https://moov-io.github.io/irs/api/)

## Project Status

We are just getting started!

- [ ] 1099-MISC [About Form 1099-MISC](https://www.irs.gov/forms-pubs/about-form-1099-misc)
- [x] 1099-NEC [About Form 1099-NEC](https://www.irs.gov/forms-pubs/about-form-1099-nec)

... more to come, open an issue or pull request!

## Commands

`irs` provides a command-line interface to manage IRS files and to launch a web service.

```
irs --help
```
```
Usage:
   [command]

Available Commands:
  convert     Convert irs file format
  help        Help about any command
  print       Print irs file
  validator   Validate irs file
  web         Launches web server

Flags:
  -h, --help           help for this command
      --input string   input file (default is $PWD/irs.json)

Use " [command] --help" for more information about a command.
```

Each interaction that the library supports is exposed as a command-line option:

Command | Info
------- | -------
`convert` | Converts an IRS file to another format. The result is a new IRS file.
`print` | Prints an IRS file in a chosen format (json, irs).
`validator` | Validates an IRS file.
`web` | Launches a web server with endpoints to manage IRS files.

### file convert

```
irs convert --help
```
```
Usage:
   convert [output] [flags]

Flags:
      --format string   format of irs file(required) (default "json")
  -h, --help            help for convert

Global Flags:
      --input string   input file (default is $PWD/irs.json)
```

The output parameter is the full path name for the newly converted IRS file. The format parameter supports two types: "json" and "irs". The generate parameter replaces the trailer record in the file with a newly generated one. The input parameter is the source IRS file; both raw and JSON files are supported.

example:
```
irs convert output/output.json --input testdata/packed_file.json --format json
```

### file print

```
irs print --help
```
```
Usage:
   print [flags]

Flags:
      --format string   print format (default "json")
  -h, --help            help for print

Global Flags:
      --input string   input file (default is $PWD/irs.json)
```

The format parameter supports two types: "json" and "irs". The input parameter is the source IRS file; both raw and JSON files are supported.

### file validate

```
irs validator --help
```
```
Usage:
   validator [flags]

Flags:
  -h, --help   help for validator

Global Flags:
      --input string   input file (default is $PWD/irs.json)
```

The input parameter is the source IRS file; both raw and JSON files are supported.
example:
```
irs validator --input testdata/packed_file.dat
Error: is an invalid value of TotalConsumerSegmentsJ1

irs validator --input testdata/packed_file.json
```

### web server

```
irs web --help
```
```
Usage:
   web [flags]

Flags:
  -h, --help   help for web
  -t, --test   test server

Global Flags:
      --input string   input file (default is $PWD/irs.json)
```

The port parameter is the port number of the web service.

```
irs web
```

The web server has endpoints to manage IRS files:

Method | Endpoint | Content-Type | Info
------- | ------- | ------- | -------
`POST` | `/convert` | multipart/form-data | Converts an IRS file and downloads the new file.
`GET` | `/health` | text/plain | Checks the web server.
`POST` | `/print` | multipart/form-data | Prints an IRS file.
`POST` | `/validator` | multipart/form-data | Validates an IRS file.

Example web page using the IRS web server:

```
<!doctype html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <title>Single file upload</title>
</head>
<body>
<h1>Upload single file with fields</h1>

<form action="http://localhost:8208/convert" method="post" enctype="multipart/form-data">
  Format: <input type="text" name="format"><br>
  Files: <input type="file" name="file"><br><br>
  <input type="submit" value="Submit">
</form>
</body>
</html>
```

## Docker

You can run the [moov/irs Docker image](https://hub.docker.com/r/moov/irs), which defaults to starting the HTTP server.

```
docker run -p 8208:8208 moov/irs:latest
```

## Getting Started

Read through the [project docs](docs/README.md) over here to get an understanding of the purpose of this project and how to run it.

## Getting Help

channel | info
------- | -------
[Project Documentation](https://docs.moov.io/) | Our project documentation available online.
Twitter [@moov](https://twitter.com/moov) | You can follow Moov.io's Twitter feed to get updates on our project(s). You can also tweet us questions or just share blogs or stories.
[GitHub Issue](https://github.com/moov-io) | If you are able to reproduce a problem, please open a GitHub Issue under the specific project that caused the error.
[moov-io slack](https://slack.moov.io/) | Join our slack channel (`#irs`) to have an interactive discussion about the development of the project.

## Supported and Tested Platforms

- 64-bit Linux (Ubuntu, Debian), macOS, and Windows

## Contributing

Yes please! Please review our [Contributing guide](CONTRIBUTING.md) and [Code of Conduct](https://github.com/moov-io/ach/blob/master/CODE_OF_CONDUCT.md) to get started! Check out our [issues for first time contributors](https://github.com/moov-io/irs/contribute) for something to help out with.

This project uses [Go Modules](https://github.com/golang/go/wiki/Modules) and requires Go 1.14 or higher. See [Golang's install instructions](https://golang.org/doc/install) for help setting up Go. You can download the source code, and we offer [tagged and released versions](https://github.com/moov-io/irs/releases/latest) as well. We highly recommend you use a tagged release for production.

### Test Coverage

Improving test coverage is a good candidate for new contributors, and it also allows the project to move more quickly by reducing regression issues that might not be caught before a release is pushed out to our users. One great way to improve coverage is by adding edge cases and different inputs to functions (or [contributing and running fuzzers](https://github.com/dvyukov/go-fuzz)).

Tests can run processes (like sqlite databases), but should only do so locally.

## License

Apache License 2.0. See [LICENSE](LICENSE) for details.
32.575758
388
0.700199
eng_Latn
0.886209
6179f489b9a5060848706596a4d2b1271f70a454
4,296
md
Markdown
content/blog/HEALTH/5/c/679c2e83f54951ae7b9e263be7c115c3.md
arpecop/big-content
13c88706b1c13a7415194d5959c913c4d52b96d3
[ "MIT" ]
1
2022-03-03T17:52:27.000Z
2022-03-03T17:52:27.000Z
content/blog/HEALTH/5/c/679c2e83f54951ae7b9e263be7c115c3.md
arpecop/big-content
13c88706b1c13a7415194d5959c913c4d52b96d3
[ "MIT" ]
null
null
null
content/blog/HEALTH/5/c/679c2e83f54951ae7b9e263be7c115c3.md
arpecop/big-content
13c88706b1c13a7415194d5959c913c4d52b96d3
[ "MIT" ]
null
null
null
---
title: 679c2e83f54951ae7b9e263be7c115c3
mitle: "Is Transference the Reason Why I'm Attracted to My Therapist?"
image: "https://fthmb.tqn.com/pThIq_lZzNikVGWWWU8zVtwwPC0=/1500x1001/filters:fill(ABEAC3,1)/GettyImages-562435595web-56d5c3495f9b5879cc92f361.jpg"
description: ""
---

In psychoanalytic theory, transference occurs when a client projects feelings about someone else, particularly someone encountered in childhood, onto the therapist. Frequently spoken of in reference to the therapeutic relationship, the classic example of sexual transference is falling in love with one's therapist. However, a client may also transfer feelings such as rage, anger, distrust, or dependence.

There are three types of transference:

<ul><li>Positive</li><li>Negative</li><li>Sexualized</li></ul>

While transference is typically a term from the mental health field, it can manifest in daily life when the brain tries to comprehend a current experience by examining the present through the past and, to our detriment, limiting the intake of new information.

<h3>Transference is multilayered and complex</h3>

Transference can sometimes be an obstacle in therapy, as a client may feel a temptation to cut off the relationship altogether, or may become sullen and withdrawn during sessions, which impedes progress. However, working through the transferred feelings is an important part of psychodynamic therapy. The nature of the transference can provide important clues to the client's issues, and working through the situation can help to resolve deep-rooted conflicts in the client's psyche.

<h3>Positive Transference</h3>

Transference can be a good thing. You experience positive transference when you apply the enjoyable aspects of your past relationships to the relationship with your therapist. This can have a positive outcome because you see your therapist as caring, wise, and concerned about you.

<h3>Negative Transference</h3>

Negative transference sounds bad but can actually enhance the therapeutic experience. Once realized, the therapist can use it as a topic of discussion and examine your emotional response. This type of transference is especially useful if your therapist helps you overcome an emotional response that is out of proportion to what actually transpired during the session.

<h3>Sexualized Transference</h3>

Are you feeling attracted to your therapist? You could be experiencing sexualized transference if your feelings toward your therapist are:

<ul><li>Romantic or sensual</li><li>Intimate or sexual</li><li>Reverential or worshipful</li></ul>

<h3>Counter-Transference</h3>

The therapist must also be aware of the possibility that their own internal conflicts could be transferred to the client. This process, known as counter-transference, can greatly muddy the therapeutic relationship. Some studies suggest 76 percent of female therapists and 95 percent of male therapists admit to having felt sexual feelings toward their clients at one time or another. Despite the negative connotation of counter-transference, many psychotherapists are finding ways to use it in therapeutic ways.

<h3>Discussing transference with your therapist</h3>

Once your therapist recognizes that you are experiencing transference, they will probably address it in your sessions before it starts interfering with the therapeutic process. Discussing this sensitive issue can put strain on the relationship with your therapist because you, the client, may:

<ul><li>Feel stress</li><li>Regress, which can negate some of the positive progress already achieved</li><li>Feel embarrassed, uncomfortable, or recede emotionally</li></ul>

<strong>Common Misspellings: </strong>transferance, transferrence, transferrance

<strong>Examples: </strong>Michelle became angry with her therapist when they discussed the possibility of homework activities. Through exploration of her anger with her therapist, Michelle discovered that she was experiencing transference of unresolved anger toward an authoritarian elementary school teacher.

Sources: Ladson, et al. Psychiatry: Recognizing and Managing Erotic and Eroticized Transferences (2007). https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2921238/. Psychology Encyclopedia: Transference.
537
4,012
0.820531
eng_Latn
0.9819
617ab9842e80ee192f08c3e45b1df2d23f9fa350
38
md
Markdown
packages/plugin-scale/README.md
d07RiV/jimp
9924e875e0a43f496bcb9ce0b29838d533966278
[ "MIT" ]
null
null
null
packages/plugin-scale/README.md
d07RiV/jimp
9924e875e0a43f496bcb9ce0b29838d533966278
[ "MIT" ]
null
null
null
packages/plugin-scale/README.md
d07RiV/jimp
9924e875e0a43f496bcb9ce0b29838d533966278
[ "MIT" ]
null
null
null
# @jimp/plugin-scale

Scale an image.
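A minimal usage sketch may help; it assumes the main `jimp` package (which bundles this plugin) is installed and that a local `input.png` exists — both are assumptions, not part of this package:

```javascript
// Hypothetical example: scale an image to half its size.
// Assumes `npm install jimp` and a local input.png.
const Jimp = require('jimp');

Jimp.read('input.png')
  .then((image) => image.scale(0.5))           // resize to 50% of width/height
  .then((image) => image.writeAsync('output.png'))
  .catch((err) => console.error(err));
```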
9.5
20
0.710526
ron_Latn
0.360016
617bd141a3b62f5be64ca4caff9c780b3e3ccccf
2,784
md
Markdown
src/af/2019-03/04/04.md
PrJared/sabbath-school-lessons
94a27f5bcba987a11a698e5e0d4279b81a68bc9a
[ "MIT" ]
68
2016-10-30T23:17:56.000Z
2022-03-27T11:58:16.000Z
src/af/2019-03/04/04.md
PrJared/sabbath-school-lessons
94a27f5bcba987a11a698e5e0d4279b81a68bc9a
[ "MIT" ]
367
2016-10-21T03:50:22.000Z
2022-03-28T23:35:25.000Z
src/af/2019-03/04/04.md
PrJared/sabbath-school-lessons
94a27f5bcba987a11a698e5e0d4279b81a68bc9a
[ "MIT" ]
109
2016-08-02T14:32:13.000Z
2022-03-31T10:18:41.000Z
---
title: A King's Promises
date: 23/07/2019
---

`The counsel in Psalm 101 is really aimed at leaders. What can one draw from it, regardless of the position you hold or your station in life?`

Psalm 101 is a text for leaders. It is generally accepted that this song was written in the early years of David's reign, possibly as a version of his coronation oath. As a former soldier in King Saul's army who was ultimately persecuted by him, David knew exactly what damage an apostate king could do to a country and to his own household. He was determined to stay on the narrow path.

Few of us sit in the highest seats of politics and government. Yet each of us has a role to play in influencing and encouraging others. We carry this influence in various places: in our workplaces, in our communities through community involvement, and in our families or church families. Concerning one of these spheres of influence and the leadership it demands, Ellen G. White writes: "Psalm 101 is an oath that David took, one to which everyone should commit who has been given the task of protecting the family against destructive influences" (Counsels to Parents, Teachers, and Students, p. 119). With every new leader appointed over us, we should be ready to seize the opportunity to bring these principles home to that person, principles that we ourselves must also live out before them. Moreover, each of us has the opportunity, in our various leadership roles and spheres of influence, to apply David's leadership principles and so be a blessing to others.

David begins his song of praise by honoring the Lord for His faithfulness, justice, and great mercy (note the different renderings of Ps. 101:1 in English and Afrikaans translations). This forms the foundation for every principle he intended to follow as a leader, principles he wanted to learn and practice in his work and his life. Consequently, he had to resist the lure of wrongdoing, corruption, and dishonesty, the typical snares into which people with power and leadership mandates often fall. Good advisers are invaluable to every leader who wants to see justice done. Realizing this, David solemnly promises to find trustworthy advisers and to appoint honest officials. His leadership was to be marked by faithfulness, justice, and mercy, and so was that of the people who worked for, and with, him.

`Most of us do not have advisers and officials at our disposal. How can one nevertheless surround oneself with the right kinds of influences, those that help you lead a life of justice, righteousness, and mercy, or be a leader who (as far as possible) strives for these things, so that the downtrodden and needy may receive justice?`
232
1,182
0.789152
afr_Latn
1.000004
617cc99598f9cfe74a369bce77f0fd80a5607360
237
md
Markdown
README.md
TillMac/TKU_bodyTemp_check
74c2715c67b96b0389dad3190a62d7231bcf27b8
[ "MIT" ]
1
2022-03-14T17:15:23.000Z
2022-03-14T17:15:23.000Z
README.md
TillMac/TKU_bodyTemp_check
74c2715c67b96b0389dad3190a62d7231bcf27b8
[ "MIT" ]
null
null
null
README.md
TillMac/TKU_bodyTemp_check
74c2715c67b96b0389dad3190a62d7231bcf27b8
[ "MIT" ]
null
null
null
# TKU_bodyTemp_check

A real-name body-temperature reporting helper for Tamkang University (TKU), for lazy students who would rather not scan the QR code to log in.

# Tutorial

1. [Click here to open "Too Lazy to Take My Temperature"](https://tillmac.github.io/TKU_bodyTemp_check/)
2. Select the venue you want to visit.
![image](img/tutorial_1.png)
3. When you are done, don't forget to click the link at the bottom to go to the real registration system!
![image](img/tutorial_2.png)
19.75
61
0.759494
yue_Hant
0.278419
617cea85976a456f101f9bbbb55e4913e128d0cf
314
md
Markdown
posts/2007/04/via-www-cs-helsinki-fi.md
atmos/tumblr.atmos.org
1865e6fe271d4c28047ac50fd4ace154be411ff1
[ "MIT" ]
null
null
null
posts/2007/04/via-www-cs-helsinki-fi.md
atmos/tumblr.atmos.org
1865e6fe271d4c28047ac50fd4ace154be411ff1
[ "MIT" ]
null
null
null
posts/2007/04/via-www-cs-helsinki-fi.md
atmos/tumblr.atmos.org
1865e6fe271d4c28047ac50fd4ace154be411ff1
[ "MIT" ]
2
2019-05-06T18:02:23.000Z
2019-05-06T18:27:47.000Z
<!--
id: 595115
link: http://tumblr.atmos.org/post/595115/via-www-cs-helsinki-fi
slug: via-www-cs-helsinki-fi
date: Wed Apr 04 2007 16:43:59 GMT-0700 (PDT)
publish: 2007-04-04
tags:
title: via www.cs.helsinki.fi
-->

via www.cs.helsinki.fi
======================

![](http://25.media.tumblr.com/595115_500.jpg)
18.470588
64
0.652866
yue_Hant
0.111282
617da6203a7102b9d7096c5aefc18cc18469da56
3,451
md
Markdown
docs/onsite/batch-task-server.md
jimmyhhansen/superoffice-docs
84c51bba70e740cea8babfa611aafff047ce4aeb
[ "MIT" ]
null
null
null
docs/onsite/batch-task-server.md
jimmyhhansen/superoffice-docs
84c51bba70e740cea8babfa611aafff047ce4aeb
[ "MIT" ]
null
null
null
docs/onsite/batch-task-server.md
jimmyhhansen/superoffice-docs
84c51bba70e740cea8babfa611aafff047ce4aeb
[ "MIT" ]
null
null
null
---
title: SuperOffice Batch Task Server
uid: batch_task_server
description: SuperOffice Batch Task Server
author: {github-id}
keywords:
so.topic: article
so.envir: onsite
so.client: web
---

# SuperOffice Batch Task Server

> [!NOTE]
> If you are running SuperOffice Sales & Marketing Web with remote NetServer web services, this must be set up on the server running NetServer web services.

First introduced in SuperOffice Web version 6.3, background computing plays an important role, accomplishing long-running tasks in the background while allowing the user to continue working unaffected.

Since 7.0, the batch tasks are configured to run in the IIS process instead of outside it. The standard *web.config* file declares:

![x -screenshot][img1]

The tasks are started by the IIS process and run on the web server rather than on a separate batch task service. This has simplified configuration and deployment a lot. [This tutorial][1] also drills down into how built-in tasks get started and demonstrates how this extensibility point can be leveraged for other applications.

The batch task service is automatically set up during installation of the Sales & Marketing web; you'll find it running under services on the server where IIS is running. The name is the same as the website name + Batch Task Server. If you have more than one web installation, you will find one batch server service for each install.

![x -screenshot][img2]

When the server is started, it will instantiate and connect to NetServer using the configuration settings found in the SoBatchService.exe.Config file (see the path to the executable).

**The main difference between the web.config file and the SoBatchService.exe.Config file is in the Session.mode. For the Batch task server, the session.mode = Process and not HttpContext.**

## Batch task server

The batch task service is used for time-consuming operations like:

* Printing reports
* Updating inbox
* Performing mail merge
* Publishing new user-defined fields (from version 7.5)
* Regenerating Status Monitors (from version 7.5)

When the batch task server is in use, the user will see a small icon in Sales & Marketing web.

![x -screenshot][img3]

Whenever a task is running, the area at the top right next to the free-text search area shows an animated icon of two wheels in motion to indicate that one or more batch tasks are running. Next to the icon, a text tells the user whose task is currently running if the user has only one task. If the user has more than one task, this text displays the number of tasks the user has, and the tooltip displays more details about the status and name of the tasks.

## Problem with tasks timing out?

Task execution can time out. This is handled in the batch processor in NetServer. If the queue of waiting tasks becomes too big, the administrator of the system should increase the number of simultaneous tasks that can run in this service, install multiple instances of the service on one machine, or set up more than one server running batch processing services.

Tasks will be found in the database table `crm7.batchtask`.

<!-- Referenced links -->
[1]: ../../../data-access/docs/tutorials/sem-batch-processing/index.md

<!-- Referenced images -->
[img1]: media/runtaskinprocess.png
[img2]: media/batchtaskservice.png
[img3]: media/webbatchprocessing1.png
57.516667
490
0.77253
eng_Latn
0.998647
617e26e90dffd080df2b2a45eb879543f4eb765b
628
md
Markdown
src/pages/use-cases/content-and-data-extraction/republish-pdf-content.md
AdobeDocs/document-services
b562c26a731a622ba099875b0c32924f75efb1e3
[ "Apache-2.0" ]
null
null
null
src/pages/use-cases/content-and-data-extraction/republish-pdf-content.md
AdobeDocs/document-services
b562c26a731a622ba099875b0c32924f75efb1e3
[ "Apache-2.0" ]
18
2021-06-28T13:11:51.000Z
2022-03-09T11:20:17.000Z
src/pages/use-cases/content-and-data-extraction/republish-pdf-content.md
AdobeDocs/pdfservices-api
b562c26a731a622ba099875b0c32924f75efb1e3
[ "Apache-2.0" ]
3
2021-06-30T17:04:50.000Z
2022-02-01T04:15:49.000Z
---
title: Extract PDF data | Adobe PDF Services API | Adobe Document Services
description: Unlock your PDF content to repurpose it for creation and republishing of new online content. Our PDF Services API helps you create, convert, OCR PDFs and more. Free 6-month trial. Learn more today.
---

import RepublishPDFContent from '../page-content/content-and-data-extraction/republish-pdf-content.md';

<Hero slots="heading" variant="fullwidth" theme="dark" customLayout className="herobgImage"/>

# Document Services API Use Cases

<MenuWrapperComponent slots="content" repeat="1" theme="lightest"/>

<RepublishPDFContent />
36.941176
210
0.770701
yue_Hant
0.523481
617efdbfae0abf2aee69ef68697d7b11431efb2e
2,461
md
Markdown
src/pages/index.md
maibarra/discosopa-pitic
5900cffdadab29e94539063ade8846e7b7dcf142
[ "MIT" ]
null
null
null
src/pages/index.md
maibarra/discosopa-pitic
5900cffdadab29e94539063ade8846e7b7dcf142
[ "MIT" ]
null
null
null
src/pages/index.md
maibarra/discosopa-pitic
5900cffdadab29e94539063ade8846e7b7dcf142
[ "MIT" ]
null
null
null
---
templateKey: index-page
title: "Disco Sopa Pitic"
image: /img/p1340378.jpg
heading: "Less waste, less hunger"
subheading: Together we can turn waste into benefits for our community!
mainpitch:
  title: Why Disco Sopa Pitic?
  description: >
    We are a citizen initiative that aims to raise awareness of food waste in
    our community by creating spaces for learning, participation, and
    collaboration in a festive, musical atmosphere
description: "We organize food-rescue events that we call Disco Sopa, carried out in these three steps:"
intro:
  blurbs:
    - image: /img/whatsapp-image-2020-10-17-at-20.06.27.jpeg
      text: >-
        1. Food rescue: We rescue food by collecting it from businesses,
        markets, and restaurants, and through donations from civil society. We
        use the word rescue because these foods would otherwise end up as
        waste for cultural and aesthetic reasons.
    - image: /img/p1340366.jpg
      text: >
        2. Food preparation: We teach creative ways to use these ingredients
        in full, and we encourage attendees to replicate this in their own
        surroundings and so bring about a change in the food paradigm.
    - image: /img/62047002_1294498970712343_5624151964922150912_n.jpg
      text: >
        3. Festive learning: We raise attendees' awareness by presenting
        global and local figures on waste in social, economic, ecological, and
        food terms. Other ways we bring this issue to the public are
        conferences, talks, and workshops in various formats for every kind of
        audience.
  heading: " "
  description: " "
main:
  heading: Great coffee with no compromises
  description: >
    We hold our coffee to the highest standards from the shrub to the cup.
    That’s why we’re meticulous and transparent about each step of the
    coffee’s journey.
    We personally visit each farm to make sure the conditions are optimal for
    the plants, farmers and the local environment.
  image1:
    alt: A close-up of a paper filter filled with ground coffee
    image: /img/products-grid3.jpg
  image2:
    alt: A green cup of a coffee on a wooden table
    image: /img/products-grid2.jpg
  image3:
    alt: Coffee beans
    image: /img/products-grid1.jpg
---
43.946429
245
0.741568
spa_Latn
0.946145
617f0f9788c2f755ea18b59cda981249f3ace3c7
58
md
Markdown
README.md
andrefmuller/curso_javascript
fb237c72217e8e3cc9fbd55a49b0318e5c6c5eb5
[ "MIT" ]
null
null
null
README.md
andrefmuller/curso_javascript
fb237c72217e8e3cc9fbd55a49b0318e5c6c5eb5
[ "MIT" ]
null
null
null
README.md
andrefmuller/curso_javascript
fb237c72217e8e3cc9fbd55a49b0318e5c6c5eb5
[ "MIT" ]
null
null
null
# curso_javascript

JavaScript course from Curso em Vídeo
19.333333
38
0.810345
por_Latn
0.994875
617fa8a2f87a692a04bf3b537ced9a94d7116a7f
577
md
Markdown
README.md
dwjpeng/actc
7ff6b05bc8bfd691d5d7f566734345bd91a21081
[ "MIT" ]
null
null
null
README.md
dwjpeng/actc
7ff6b05bc8bfd691d5d7f566734345bd91a21081
[ "MIT" ]
null
null
null
README.md
dwjpeng/actc
7ff6b05bc8bfd691d5d7f566734345bd91a21081
[ "MIT" ]
null
null
null
# Roxe Chain

#### Ubuntu 18.04 Package

```sh
$ wget https://github.com/dwjpeng/RoxeChain/raw/roxe/releases/download/v1.0.0/RoxeChain-1.0.0.ubuntu-18.04-x86_64.tar.gz

file md5sum: 0184b08d5aeb7b93798bdf57404d13d7
```

#### Centos7 Package

```sh
$ wget https://github.com/dwjpeng/RoxeChain/raw/roxe/releases/download/v1.0.0/RoxeChain-1.0.0.x86_64-0.x86_64.tar.gz

file md5sum: 3592a6e9d1db57e770a1b8bdbeb6d9db
```

## Supported Operating Systems

Roxe Chain currently supports the following operating systems:

1. Amazon 2017.09 and higher
2. Centos 7
3. Ubuntu 18.04
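After downloading, it is worth checking the archive against the md5 listed above before unpacking. A sketch of the pattern, demonstrated on a small stand-in file (substitute the real tarball name, and use the published hash string for `expected` instead of recomputing it):

```shell
# Verify a file against a published md5 checksum.
# Stand-in file for demonstration; in real use, archive is the downloaded
# tarball and expected is the hash printed in the package section above.
printf 'stand-in archive contents' > demo.tar.gz
expected=$(md5sum demo.tar.gz | awk '{print $1}')   # normally the published hash

actual=$(md5sum demo.tar.gz | awk '{print $1}')
if [ "$actual" = "$expected" ]; then
    echo "checksum OK"
else
    echo "checksum MISMATCH: got $actual" >&2
fi
```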
19.896552
120
0.750433
eng_Latn
0.232523
61811283c15eb7c7e387dcf7f7bb0ae058f2d06a
22
md
Markdown
README.md
EricJPogue/matchmaker-version-2
82214c6520666033b4121b904573bd6260b57665
[ "MIT" ]
null
null
null
README.md
EricJPogue/matchmaker-version-2
82214c6520666033b4121b904573bd6260b57665
[ "MIT" ]
null
null
null
README.md
EricJPogue/matchmaker-version-2
82214c6520666033b4121b904573bd6260b57665
[ "MIT" ]
null
null
null
# matchmaker-version-2
22
22
0.818182
swe_Latn
0.471665
6182235d74a8de097a08068eab29be381def35fa
5,235
md
Markdown
README.md
mdechiaro/vmware
02ab2763b788aa15d77b656fc9c059d14bfe1a64
[ "MIT" ]
4
2016-05-02T10:12:46.000Z
2018-12-14T21:30:23.000Z
README.md
mdechiaro/vmware
02ab2763b788aa15d77b656fc9c059d14bfe1a64
[ "MIT" ]
4
2018-03-14T15:37:12.000Z
2018-05-20T12:18:58.000Z
README.md
mdechiaro/vmware
02ab2763b788aa15d77b656fc9c059d14bfe1a64
[ "MIT" ]
2
2016-03-25T18:46:16.000Z
2018-03-13T20:49:31.000Z
vctools (deprecated)
======

This project is no longer maintained.

[![Status](https://travis-ci.org/mdechiaro/vctools.svg?branch=master)](https://travis-ci.org/mdechiaro/vctools)

This is a Python module using pyvmomi which aims to simplify command-line operations inside vCenter for linux sysadmins. Here is a short list of what it can do:

- Completely automate a new VM creation from start to finish. This includes creating a boot ISO if your environment does not support DHCP.
- Reconfigure VM hardware like networks, disks, CPU, and memory
- Query various information on VMs, Datastores, Datacenters, etc
- Upload ISOs to remote datastores, and mount and unmount them on VMs.
- Upgrade VM hardware

Python: 3.6

Dependencies (all available from pip):

- pipenv

Install:

    git clone https://www.github.com/mdechiaro/vctools
    cd vctools
    pipenv --python 3.6 && pipenv install
    cp vctools/examples/vctoolsrc.yaml.example ~/.vctoolsrc.yaml
    ln -s vctools/main.py ~/bin/vctools

If you wish to share this project with other users, then copy the file to the root of the project and edit the group permissions appropriately.

    cp vctools/examples/vctoolsrc.yaml.example vctools/vctoolsrc.yaml

VM Creation:

This program will merge a default "rc" config file and a server config into a VM creation config. It will prompt the user for any missing info that is required to create a VM, and then automate the build process from start to finish. It is capable of creating a boot ISO per server for situations when DHCP or PXE booting is not an option. It can also upload, mount, and power on the VM after its creation, making the process completely automated. It can handle multiple configs at once and merge them separately with the dotrc for complex configurations.

An example yaml config (leave options out and be prompted if necessary):

    # hostname.yaml
    ---
    mkbootiso:
      ks: http://host.domain.com/ks.cfg
      options:
        gateway: 10.1.1.1
        hostname: hostname.domain.com
        ip: 10.1.1.10
        nameserver: 4.2.2.2
        netmask: 255.255.255.0
      source: /opt/isos/rhel7
    vmconfig:
      cluster: 1234_cluster_01
      cpuHotAddEnabled: true
      datacenter: Linux
      datastore: 1234_datastore_01
      disks:
        # assign disks to specific scsis
        0:
          - 50
        1:
          - 1500
      folder: Linux Team
      guestId: rhel7_64Guest
      memoryHotAddEnabled: true
      memoryMB: 4096
      name: hostname
      nics:
        - 1000_foo_network
        - 1001_bar_network
      numCPUs: 2

Any configs that you wish to be set as defaults should be added to `~/.vctoolsrc.yaml` or `/path/to/vctools/vctoolsrc.yaml`, and then can be overridden on a per server basis with user supplied configs. In addition, any features that you do not need should be completely omitted. The creation process will output all configurations for the server in YaML format for easy rebuilds in the future.

Command Line (Argparse) Usage:

Create a New VM:

    vctools create vcenter hostname.yaml hostnameN.yaml

Create a VM config from an existing system:

    vctools query vcenter --vmconfig existing_vm --createcfg new_vm > new_vm.yaml

Mount an ISO:

    vctools mount vcenter --name server --path /path/to/file.iso --datastore datastore

Query Datastore Info:

    vctools query vcenter --cluster cluster --datastores

Reconfig Parameters help:

    vctools reconfig [-h|--help]

    # reconfigure config settings
    # lookup vmware sdk configspec for all options
    vctools reconfig <vc> <name> --cfgs memoryMB=<int>,numCPUs=<int>

    # reconfigure a disk
    vctools reconfig <vc> <name> --device disk --disk-id <int> --sizeGB <int>

    # reconfigure a network card
    vctools reconfig <vc> <name> --device nic --nic-id <int> --network <network>

Unmount an ISO:

    vctools umount vcenter --name server

Upload ISO to Datastore:

    vctools upload vcenter --iso /local/path/to/file.iso \
        --dest /remote/path/to/iso/folder --datastore datastore \
        --datacenter datacenter

Contributing:

Pull requests are welcome. Travis CI will test for syntax errors, so it is recommended that you run this code when making changes and before you commit.

    # run inside project directory
    find . -not \( -name ".venv" -prune \) -name "*.py" -type f | xargs pylint --rcfile=.pylintrc

Here's a quick way to set it up in the Python interpreter, and then you can move freely around the interface. The commands dir() and getattr() are very helpful.

    from pyVmomi import vim
    from vctools.auth import Auth
    from vctools.query import Query

    auth = Auth(<vcenter_host>)
    auth.login()
    Password:

    query = Query()

You can create numerous containers like so:

    virtual_machines = query.create_container(
        auth.session, auth.session.content.rootFolder, [vim.VirtualMachine], True
    )
    clusters = query.create_container(
        auth.session, auth.session.content.rootFolder, [vim.ComputeResource], True
    )

    vm_name = query.get_obj(virtual_machines.view, 'vm_name')
    dir(vm_name)

Thanks:

A special thanks goes out to VMware Onyx, as well as my colleagues, which allowed me to make this code possible.
29.914286
111
0.717861
eng_Latn
0.974417
618225118fb098627273d095f61b83c50109db78
542
md
Markdown
docs/api/alfa-aria.role_namespace.isname_1_function.md
Siteimprove/alfa
3eb032275a9fa5f3b97b892e28ebfc90eb4ef611
[ "MIT" ]
70
2018-05-25T16:02:23.000Z
2022-03-21T14:28:03.000Z
docs/api/alfa-aria.role_namespace.isname_1_function.md
Siteimprove/alfa
3eb032275a9fa5f3b97b892e28ebfc90eb4ef611
[ "MIT" ]
448
2018-06-01T08:46:47.000Z
2022-03-31T14:02:55.000Z
docs/api/alfa-aria.role_namespace.isname_1_function.md
Siteimprove/alfa
3eb032275a9fa5f3b97b892e28ebfc90eb4ef611
[ "MIT" ]
13
2018-07-04T19:47:49.000Z
2022-02-19T09:59:34.000Z
<!-- Do not edit this file. It is automatically generated by API Documenter. -->

[Home](./index.md) &gt; [@siteimprove/alfa-aria](./alfa-aria.md) &gt; [Role](./alfa-aria.role_namespace.md) &gt; [isName](./alfa-aria.role_namespace.isname_1_function.md)

## Role.isName() function

<b>Signature:</b>

```typescript
function isName(value: string): value is Name;
```

## Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| value | string | |

<b>Returns:</b>

value is [Name](./alfa-aria.role_namespace.name_typealias.md)
23.565217
170
0.658672
eng_Latn
0.511952
61824cca43a52226e9586d518b1ab6128a23d581
1,604
md
Markdown
README.md
KingdomB/web-341
941155a3ba910d861d9713bcf485091dd25e3dc2
[ "MIT" ]
14
2020-02-17T14:15:35.000Z
2022-03-20T07:02:43.000Z
README.md
ashleigh-lyman/web-341
8c50cce1aad674e9d9ab058fb27f6bff3296e33f
[ "MIT" ]
4
2020-05-06T18:05:19.000Z
2021-02-11T16:49:58.000Z
README.md
ashleigh-lyman/web-341
8c50cce1aad674e9d9ab058fb27f6bff3296e33f
[ "MIT" ]
23
2020-02-18T01:16:18.000Z
2022-01-30T00:23:15.000Z
# WEB 340 Node.js

## [Bellevue University](http://bellevue.edu "Bellevue University is a private, non-profit university located in Bellevue, Nebraska, United States.")

Address: 1000 Galvin Rd S, Bellevue, Nebraska 68005 - [Directions](https://www.google.com/maps/dir/''/Bellevue+University/@41.1509562,-95.9896355,12z/data=!4m8!4m7!1m0!1m5!1m1!1s0x8793886a86ca807f:0x838e857240d175eb!2m2!1d-95.9195956!2d41.1509774 "Google maps")

Web Development [Degree](http://www.bellevue.edu/degrees/bachelor/web-development-bs/ "Designed by developers for developers.")

## Course Description

This course introduces the process of building web-based applications in Node.js with Express. Students learn to create web forms, collect and process information obtained from them, retrieve and update information contained in a MongoDB database, and build stand-alone RESTful APIs. GitHub is used to host and share coding projects. Prerequisite: N/A

## Repository Overview

Carefully read the assigned chapters, videos, and narrative I've included under each exercise and assignment. Most exercises and assignments have runnable sample code so you can visually see the concept "in action." Assignments are broken into "milestones" and each "milestone" builds on the last. Approach every week from top-to-bottom and do not move to the next assignment/exercise without fully understanding the previous one.

```bash
git clone https://github.com/rrkrasso/web-340-code-snippets.git
cd web-340-code-snippets
```

Setting up and running the examples:

```bash
cd [week]/[folder]
npm install --save
npm start
```
45.828571
338
0.781796
eng_Latn
0.967477
6183070d17493b3d94786f54b3a491ce7165ec3d
1,063
md
Markdown
v2APIreference/Filters/Networkfilters/Mongoproxy.md
AsCat/envoyproxy_doc_ZH_CN
61809497df9d1a7101ed3db3cbd1c39227640b30
[ "Apache-2.0" ]
77
2017-12-11T07:21:27.000Z
2022-01-21T15:58:13.000Z
v2APIreference/Filters/Networkfilters/Mongoproxy.md
AsCat/envoyproxy_doc_ZH_CN
61809497df9d1a7101ed3db3cbd1c39227640b30
[ "Apache-2.0" ]
null
null
null
v2APIreference/Filters/Networkfilters/Mongoproxy.md
AsCat/envoyproxy_doc_ZH_CN
61809497df9d1a7101ed3db3cbd1c39227640b30
[ "Apache-2.0" ]
23
2017-12-14T09:00:38.000Z
2021-10-21T10:43:52.000Z
## Mongo proxy

MongoDB [configuration reference](../../../Configurationreference/Networkfilters/Mongoproxy.md).

### filter.network.MongoProxy

[filter.network.MongoProxy proto](https://github.com/envoyproxy/data-plane-api/blob/master/api/filter/network/mongo_proxy.proto#L11)

```
{
  "stat_prefix": "...",
  "access_log": "...",
  "delay": "{...}"
}
```

- **stat_prefix**<br />
  ([string](https://developers.google.com/protocol-buffers/docs/proto#scalar), REQUIRED) The prefix to use when emitting [statistics](../../../Configurationreference/Networkfilters/Mongoproxy.md) (improves readability).
- **access_log**<br />
  ([string](https://developers.google.com/protocol-buffers/docs/proto#scalar)) Optional path used to write the Mongo access log. If no access log path is specified, no access log will be written. Note that access logging is also controlled by the [runtime](../../../Configurationreference/Networkfilters/Mongoproxy.md) configuration.
- **delay**<br />
  ([filter.FaultDelay](../v2APIreference/Filters/Commonfaultinjectiontypes.md#filterfaultdelay)) Inject a fixed delay before proxying a Mongo operation. Delays are applied to the following MongoDB operations: `Query`, `Insert`, `GetMore` and `KillCursors`. Once a delay is in progress, the time spent receiving incoming data before the timer fires also counts as part of the delay.

## Back

- [Up one level](../Networkfilters.md)
- [Home](../../../README.md)
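As a concrete illustration of the schema above, a filled-in filter config might look like the fragment below. The values are hypothetical (the field names `stat_prefix` and `access_log` come from the proto; the path and prefix are invented), and the `delay` block is omitted since its shape is defined by the separate `filter.FaultDelay` type:

```
{
  "stat_prefix": "mongo_backend",
  "access_log": "/var/log/envoy/mongo_access.log"
}
```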
35.433333
218
0.724365
yue_Hant
0.350979
618397a711a27feaba08b04b3f8513f022ff1dfc
1,023
md
Markdown
DOCKER_README.md
jingliu9/pcm
2606359957df57e3a4ab4d25242c878b6283a201
[ "BSD-3-Clause-No-Nuclear-License-2014", "BSD-3-Clause" ]
1
2021-08-31T12:38:49.000Z
2021-08-31T12:38:49.000Z
DOCKER_README.md
jingliu9/pcm
2606359957df57e3a4ab4d25242c878b6283a201
[ "BSD-3-Clause-No-Nuclear-License-2014", "BSD-3-Clause" ]
null
null
null
DOCKER_README.md
jingliu9/pcm
2606359957df57e3a4ab4d25242c878b6283a201
[ "BSD-3-Clause-No-Nuclear-License-2014", "BSD-3-Clause" ]
1
2021-08-31T13:34:26.000Z
2021-08-31T13:34:26.000Z
--------------------------------------------------------------------------------
How To Run Processor Counter Monitor Server Container from Docker Hub
--------------------------------------------------------------------------------

As root user:

1. ``modprobe msr``
2. ``docker run -d --name pcm --privileged -p 9738:9738 opcm/pcm``
   - the container can also be run with limited capabilities without the privileged mode:
     ``docker run -d --name pcm --cap-add=SYS_ADMIN --cap-add=SYS_RAWIO --device=/dev/cpu -v /sys/firmware/acpi/tables/MCFG:/pcm/sys/firmware/acpi/tables/MCFG:ro -v /proc/bus/pci/:/pcm/proc/bus/pci/ -v /proc/sys/kernel/nmi_watchdog:/pcm/proc/sys/kernel/nmi_watchdog -p 9738:9738 opcm/pcm``
     (there is also a docker-compose file containing these options: https://raw.githubusercontent.com/opcm/pcm/master/docker-compose.yml)

This will start the pcm-sensor-server container, exposing CPU metrics from the whole system at port 9738.

The URL of the docker container repository: https://hub.docker.com/r/opcm/pcm
78.692308
507
0.642229
eng_Latn
0.628912
61842199c5dba37cd445671ca44eb7a9e0dfdb9c
332
md
Markdown
_posts/cs/ai/2018-04-04-flower1.md
liuxin21/liuxin21.github.io
b4b2557d17d80954a698e16b2386a87bccf2b3b7
[ "MIT" ]
null
null
null
_posts/cs/ai/2018-04-04-flower1.md
liuxin21/liuxin21.github.io
b4b2557d17d80954a698e16b2386a87bccf2b3b7
[ "MIT" ]
null
null
null
_posts/cs/ai/2018-04-04-flower1.md
liuxin21/liuxin21.github.io
b4b2557d17d80954a698e16b2386a87bccf2b3b7
[ "MIT" ]
null
null
null
---
layout: post
title: TensorFlow iris classification
date: 2018-04-04
category: ai
---

Import the relevant modules (matplotlib, tensorflow, etc.) and enable eager execution.

Eager execution makes TensorFlow evaluate operations immediately and return concrete values, instead of building a computational graph to be executed later. Once eager execution has been enabled, it cannot be disabled within the same program.

Download the training dataset file:

`train_dataset_fp = tf.keras.utils.get_file(…)`
17.473684
78
0.783133
eng_Latn
0.200385
61852791260727d9f09a57679692300d2a2a4152
265
md
Markdown
README.md
buildweek-jul19-betterprofessor/Front-end
81ed38eb0a7e1581aec5fff02533044c5774a40f
[ "MIT" ]
null
null
null
README.md
buildweek-jul19-betterprofessor/Front-end
81ed38eb0a7e1581aec5fff02533044c5774a40f
[ "MIT" ]
7
2019-08-04T20:49:04.000Z
2022-02-26T15:43:07.000Z
README.md
buildweek-jul19-betterprofessor/Front-end
81ed38eb0a7e1581aec5fff02533044c5774a40f
[ "MIT" ]
2
2019-07-25T00:06:48.000Z
2020-05-20T18:25:42.000Z
# Better Professor UI

## Introduction

A tool built with React to connect professors with students they are mentoring beyond class time.

## Installation

Make sure you have Node installed on your computer, then cd into better-prof. Run npm install and npm start.
26.5
107
0.784906
eng_Latn
0.999024
61853a87e0ae8ef39a13f1c1eea935197a3c73cf
1,801
md
Markdown
README.md
minsiyang/Ten-minute-walk-JS
c6218537dcbb03a1cc5eba1b43fac56e9ce80ac4
[ "MIT" ]
null
null
null
README.md
minsiyang/Ten-minute-walk-JS
c6218537dcbb03a1cc5eba1b43fac56e9ce80ac4
[ "MIT" ]
null
null
null
README.md
minsiyang/Ten-minute-walk-JS
c6218537dcbb03a1cc5eba1b43fac56e9ce80ac4
[ "MIT" ]
null
null
null
# Take a ten minute walk

Create a function that will return true if the walk will take you exactly ten minutes and will return you to your starting point.

## Requirements

You are meeting a friend in New York City, where all roads are laid out in a perfect grid. You arrived ten minutes too early to the appointment, so you decided to take the opportunity to go for a short walk. The city provides its tourists with a Walk Generating App on their phones -- every time you press the button it sends you an array of one-letter strings representing directions to walk, e.g. ['n', 's', 'w', 'e'].

You always walk only a single block in a direction and you know it takes you one minute to traverse one city block, so create a function that will return true if the walk the app gives you will take you exactly ten minutes (you don't want to be early or late!) and will, of course, return you to your starting point. Return false otherwise.

Note: you will always receive a valid array containing a random assortment of direction letters ('n', 's', 'e', or 'w' only). It will never give you an empty array (that's not a walk, that's standing still!).

Acceptance Criteria

```
ten_minute_walk?(['w', 's', 'e', 'e', 'n', 'n', 'e', 's', 'w', 'w']) # => true
ten_minute_walk?(['w', 's', 'e', 'n', 'n', 'e', 's', 'w', 'w', 'w']) # => false
ten_minute_walk?(['w', 's', 'e', 's', 's', 'e', 's', 'w', 'n', 'n']) # => false
ten_minute_walk?(['w', 's']) # => false
```

| input | output |
| :--: | :--: |
| walk.isTenMinuteWalk(['w', 's']) | false |
| walk.isTenMinuteWalk(['w', 's', 'e', 's', 's', 'e', 's', 'w', 'n', 'n']) | false |
| walk.isTenMinuteWalk(['w', 's', 'e', 'n', 'n', 'e', 's', 'w', 'w', 'w']) | false |
| walk.isTenMinuteWalk(['w', 's', 'e', 'e', 'n', 'n', 'e', 's', 'w', 'w']) | true |
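The requirements above reduce to two checks: the walk must be exactly ten moves long, and the north/south and east/west moves must cancel out. The target repo is JavaScript, but the logic can be sketched in Python like this (the function name is mine, mirroring the kata):

```python
def is_ten_minute_walk(walk):
    """True iff the walk takes exactly ten minutes and ends at the start."""
    if len(walk) != 10:  # one city block takes one minute
        return False
    # Returning to the start means n/s and e/w moves cancel pairwise.
    return (walk.count('n') == walk.count('s')
            and walk.count('e') == walk.count('w'))
```

Run against the acceptance criteria, `['w', 's', 'e', 'e', 'n', 'n', 'e', 's', 'w', 'w']` is the only input that yields true.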
72.04
340
0.632426
eng_Latn
0.994253
6185899032e34d2903a8c39f7b5b99e18208c46f
29,210
md
Markdown
src/posts/2021/2021-02-24-ubuntu-btrfs-install-guide/index.md
sadanand-singh/reckoning.dev
daf4a47c09f3234a9b78fc448be442708608b48a
[ "MIT" ]
4
2019-09-06T09:57:19.000Z
2020-07-19T08:50:41.000Z
src/posts/2021/2021-02-24-ubuntu-btrfs-install-guide/index.md
sadanand-singh/reckoning.dev
daf4a47c09f3234a9b78fc448be442708608b48a
[ "MIT" ]
4
2020-05-29T12:30:32.000Z
2021-01-20T18:11:44.000Z
src/posts/2021/2021-02-24-ubuntu-btrfs-install-guide/index.md
sadanand-singh/reckoning.dev
daf4a47c09f3234a9b78fc448be442708608b48a
[ "MIT" ]
5
2020-06-04T13:11:55.000Z
2020-06-22T20:18:54.000Z
---
title: Ubuntu Desktop 20.04 with btrfs-luks full disk encryption
date: "2021-02-24T10:50:00+01:00"
thumb: ubuntu_btrfs.png
featured: true
slug: ubuntu-btrfs-guide
tags:
  - Linux
  - Guides
---

If you have followed me here, you know I am a big fan of BTRFS and luks based full disk encryption. I used to use [Arch Linux](https://www.archlinux.org/) in [the past](/complete-setup-arch-gnome/); however, recently I have started using Ubuntu LTS as my main OS, mainly for simplicity and the easy availability of numerous tricks online!

The main reasons I want to take this slightly more complex route instead of the simple path Ubuntu provides:

- A luks-based full disk encryption for data safety, especially since I work a lot with medical data!
- An un-encrypted EFI partition for the GRUB bootloader
- a btrfs-inside-luks partition for the root filesystem (including /boot) containing a subvolume `@` for `/` and a subvolume `@home` for `/home`, with only one passphrase prompt from GRUB!
- automatic system snapshots and easy rollback similar to `zsys` using:
  - [Timeshift](https://github.com/teejee2008/timeshift), which will regularly take (almost instant) snapshots of the system
  - [grub-btrfs](https://github.com/Antynea/grub-btrfs), which will automatically create GRUB entries for all your btrfs snapshots

This tutorial is made with [Ubuntu 20.04 Focal Fossa](http://releases.ubuntu.com/focal/) copied to an installation medium (usually a USB flash device, but it may be a DVD or the ISO file attached to a virtual machine hypervisor). It has been [adapted from this blog](https://mutschler.eu/linux/install-guides/ubuntu-btrfs/).

## Step 1: Boot the install, check UEFI mode and open an interactive root shell

Since most modern PCs have UEFI, I will cover only the UEFI installation. So, boot the installation medium in UEFI mode, choose your language and click `Try Ubuntu`.
Once the Live Desktop environment has started, we need to use a Terminal shell command-line to issue a series of commands to prepare the target device before executing the installer itself.

Now, open a terminal (<kbd>CTRL</kbd>+<kbd>ALT</kbd>+<kbd>T</kbd>) and run the following command:

```bash
mount | grep efivars
# efivarfs on /sys/firmware/efi/efivars type efivarfs (rw,nosuid,nodev,noexec,relatime)
```

to detect whether we are in UEFI mode. Now switch to an interactive root session:

```bash
sudo -i
```

You might find maximizing the terminal window is helpful for working with the command-line. Do not close this terminal window during the whole installation process until we are finished with everything.

## Step 2: Prepare partitions manually

### Create partition table and layout

First find out the name of your drive. For me the installation target device is called `nvme0n1`:

```bash
lsblk
# NAME    MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
# loop0     7:0    0   1.9G  1 loop /rofs
# loop1     7:1    0  27.1M  1 loop /snap/snapd/7264
# loop2     7:2    0    55M  1 loop /snap/core18/1705
# loop3     7:3    0 240.8M  1 loop /snap/gnome-3-34-1804/24
# loop4     7:4    0  62.1M  1 loop /snap/gtk-common-themes/1506
# loop5     7:5    0  49.8M  1 loop /snap/snap-store/433
# sr0      11:0    1   2.5G  0 rom  /cdrom
# sr1      11:1    1  1024M  0 rom
# sr2      11:2    1  1024M  0 rom
# nvme0n1 252:0    0   991G  0 disk
```

::: callout-green
**Note:** You can also open `gparted` or have a look into the `/dev` folder to make sure what your hard drive is called. In most cases they are called `sda` for normal SSD and HDD, whereas for NVME storage the naming is `nvme0`. Also note that there are no partitions or data on my hard drive; you might want to double check which partition layout fits your use case, particularly if you dual-boot with other systems.
:::

We'll now create the following partition layout on `nvme0n1`:

1. a 512 MiB FAT32 EFI partition for the GRUB bootloader
2. a 4 GiB partition for encrypted swap use
3. a luks1 encrypted partition which will be our root btrfs filesystem

Some remarks:

- `/boot` will reside on the encrypted luks1 partition. The GRUB bootloader is able to decrypt `luks1` at boot time. Alternatively, you could create an encrypted `luks1` partition for `/boot` and a `luks2` encrypted partition for the root filesystem.
- With btrfs I do not need any other partitions for e.g. `/home`, as we will use subvolumes instead.

Let's use `parted` for this (feel free to use `gparted` accordingly):

```bash
parted /dev/nvme0n1
mklabel gpt
mkpart primary 1MiB 513MiB
mkpart primary 513MiB 4609MiB
mkpart primary 4609MiB 100%
print
# Model: Virtio Block Device (virtblk)
# Disk /dev/nvme0n1: 991.7GB
# Sector size (logical/physical): 512B/512B
# Partition Table: gpt
# Disk Flags:
#
# Number  Start   End      Size     File system  Name     Flags
#  1      1049kB  538MB    537MB                 primary
#  2      538MB   4833MB   4295MB                primary
#  3      4833MB  991.7GB  990.9GB               primary
quit
```

Do not set names or flags, as in my experience the Ubiquity installer has some problems with that.

### Create luks1 partition

The default luks (Linux Unified Key Setup) format used by the cryptsetup tool has changed since the release of Ubuntu 18.04 Bionic. 18.04 used version 1 (`luks1`), but more recent Ubuntu releases default to version 2 (`luks2`) and check that `/boot` is not located inside an encrypted partition. GRUB is able to decrypt luks version 1 at boot time, but Ubiquity does not allow this by default. Note that if you want to use luks version 2, you should create an encrypted `/boot` partition using version 1, whereas the root filesystem can then be formatted using version 2. Either way, we need to prepare the `luks1` partition or else GRUB will not be able to unlock the encrypted device. Note that most Linux distributions also default to version 1 if you do a full disk encryption (e.g. Manjaro Architect).

```bash
cryptsetup luksFormat --type=luks1 /dev/nvme0n1p3
# WARNING!
# ========
# This will overwrite data on /dev/nvme0n1p3 irrevocably.
# Are you sure? (Type uppercase yes): YES
# Enter passphrase for /dev/nvme0n1p3:
# Verify passphrase:
```

Use a very good password here. Now map the encrypted partition to a device called `cryptdata`, which will be our root filesystem:

```bash
cryptsetup luksOpen /dev/nvme0n1p3 cryptdata
# Enter passphrase for /dev/nvme0n1p3:
ls /dev/mapper/
# control  cryptdata
```

### Create filesystems for root and EFI System partitions

Create a filesystem for the EFI System partition. If there was e.g. an NTFS filesystem at the beginning of the drive, the Ubiquity installer will be unable to mount the filesystem otherwise.

```bash
mkfs.fat -F32 /dev/nvme0n1p1
```

We need to pre-format `cryptdata` because, in my experience, the Ubiquity installer messes something up and complains about devices with the same name being mounted twice.

```bash
mkfs.btrfs /dev/mapper/cryptdata
# btrfs-progs v5.4.1
# See http://btrfs.wiki.kernel.org for more information.
# Label:              (null)
# UUID:               4025b177-70ac-462b-9895-bdde1d6b3d0c
# Node size:          16384
# Sector size:        4096
# Filesystem size:    59.50GiB
# Block group profiles:
#   Data:             single            8.00MiB
#   Metadata:         DUP               1.00GiB
#   System:           DUP               8.00MiB
# SSD detected:       yes
# Incompat features:  extref, skinny-metadata
# Checksum:           crc32c
# Number of devices:  1
# Devices:
#   ID        SIZE  PATH
#    1   990.50GiB  /dev/mapper/cryptdata
```

`cryptdata` is our root partition which we'll use for the root filesystem.

## Step 3 (optional): Optimize mount options for SSD or NVME drives

Unfortunately, the Ubiquity installer does not set good mount options for btrfs on SSD or NVME drives, so you should change this for optimized performance and durability.
I have found that there is some general agreement to use the following mount options:

- `ssd`: use SSD specific options for optimal use on SSD and NVME
- `noatime`: prevent frequent disk writes by instructing the Linux kernel not to store the last access time of files and folders
- `space_cache`: allows btrfs to store free space cache on the disk to make caching of a block group much quicker
- `commit=120`: time interval in which data is written to the filesystem (value of 120 is taken from Manjaro)
- `compress=zstd`: allows to specify the compression algorithm which we want to use. btrfs provides `lzo`, `zstd` and `zlib` compression algorithms. Based on some Phoronix test cases, `zstd` seems to be the better performing candidate.
- Lastly, the pass flag for `fsck` in the `fstab` is useless for btrfs and should be set to 0.

We need to change two configuration files:

- `/usr/lib/partman/mount.d/70btrfs`
- `/usr/lib/partman/fstab.d/btrfs`

So let's use an editor to change the following:

```bash
nano /usr/lib/partman/mount.d/70btrfs
# line 24: options="${options:+$options,}subvol=@,ssd,noatime,space_cache,commit=120,compress=zstd"
# line 31: options="${options:+$options,}subvol=@home,ssd,noatime,space_cache,commit=120,compress=zstd"

nano /usr/lib/partman/fstab.d/btrfs
# line 30: pass=0
# line 31: home_options="${options:+$options,}subvol=@home,ssd,noatime,space_cache,commit=120,compress=zstd"
# line 32: options="${options:+$options,}subvol=@,ssd,noatime,space_cache,commit=120,compress=zstd"
# line 36: pass=0
# line 37: options="${options:+$options,}subvol=@home,ssd,noatime,space_cache,commit=120,compress=zstd"
# line 40: pass=0
# line 56: echo "$home_path" "$home_mp" btrfs "$home_options" 0 0
```

## Step 4: Install Ubuntu using the Ubiquity installer without the bootloader

Now let's run the installation process, but without installing the bootloader, as we want to put `/boot` on an encrypted partition, which is actually not allowed by Ubiquity.
So we need to run the installer with:

```bash
ubiquity --no-bootloader
```

Choose the installation language, keyboard layout, Normal or Minimal installation, and check the boxes of the Other options according to your needs. In the "Installation type" options choose "Something Else" and the manual partitioner will start:

- Select `/dev/nvme0n1p1`, press the `Change` button. Choose `Use as` 'EFI System Partition'.
- Select `/dev/nvme0n1p2`, press the `Change` button. Choose `Use as` 'swap area' to create a swap partition. We will encrypt this partition later in the `crypttab`.
- Select the root filesystem device for formatting (/dev/mapper/cryptdata type btrfs on top), press the `Change` button. Choose `Use as` 'btrfs journaling filesystem', check `Format the partition` and use '/' as `Mount point`.
- If you have other partitions, check their types and use; particularly, deactivate other EFI partitions.

Recheck everything, press the `Install Now` button to write the changes to the disk and hit the `Continue` button. Select the time zone and fill out your user name and password. If your installation is successful, choose the `Continue Testing` option. **DO NOT REBOOT!**, but return to your terminal.
## Step 5: Post-Installation steps

### Create a chroot environment and enter your system

Return to the terminal and create a chroot (change-root) environment to work directly inside your newly installed operating system:

```bash
mount -o subvol=@,ssd,noatime,space_cache,commit=120,compress=zstd /dev/mapper/cryptdata /mnt
for i in /dev /dev/pts /proc /sys /run; do sudo mount -B $i /mnt$i; done
sudo cp /etc/resolv.conf /mnt/etc/
sudo chroot /mnt
```

Now you are actually inside your system, so let's mount all other partitions and have a look at the btrfs subvolumes:

```bash
mount -av
# /         : ignored
# /boot/efi : successfully mounted
# /home     : successfully mounted
# none      : ignored

btrfs subvolume list /
# ID 256 gen 164 top level 5 path @
# ID 258 gen 30 top level 5 path @home
```

Looks great. Note that the subvolume `@` is mounted to `/`, whereas the subvolume `@home` is mounted to `/home`.

### Create crypttab

We need to create the `crypttab` manually:

```bash
export UUID_p3=$(blkid -s UUID -o value /dev/nvme0n1p3) # this is an environmental variable
echo "cryptdata UUID=${UUID_p3} none luks" >> /etc/crypttab

cat /etc/crypttab
# cryptdata UUID=8e893c0f-4060-49e3-9d96-db6dce7466dc none luks
```

Note that the UUID is from the luks partition `/dev/nvme0n1p3`, not from the device mapper `/dev/mapper/cryptdata`! You can get all UUIDs using `blkid`.

### Encrypted swap

There are many ways to encrypt the swap partition; a good reference is [dm-crypt/Swap encryption](https://wiki.archlinux.org/index.php/Dm-crypt/Swap_encryption). For the sake of this guide, I will only show how to set it up as an encrypted swap partition.
#### Swap partition

As I have no use for hibernation or suspend-to-disk, I will simply use a random password to decrypt the swap partition using the `crypttab`:

```bash
export SWAPUUID=$(blkid -s UUID -o value /dev/nvme0n1p2)
echo "cryptswap UUID=${SWAPUUID} /dev/urandom swap,offset=1024,cipher=aes-xts-plain64,size=512" >> /etc/crypttab

cat /etc/crypttab
# cryptdata UUID=8e893c0f-4060-49e3-9d96-db6dce7466dc none luks
# cryptswap UUID=9cae34c0-3755-43b1-ac05-2173924fd433 /dev/urandom swap,offset=1024,cipher=aes-xts-plain64,size=512
```

We also need to adapt the fstab accordingly:

```bash
sed -i "s|UUID=${SWAPUUID}|/dev/mapper/cryptswap|" /etc/fstab

cat /etc/fstab
# /dev/mapper/cryptdata /         btrfs defaults,subvol=@,ssd,noatime,space_cache,commit=120,compress=zstd 0 0
# UUID=01DE-F282        /boot/efi vfat  umask=0077 0 1
# /dev/mapper/cryptdata /home     btrfs defaults,subvol=@home,ssd,noatime,space_cache,commit=120,compress=zstd 0 0
# /dev/mapper/cryptswap none      swap  sw 0 0
```

The sed command simply replaced the UUID of your swap partition with the encrypted device called `/dev/mapper/cryptswap`. There you go, you have an encrypted swap partition.

### Add a key-file to type luks passphrase only once (optional, but recommended)

The device holding the kernel (and the initramfs image) is unlocked by GRUB, but the root device needs to be unlocked again at initramfs stage, regardless of whether it's the same device or not, so you'll get a second prompt for your passphrase. This is because GRUB boots with the given `vmlinuz` and `initramfs` images; in other words, all devices are locked, and the root device needs to be unlocked again. To avoid extra passphrase prompts at initramfs stage, a workaround is to unlock via key files stored in the `initramfs` image. This can also be used to unlock any additional luks partitions you want on your disk. Since the `initramfs` image now resides on an encrypted device, this still provides protection for data at rest.
After all, for luks the volume key can already be found by user space in the Device Mapper table, so one could argue that including key files in the `initramfs` image – created with restrictive permissions – doesn't change the threat model for luks devices. **Note that this is exactly what e.g. the Manjaro Architect installer does as well.**

Long story short, let's create a key-file, secure it, and add it to our luks volume:

```bash
mkdir /etc/luks
dd if=/dev/urandom of=/etc/luks/boot_os.keyfile bs=4096 count=1
# 1+0 records in
# 1+0 records out
# 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000928939 s, 4.4 MB/s

chmod u=rx,go-rwx /etc/luks
chmod u=r,go-rwx /etc/luks/boot_os.keyfile

cryptsetup luksAddKey /dev/nvme0n1p3 /etc/luks/boot_os.keyfile
# Enter any existing passphrase:

cryptsetup luksDump /dev/nvme0n1p3 | grep "Key Slot"
# Key Slot 0: ENABLED
# Key Slot 1: ENABLED
# Key Slot 2: DISABLED
# Key Slot 3: DISABLED
# Key Slot 4: DISABLED
# Key Slot 5: DISABLED
# Key Slot 6: DISABLED
# Key Slot 7: DISABLED
```

Note that "Key Slot 0" contains our passphrase, whereas "Key Slot 1" contains the key-file.

Let's restrict the pattern of keyfiles and avoid leaking key material for the initramfs hook:

```bash
echo "KEYFILE_PATTERN=/etc/luks/*.keyfile" >> /etc/cryptsetup-initramfs/conf-hook
echo "UMASK=0077" >> /etc/initramfs-tools/initramfs.conf
```

These commands will harden the security options in the `initramfs` configuration file and hook. Next, add the keyfile to your `crypttab`:

```bash
sed -i "s|none|/etc/luks/boot_os.keyfile|" /etc/crypttab # this replaces none with /etc/luks/boot_os.keyfile

cat /etc/crypttab
# cryptdata UUID=8e893c0f-4060-49e3-9d96-db6dce7466dc /etc/luks/boot_os.keyfile luks
# cryptswap UUID=9cae34c0-3755-43b1-ac05-2173924fd433 /dev/urandom swap,offset=1024,cipher=aes-xts-plain64,size=512
```

### Install the EFI bootloader

Now it is time to finalize the setup and install the GRUB bootloader.
First we need to make GRUB capable of unlocking luks1-type partitions by setting `GRUB_ENABLE_CRYPTODISK=y` in `/etc/default/grub`, then install the bootloader to the device `/dev/nvme0n1`, and lastly update GRUB. Just in case, I also reinstall the generic kernel ("linux-generic" and "linux-headers-generic") and install the Hardware Enablement kernel ("linux-generic-hwe-20.04" and "linux-headers-generic-hwe-20.04"):

```bash
echo "GRUB_ENABLE_CRYPTODISK=y" >> /etc/default/grub
apt install -y --reinstall grub-efi-amd64-signed linux-generic linux-headers-generic linux-generic-hwe-20.04 linux-headers-generic-hwe-20.04
# --- SOME APT INSTALLATION OUTPUT ---
update-initramfs -c -k all
# update-initramfs: Generating /boot/initrd.img-5.4.0-26-generic
# update-initramfs: Generating /boot/initrd.img-5.4.0-29-generic
grub-install /dev/nvme0n1
# Installing for x86_64-efi platform.
# Installation finished. No error reported.
update-grub
# Sourcing file `/etc/default/grub'
# Sourcing file `/etc/default/grub.d/init-select.cfg'
# Generating grub configuration file ...
# Found linux image: /boot/vmlinuz-5.4.0-29-generic
# Found initrd image: /boot/initrd.img-5.4.0-29-generic
# Found linux image: /boot/vmlinuz-5.4.0-26-generic
# Found initrd image: /boot/initrd.img-5.4.0-26-generic
# Adding boot menu entry for UEFI Firmware Settings
# done
```

Lastly, double-check that the initramfs image has restrictive permissions and includes the keyfile:

```bash
stat -L -c "%A %n" /boot/initrd.img
# -rw------- /boot/initrd.img
lsinitramfs /boot/initrd.img | grep "^cryptroot/keyfiles/"
# cryptroot/keyfiles/cryptdata.key
```

Note that cryptsetup-initramfs may rename key files inside the initramfs.

## Step 6: Reboot, some checks, and update system

Now, it is time to exit the chroot - cross your fingers - and reboot the system:

```bash
exit
# exit
reboot now
```

If all went well you should see a single passphrase prompt (YAY!) from GRUB:

```
Enter the passphrase for hd0,gpt3 (some very long number):
```

where you enter the luks passphrase to unlock GRUB, which then either asks you again for your passphrase or uses the key-file to unlock `/dev/nvme0n1p3` and map it to `/dev/mapper/cryptdata`. If you added a key-file, you need to type your passphrase only once. Note that if you mistype the passphrase in GRUB, you must restart the computer and try again.

Now let's click through the welcome screen and open up a terminal to see whether everything is set up correctly:

```bash
sudo cat /etc/crypttab
# cryptdata UUID=8e893c0f-4060-49e3-9d96-db6dce7466dc /etc/luks/boot_os.keyfile luks
# cryptswap UUID=9cae34c0-3755-43b1-ac05-2173924fd433 /dev/urandom swap,offset=1024,cipher=aes-xts-plain64,size=512
sudo cat /etc/fstab
# /dev/mapper/cryptdata / btrfs defaults,subvol=@,ssd,noatime,space_cache,commit=120,compress=zstd 0 0
# UUID=01DE-F282 /boot/efi vfat umask=0077 0 1
# /dev/mapper/cryptdata /home btrfs defaults,subvol=@home,ssd,noatime,space_cache,commit=120,compress=zstd 0 0
# /dev/mapper/cryptswap none swap sw 0 0
sudo mount -av
# / : ignored
# /boot/efi : already mounted
# /home : already mounted
# none : ignored
# /swap : already mounted
sudo mount -v | grep /dev/mapper
# /dev/mapper/cryptdata on / type btrfs (rw,noatime,compress=zstd:3,ssd,space_cache,commit=120,subvolid=256,subvol=/@)
# /dev/mapper/cryptdata on /home type btrfs (rw,noatime,compress=zstd:3,ssd,space_cache,commit=120,subvolid=258,subvol=/@home)
sudo swapon
# NAME TYPE SIZE USED PRIO
# /dev/dm-1 partition 4G 0B -3
sudo btrfs filesystem show /
# Label: none uuid: aa90f2d3-10d9-420b-86e2-92ffce0ece9d
# Total devices 1 FS bytes used 6.61GiB
# devid 1 size 990.50GiB used 9.02GiB path /dev/mapper/cryptdata
sudo btrfs subvolume list /
# ID 256 gen 195 top level 5 path @
# ID 258 gen 192 top level 5 path @home
```

Looks good. Note that in this tutorial I installed both a swapfile and a swap partition.
Normally you would choose one or the other. Let's update the system and reboot one more time:

```bash
sudo apt update
sudo apt upgrade
sudo apt dist-upgrade
sudo apt autoremove
sudo apt autoclean
```

Also enable `fstrim.timer`, as we did not add `discard` to the `crypttab`. While [Btrfs Async Discard Support Looks To Be Ready For Linux 5.6](https://www.phoronix.com/scan.php?page=news_item&px=Btrfs-Async-Discard), this is still quite new and 20.04 runs kernel 5.4, so it is better to rely on the `fstrim.timer` systemd service:

```bash
sudo systemctl enable fstrim.timer
```

Now reboot:

```bash
sudo reboot now
```

## Step 7: Install Timeshift and grub-btrfs

Open a terminal and install some dependencies:

```bash
sudo apt install -y btrfs-progs git make
```

Install Timeshift and configure it directly via the GUI:

```bash
sudo apt install timeshift
sudo timeshift-gtk
```

- Select “BTRFS” as the “Snapshot Type”; continue with “Next”
- Choose your BTRFS system partition as “Snapshot Location”; continue with “Next”
- "Select Snapshot Levels" (type and number of snapshots that will be automatically created and managed/deleted by Timeshift), my recommendations:
- Activate "Monthly" and set it to 1
- Activate "Weekly" and set it to 3
- Activate "Daily" and set it to 5
- Deactivate "Hourly"
- Activate "Boot" and set it to 3
- Activate "Stop cron emails for scheduled tasks"
- continue with "Next"
- I also include the `@home` subvolume (which is not selected by default). Note that when you restore a snapshot, Timeshift gives you the choice of whether you want to restore it as well (which in most cases you don't want to).
- Click "Finish"
- "Create" a manual first snapshot & exit Timeshift

*Timeshift* will now check every hour if snapshots ("hourly", "daily", "weekly", "monthly", "boot") need to be created or deleted. Note that "boot" snapshots will not be created directly but about 10 minutes after a system startup.
*Timeshift* puts all snapshots into `/run/timeshift/backup`. Conveniently, the real root (subvolid 5) of your BTRFS partition is also mounted here, so it is easy to view, create, delete and move around snapshots manually.

```bash
ls /run/timeshift/backup
# @ @home @swap timeshift-btrfs
```

Note that `/run/timeshift/backup/@` contains your `/` folder, `/run/timeshift/backup/@home` contains your `/home` folder, and `/run/timeshift/backup/@swap` contains your `/swap` folder.

Now let's install *timeshift-autosnap-apt* and *grub-btrfs* from GitHub:

```bash
git clone https://github.com/wmutschl/timeshift-autosnap-apt.git /home/$USER/timeshift-autosnap-apt
cd /home/$USER/timeshift-autosnap-apt
sudo make install

git clone https://github.com/Antynea/grub-btrfs.git /home/$USER/grub-btrfs
cd /home/$USER/grub-btrfs
sudo make install
```

After this, optionally, make changes to the configuration files:

```bash
sudo nano /etc/timeshift-autosnap-apt.conf
sudo nano /etc/default/grub-btrfs/config
```

For example, as we don't have a dedicated /boot partition, we can set `snapshotBoot=false` in the `timeshift-autosnap-apt.conf` file so that the `/boot` directory is not rsynced to `/boot.backup`. Note that the EFI partition is still rsynced into your snapshot at `/boot.backup/efi`. For *grub-btrfs*, I change `GRUB_BTRFS_SUBMENUNAME` to "MY BTRFS SNAPSHOTS".

Check if everything is working:

```bash
sudo timeshift-autosnap-apt
# Rsyncing /boot/efi into the filesystem before the call to timeshift.
# Using system disk as snapshot device for creating snapshots in BTRFS mode # # /dev/dm-0 is mounted at: /run/timeshift/backup, options: rw,relatime,compress=zstd:3,ssd,space_cache,commit=120,subvolid=5,subvol=/ # # Creating new backup...(BTRFS) # Saving to device: /dev/dm-0, mounted at path: /run/timeshift/backup # Created directory: /run/timeshift/backup/timeshift-btrfs/snapshots/2020-05-06_23-43-29 # Created subvolume snapshot: /run/timeshift/backup/timeshift-btrfs/snapshots/2020-05-06_23-43-29/@ # Created subvolume snapshot: /run/timeshift/backup/timeshift-btrfs/snapshots/2020-05-06_23-43-29/@home # Created control file: /run/timeshift/backup/timeshift-btrfs/snapshots/2020-05-06_23-43-29/info.json # BTRFS Snapshot saved successfully (0s) # Tagged snapshot '2020-05-06_23-43-29': ondemand # ------------------------------------------------------------------------------ # Sourcing file `/etc/default/grub' # Sourcing file `/etc/default/grub.d/init-select.cfg' # Generating grub configuration file ... # Found linux image: /boot/vmlinuz-5.4.0-29-generic # Found initrd image: /boot/initrd.img-5.4.0-29-generic # Found linux image: /boot/vmlinuz-5.4.0-26-generic # Found initrd image: /boot/initrd.img-5.4.0-26-generic # Adding boot menu entry for UEFI Firmware Settings # ###### - Grub-btrfs: Snapshot detection started - ###### # # Info: Separate boot partition not detected # # Found snapshot: 2020-05-06 23:43:29 | timeshift-btrfs/snapshots/2020-05-06_23-43-29/@ # # Found snapshot: 2020-05-06 23:35:24 | timeshift-btrfs/snapshots/2020-05-06_23-35-24/@ # # Found 2 snapshot(s) # ###### - Grub-btrfs: Snapshot detection ended - ###### # done ``` Now, if you run `sudo apt install|remove|upgrade|dist-upgrade`, *timeshift-autosnap-apt* will create a snapshot of your system with *Timeshift* and *grub-btrfs* creates the corresponding boot menu entries (actually it creates boot menu entries for all subvolumes of your system). 
For example: ```bash sudo apt install rolldice # Reading package lists... Done # Building dependency tree # Reading state information... Done # The following NEW packages will be installed: # rolldice # 0 upgraded, 1 newly installed, 0 to remove and 37 not upgraded. # Need to get 9.628 B of archives. # After this operation, 31,7 kB of additional disk space will be used. # Get:1 http://de.archive.ubuntu.com/ubuntu focal/universe amd64 rolldice amd64 1.16-1build1 [9.628 B] # Fetched 9.628 B in 0s (32,4 kB/s) # Rsyncing /boot/efi into the filesystem before the call to timeshift. # Using system disk as snapshot device for creating snapshots in BTRFS mode # # /dev/dm-0 is mounted at: /run/timeshift/backup, options: rw,relatime,compress=zstd:3,ssd,space_cache,commit=120,subvolid=5,subvol=/ # # Creating new backup...(BTRFS) # Saving to device: /dev/dm-0, mounted at path: /run/timeshift/backup # Created directory: /run/timeshift/backup/timeshift-btrfs/snapshots/2020-05-06_23-45-37 # Created subvolume snapshot: /run/timeshift/backup/timeshift-btrfs/snapshots/2020-05-06_23-45-37/@ # Created subvolume snapshot: /run/timeshift/backup/timeshift-btrfs/snapshots/2020-05-06_23-45-37/@home # Created control file: /run/timeshift/backup/timeshift-btrfs/snapshots/2020-05-06_23-45-37/info.json # BTRFS Snapshot saved successfully (0s) # Tagged snapshot '2020-05-06_23-45-37': ondemand # ------------------------------------------------------------------------------ # Sourcing file `/etc/default/grub' # Sourcing file `/etc/default/grub.d/init-select.cfg' # Generating grub configuration file ... 
# Found linux image: /boot/vmlinuz-5.4.0-29-generic
# Found initrd image: /boot/initrd.img-5.4.0-29-generic
# Found linux image: /boot/vmlinuz-5.4.0-26-generic
# Found initrd image: /boot/initrd.img-5.4.0-26-generic
# Adding boot menu entry for UEFI Firmware Settings
# ###### - Grub-btrfs: Snapshot detection started - ######
# # Info: Separate boot partition not detected
# # Found snapshot: 2020-05-06 23:45:37 | timeshift-btrfs/snapshots/2020-05-06_23-45-37/@
# # Found snapshot: 2020-05-06 23:43:29 | timeshift-btrfs/snapshots/2020-05-06_23-43-29/@
# # Found snapshot: 2020-05-06 23:35:24 | timeshift-btrfs/snapshots/2020-05-06_23-35-24/@
# # Found 3 snapshot(s)
# ###### - Grub-btrfs: Snapshot detection ended - ######
# done
# Selecting previously unselected package rolldice.
# (Reading database ... 158308 files and directories currently installed.)
# Preparing to unpack .../rolldice_1.16-1build1_amd64.deb ...
# Unpacking rolldice (1.16-1build1) ...
# Setting up rolldice (1.16-1build1) ...
# Processing triggers for man-db (2.9.1-1) ...
```

Hopefully this is of help to some of you who want to have a stable and reliable system! Let me know your thoughts in the comments below.
41.908178
140
0.730572
eng_Latn
0.951658
61869b8cf41bfd7e34ef95c7c4085ef0a5b5b84f
4,188
markdown
Markdown
src/_langs/pl/fundamentals/media/images/index.markdown
cwdoh/WebFundamentals
f1d79c283acb2734f418aad15d97f628e66deebd
[ "Apache-2.0" ]
3
2016-03-21T19:19:01.000Z
2019-07-02T18:33:58.000Z
src/_langs/pl/fundamentals/media/images/index.markdown
cwdoh/WebFundamentals
f1d79c283acb2734f418aad15d97f628e66deebd
[ "Apache-2.0" ]
null
null
null
src/_langs/pl/fundamentals/media/images/index.markdown
cwdoh/WebFundamentals
f1d79c283acb2734f418aad15d97f628e66deebd
[ "Apache-2.0" ]
4
2015-12-11T14:41:36.000Z
2019-10-13T13:06:29.000Z
---
layout: section
title: "Images"
description: "An image is worth a thousand words, and graphics are an integral part of every page. However, they often account for the majority of the downloaded data. Responsive web design lets you vary not only the page layout based on device characteristics, but the images as well."
introduction: "An image is worth a thousand words, and graphics are an integral part of every page. However, they often account for the majority of the downloaded data. Responsive web design lets you vary not only the page layout based on device characteristics, but the images as well."
authors:
  - petelepage
article:
  written_on: 2014-04-30
  updated_on: 2014-04-30
  order: 1
collection: introduction-to-media
id: images
key-takeaways:
  use-right-image:
    - Use images that best match the characteristics of the display. Take into account screen size, device resolution, and page layout.
    - Change the CSS <code>background-image</code> property for high-DPI displays using media queries with the <code>min-resolution</code> and <code>-webkit-min-device-pixel-ratio</code> parameters.
    - Add the srcset attribute to your markup to serve high-resolution versions in addition to 1x images.
    - Consider the performance cost of JavaScript image-replacement techniques and of serving heavily compressed high-resolution images to devices with lower-resolution displays.
  avoid-images:
    - Avoid images wherever possible. Use browser features and Unicode characters instead, and replace complex icons with icon fonts.
  optimize-images:
    - Don't pick an image format at random. Learn about the available formats and choose the most appropriate one.
    - Use image optimization and compression tools in your build process to reduce file sizes.
    - Reduce the number of HTTP requests by placing frequently used images in image sprites.
    - Consider loading images only after they scroll into view, to shorten the initial page render time and reduce the amount of data downloaded.
remember:
  compressive:
    - Be careful when using compression techniques, because decoding requires more memory and taxes the CPU. Resizing large images to fit a smaller screen consumes significant resources and is especially burdensome on low-end devices with little memory and processing power.
udacity:
  id: ud882
  title: Responsive Images
  description: "Learn how to work with images on the modern web, so that your images look great and load quickly on any device and pick up a range of skills and techniques to smoothly integrate responsive images into your development workflow."
  image: img/udacity-ri.jpg
---

{% wrap content%}

<style>
img, video, object {
  max-width: 100%;
}

img.center {
  display: block;
  margin-left: auto;
  margin-right: auto;
}
</style>

### Responsive images

Responsive web design means that not only the page layout but also the page content can change based on device characteristics. For example, high-resolution (2x) displays require high-resolution graphics to look sharp. An image at 50% width may look fine in a browser 800&nbsp;pixels wide, but it takes up too much space on a narrow phone and still costs the same bandwidth, even though it is scaled down to fit the smaller screen.

### Art direction

<img class="center" src="img/art-direction.png" alt="Art direction example" srcset="img/art-direction.png 1x, img/art-direction-2x.png 2x">

Sometimes an image needs to change more substantially: its proportions adjusted, the image cropped, or even replaced with a different one. In such a situation, changing the image is usually referred to as art direction. You can find more examples at [responsiveimages.org/demos/](http://responsiveimages.org/demos/).
{% include modules/udacity.liquid uid=page.udacity.id title=page.udacity.title description=page.udacity.description image=page.udacity.image %} {% include modules/nextarticle.liquid %} {% endwrap %}
61.588235
492
0.79489
pol_Latn
0.999939
6188276253137775ee24882480196f0a19c2f362
1,453
md
Markdown
README.md
CharlesRea/galatea
fe2df332099c9267f3a0d804b877f33a6c3113f0
[ "MIT" ]
1
2020-02-20T16:16:02.000Z
2020-02-20T16:16:02.000Z
README.md
CharlesRea/galatea
fe2df332099c9267f3a0d804b877f33a6c3113f0
[ "MIT" ]
7
2020-02-20T16:13:52.000Z
2022-02-26T23:45:26.000Z
README.md
CharlesRea/galatea
fe2df332099c9267f3a0d804b877f33a6c3113f0
[ "MIT" ]
2
2020-02-20T16:11:28.000Z
2020-02-28T10:07:14.000Z
# Galatea - A Neptune's Pride dashboard

F# Application to track and visualise the state of a game of Neptune's Pride.

Makes use of the [SAFE Stack](https://safe-stack.github.io/) for full-stack F# development. Components used include:

* [Saturn](https://saturnframework.org/docs/) - F# web server on top of ASP.NET Core
* [Fable](https://fable.io/docs/) - F# to JS compiler
* [Elmish](https://elmish.github.io/elmish/) - Model-View-Update architecture for state management, built on top of React
* [Fable.Remoting](https://zaid-ajaj.github.io/Fable.Remoting/) - Type safe RPC style HTTP API calls

## Development setup

### Pre-requisites required

* The [.NET Core SDK 3.1+](https://www.microsoft.com/net/download)
* [FAKE 5](https://fake.build/) installed as a [global tool](https://fake.build/fake-gettingstarted.html#Install-FAKE) (`dotnet tool install -g fake-cli`)
* [Paket](https://fsprojects.github.io/Paket/) installed as a global tool (`dotnet tool install paket --add-source https://www.myget.org/F/paket-netcore-as-tool/api/v3/index.json -g`)
* [Yarn v1](https://yarnpkg.com/lang/en/docs/install/)
* [Node LTS](https://nodejs.org/en/download/)
* If you're running on OSX or Linux, you'll also need to install [Mono](https://www.mono-project.com/docs/getting-started/install/).

### Work with the application

To concurrently run the server and the client components in watch mode, use the following command:

```bash
fake build -t Run
```
46.870968
182
0.730213
eng_Latn
0.551813
618894c4fef849b63ed902a61f2940f69f7c27ad
113
md
Markdown
README.md
fmp-mc/parsec-3.0
26d4bbb2486477942fc7afd65a0160631259f066
[ "BSD-3-Clause" ]
1
2019-02-15T01:13:14.000Z
2019-02-15T01:13:14.000Z
README.md
fmp-mc/parsec-3.0
26d4bbb2486477942fc7afd65a0160631259f066
[ "BSD-3-Clause" ]
null
null
null
README.md
fmp-mc/parsec-3.0
26d4bbb2486477942fc7afd65a0160631259f066
[ "BSD-3-Clause" ]
null
null
null
# parsec-3.0 PARSEC benchmark suite for TOPPERS/FMP Use Makefile to build application static libraries for FMP.
22.6
59
0.80531
kor_Hang
0.501574
6188bc8133bea13cc209eb1c8c44f88f997eb002
4,769
md
Markdown
articles/dev-itpro/dev-tools/install-basex.md
hyoshioka0128/Dynamics-365-Operations.ja-jp
34a1042d9d4d11dfdc1b0df2d8f8ddf00ddabdcf
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/dev-itpro/dev-tools/install-basex.md
hyoshioka0128/Dynamics-365-Operations.ja-jp
34a1042d9d4d11dfdc1b0df2d8f8ddf00ddabdcf
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/dev-itpro/dev-tools/install-basex.md
hyoshioka0128/Dynamics-365-Operations.ja-jp
34a1042d9d4d11dfdc1b0df2d8f8ddf00ddabdcf
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Install BaseX for Application Checker
description: This topic explains how to install and set up BaseX in a development environment.
author: AndreasHassing
manager: AnnBe
ms.date: 02/28/2019
ms.topic: article
ms.prod: ''
ms.service: dynamics-ax-applications
ms.technology: ''
audience: Developer
ms.reviewer: rhaertle
ms.search.scope: Operations
ms.search.region: Global
ms.author: anniels
ms.search.validFrom: 2019-04-30
ms.dyn365.ops.version: Platform update 25
ms.openlocfilehash: 6799df7484cff3c804c0c63c918d2c4fe2846541
ms.sourcegitcommit: 27a98a7a0f1d2623f5236a88066f483def30889c
ms.translationtype: HT
ms.contentlocale: ja-JP
ms.lasthandoff: 08/01/2019
ms.locfileid: "1833538"
---

# <a name="install-basex-for-application-checker"></a>Install BaseX for Application Checker

[!include [banner](../includes/banner.md)]
[!include [banner](../includes/preview-banner.md)]

To use AppChecker, you must install BaseX, because the query abstract syntax trees that the X++ compiler generates must be stored and queried. This topic explains how to install and set up BaseX in a development environment. The sections in this topic must be completed in the order in which they appear. If you're working in a development environment and don't have administrative privileges, use the tabs to switch to the instructions for non-admin users.

## <a name="prerequisites"></a>Prerequisites

BaseX is a Java-based XML document database. It requires Java Runtime Environment (JRE) version 8 or later. Before you install BaseX, verify that a JRE is installed.

# <a name="admin-install-javatabadmin"></a>[Admin: Install Java](#tab/admin)

Download and install the Java JRE **64-bit** from the [Java download page](https://aka.ms/getjava).

# <a name="non-admin-set-up-javatabnon-admin"></a>[Non-admin: Set up Java](#tab/non-admin)

Download the Java **Server JRE** from the [Java download page](https://www.oracle.com/technetwork/java/javase/downloads/index.html). The Java Server JRE is downloaded as a .tar.gz compressed archive and must be extracted with a tool such as [7zip](https://www.7-zip.org/download.html). No installation is required, and you can download 7zip from [SourceForge](https://sourceforge.net/projects/sevenzip/files/7-Zip/9.20/7za920.zip/download).

To extract the Java Server JRE binaries from the archive, run the following Windows PowerShell commands.

> [!NOTE]
> You must modify these commands to reflect the version of the Server JRE archive that you downloaded. The commands shown here assume that both 7za.exe and the archive are in the current working directory.

```powershell
.\7za.exe x .\server-jre-8u202-windows-x64.tar.gz # decompresses to current working directory
.\7za.exe x .\server-jre-8u202-windows-x64.tar # extracts jdk1.8.0_202 to current working directory
```

After you've extracted Java, run the following Windows PowerShell script to add the Java bin folder to the **PATH** environment variable.

```powershell
[Environment]::SetEnvironmentVariable(
    "Path",
    # replace <path_to_jdk> below with the absolute path from the extracted jdk above
    [Environment]::GetEnvironmentVariable("Path", [EnvironmentVariableTarget]::User) + ";<path_to_jdk>\bin\",
    [EnvironmentVariableTarget]::User)
```

---

## <a name="download-basex"></a>Download BaseX

To download BaseX, go to the [BaseX website](http://basex.org/download/) and download the latest version.

# <a name="admin-download-basextabadmin"></a>[Admin: Download BaseX](#tab/admin)

From the download page, download the Microsoft Windows installer.

# <a name="non-admin-download-basextabnon-admin"></a>[Non-admin: Download BaseX](#tab/non-admin)

From the download page, download the zip package.

---

At the time of this writing, the latest version is 9.1.2.

## <a name="install-basex"></a>Install BaseX

# <a name="admin-install-basextabadmin"></a>[Admin: Install BaseX](#tab/admin)

Run the executable on the developer computer where modules are compiled. Accept the default settings at each step.

# <a name="non-admin-set-up-basextabnon-admin"></a>[Non-admin: Set up BaseX](#tab/non-admin)

Extract the BaseX zip package that you downloaded earlier.

---

> [!TIP]
> If the installation drive doesn't have enough space for your models, you can change the folder for BaseX data later. For more information, see [BaseX configuration](http://docs.basex.org/wiki/Configuration#Database_Directory).

After you've installed BaseX, run the following Windows PowerShell script to add the BaseX bin folder to the **PATH** environment variable.

```powershell
[Environment]::SetEnvironmentVariable(
    "Path",
    # replace <path_to_basex> below with the absolute path from the extracted basex zip package above
    [Environment]::GetEnvironmentVariable("Path", [EnvironmentVariableTarget]::User) + ";<path_to_basex>\bin\",
    [EnvironmentVariableTarget]::User)
```

> [!IMPORTANT]
> If Microsoft Visual Studio was open when you set the **PATH** environment variable, you must restart it.

## <a name="configure-basex-to-handle-your-model"></a>Configure BaseX to handle your model

Depending on the size of your models, BaseX can use a large amount of memory. By default, BaseX is configured to use a maximum of about 1,200 megabytes (MB) of random access memory (RAM). If you work with large models, consider increasing this limit by following these steps.

1. Open the batch script at **\<BaseX installation path\>\\bin\\basex.bat** for editing.
2. Find the line that contains **set BASEX\_JVM=-Xmx1200m %BASEX\_JVM%**, and increase the value after **-Xmx**. For example, change the value to **10g** to let BaseX use up to 10 gigabytes (GB) of RAM. For more information, see [Non-standard options](https://docs.oracle.com/javase/8/docs/technotes/tools/windows/java.html#BABHDABI). This section of the Java documentation explains how to increase the limits of the Java virtual machine (VM). Leave the rest of the script as it is.

> [!NOTE]
> Because Application Checker uses standalone executables for storage and queries, you don't need to start a BaseX server.
36.128788
241
0.776054
yue_Hant
0.750125
61891bb22ed0a70804c386fe2ef573f11e51f6bd
988
md
Markdown
pages/en/lb4/apidocs/filter.inclusion.md
loopbackio/loopback.io
3db43d35702960274c8d5c64a7c85c84eb8c5379
[ "MIT" ]
17
2021-07-23T14:53:37.000Z
2022-03-18T08:33:11.000Z
pages/en/lb4/apidocs/filter.inclusion.md
loopbackio/loopback.io
3db43d35702960274c8d5c64a7c85c84eb8c5379
[ "MIT" ]
8
2021-07-14T04:29:25.000Z
2022-03-21T06:02:50.000Z
pages/en/lb4/apidocs/filter.inclusion.md
loopbackio/loopback.io
3db43d35702960274c8d5c64a7c85c84eb8c5379
[ "MIT" ]
8
2021-09-13T04:19:57.000Z
2022-02-06T09:34:10.000Z
--- lang: en title: 'API docs: filter.inclusion' keywords: LoopBack 4.0, LoopBack 4, Node.js, TypeScript, OpenAPI sidebar: lb4_sidebar editurl: https://github.com/loopbackio/loopback-next/tree/master/packages/filter permalink: /doc/en/lb4/apidocs.filter.inclusion.html --- <!-- Do not edit this file. It is automatically generated by API Documenter. --> [Home](./index.md) &gt; [@loopback/filter](./filter.md) &gt; [Inclusion](./filter.inclusion.md) ## Inclusion interface Inclusion of related items Note: scope means filter on related items Example: `{relation: 'aRelationName', scope: {<AFilterObject>}}` <b>Signature:</b> ```typescript export interface Inclusion ``` ## Properties | Property | Type | Description | | --- | --- | --- | | [relation](./filter.inclusion.relation.md) | string | | | [scope?](./filter.inclusion.scope.md) | [Filter](./filter.filter.md)<!-- -->&lt;AnyObject&gt; &amp; { totalLimit?: number; } | <i>(Optional)</i> |
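To make the shape above concrete, here is a minimal sketch of an `Inclusion` object. The interfaces are reproduced locally in simplified form purely to keep the example self-contained; in a real application you would import them from `@loopback/filter`, and the relation name `orders` and its fields are hypothetical:

```typescript
// Simplified local stand-ins for the @loopback/filter types,
// declared here only so the example runs without dependencies.
interface Filter {
  where?: Record<string, unknown>;
  limit?: number;
}

interface Inclusion {
  relation: string;
  scope?: Filter & { totalLimit?: number };
}

// Include the (hypothetical) 'orders' relation, filtered to shipped
// orders: at most 5 per parent instance, at most 100 in total.
const include: Inclusion = {
  relation: 'orders',
  scope: {
    where: { status: 'shipped' },
    limit: 5,
    totalLimit: 100,
  },
};

console.log(include.relation); // 'orders'
```

Objects of this shape are typically passed in a filter's `include` array, e.g. `{include: [{relation: 'orders', scope: {limit: 5}}]}`.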
27.444444
150
0.667004
eng_Latn
0.203141
618b11dfe0872b00ed920a84ae638b395be07d9d
1,855
md
Markdown
docs/engines/markdown.md
Wilto/eleventy
2e2190c7eff2d413c54bb16a355cb0cdc4e06d71
[ "MIT" ]
null
null
null
docs/engines/markdown.md
Wilto/eleventy
2e2190c7eff2d413c54bb16a355cb0cdc4e06d71
[ "MIT" ]
null
null
null
docs/engines/markdown.md
Wilto/eleventy
2e2190c7eff2d413c54bb16a355cb0cdc4e06d71
[ "MIT" ]
1
2020-09-26T17:25:13.000Z
2020-09-26T17:25:13.000Z
# Markdown | Eleventy Short Name | File Extension | NPM Package | | ------------------- | -------------- | ---------------------------------------------------------- | | `md` | `.md` | [`markdown-it`](https://www.npmjs.com/package/markdown-it) | Markdown files can be optionally pre-processed with an additional template engine. This can be configured on a per-template basis or globally. Read more at [Changing a Template’s Rendering Engine](/docs/engines.md). ## Library Options ### Defaults * `html: true` (default is `false`) The above options are different than the default `markdown-it` options. See [all `markdown-it` options](https://github.com/markdown-it/markdown-it#init-with-presets-and-options). ### Use your own options _New in Eleventy `v0.3.0`:_ Pass in your own instance of the Markdown library using the Configuration API. See [all `markdown-it` options](https://github.com/markdown-it/markdown-it#init-with-presets-and-options). ```js module.exports = function(eleventyConfig) { let markdownIt = require("markdown-it"); let options = { html: true, breaks: true, linkify: true }; eleventyConfig.setLibrary("md", markdownIt(options)); }; ``` ## Add your own plugins _New in Eleventy `v0.3.0`:_ Pass in your own `markdown-it` plugins using the `setLibrary` Configuration API method (building on the method described in “Using your own options”). 1. Find your [own `markdown-it` plugin on NPM](https://www.npmjs.com/search?q=keywords:markdown-it-plugin) 2. `npm install` the plugin. ```js module.exports = function(eleventyConfig) { let markdownIt = require("markdown-it"); let markdownItEmoji = require("markdown-it-emoji"); let options = {}; eleventyConfig.setLibrary("md", markdownIt(options).use(markdownItEmoji)); }; ```
37.1
215
0.656604
eng_Latn
0.588625
618b6a9621eb7b6db380598679e8f65e2f4da7e4
1,512
md
Markdown
frontend/README.md
fant0m121/oki-doki
42b3686760e65c02740053e6882a022603f96fcb
[ "Unlicense" ]
null
null
null
frontend/README.md
fant0m121/oki-doki
42b3686760e65c02740053e6882a022603f96fcb
[ "Unlicense" ]
null
null
null
frontend/README.md
fant0m121/oki-doki
42b3686760e65c02740053e6882a022603f96fcb
[ "Unlicense" ]
null
null
null
# Oki-Doki readme

Generated on 2017-03-13 using [generator-yeogurt@1.5.3](https://github.com/larsonjj/generator-yeogurt)

## Description

This is an example readme file. Describe your site/app here.

## Technologies used

JavaScript
- [Browserify](http://browserify.org/)
- [Node](https://nodejs.org/)

Styles
- [Stylus](https://learnboost.github.io/stylus/)

Markup
- [Jade](http://jade-lang.com/)

Optimization
- [Imagemin](https://github.com/imagemin/imagemin)
- [Uglify](https://github.com/mishoo/UglifyJS)

Server
- [BrowserSync](http://www.browsersync.io/)

Linting
- [ESlint](http://eslint.org/)

Automation
- [Gulp](http://gulpjs.com)

Code Management
- [Editorconfig](http://editorconfig.org/)
- [Git](https://git-scm.com/)

## Automated tasks

This project uses [Gulp](http://gulpjs.com) to run automated tasks for development and production builds. The tasks are as follows:

`gulp --production`: Same as `gulp serve --production`, but also runs `gulp test` and does not boot up the production server

`gulp serve`: Compiles preprocessors and boots up development server

`gulp serve --open`: Same as `gulp serve` but will also open up site/app in your default browser

`gulp serve --production`: Same as `gulp serve` but will run all production tasks so you can view the site/app in its final optimized form

`gulp test`: Lints all `*.js` files in the `source` folder using eslint

***Adding the `--debug` option to any gulp task displays extra debugging information (ex. data being loaded into your templates)***
27.490909
139
0.73545
eng_Latn
0.751434
618be4e5e33587484b855c5e178cd5a74e9576a5
243
md
Markdown
README.md
luqin/scrum-rdc
5af18eaa20c21a33e77cab027097cf231b245f12
[ "MIT" ]
2
2018-11-12T03:52:26.000Z
2018-11-13T01:37:37.000Z
README.md
luqin/scrum-rdc
5af18eaa20c21a33e77cab027097cf231b245f12
[ "MIT" ]
null
null
null
README.md
luqin/scrum-rdc
5af18eaa20c21a33e77cab027097cf231b245f12
[ "MIT" ]
null
null
null
# Scrum Dashboard

## Installation

> Using `cnpm` is recommended

```
npm install
```

```
cd server
npm install
```

## Development

```
npm run dev
```

```
cd server
npm run dev
```

## Production

```
npm run build:prod
```

```
cd server
npm start
```
6.394737
18
0.576132
eng_Latn
0.446353
618cb539fc3ff053773027bf738ad55b2b36ed7b
437
md
Markdown
complete-setups/all-languages-and-VSCode/README.md
itopia-inc/spaces-images
2c5620c371247478b8bb13c3f8f59aff7625b141
[ "Apache-2.0" ]
3
2021-09-30T05:20:20.000Z
2022-03-30T09:44:01.000Z
complete-setups/all-languages-and-VSCode/README.md
itopia-inc/spaces-images
2c5620c371247478b8bb13c3f8f59aff7625b141
[ "Apache-2.0" ]
5
2021-12-29T21:00:12.000Z
2022-02-15T07:00:53.000Z
complete-setups/all-languages-and-VSCode/README.md
itopia-inc/spaces-images
2c5620c371247478b8bb13c3f8f59aff7625b141
[ "Apache-2.0" ]
1
2022-01-17T10:32:50.000Z
2022-01-17T10:32:50.000Z
# Complete setup for all languages and VS Code [This all-in-one image](https://github.com/orgs/itopia-inc/packages?tab=packages&repo_name=spaces-images&q=all+languages+VS+Code) provides a complete development environment that contains (the latest supported versions of) all supported languages and a recent version of VS Code. This environment is suitable for day-to-day use by polyglot developers with up-to-date runtime dependencies.
48.555556
129
0.812357
eng_Latn
0.985792