Test Karaf with Docker:
----
docker run --name karaf-test -it -v ~/.m2:/home/user/.m2 -p 15005:5005 -p 18181:8181 mhus/apache-karaf:4.2.6_05
docker stop karaf-test
docker rm karaf-test
----
Install mhus:
----
version_mhus=7.4.0-SNAPSHOT
dev_version=7.0.0-SNAPSHOT
feature:repo-add mvn:org.apache.shiro/shiro-features/1.6.0/xml/features
feature:repo-add mvn:de.mhus.osgi/mhus-features/${version_mhus}/xml/features
feature:install -s mhu-base
feature:install -s mhu-rest
blue-create de.mhus.rest.core.impl.RestServlet
blue-create de.mhus.rest.core.impl.RestWebSocketServlet
blue-create de.mhus.rest.core.nodes.PublicRestNode
blue-create de.mhus.rest.core.nodes.UserInformationRestNode
blue-create de.mhus.rest.core.nodes.JwtTokenNode
feature:install -s mhu-jdbc
feature:repo-add activemq 5.15.8
bundle:install -s mvn:de.mhus.lib.itest/examples-jms/${dev_version}
feature:install -s mhu-jms
feature:install -s mhu-vaadin
feature:install -s mhu-vaadin-ui
bundle:install -s mvn:de.mhus.lib.itest/examples-micro/${dev_version}
feature:install -s mhu-micro
feature:install -s mhu-transform
feature:install -s mhu-crypt
feature:install -s mhu-mongo
feature:install -s mhu-dev
feature:install -s mhu-health-servlet
----

[id='guidelines-for-swagger-specifications']
= Guidelines for Swagger specifications
The more detail that the Swagger specification provides, the more support
{prodname} can offer when connecting to the API. For example,
the API definition is not required to declare data types for requests
and responses. Without type declarations, {prodname}
defines the corresponding connection action as typeless. However, in an
integration, you cannot add a data mapping step immediately before or
immediately after an API connection that performs a typeless action.
One remedy for this is to add more information to the Swagger specification
before you upload it to {prodname}. Identify the Swagger resource operations that
will map to the actions you want the API connection to perform. In the
Swagger specification, ensure that there is a JSON schema that specifies
each operation's request and response types.
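For example, a minimal Swagger 2.0 fragment with a hypothetical `/tasks` resource whose response type is fully declared might look like this (all names here are illustrative, not taken from a real API):

[source,yaml]
----
swagger: "2.0"
info:
  title: Task API
  version: "1.0"
paths:
  /tasks:
    get:
      operationId: getTasks
      produces:
        - application/json
      responses:
        "200":
          description: The list of tasks
          schema:
            type: array
            items:
              $ref: "#/definitions/Task"
definitions:
  Task:
    type: object
    properties:
      id:
        type: integer
      name:
        type: string
----

With the response schema declared, {prodname} can infer the action's output data shape and allow a data mapping step after the connection.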
If the Swagger specification for the API declares support for both the
`application/json` and the `application/xml` content types, the connector
uses the JSON format. Likewise, if the Swagger specification specifies
`consumes` or `produces` parameters that define both
`json` and `xml`, the connector uses the JSON format.

= PostgreSQL as Database
include::_attributes.adoc[]
:database_name: postgresql
:project_name: %USERNAME%-{artifact_id}-{database_name}
:active_profile: prod-{database_name}
So before we deploy the code we need a database, right? For the sake of simplicity, let's deploy PostgreSQL using some simple commands. In a real deployment you would probably use an external instance.
[#deploy-database]
== Deploying PostgreSQL on OpenShift
WARNING: Before proceeding, log in to your cluster using `oc login` as a normal user; no special permissions are needed.
In order to run our application, we need a namespace; let's create one:
[.console-input]
[source,bash,options="nowrap",subs="attributes+"]
----
export PROJECT_NAME={project_name}
oc new-project $\{PROJECT_NAME}
----
In order for the application to work properly we must first deploy a `PostgreSQL` database, like this:
[.console-input]
[source,bash,options="nowrap",subs="attributes+"]
----
oc new-app -e POSTGRESQL_USER=luke -e POSTGRESQL_PASSWORD=secret -e POSTGRESQL_DATABASE=FRUITSDB \
centos/postgresql-10-centos7 --as-deployment-config=true --name=postgresql-db -n $\{PROJECT_NAME}
----
Now let's add some labels to the database deployment object:
[.console-input]
[source,bash,options="nowrap",subs="attributes+"]
----
oc label dc/postgresql-db app.kubernetes.io/part-of=fruit-service-app -n $\{PROJECT_NAME} && \
oc label dc/postgresql-db app.openshift.io/runtime=postgresql --overwrite=true -n $\{PROJECT_NAME}
----
Check that the database is running:
[.console-input]
[source,bash,options="nowrap",subs="attributes+"]
----
oc get pod -n $\{PROJECT_NAME}
----
You should see something like this:
[.console-output]
[source,bash,options="nowrap",subs="attributes+"]
----
NAME READY STATUS RESTARTS AGE
postgresql-db-1-deploy 0/1 Completed 0 2d6h
postgresql-db-1-n585q 1/1 Running 0 2d6h
----
[TIP]
===============================
You can also run this command to check the name of the POD:
[.console-input]
[source,bash,options="nowrap",subs="attributes+"]
----
oc get pods -n $\{PROJECT_NAME} -o json | jq -r '.items[] | select(.status.phase | test("Running")) | select(.metadata.name | test("postgresql-db")).metadata.name'
----
===============================
[#deploy-code]
== Deploying the code on OpenShift
[.console-input]
[source,bash,options="nowrap",subs="attributes+"]
----
mvn clean package -Dquarkus.kubernetes.deploy=true -DskipTests -Dquarkus.profile={active_profile}
----
Move to the OpenShift web console and from `Topology` in the `Developer` perspective click on the route link as in the picture.
image::fruit-service-postgresql-topology-quarkus.png[Fruit Service on PostgreSQL Topology]
You should see this.
image::fruit-service-postgresql-display-quarkus.png[Fruit Service on PostgreSQL]
[#run-local-telepresence]
== Extending the inner-loop with Telepresence
.Permissions needed
[IMPORTANT]
===============================
This needs to be run by a `cluster-admin`, but once you run it go back to the normal user.
[.console-input]
[source,bash,options="nowrap",subs="verbatim,attributes+"]
----
oc adm policy add-scc-to-user privileged -z default -n $\{PROJECT_NAME} && \
oc adm policy add-scc-to-user anyuid -z default -n $\{PROJECT_NAME}
----
===============================
NOTE: Telepresence will modify the network so that Services in Kubernetes are reachable from your laptop and vice versa.
The next command will scale the deployment for our application down to zero and alter the network so that traffic to it ends up on your laptop on port `8080`.
.Deployment scaled down to zero while using Telepresence
image::fruit-service-postgresql-topology-telepresence.png[Fruit Service on PostgreSQL Topology - Quarkus with Telepresence]
IMPORTANT: You'll be asked for your `sudo` password; this is normal. Telepresence needs to be able to modify networking rules so that you can see Kubernetes Services as local.
[.console-input]
[source,bash,options="nowrap",subs="attributes+"]
----
export TELEPRESENCE_USE_OCP_IMAGE=NO
oc project $\{PROJECT_NAME}
telepresence --swap-deployment {artifact_id_quarkus} --expose 8080
----
Eventually you'll see something like this:
[.console-output]
[source,text,options="nowrap",subs="attributes+"]
----
...
T: Forwarding remote port 8080 to local port 8080.
T: Guessing that Services IP range is ['172.30.0.0/16']. Services started after this point will be inaccessible if are outside this range; restart telepresence
T: if you can't access a new Service.
T: Connected. Flushing DNS cache.
T: Setup complete. Launching your command.
The default interactive shell is now zsh.
To update your account to use zsh, please run `chsh -s /bin/zsh`.
For more details, please visit https://support.apple.com/kb/HT208050.
@fruit-service-postgresql-dev/api-cluster-6d30-6d30-example-opentlc-com:6443/user1|bash-3.2$
----
In the `Topology` view you should see this.
NOTE: Our deployment has been scaled down to zero and substituted by a pod generated by `Telepresence`.
image::fruit-service-postgresql-topology-telepresence-quarkus.png[Fruit Service on PostgreSQL Topology - Quarkus with Telepresence]
[TIP]
===============================
Run from another terminal:
[.console-input]
[source,bash,options="nowrap",subs="attributes+"]
----
curl http://postgresql-db:5432
----
You should receive the following response, which looks bad but is actually good: it means that the DNS Service name local to Kubernetes can be resolved from your computer and that port `5432` has been reached!
[.console-output]
[source,bash,options="nowrap",subs="attributes+"]
----
curl: (52) Empty reply from server
----
===============================
Now let's run our code locally but connected to the database (and/or other consumed services). To do so run this command from another terminal:
[.console-input]
[source,bash,options="nowrap",subs="attributes+"]
----
export PROJECT_NAME={project_name}
DB_USER=luke DB_PASSWORD=secret mvn quarkus:dev -Dquarkus.profile={active_profile}
----
Now open a browser and point to link:http://localhost:8080[http://localhost:8080, window=_blank]
TIP: You can edit, save, delete to test the functionalities implemented by `FruitResource` *and debug locally*.
You should see this:
IMPORTANT: You can also use the external route and get the same result.
image::fruit-service-postgresql-display-telepresence-quarkus.png[Fruit Service on PostgreSQL - Quarkus with Telepresence]
Now you can go to the terminal running the code locally and stop the process with kbd:[Ctrl+C].
Also go to the terminal window where `Telepresence` is running locally and type `exit`, you should see something similar to this:
[.console-output]
[source,bash,options="nowrap",subs="attributes+"]
----
...
@fruit-service-postgresql-dev/api-cluster-6d30-6d30-example-opentlc-com:6443/user1|bash-3.2$ exit
exit
T: Your process has exited.
T: Exit cleanup in progress
T: Cleaning up Pod
----
[#run-local-remote-dev]
== Extending the inner-loop with Quarkus remote development
If you use Quarkus there's another method to extend the inner-loop to your cluster, it's called link:https://quarkus.io/guides/maven-tooling#remote-development-mode[Remote Development Mode]. The idea is simple, run your code in the cluster while being able to replace classes/files that you change locally in your IDE automatically.
There are two parts in this setup:
* The server side, where the code runs in a container
* The client side, where you connect to the server side and files are watched and uploaded whenever changes are detected
A couple of things to take into account regarding the server side:
1. The normal `JAR` is substituted by a `mutable` or `fast` JAR, that can be updated/reloaded live
2. An environment variable `QUARKUS_LAUNCH_DEVMODE` needs to be set to `true`
In order to use remote development we need to update the `application.properties` file; please add these two lines at the top of the file.
[NOTE]
===============================
1. We only need these properties while using remote development; you should comment them out once you're done.
2. You can (or even should) update the live-reload password as it's checked while connecting to the server side of the remote development set up.
===============================
[.console-input]
[source,bash,options="nowrap",subs="attributes+"]
----
# REMOTE DEV
quarkus.package.type=mutable-jar
quarkus.live-reload.password=changeit2
----
Now, delete the current deployment, then deploy it again but this time with remote development mode enabled. Please, run the following commands in order to do so:
[IMPORTANT]
===============================
Pay close attention to these system properties we're setting to deploy our code in remote development mode:
* `-Dquarkus.kubernetes.deploy=true`: deploys to target `openshift`
* `-Dquarkus.profile={active_profile}`: activates this profile
* `-Dquarkus.container-image.build=true`: builds the image using S2I as target is `openshift`
* `-Dquarkus.openshift.env-vars.quarkus-launch-devmode.value=true`: this property commands Quarkus plugin to add an environment variable called `QUARKUS_LAUNCH_DEVMODE` with value `true` to the `DeploymentConfig` object
* `-Dquarkus.openshift.env-vars.quarkus-profile.value={active_profile}`: this property commands Quarkus plugin to add an environment variable called `QUARKUS_PROFILE` with value `{active_profile}` to the `DeploymentConfig` object
===============================
[.console-input]
[source,bash,options="nowrap",subs="attributes+"]
----
oc delete dc/{artifact_id_quarkus} -n $\{PROJECT_NAME}
mvn clean package -DskipTests -Dquarkus.kubernetes.deploy=true -Dquarkus.profile={active_profile} \
-Dquarkus.container-image.build=true \
-Dquarkus.openshift.env-vars.quarkus-launch-devmode.value=true \
-Dquarkus.openshift.env-vars.quarkus-profile.value={active_profile}
----
You should see this:
[.console-output]
[source,bash,options="nowrap",subs="attributes+"]
----
...
[INFO] [io.quarkus.container.image.openshift.deployment.OpenshiftProcessor] Successfully pushed image-registry.openshift-image-registry.svc:5000/fruit-service-postgresql-dev/atomic-fruit-service@sha256:13c3781e82fa20b664bcc9e3c21a7a51d57c7a140c8232da66aa03ba73ff9b69
[INFO] [io.quarkus.container.image.openshift.deployment.OpenshiftProcessor] Push successful
[INFO] [io.quarkus.kubernetes.deployment.KubernetesDeployer] Deploying to openshift server: https://api.cluster-6d30.6d30.example.opentlc.com:6443/ in namespace: fruit-service-postgresql-dev.
[INFO] [io.quarkus.kubernetes.deployment.KubernetesDeployer] Applied: Secret fruits-db.
[INFO] [io.quarkus.kubernetes.deployment.KubernetesDeployer] Applied: Service atomic-fruit-service.
[INFO] [io.quarkus.kubernetes.deployment.KubernetesDeployer] Applied: ImageStream atomic-fruit-service.
[INFO] [io.quarkus.kubernetes.deployment.KubernetesDeployer] Applied: ImageStream openjdk-11.
[INFO] [io.quarkus.kubernetes.deployment.KubernetesDeployer] Applied: BuildConfig atomic-fruit-service.
[INFO] [io.quarkus.kubernetes.deployment.KubernetesDeployer] Applied: DeploymentConfig atomic-fruit-service.
[INFO] [io.quarkus.kubernetes.deployment.KubernetesDeployer] Applied: Route atomic-fruit-service.
[INFO] [io.quarkus.kubernetes.deployment.KubernetesDeployer] The deployed application can be accessed at: http://atomic-fruit-service-fruit-service-postgresql-dev.apps.cluster-6d30.6d30.example.opentlc.com
[INFO] [io.quarkus.deployment.QuarkusAugmentor] Quarkus augmentation completed in 96673ms
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:43 min
[INFO] Finished at: 2021-02-09T13:51:33+01:00
[INFO] ------------------------------------------------------------------------
----
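You can also verify that the Quarkus plugin added the environment variables by inspecting the live object, for instance with `oc get dc/{artifact_id_quarkus} -o yaml -n $\{PROJECT_NAME}`. The relevant fragment of the `DeploymentConfig` should look roughly like this (surrounding fields omitted here for brevity):

[.console-output]
[source,yaml,options="nowrap",subs="attributes+"]
----
spec:
  template:
    spec:
      containers:
        - env:
            - name: QUARKUS_LAUNCH_DEVMODE
              value: "true"
            - name: QUARKUS_PROFILE
              value: {active_profile}
----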
Let's check that the deployment is set up correctly for remote development. Be patient here: give it some seconds before you see something; you may even need to run the command again.
NOTE: Don't forget to stop the command by typing kbd:[Ctrl+C]
[.console-input]
[source,bash,options="nowrap",subs="attributes+"]
----
oc logs -f dc/{artifact_id_quarkus} | grep -i profile
----
IMPORTANT: The important thing is that profile *{active_profile}* and *live coding* have been *activated*.
Eventually you'll see this. It just takes some seconds, be patient.
[.console-output]
[source,bash,options="nowrap",subs="attributes+"]
----
... [io.quarkus] (Quarkus Main Thread) Profile {active_profile} activated. Live Coding activated.
----
We need the external route to our service; we'll save the value in `ROUTE_URL` to use it later.
[.console-input]
[source,bash,options="nowrap",subs="attributes+"]
----
export ROUTE_URL="http://$(oc get route/{artifact_id_quarkus} -o jsonpath='{.spec.host}')"
----
Now we have to connect to our application running in remote dev mode, we'll do it like this.
[.console-input]
[source,bash,options="nowrap",subs="attributes+"]
----
./mvnw quarkus:remote-dev \
-Dquarkus.live-reload.url=$\{ROUTE_URL} \
-Dquarkus.profile={active_profile}
----
You should see something similar to this:
[IMPORTANT]
==============
1. Make sure you see `Connected to remote server` at the end of the log.
2. You'll find it interesting that the jar sent while in remote mode is `quarkus-run.jar`, the `mutable-jar`. This fast jar allows sending only those parts that have changed and reloading them live.
==============
[.console-output]
[source,bash,options="nowrap",subs="attributes+"]
----
[INFO] Scanning for projects...
[INFO]
[INFO] ------------< com.redhat.atomic.fruit:atomic-fruit-service >------------
[INFO] Building atomic-fruit-service 1.0-SNAPSHOT
[INFO] --------------------------------[ jar ]---------------------------------
[INFO]
[INFO] --- quarkus-maven-plugin:1.10.5.Final:remote-dev (default-cli) @ atomic-fruit-service ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 5 resources
[INFO] Nothing to compile - all classes are up to date
Listening for transport dt_socket at address: 5005
2021-02-10 11:38:53,921 INFO [org.jbo.threads] (main) JBoss Threads version 3.1.1.Final
2021-02-10 11:38:54,138 INFO [io.qua.kub.dep.KubernetesDeployer] (build-11) Only the first deployment target (which is 'openshift') selected via "quarkus.kubernetes.deployment-target" will be deployed
2021-02-10 11:38:54,329 INFO [org.hib.Version] (build-10) HHH000412: Hibernate ORM core version 5.4.26.Final
2021-02-10 11:38:54,400 WARN [io.qua.arc.pro.BeanArchives] (build-5) Failed to index boolean: Class does not exist in ClassLoader QuarkusClassLoader:Deployment Class Loader
[INFO] Checking for existing resources in: /Users/cvicensa/Projects/openshift/redhat-scholars/java-inner-loop-dev-guide/apps/quarkus-app/src/main/kubernetes.
[INFO] Adding existing Secret with name: fruits-db.
2021-02-10 11:38:56,848 INFO [io.qua.dep.QuarkusAugmentor] (main) Quarkus augmentation completed in 3265ms
2021-02-10 11:38:59,792 INFO [io.qua.ver.htt.dep.dev.HttpRemoteDevClient] (Remote dev client thread) Sending lib/deployment/appmodel.dat
2021-02-10 11:38:59,867 INFO [io.qua.ver.htt.dep.dev.HttpRemoteDevClient] (Remote dev client thread) Sending quarkus-run.jar
2021-02-10 11:38:59,952 INFO [io.qua.ver.htt.dep.dev.HttpRemoteDevClient] (Remote dev client thread) Sending app/atomic-fruit-service-1.0-SNAPSHOT.jar
2021-02-10 11:38:59,989 INFO [io.qua.ver.htt.dep.dev.HttpRemoteDevClient] (Remote dev client thread) Sending lib/deployment/build-system.properties
2021-02-10 11:39:00,030 INFO [io.qua.ver.htt.dep.dev.HttpRemoteDevClient] (Remote dev client thread) Connected to remote server
----
If you go to the web console you should see this.
image::fruit-service-postgresql-topology-remote-dev-quarkus.png[Fruit Service on PostgreSQL Topology - Quarkus with Remote Dev]
Let's give it a try, shall we?
Open `$\{ROUTE_URL}` in a browser; you should see this.
image::fruit-service-postgresql-display-remote-dev-quarkus.png[Fruit Service on PostgreSQL - Quarkus with Remote Dev]
Now go to the editor, find `src/main/resources/templates/index.html`, and replace this:
[.console-output]
[source,bash,options="nowrap",subs="attributes+"]
----
<h1>CRUD Mission - Quarkus</h1>
----
With this:
[.console-output]
[source,bash,options="nowrap",subs="attributes+"]
----
<h1>CRUD Mission - Quarkus Remote</h1>
----
Changes are automatically detected and sent to the remote application. Check the terminal, you should see these new lines:
[.console-output]
[source,bash,options="nowrap",subs="attributes+"]
----
2021-02-10 12:21:45,266 INFO [io.qua.dep.dev.RuntimeUpdatesProcessor] (Remote dev client thread) File change detected: /Users/cvicensa/Projects/openshift/redhat-scholars/java-inner-loop-dev-guide/apps/quarkus-app/src/main/resources/templates/index.html
2021-02-10 12:21:45,678 INFO [io.qua.kub.dep.KubernetesDeployer] (build-17) Only the first deployment target (which is 'openshift') selected via "quarkus.kubernetes.deployment-target" will be deployed
2021-02-10 12:21:47,092 INFO [io.qua.dep.QuarkusAugmentor] (Remote dev client thread) Quarkus augmentation completed in 1769ms
2021-02-10 12:21:47,106 INFO [io.qua.dep.dev.RuntimeUpdatesProcessor] (Remote dev client thread) Hot replace total time: 1.852s
2021-02-10 12:21:48,896 INFO [io.qua.ver.htt.dep.dev.HttpRemoteDevClient] (Remote dev client thread) Sending dev/app/templates/index.html
2021-02-10 12:21:48,964 INFO [io.qua.ver.htt.dep.dev.HttpRemoteDevClient] (Remote dev client thread) Sending quarkus-run.jar
2021-02-10 12:21:49,006 INFO [io.qua.ver.htt.dep.dev.HttpRemoteDevClient] (Remote dev client thread) Sending app/atomic-fruit-service-1.0-SNAPSHOT.jar
2021-02-10 12:21:49,044 INFO [io.qua.ver.htt.dep.dev.HttpRemoteDevClient] (Remote dev client thread) Sending lib/deployment/build-system.properties
----
Go back to the UI of our application and refresh the page to see the change.
image::fruit-service-postgresql-display-new-remote-dev-quarkus.png[Fruit Service on PostgreSQL New - Quarkus with Remote Dev]
TIP: Don't forget that whenever you change a file locally, the app is rebuilt and the relevant elements are uploaded and refreshed
IMPORTANT: Once you're done with the remote development mode, don't forget to comment out the lines we added before, as in the next excerpt.
[.console-input]
[source,bash,options="nowrap",subs="attributes+"]
----
# REMOTE DEV
#quarkus.package.type=mutable-jar
#quarkus.live-reload.password=changeit2
----
[#binary-deploy]
== Binary deploy S2I
Finally, imagine that after debugging your code locally you want to redeploy on OpenShift in a similar way, but without using the OpenShift extension. This is possible by leveraging Source to Image (S2I). Let's have a look at the `BuildConfig` objects in our project.
NOTE: Once you have deployed your code using the OpenShift plugin, a `BuildConfig` has been created for you ({artifact_id_quarkus} in this case). But if you had started from scratch, you could have created that `BuildConfig` by hand and used the technique we're explaining here.
So first of all let's get the BCs (`BuildConfigs`) in our namespace:
[.console-input]
[source,bash,options="nowrap",subs="attributes+"]
----
oc get bc -n $\{PROJECT_NAME}
----
You should get this:
[.console-output]
[source,bash,options="nowrap",subs="attributes+"]
----
NAME TYPE FROM LATEST
{artifact_id_quarkus} Source Binary 1
----
Let's read some details:
[.console-input]
[source,bash,options="nowrap",subs="attributes+"]
----
oc get bc/{artifact_id_quarkus} -o yaml -n $\{PROJECT_NAME}
----
We have copied the relevant parts below to focus on the important parts of the BC YAML.
NOTE: Focus on `spec->source->type` => `Binary`: this means that in order to build the image `spec->output->to->name`, you need to provide a binary file, a `JAR` file in this case.
[.console-output]
[source,yaml,options="nowrap",subs="attributes+"]
----
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
labels:
app.kubernetes.io/name: {artifact_id_quarkus}
app.kubernetes.io/part-of: fruit-service-app
app.kubernetes.io/version: 1.0-SNAPSHOT
app.openshift.io/runtime: quarkus
department: fruity-dept
name: {artifact_id_quarkus}
namespace: fruit-service-postgresql-dev
...
spec:
nodeSelector: null
output:
to:
kind: ImageStreamTag
name: {artifact_id_quarkus}:1.0-SNAPSHOT
postCommit: {}
resources: {}
runPolicy: Serial
source:
binary: {}
type: Binary
strategy:
sourceStrategy:
from:
kind: ImageStreamTag
name: openjdk-11:latest
type: Source
status:
...
----
Let's package our application with the right profile and build an image with it:
[.console-input]
[source,bash,options="nowrap",subs="attributes+"]
----
mvn clean package -DskipTests -Dquarkus.kubernetes.deploy=false -Dquarkus.profile={active_profile}
----
After a successful build, let's start the build of the image in OpenShift:
NOTE: We have to include `target/{artifact_id_quarkus}-1.0-SNAPSHOT-runner.jar` and the contents of `target/lib/` in a zip file and start a new build with it.
[.console-input]
[source,bash,options="nowrap",subs="attributes+"]
----
zip {artifact_id_quarkus}.zip target/lib/* target/{artifact_id_quarkus}-1.0-SNAPSHOT-runner.jar
oc start-build {artifact_id_quarkus} --from-archive=./{artifact_id_quarkus}.zip -n $\{PROJECT_NAME}
rm {artifact_id_quarkus}.zip
----
As you can see in the output, the `JAR` file is uploaded and a new build is started.
[.console-output]
[source,bash,options="nowrap",subs="attributes+"]
----
Uploading archive file "{artifact_id_quarkus}.zip" as binary input for the build ...
Uploading finished
build.build.openshift.io/{artifact_id}-3 started
----
And let's have a look at the logs while the build is happening:
[.console-input]
[source,bash,options="nowrap",subs="attributes+"]
----
oc logs -f bc/{artifact_id_quarkus} -n $\{PROJECT_NAME}
----
Log output from the current build (only relevant lines):
NOTE: It all starts with `Receiving source from STDIN as archive`, so the image is built from a binary file within OpenShift, as we verified before.
[.console-output]
[source,bash,options="nowrap",subs="attributes+"]
----
Receiving source from STDIN as archive ...
Caching blobs under "/var/cache/blobs".
...
Storing signatures
Generating dockerfile with builder image registry.access.redhat.com/ubi8/openjdk-11@sha256:7921ba01d91c5598595dac9a9216e45cd6b175c22d7d859748304067d2097fae
STEP 1: FROM registry.access.redhat.com/ubi8/openjdk-11@sha256:7921ba01d91c5598595dac9a9216e45cd6b175c22d7d859748304067d2097fae
STEP 2: LABEL "io.openshift.build.image"="registry.access.redhat.com/ubi8/openjdk-11@sha256:7921ba01d91c5598595dac9a9216e45cd6b175c22d7d859748304067d2097fae" "io.openshift.build.source-location"="/tmp/build/inputs" "io.openshift.s2i.destination"="/tmp"
STEP 3: ENV OPENSHIFT_BUILD_NAME="atomic-fruit-service-4" OPENSHIFT_BUILD_NAMESPACE="fruit-service-postgresql-dev"
STEP 4: USER root
STEP 5: COPY upload/src /tmp/src
STEP 6: RUN chown -R 185:0 /tmp/src
STEP 7: USER 185
STEP 8: RUN /usr/local/s2i/assemble
INFO S2I source build with plain binaries detected
INFO Copying binaries from /tmp/src to /deployments ...
target/
target/atomic-fruit-service-1.0-SNAPSHOT-runner.jar
target/lib/
target/lib/antlr.antlr-2.7.7.jar
...
STEP 9: CMD /usr/local/s2i/run
STEP 10: COMMIT temp.builder.openshift.io/fruit-service-postgresql-dev/atomic-fruit-service-4:31d1775e
Getting image source signatures
...
Writing manifest to image destination
Storing signatures
--> cd6e79768eb
cd6e79768ebfd82c7617a115f558e16e9a8bd3a2eeb8be9ae605c8e48ef9b75f
Pushing image image-registry.openshift-image-registry.svc:5000/fruit-service-postgresql-dev/atomic-fruit-service:1.0-SNAPSHOT ...
Getting image source signatures
...
Writing manifest to image destination
Storing signatures
Successfully pushed image-registry.openshift-image-registry.svc:5000/fruit-service-postgresql-dev/atomic-fruit-service@sha256:497f0886ac1d9c5610ec20ea4f899e9617fb549eda28342905ff6369c5af4b2d
Push successful
----
You can now test the new image as we have done before!

= Tyranid Objectives
`Army ability`
The Hive Mind hungers only for biomass and genetic material.
It doesn't seek to hold ground or capture objectives.
'''
When you field a Tyranid army, apply these rules:
* The army does not use objectives, regardless of what scenario you play.
* After you destroy an enemy detachment, add its morale value to your army morale in the Rally phase.
.Related information
* xref:battles:army-morale.adoc[]
== Configuration
=== Applications integration
==== Maven dependency
Please add the following to `pom.xml` to introduce WebAuthn4J Spring Security and its dependencies.
[source,xml]
----
<properties>
...
<!-- Use the latest version whenever possible. -->
<webauthn4j-spring-security.version>0.7.0.RELEASE</webauthn4j-spring-security.version>
...
</properties>
<dependency>
<groupId>com.webauthn4j</groupId>
<artifactId>webauthn4j-spring-security-core</artifactId>
<version>${webauthn4j-spring-security.version}</version>
</dependency>
----
==== Java Config
WebAuthn4J Spring Security can be configured through the Spring Security Java Config DSL.
Please define the `SecurityFilterChain` bean as follows and apply the `WebAuthnLoginConfigurer` to the `HttpSecurity` bean.
Through `WebAuthnLoginConfigurer`, you can set various options of the `WebAuthnProcessingFilter`, Attestation options endpoint, and Assertion options endpoint.
[source,java]
----
@Configuration
public class WebSecurityConfig {
@Bean
public SecurityFilterChain filterChain(HttpSecurity http, AuthenticationManager authenticationManager) throws Exception {
// WebAuthn Login
http.apply(WebAuthnLoginConfigurer.webAuthnLogin())
.loginPage("/login")
.usernameParameter("username")
.passwordParameter("rawPassword")
.credentialIdParameter("credentialId")
.clientDataJSONParameter("clientDataJSON")
.authenticatorDataParameter("authenticatorData")
.signatureParameter("signature")
.clientExtensionsJSONParameter("clientExtensionsJSON")
.loginProcessingUrl("/login")
.rpId("example.com")
.attestationOptionsEndpoint()
.attestationOptionsProvider(attestationOptionsProvider)
.and()
.assertionOptionsEndpoint()
.assertionOptionsProvider(assertionOptionsProvider)
.and()
                .authenticationManager(authenticationManager);
        return http.build();
    }
}
----
===== Integrating WebAuthnAuthenticationProvider
`WebAuthnAuthenticationProvider`, an `AuthenticationProvider` for Web Authentication, needs to be defined as a bean.
If you set up two-step authentication combined with password authentication, you also need a Bean definition for `DaoAuthenticationProvider`.
[source,java]
----
@Bean
public WebAuthnAuthenticationProvider webAuthnAuthenticationProvider(WebAuthnAuthenticatorService authenticatorService, WebAuthnManager webAuthnManager){
return new WebAuthnAuthenticationProvider(authenticatorService, webAuthnManager);
}
@Bean
public DaoAuthenticationProvider daoAuthenticationProvider(UserDetailsService userDetailsService){
DaoAuthenticationProvider daoAuthenticationProvider = new DaoAuthenticationProvider();
daoAuthenticationProvider.setUserDetailsService(userDetailsService);
daoAuthenticationProvider.setPasswordEncoder(new BCryptPasswordEncoder());
return daoAuthenticationProvider;
}
@Bean
public AuthenticationManager authenticationManager(List<AuthenticationProvider> providers){
return new ProviderManager(providers);
}
----
==== Persistence layer integration
WebAuthn4J Spring Security looks up an authenticator through the `WebAuthnAuthenticatorService` interface.
Please provide an implementation of `WebAuthnAuthenticatorService` to the `WebAuthnAuthenticationProvider`.
With Java Config, it can be set through the constructor of `WebAuthnAuthenticationProviderConfigurer`.
=== Client interface
The W3C Web Authentication specification defines only the web browser JavaScript APIs. How a generated credential is sent to the server is up to the implementation.
==== WebAuthn authentication request processing
In WebAuthn4J Spring Security, `WebAuthnProcessingFilter` retrieves `credentialId`, `clientData`, `authenticatorData`, `signature`, and `clientExtensionsJSON` from the request sent to the login processing URL.
`credentialId`, `clientData`, `authenticatorData` and `signature` are binary data, so please send them as Base64 strings.
==== WebAuthn registration request processing
Unlike authentication request processing, no Servlet filter is provided for registration request processing,
because in most cases data other than the WebAuthn credential, such as the user's first name, last name, or email address, is sent at the same time.
While handling an authenticator registration process is basically the application's responsibility, WebAuthn4J Spring Security provides converters and validators to examine the received credential.
`Base64StringToCollectedClientDataConverter` converts a Base64 string to a `CollectedClientData`.
`Base64StringToAttestationObjectConverter` converts a Base64 string to an `AttestationObject`.
`WebAuthnRegistrationRequestValidator` validates an authenticator registration request.
==== Options endpoints
Web Authentication needs to obtain a challenge from the server prior to registration and authentication.
When using a FIDO-U2F token as an authentication device, the credential IDs associated with the user identified by the first authentication factor also need to be obtained from the server.
To retrieve these data, WebAuthn4J Spring Security offers `AttestationOptionsEndpointFilter` and `AssertionOptionsEndpointFilter`.
=== Customization
==== WebAuthnProcessingFilter
`WebAuthnProcessingFilter` retrieves `credentialId`, `clientData`, `authenticatorData`, `signature`, and `clientExtensionsJSON` from the request and build `WebAuthnAssertionAuthenticationToken`.
If `credentialId` does not exist, it retrieves `username` and `password` to build `UsernamePasswordAuthenticationToken`.
To change request parameter names, configure properties of `WebAuthnProcessingFilter` or corresponding Java Config method of `WebAuthnLoginConfigurer`.
==== WebAuthnAuthenticationProvider
`WebAuthnAuthenticationProvider` is an `AuthenticationProvider` implementation to process a `WebAuthnAssertionAuthenticationToken`.
For WebAuthn assertion verification, `WebAuthnManager` is used. See https://webauthn4j.github.io/webauthn4j/en/[WebAuthn4J reference] for more details of `WebAuthnManager`.
==== Attestation options endpoint, Assertion options endpoint
WebAuthn4J Spring Security provides `AttestationOptionsEndpointFilter` for WebAuthn JS Credential Creation API parameters serving, and `AssertionOptionsEndpointFilter` for WebAuthn JS Credential Get API parameter serving.
As the generation of these parameters is delegated through the `AttestationOptionsProvider` and `AssertionOptionsProvider` interfaces, it can be customized by implementing these interfaces.
These can be customized through Java Config. Method chains from `WebAuthnLoginConfigurer`'s `attestationOptionsEndpoint` method or `assertionOptionsEndpoint` method are the configuration points for that.
[source,java]
----
@Configuration
public class WebSecurityConfig {
@Bean
public SecurityFilterChain filterChain(HttpSecurity http, AuthenticationManager authenticationManager) throws Exception {
// WebAuthn Login
http.apply(WebAuthnLoginConfigurer.webAuthnLogin())
.rpId("example.com")
.attestationOptionsEndpoint()
.attestationOptionsProvider(attestationOptionsProvider)
.processingUrl("/webauthn/attestation/options")
.rp()
.name("example")
.and()
.pubKeyCredParams(
new PublicKeyCredentialParameters(PublicKeyCredentialType.PUBLIC_KEY, COSEAlgorithmIdentifier.ES256),
new PublicKeyCredentialParameters(PublicKeyCredentialType.PUBLIC_KEY, COSEAlgorithmIdentifier.RS1)
)
.authenticatorSelection()
.authenticatorAttachment(AuthenticatorAttachment.CROSS_PLATFORM)
.residentKey(ResidentKeyRequirement.PREFERRED)
.userVerification(UserVerificationRequirement.PREFERRED)
.and()
.attestation(AttestationConveyancePreference.DIRECT)
.extensions()
.credProps(true)
.uvm(true)
.and()
.assertionOptionsEndpoint()
.assertionOptionsProvider(assertionOptionsProvider)
.processingUrl("/webauthn/assertion/options")
.rpId("example.com")
.userVerification(UserVerificationRequirement.PREFERRED)
.and()
                .authenticationManager(authenticationManager);
        return http.build();
    }
}
----
===== Dynamic generation of PublicKeyCredentialUserEntity
The Attestation options endpoint can generate the `PublicKeyCredentialUserEntity` to be returned dynamically, based on the `Authentication` object associated with the login user.
To generate `PublicKeyCredentialUserEntity`, `PublicKeyCredentialUserEntityProvider` is provided.
With Java Config, it can be set in this way:

[source,java]
----
@Configuration
public class WebSecurityConfig {
@Bean
public SecurityFilterChain filterChain(HttpSecurity http, AuthenticationManager authenticationManager) throws Exception {
// WebAuthn Login
http.apply(WebAuthnLoginConfigurer.webAuthnLogin())
.attestationOptionsEndpoint()
.attestationOptionsProvider(attestationOptionsProvider)
.processingUrl("/webauthn/attestation/options")
                .user(new MyPublicKeyCredentialUserEntityProvider()); // put your PublicKeyCredentialUserEntityProvider implementation
        return http.build();
    }
}
----
If a `PublicKeyCredentialUserEntityProvider` is not set explicitly, WebAuthn4J Spring Security Java Config looks it up from the Spring application context.
Registering its bean to the application context is another way to set it.
==== Selecting authentication method
WebAuthn4J Spring Security supports "Password-less multi-factor authentication with a user-verifying authenticator", "Multi-factor authentication with password and authenticator" and "Single-factor authentication like password".
If you put value on ease of adoption, you may allow password authentication in your web system; if you give greater importance to security, you may restrict password authentication.
===== How to realize password authentication
To realize "Multi-factor authentication with password and authenticator" and "Single-factor authentication like password", configure not only `WebAuthnAuthenticationProvider` but also `DaoAuthenticationProvider` to process `UsernamePasswordAuthenticationToken`.
"Multi-factor authentication with password and authenticator" can be realized by including an additional authorization requirement that checks whether a user is authenticated by WebAuthn.
Whether a user is authenticated by WebAuthn can be checked with the `WebAuthnSecurityExpression#isWebAuthnAuthenticated` method.
Register a `WebAuthnSecurityExpression` bean and call it from Java Config. The WebAuthn4J Spring Security sample MPA is a good example of this.
=== Advanced topics
==== Distinction of a user in the middle of multi-factor authentication
When you need to show a different view based on the authentication level, one way is to switch the view based on the type of the current `Authentication` instance.
[source,java]
----
@RequestMapping(value = "/login", method = RequestMethod.GET)
public String login() {
Authentication authentication = SecurityContextHolder.getContext().getAuthentication();
if (authenticationTrustResolver.isAnonymous(authentication)) {
return VIEW_LOGIN_LOGIN;
} else {
return VIEW_LOGIN_AUTHENTICATOR_LOGIN;
}
}
----
==== Configuring a credential scope (rpId)
In the Web Authentication specification, the scope of a credential can be configured through the parameter named "rpId" while creating the credential, i.e. while registering the authenticator.
"rpId" accepts https://html.spec.whatwg.org/multipage/origin.html#concept-origin-effective-domain[effective domain].
For example, in the case where the domain of the site is `webauthn.example.com`, and `webauthn.example.com` is set to
`rpId`, the credential is only available in `webauthn.example.com` and its sub-domain, but if `example.com`
is set to `rpId`, the scope of the credential is relaxed to `example.com` and its sub-domain.
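The scoping rule can be expressed as a small predicate. The following is a language-agnostic illustration (sketched here in Python); the actual check is performed by the browser and the server-side library, not by code like this:

```python
def credential_in_scope(effective_domain: str, rp_id: str) -> bool:
    # A credential scoped to rp_id is usable on rp_id itself
    # and on any of its sub-domains.
    return effective_domain == rp_id or effective_domain.endswith("." + rp_id)
```

For example, a credential with `rpId` set to `example.com` is in scope for `webauthn.example.com`, but a credential scoped to `webauthn.example.com` is not usable on `example.com`.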
WebAuthn4J Spring Security supports `rpId` configuration through the `rpId` property of `ServerPropertyProviderImpl`, which can be configured through `WebAuthnConfigurer` in JavaConfig.
If you would like to change `rpId` dynamically based on request, set `RpIdProvider`.
==== Attestation statement verification
The Web Authentication specification allows the relying party to retrieve an attestation statement from an authenticator if it is requested during authenticator registration.
By verifying the attestation statement, the relying party can exclude authenticators that do not conform to its security requirements.
Note that the attestation statement contains information that can be used to track users across web sites, so requesting an attestation statement unnecessarily is discouraged.
Note also that browsers show an additional dialog to confirm the user's consent, which lowers usability.
Except for enterprise applications that require strict verification of authenticators, most sites should not request attestation statements.
`WebAuthnRegistrationContextValidator` from WebAuthn4J validates an authenticator registration request, and it delegates attestation statement signature validation and trustworthiness validation to the `WebAuthnManager` and a
`CertPathTrustworthinessValidator` interface implementation, respectively.
`WebAuthnRegistrationContextValidator.createNonStrictRegistrationContextValidator` factory method can create the
`WebAuthnRegistrationContextValidator` instance that contains `AttestationStatementValidator` and
`CertPathTrustworthinessValidator` configured for web sites not requiring strict attestation verification.
==== TrustAnchorProvider using Spring Resource
While validating an authenticator attestation certificate path on registration,
the `TrustAnchorCertPathTrustworthinessValidator` class uses a `TrustAnchor` retrieved through a `TrustAnchorProvider` interface implementation.
WebAuthn4J Spring Security offers the `KeyStoreResourceTrustAnchorProvider` class, which retrieves a
`TrustAnchor` from a Java KeyStore file loaded as a Spring `Resource`.
| 52.850365 | 261 | 0.776811 |
cc0b9ddd1e789ad483d1c7dc39b551d9e046ee3d | 190 | adoc | AsciiDoc | antora/components/tutorials/modules/petclinic/partials/skinparam.adoc | opencirclesolutions/isis | 4ae407f7d1cb41c56f111a591d8280a9127370b8 | [
"Apache-2.0"
] | 1 | 2022-03-09T01:57:07.000Z | 2022-03-09T01:57:07.000Z | antora/components/tutorials/modules/petclinic/partials/skinparam.adoc | opencirclesolutions/isis | 4ae407f7d1cb41c56f111a591d8280a9127370b8 | [
"Apache-2.0"
] | 25 | 2021-12-15T05:24:54.000Z | 2022-03-31T05:25:56.000Z | antora/components/tutorials/modules/petclinic/partials/skinparam.adoc | pjfanning/isis | c02d8a04ebdeedd85163aebf8f944835dc97a2e2 | [
"Apache-2.0"
] | null | null | null |
hide empty members
hide methods
skinparam class {
BackgroundColor<<desc>> Cyan
BackgroundColor<<ppt>> LightGreen
BackgroundColor<<mi>> LightPink
BackgroundColor<<role>> LightYellow
}
| 15.833333 | 36 | 0.778947 |
70cac981a21cfdd1747da62f4f8c30b190a9037d | 2,652 | adoc | AsciiDoc | modules/ROOT/pages/troubleshoot-query-logs.adoc | mulesoft/docs-anypoint-datagraph | b8782bd5cde85492df5ae5a5d5813e5a24577b35 | [
"BSD-3-Clause"
] | null | null | null | modules/ROOT/pages/troubleshoot-query-logs.adoc | mulesoft/docs-anypoint-datagraph | b8782bd5cde85492df5ae5a5d5813e5a24577b35 | [
"BSD-3-Clause"
] | 3 | 2021-07-15T14:18:21.000Z | 2022-02-01T19:24:43.000Z | modules/ROOT/pages/troubleshoot-query-logs.adoc | mulesoft/docs-anypoint-datagraph | b8782bd5cde85492df5ae5a5d5813e5a24577b35 | [
"BSD-3-Clause"
] | 3 | 2021-06-10T18:03:12.000Z | 2022-02-25T14:06:30.000Z | = Troubleshoot Queries With Response Logs
Anypoint DataGraph provides response logs that contain useful query troubleshooting information.
Response logs contain information about only Anypoint DataGraph queries. Anypoint DataGraph stores up to 100 MB or up to 30 days of log data, whichever limit is reached first.
[NOTE]
--
If you have a Titanium subscription, the xref:monitoring::performance-and-impact.adoc#titanium-subscription-limits[Titanium limits for logs] apply.
--
== Log Levels for Anypoint DataGraph Response Logs
Log levels for Anypoint DataGraph include DEBUG, INFO, WARN, and ERROR. Log levels are incremental, and each level contains the following information:
[%header%autowidth.spread]
|===
|Log level |Description |Levels included
|DEBUG |Logs information for request headers and request queries |All levels
|INFO |Logs information for request queries, response times, and the underlying API URLs requested per field in queries |INFO, WARN, and ERROR
|WARN |Lists warning messages |WARN and ERROR
|ERROR |Lists error messages on invalid queries, timeouts, authentication failures, and payload errors |ERROR
|===
By default, the log level for each environment is set to INFO. You can change this setting at any time. Changes to the log level persist for _all future queries to that Anypoint DataGraph instance_, regardless of the user.
== View Response Logs for a Query
You must have xref:permissions.adoc[the Operate or Admin permission] to view and search response logs.
If you are a Titanium subscriber, you can also view response logs in xref:monitoring::logs.adoc[Anypoint Monitoring].
To view response logs in Anypoint DataGraph:
. Write and run a query.
. From the actions menu (*...*), select *View Response Logs*.
+
Response logs for that query are displayed:
+
image::datagraph-qsg-response-logs.png[Query response logs page]
== Search Response Logs
Search for logs that contain specified values or search for logs by date and priority.
The *Date & Time* filter enables you to search logs by specifying a date range using the following values:
* Last hour
* Last 24 hours
* Last week
* Last month
You can also filter searches by priority level:
* All priorities
* INFO
* DEBUG
* WARN
* ERROR
To search existing logs:
. Enter a value in the search box, or click *Advanced* to search for logs by a specified time, day, date range, and message priority.
+
image::search-logs.png[Search fields for query response logs ]
. Click *Apply*.
== Additional Resources
* xref:troubleshoot-query-traces.adoc[Troubleshoot Query Performance with Query Tracing]
* xref:query-unified-schema.adoc[Query the Unified Schema]
| 36.833333 | 222 | 0.782805 |
d52ae5de6921c898a3187886d4696d78e0455223 | 984 | adoc | AsciiDoc | docs/modules/guides/pages/gem-path.adoc | aanno/asciidoctorj | 5c68b618909b25550a1c02e2d168e4da03938338 | [
"Apache-2.0"
] | 516 | 2015-01-03T11:01:21.000Z | 2022-03-29T05:34:31.000Z | docs/modules/guides/pages/gem-path.adoc | aanno/asciidoctorj | 5c68b618909b25550a1c02e2d168e4da03938338 | [
"Apache-2.0"
] | 695 | 2015-01-02T01:52:58.000Z | 2022-02-14T08:45:55.000Z | docs/modules/guides/pages/gem-path.adoc | aanno/asciidoctorj | 5c68b618909b25550a1c02e2d168e4da03938338 | [
"Apache-2.0"
] | 167 | 2015-01-12T15:16:31.000Z | 2021-12-19T00:18:28.000Z | = Loading External Gems with GEM_PATH
By default, AsciidoctorJ comes with all required gems bundled within the jar.
But in some circumstances, such as xref:run-in-osgi.adoc[OSGi] environments, you may need to store gems in an external folder and have them loaded by AsciidoctorJ.
As the Java interface `org.asciidoctor.Asciidoctor` and its factory `org.asciidoctor.Asciidoctor.Factory` are agnostic to JRuby, there are the interface `org.asciidoctor.jruby.AsciidoctorJRuby` and its factory `org.asciidoctor.jruby.AsciidoctorJRuby.Factory` that allow getting an Asciidoctor instance using JRuby with a certain GEM_PATH.
Note that `org.asciidoctor.jruby.AsciidoctorJRuby` directly extends `org.asciidoctor.Asciidoctor`.
[source,java]
.Example of setting GEM_PATH
----
import static org.asciidoctor.jruby.AsciidoctorJRuby.Factory.create;
import org.asciidoctor.Asciidoctor;
Asciidoctor asciidoctor = create("/my/gem/path"); // <1>
----
<1> Creates an `Asciidoctor` instance with given `GEM_PATH` location.
| 54.666667 | 324 | 0.804878 |
9b77bb514941848869389b4b14aa75b3ae67a23d | 1,738 | adoc | AsciiDoc | docs/hop-user-manual/modules/ROOT/pages/pipeline/transforms/creditcardvalidator.adoc | arihaco/incubator-hop | 606b7cf0e73176816d0622ccd61732eb3e94f225 | [
"Apache-2.0"
] | null | null | null | docs/hop-user-manual/modules/ROOT/pages/pipeline/transforms/creditcardvalidator.adoc | arihaco/incubator-hop | 606b7cf0e73176816d0622ccd61732eb3e94f225 | [
"Apache-2.0"
] | null | null | null | docs/hop-user-manual/modules/ROOT/pages/pipeline/transforms/creditcardvalidator.adoc | arihaco/incubator-hop | 606b7cf0e73176816d0622ccd61732eb3e94f225 | [
"Apache-2.0"
] | null | null | null | ////
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
////
:documentationPath: /pipeline/transforms/
:language: en_US
= Credit card validator
== Description
The Credit card validator transform will help you check the following:
* The validity of a credit card number. This uses a LUHN10 (MOD-10) algorithm.
* The credit card vendor that handles the number: VISA, MasterCard, Diners Club, EnRoute, American Express (AMEX),...
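The transform's actual Java implementation is not shown here, but the LUHN10 (MOD-10) check it refers to can be sketched as follows (an illustrative Python version, not Hop's code):

```python
def luhn_valid(number: str) -> bool:
    """Return True if the digits pass the LUHN10 (MOD-10) checksum."""
    digits = [int(c) for c in number if c.isdigit()]
    total = 0
    # Walk from the rightmost digit; double every second digit,
    # subtracting 9 whenever the doubled value exceeds 9.
    for position, digit in enumerate(reversed(digits)):
        if position % 2 == 1:
            digit *= 2
            if digit > 9:
                digit -= 9
        total += digit
    return len(digits) > 0 and total % 10 == 0
```

For example, the common test number 4111111111111111 passes the check, while changing its last digit makes it fail.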
== Options
* transform name: the transform name, unique in a pipeline
* Credit card field: the name of the input field that will contain the credit card number during execution
* Get only digits? : Enable this option if you want to strip all non-numeric characters from the (String) input field
* Output Fields
** Result fieldname: the name of the (Boolean) output field indicating the validity of the number
** Credit card type field: the name of the output field that will hold the credit card type (vendor)
** Not valid message: the name of the output field that will hold the error message.
| 43.45 | 117 | 0.778481 |
8159909fe86ded637fb2666df3188027dd119fc0 | 3,995 | adoc | AsciiDoc | docs/notes.adoc | kspurgin/emendate | d2e294179249a738a871389a95110d77df999bb3 | [
"MIT"
] | 1 | 2021-02-04T17:05:12.000Z | 2021-02-04T17:05:12.000Z | docs/notes.adoc | kspurgin/emendate | d2e294179249a738a871389a95110d77df999bb3 | [
"MIT"
] | 2 | 2021-02-15T16:51:27.000Z | 2021-02-16T16:25:56.000Z | docs/notes.adoc | kspurgin/emendate | d2e294179249a738a871389a95110d77df999bb3 | [
"MIT"
] | null | null | null | = Development notes
Notes on things still to be implemented, things to fix, etc.
== To-do
=== option for handling output of EDTF when only year or year/month are known
==== one configurable option for month level
Example: 2004
If option is `none` (default), output `2004`
If option is `unspecified_digits`, output `2004-XX`
If option is a number (say 4), output `2004-04`
==== one configurable option for day level
Example: April 2004
If option is `none` (default), output `2004-04`
If option is `unspecified_digits`, output `2004-04-XX`
If option is a number (say 7), output `2004-04-07`
==== interactions
If month is none, only none is valid for day. Any other value for day will be ignored.
Otherwise, you can set them independently:
Example: 2004
month: unspecified_digits, day: 17 = 2004-XX-17
month: 10, day: unspecified_digits = 2004-10-XX
== Notes
Islandora has no built-in date functionality. It uses the pre-parsed values in MODS (I7) or Drupal fields (I8), so we can basically do whatever with the date parsing.
=== ISO8601 and BCE
From https://en.wikipedia.org/wiki/ISO_8601#Years
ISO 8601 prescribes, as a minimum, a four-digit year [YYYY] to avoid the year 2000 problem. It therefore represents years from 0000 to 9999, year 0000 being equal to 1 BC and all others AD. However, years prior to 1583 are not automatically allowed by the standard. Instead "values in the range [0000] through [1582] shall only be used by mutual agreement of the partners in information interchange."
=== Early/mid/late season
to do
=== Early/mid/late month
to do
=== Early/mid/late decade
Examples: Early 1990s, mid-1990s, late 1990s
There is no standard agreement anywhere about which years of the decade constitute early, mid, and late. Preferences for breaking this up include 3-4-3 and 4-2-4.
CollectionSpace's date parser handles this as follows:
early 1990s = 1990-1993
mid 1990s = 1994-1996
late 1990s = 1997-1999
This is different than what TimeTwister returns:
early 1990s = 1990-1995
mid 1990s = 1993-1998
late 1990s = 1995-1999
For a cohesive user experience between migration/batch import and use of CollectionSpace UI, we need to do what CS does.
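In other words, CS applies a 3-4-3 split to the decade, which can be pinned down in a quick sketch (illustrative Python, not CS's or Emendate's implementation):

```python
# CollectionSpace-style 3-4-3 split of a decade into early/mid/late.
DECADE_OFFSETS = {"early": (0, 3), "mid": (4, 6), "late": (7, 9)}

def decade_range(qualifier: str, decade_start: int) -> tuple[int, int]:
    # decade_start is the first year of the decade, e.g. 1990 for "the 1990s".
    lo, hi = DECADE_OFFSETS[qualifier]
    return decade_start + lo, decade_start + hi
```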
=== Early/mid/late year
Examples: Early 2020, mid-2020, late 2020
In Islandora we'll have to feed it pre-parsed values in MODS or Drupal fields.
CollectionSpace parses these as follows, so we will go with that as the requirement:
early 2020 = 2020-01-01 to 2020-04-30
mid 2020 = 2020-05-01 to 2020-08-31
late 2020 = 2020-09-01 to 2020-12-31
=== Seasons (textual)
Go with what CS does.
*Winter 2020*
CS = 2020-01-01 - 2020-03-31
TT = 2020-01-01 - 2020-03-20
*Spring 2020*
CS = 2020-04-01 - 2020-06-30
Timetwister = 2020-03-20 - 2002-06-21
*Summer 2020*
CS = 2020-07-01 - 2020-09-30
TT = 2020-06-21 - 2020-09-23
*Fall 2020*
CS = 2020-10-01 - 2020-12-31
TT = 2020-09-23 - 2020-12-22
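The CS mappings above line up with calendar quarters of the given year, which can be captured in a small table (illustrative Python only):

```python
# CollectionSpace maps textual seasons onto calendar quarters of the given year.
SEASON_QUARTERS = {
    "winter": ((1, 1), (3, 31)),
    "spring": ((4, 1), (6, 30)),
    "summer": ((7, 1), (9, 30)),
    "fall": ((10, 1), (12, 31)),
}

def season_range(season: str, year: int):
    # Returns ((year, month, day), (year, month, day)) tuples for the season.
    (start_m, start_d), (end_m, end_d) = SEASON_QUARTERS[season.lower()]
    return (year, start_m, start_d), (year, end_m, end_d)
```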
=== Before/after dates
Example: before 1750
Since CollectionSpace is museum oriented, it's possible we need to support *really* old dates.
Cspace only parses a date like this into the latest date. Earliest/single date is nil. So, initially we will just return a single date value (not an inclusive range) (i.e. 1750-01-01), with "before" certainty value.
Example: after 1750
Since the latest date is TODAY, we have an end point and can return the inclusive range. Certainty "after" is assigned to the given date. Certainty "before" is assigned to the current date.
=== Centuries
example: 19th century
CS = 1801-01-01 - 1900-12-31
TT = 1800-01-01 - 1899-12-31
Because of the difference in years used in setting ranges, I'm going to go with CS and not compare what early/mid/late values are set.
`early/mid/late 18th century`
named, early = 1701-01-01 - 1734-12-31
named, mid = 1734-01-01 - 1767-12-31
named, late = 1767-01-01 - 1800-12-31
`early/mid/late 1900s or 19XX`
other, early = 1900-01-01 - 1933-12-31
other, mid = 1933-01-01 - 1966-12-31
other, late = 1966-01-01 - 1999-12-31
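For a bare century, the CS convention can be pinned down as (illustrative Python):

```python
def century_range(n: int) -> tuple[int, int]:
    # CollectionSpace convention: the nth century runs
    # from year (n - 1) * 100 + 1 through year n * 100.
    return (n - 1) * 100 + 1, n * 100
```

So `century_range(19)` yields 1801 through 1900, matching the CS behavior for "19th century".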
| 25.125786 | 400 | 0.732916 |
635d28abc7b3423ef44441488922bd27be558169 | 846 | adoc | AsciiDoc | README.adoc | IanDarwin/ssl_redirectory | be103357af8e19a4d561fdbce8f44291b1f57336 | [
"BSD-2-Clause"
] | 1 | 2015-11-20T22:59:31.000Z | 2015-11-20T22:59:31.000Z | README.adoc | IanDarwin/ssl_redirectory | be103357af8e19a4d561fdbce8f44291b1f57336 | [
"BSD-2-Clause"
] | null | null | null | README.adoc | IanDarwin/ssl_redirectory | be103357af8e19a4d561fdbce8f44291b1f57336 | [
"BSD-2-Clause"
] | null | null | null | = ssl_redirector
General use case: you run a Java EE server and need to redirect from HTTP to HTTPS with the same site name.
== Usage
On JBoss Wildfly, change the hostname in jboss-web.xml.
On other servers, configure and install as needed.
== History
My specific use case: Interim support of running an old site on new server with haproxy
(using haproxy as an SSL-enabling reverse proxy).
Redirect just this one site from HTTP to HTTPS (the app server runs many sites; I can't redirect them all
to SSL because many of them don't have anything that needs security, and don't have SSL certs), and
I can't let the old Seam2 webapp itself do the redirect because
it's behind haproxy, so it would see the result as HTTP, redirect to SSL,
see the result as HTTP again...
Recursion: n, see Recursion.
== Bugs
There is probably a better way.
| 32.538462 | 107 | 0.763593 |
0766a22e0c31444e507f55217f88269fb884030d | 5,036 | adoc | AsciiDoc | docs/modules/ROOT/pages/working-with-rdds.adoc | couchbase/couchbase-spark-connector | 28cbdcddf4ecabafb63011dc85e6c9cecd1bed96 | [
"Apache-2.0"
] | 60 | 2015-11-04T16:13:33.000Z | 2022-01-12T15:01:25.000Z | docs/modules/ROOT/pages/working-with-rdds.adoc | couchbase/couchbase-spark-connector | 28cbdcddf4ecabafb63011dc85e6c9cecd1bed96 | [
"Apache-2.0"
] | 19 | 2015-11-30T05:39:51.000Z | 2021-09-09T12:58:23.000Z | docs/modules/ROOT/pages/working-with-rdds.adoc | couchbase/couchbase-spark-connector | 28cbdcddf4ecabafb63011dc85e6c9cecd1bed96 | [
"Apache-2.0"
] | 45 | 2015-11-11T20:34:41.000Z | 2021-07-22T23:07:20.000Z | = Working With RDDs
:page-topic-type: concept
[abstract]
Spark operates on resilient distributed datasets (RDDs). Higher-level concepts like DataFrames and Datasets are increasingly the primary means of access, but RDDs are still very useful to understand.
When you need to extract data out of Couchbase, the Couchbase Spark connector creates RDDs for you. In addition, you can also persist data to Couchbase using RDDs.
The following spark context is configured to work on the `travel-sample` bucket and can be used to follow the examples. Please configure your connectionString, username and password accordingly.
[source,scala]
----
include::example$WorkingWithRDDs.scala[tag=context,indent=0]
----
All RDD operations operate on the `SparkContext`, so the following import needs to be present before the APIs can be used:
[source,scala]
----
include::example$WorkingWithRDDs.scala[tag=import,indent=0]
----
Many arguments and return types are provided directly from the Couchbase Scala SDK (e.g. `GetResult` and `GetOptions`). This is intentional, since it allows the most flexibility when interacting with the SDK. These types are not discussed in detail here; please refer to the official SDK documentation for more information.
== Creating RDDs
The following read operations are available:
[cols="1,1"]
|===
| API |Description
|`couchbaseGet`
| Fetches full documents.
|`couchbaseLookupIn`
| Fetches parts of documents ("subdocument API").
|`couchbaseQuery`
| Performs a N1QL query.
|`couchbaseAnalyticsQuery`
| Performs an analytics query.
|`couchbaseSearchQuery`
| Performs a search query.
|===
Writing APIs are also available on the context:
[cols="1,1"]
|===
| API |Description
|`couchbaseUpsert`
| Stores documents with upsert semantics.
|`couchbaseReplace`
| Stores documents with replace semantics.
|`couchbaseInsert`
| Stores documents with insert semantics.
|`couchbaseRemove`
| Removes documents.
|`couchbaseMutateIn`
| Mutates parts of documents ("subdocument" API)
|`couchbaseQuery`
| Performs a N1QL query.
|===
Note that `couchbaseQuery` is present twice, since you can execute DML statements through it as well as regular SELECTs.
The following example shows how to fetch two documents and prints their content:
[source,scala]
----
include::example$WorkingWithRDDs.scala[tag=get,indent=0]
----
Each API takes a required `Seq[T]`, where `T` depends on the operation being used. The case classes are named after the operation type and allow specifying more parameters than just the document ID where needed.
As an example, for a `couchbaseReplace` the case class signature looks like this:
[source,scala]
----
case class Replace[T](id: String, content: T, cas: Long = 0)
----
So for each entry in the `Seq`, not only can you specify the id and content of the document, but also (optionally) the CAS value to perform an optimistic-locking operation.
A N1QL query can be performed like this:
[source,scala]
----
include::example$WorkingWithRDDs.scala[tag=query,indent=0]
----
In addition to the required parameter(s), optional information can also be passed along. Each operation allows specifying its equivalent option block (so for a `couchbaseGet` the `GetOptions` can be supplied). Also, a generic `Keyspace` can be provided, which allows overriding the implicit defaults from the configuration.
A Keyspace looks like this:
[source,scala]
----
case class Keyspace(
bucket: Option[String] = None,
scope: Option[String] = None,
collection: Option[String] = None
)
----
And you can use it to provide a custom bucket, scope or collection on a per-operation basis.
== Persisting RDDs
While reading operations on the `SparkContext` are common, writing documents to Couchbase at the RDD level usually operates on already existing RDDs.
The following functions are available on an RDD:
[cols="1,1,1"]
|===
| API | Type | Description
|`couchbaseUpsert`
|`RDD[Upsert[_]]`
| Stores documents with upsert semantics.
|`couchbaseReplace`
|`RDD[Replace[_]]`
| Stores documents with replace semantics.
|`couchbaseInsert`
|`RDD[Insert[_]]`
| Stores documents with insert semantics.
|`couchbaseRemove`
|`RDD[Remove]`
| Removes documents.
|`couchbaseMutateIn`
|`RDD[MutateIn]`
| Mutates parts of documents ("subdocument" API)
|===
It is important to understand that those APIs are only available if the RDD has the correct type. The following example illustrates this.
[source,scala]
----
include::example$WorkingWithRDDs.scala[tag=upsert,indent=0]
----
It first loads two documents from the travel-sample bucket and returns an `RDD[GetResult]`. The objective is to store those two documents in the `targetBucket`.
As a next step, inside the map function, an `Upsert` case class is constructed which takes the document ID and content. This type is then passed to the `couchbaseUpsert` function which executes the operation. Note that a custom `keyspace` is also passed, which overrides the default implicit one and therefore allows writing the data to a different bucket.
=== Using Asynchronous Tasks with Timeouts
by Tomas Svarovsky
==== Problem
You would like to build on top of the SDK but you would like to have more control over asynchronous tasks.
==== Solution
There are numerous tasks on the GoodData API that potentially take more than just a couple of seconds to execute. These include report executions, data loads, exports, clones and others.
These tasks are implemented in the SDK so that they block: execution continues only when the task finishes (with either success or error) or the server-side time limit is reached and the task is killed.
Sometimes it is useful to be able to specify the time limit on the client side. This might be useful for cases where you need to make sure that something either finishes under a certain time threshold or you have to take some other action (such as notifying a customer). The limit you would like to use is different from the server-side limit of the GoodData APIs.
You can implement it like this:
[source,ruby]
----
# encoding: utf-8
require 'gooddata'
client = GoodData.connect
project = client.projects('project_id')
report = project.reports(1234)
begin
  puts report.execute(time_limit: 10)
rescue GoodData::ExecutionLimitExceeded => e
  puts "Unfortunately #{report.title} execution did not finish in 10 seconds"
  raise e
end
----
== [[StreamingGlobalLimitStrategy]] StreamingGlobalLimitStrategy Execution Planning Strategy
`StreamingGlobalLimitStrategy` is an execution planning strategy that can plan streaming queries with `ReturnAnswer` and `Limit` logical operators (over streaming queries) with the <<outputMode, Append>> output mode to <<spark-sql-streaming-StreamingGlobalLimitExec.adoc#, StreamingGlobalLimitExec>> physical operator.
TIP: Read up on https://jaceklaskowski.gitbooks.io/mastering-spark-sql/spark-sql-SparkStrategy.html[Execution Planning Strategies] in https://bit.ly/spark-sql-internals[The Internals of Spark SQL] book.
`StreamingGlobalLimitStrategy` is used (and <<creating-instance, created>>) exclusively when <<spark-sql-streaming-IncrementalExecution.adoc#, IncrementalExecution>> is requested to plan a streaming query.
[[creating-instance]][[outputMode]]
`StreamingGlobalLimitStrategy` takes a single <<spark-sql-streaming-OutputMode.adoc#, OutputMode>> to be created (which is the <<spark-sql-streaming-IncrementalExecution.adoc#outputMode, OutputMode>> of the <<spark-sql-streaming-IncrementalExecution.adoc#, IncrementalExecution>>).
=== [[demo]] Demo: Using StreamingGlobalLimitStrategy
[source, scala]
----
FIXME
----
// Module included in the following assemblies:
//
// * operators/admin/olm-creating-policy.adoc
[id="olm-policy-fine-grained-permissions_{context}"]
= Fine-grained permissions
OLM uses the service account specified in OperatorGroup to create or update the
following resources related to the Operator being installed:
* ClusterServiceVersion
* Subscription
* Secret
* ServiceAccount
* Service
* ClusterRole and ClusterRoleBinding
* Role and RoleBinding
In order to confine Operators to a designated namespace, cluster administrators
can start by granting the following permissions to the service account:
[NOTE]
====
The following role is a generic example and additional rules might be required
based on the specific Operator.
====
[source,yaml]
----
kind: Role
rules:
- apiGroups: ["operators.coreos.com"]
  resources: ["subscriptions", "clusterserviceversions"]
  verbs: ["get", "create", "update", "patch"]
- apiGroups: [""]
  resources: ["services", "serviceaccounts"]
  verbs: ["get", "create", "update", "patch"]
- apiGroups: ["rbac.authorization.k8s.io"]
  resources: ["roles", "rolebindings"]
  verbs: ["get", "create", "update", "patch"]
- apiGroups: ["apps"] <1>
  resources: ["deployments"]
  verbs: ["list", "watch", "get", "create", "update", "patch", "delete"]
- apiGroups: [""] <1>
  resources: ["pods"]
  verbs: ["list", "watch", "get", "create", "update", "patch", "delete"]
----
<1> Add permissions to create other resources, such as Deployments and pods shown
here.
In addition, if any Operator specifies a pull secret, the following permissions
must also be added:
[source,yaml]
----
kind: ClusterRole <1>
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]
---
kind: Role
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create", "update", "patch"]
----
<1> Required to get the secret from the OLM namespace.
= Spring Cloud Data Flow Server for OpenShift
Donovan Muller
:doctype: book
:toc:
:toclevels: 4
:source-highlighter: prettify
:numbered:
:icons: font
:hide-uri-scheme:
:attributes: allow-uri-read
:scdf-core-version: {dataflow-project-version}
:scdf-server-kubernetes-version: {spring-cloud-dataflow-version}
:scdf-server-openshift-version: v{project-version}
:stream-starters-bacon-release-version: 1.2.0.RELEASE
:scdf-server-openshift-asciidoc: https://github.com/donovanmuller/spring-cloud-dataflow-server-openshift/raw/master/spring-cloud-dataflow-server-openshift-docs/src/main/asciidoc
// ======================================================================================
include::introduction.adoc[]
include::overview.adoc[]
include::features.adoc[]
include::getting-started.adoc[]
include::configuration.adoc[]
include::server.adoc[]
include::application.adoc[]
include::deployment.adoc[]
include::howto.adoc[]
// ======================================================================================
:doctype: book
:pdf-folio-placement: physical
:media: prepress
:title-logo-image: image::images/Met_RGB_Horisontal_Engelsk.png[pdfwidth=10cm,align=right]
= Data Management Handbook template for MET and partners in S-ENDA written in asciidoc
Nina E. Larsgård, Elodie Fernandez, Morten W. Hansen, ...
:sectnums:
:sectnumlevels: 6
:sectanchors:
:toc: macro
:chapter-label:
:xrefstyle: short
:toclevels: 6
[discrete]
== Abstract
Abstract will come here..
toc::[]
[discrete]
== Revision history
[cols=",,,",]
|=======================================================================
|Version |Date |Comment |Responsible
|2.0 |2021-??-?? |New version based on original MET DMH
|Nina E. Larsgård, Elodie Fernandez, Morten W. Hansen, Matteo De Stefano ...
|=======================================================================
:numbered:
include::introduction.adoc[]
:numbered:
include::{special_intro}[]
:numbered:
include::structure_and_documenting.adoc[]
:numbered:
include::{special_strudoc}[]
:numbered:
include::data_services.adoc[]
:numbered:
include::{special_data_services}[]
:numbered:
include::user_portals.adoc[]
:numbered:
include::{special_user_portals}[]
:numbered:
include::data_governance.adoc[]
:numbered:
include::{special_data_governance}[]
:numbered:
include::acdd_elements.adoc[]
:numbered!:
include::acknowledgements.adoc[]
:numbered!:
include::glossary.adoc[]
:numbered!:
include::acronyms.adoc[]
:numbered!:
include::appendixA.adoc[]
:numbered!:
include::appendixB.adoc[]
= Cassandra Quarkus - Integration tests
This module hosts integration tests for the Cassandra Quarkus extension.
It contains the following modules:
1. application: the application classes (no tests);
2. default: the main suite of integration tests;
3. metrics-microprofile: specific tests for metrics with MicroProfile;
4. metrics-disabled: specific tests for disabled metrics.
IMPORTANT: Integration tests in submodules of this module are executed as part of the Quarkus
Platform builds. For this reason they need to be deployed to Maven Central. Any changes in the
integration tests modules should be reflected in
https://github.com/quarkusio/quarkus-platform/blob/main/integration-tests/cassandra/invoked/root/pom.xml[this
Quarkus Platform POM].
== Running integration tests
To run the integration tests with regular packaging, simply execute:
    mvn clean verify
To run the integration tests in native mode:
    mvn clean verify -Dnative
Native mode requires that you point the environment variable `GRAALVM_HOME` to a valid GraalVM
installation root; also, Graal's `native-image` executable must have been previously installed with
`gu install native-image`.
When native mode is on, the build takes considerably longer to finish.
https://github.com/synthetichealth/synthea[]
https://github.com/smart-on-fhir/sample-patients[]
https://sb-fhir-stu3.smarthealthit.org/smartstu3/open[]
http://fhirtest.uhn.ca[]
[[req_core_http]]
[requirement]
====
[%metadata]
label:: /req/core/http
The server SHALL conform to <<rfc2616,HTTP 1.1>>.
If the server supports HTTPS, the server SHALL also conform to
<<rfc2818,HTTP over TLS>>.
==== | 18.25 | 62 | 0.69863 |
b9ddb8fe62e2d5222402f8ae38b0525eeb9cb548 | 2,395 | adoc | AsciiDoc | _posts/2018-06-27-Running-cockpit-on-a-Satellite-Capsule-Foreman-Proxy.adoc | wzzrd/hubpress.io | 240164e68772a7dd2c5bf7e3c0d1eb1c6d600544 | [
"MIT"
] | 1 | 2017-02-15T12:19:53.000Z | 2017-02-15T12:19:53.000Z | _posts/2018-06-27-Running-cockpit-on-a-Satellite-Capsule-Foreman-Proxy.adoc | wzzrd/hubpress.io | 240164e68772a7dd2c5bf7e3c0d1eb1c6d600544 | [
"MIT"
] | null | null | null | _posts/2018-06-27-Running-cockpit-on-a-Satellite-Capsule-Foreman-Proxy.adoc | wzzrd/hubpress.io | 240164e68772a7dd2c5bf7e3c0d1eb1c6d600544 | [
"MIT"
] | null | null | null | = Running cockpit on a Satellite Capsule / Foreman Proxy
:published_at: 2018-06-27
:hp-tags: Satellite, Capsule, Cockpit, SSL, Certificates, FreeIPA, IdM, Foreman
:source-highlighter: highlightjs
When I bought my new workstation - those Ryzen beasts are FAST! - I decided I would build my new lab properly, and with properly, I meant with proper SSL certificates for all apps.
So I set up Red Hat IdM on a simple VM, and then imported the CA into my browser. From that point on, all of my lab VMs are new running with a certificate signed by that CA. Nice green lock icons in my browser. Yay!
I've built a RHV cluster, a Satellite 6 machine, with two Capsules, an Ansible Tower node, and more infrastructure, just to play with, all with proper certificates signed by my IdM CA.
That last hurdle was to setup Cockpit on all my VMs, and use a proper certificate for that as well.
For 'normal' VMs, that fairly easy, but my Capsules (and the Satellite itself) have processes that try to bind to the same port as Cockpit.
The workaround is simple and solid, and I'm documenting it here for posterity:
First, you install the Cockpit software itself:
[source,bash]
----
yum -y install cockpit
----
But because Cockpit needs to bind on another port, we will override it's unit file:
[source,bash]
----
mkdir -p /usr/lib/systemd/system/cockpit.socket
cat << EOF > /usr/lib/systemd/system/cockpit.socket/10-port.conf
[Socket]
# need to reset the list of ListenStreams first, else it becomes a list that still includes 9090
ListenStream=
ListenStream=10090
EOF
systemctl daemon-reload
----
We need to open a port for Cockpit to be reachable from the outside:
[source,bash]
----
# cockpit cannot run on 9090 on a capsule, because there's a capsule process there already
firewall-cmd --add-port=10090/tcp --permanent
firewall-cmd --reload
----
And tell SELinux to actually allow Cockpit to bind to that port, too:
[source,bash]
----
semanage port -a -t websm_port_t -p tcp 10090
----
Finally, we'll use the Capsules certificate and private key (I stored them in /etc/capscerts) to create a single file that Cockpit will use:
[source,bash]
----
cd /etc/capscerts
cat caps.crt >> /etc/cockpit/ws-certs.d/caps.cert
cat caps.key >> /etc/cockpit/ws-certs.d/caps.cert
----
Finally, we restart the Cockpit socket
[source,bash]
----
systemctl restart cockpit.socket
----
And we're done!
| 34.214286 | 215 | 0.749478 |
= CSS editor in Eclipse
When I develop, I like being able to use my Java IDE for the web part as well.
Unfortunately, the verdict on the features provided by WTP is not very good.
== The WTP project's CSS editor
=== A quick assessment
I have a number of "grievances". The CSS editor does not allow taking into account ...
So last September I set about patching the CSS editor provided by the Eclipse WTP project a bit.
Clearly there is a lot of work to do, and I tried to make small patches to improve the product's usability.
Some of them were merged recently.
Here is the list of changes you will find in the next Eclipse release:
Link: https://git.eclipse.org/r/#/q/project:sourceediting/webtools.sourceediting+status:merged+owner:gautier.desaintmartinlacaze%2540gmail.com
[[unicode]]
=== Unicode and UTF-8
[small]#Rfr: link:++http://www.fltk.org/doc-1.3/group__fl__unicode.html++[Unicode and UTF-8 functions]. +
See also Lua's native http://www.lua.org/manual/5.3/manual.html#6.5[UTF-8 Support].#
* _string_ = *fl.utf_tolower*(_string_) +
_string_ = *fl.utf_toupper*(_string_)
* _-1|0|+1_ = *fl.utf_strcasecmp*(_string~1~_, _string~2~_)
////
//@@TODO
* *fl.* ( )
* *fl.* (__) +
* *fl.* ( ) +
-> __
boolean
////
| 18.625 | 105 | 0.61745 |
---
layout: default
---
= Page Title
:showtitle:
Hello, AsciiDoc!
= Remote Cache Administration
**Authors:** Wolf Fink +
**Technologies:** Infinispan, Hot Rod, Java +
**Summary:** Use Hot Rod Java clients to remotely create and administer caches
on Infinispan servers.
. Open `infinispan.xml` for editing under `server/conf` in the server installation directory.
. Add the following cache configuration to create a template:
+
----
<distributed-cache-configuration name="MyDistCachecConfig" mode="ASYNC">
  <state-transfer await-initial-transfer="false"/>
</distributed-cache-configuration>
----
+
. Build and run the Hot Rod Java client:
+
----
$ mvn -s ../maven-settings.xml clean package
$ mvn -s ../maven-settings.xml exec:exec
----
////
Licensed to the Apache Software Foundation (ASF) under one or more
contributor license agreements. See the NOTICE file distributed with
this work for additional information regarding copyright ownership.
The ASF licenses this file to You under the Apache License, Version 2.0
(the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
////
[[graphcomputer]]
= The GraphComputer
image:graphcomputer-puffers.png[width=350,float=right] TinkerPop provides two primary means of interacting with a
graph: link:http://en.wikipedia.org/wiki/Online_transaction_processing[online transaction processing] (OLTP) and
link:http://en.wikipedia.org/wiki/Online_analytical_processing[online analytical processing] (OLAP). OLTP-based
graph systems allow the user to query the graph in real-time. However, typically, real-time performance is only
possible when a local traversal is enacted. A local traversal is one that starts at a particular vertex (or small set
of vertices) and touches a small set of connected vertices (by any arbitrary path of arbitrary length). In short, OLTP
queries interact with a limited set of data and respond on the order of milliseconds or seconds. On the other hand,
with OLAP graph processing, the entire graph is processed and thus, every vertex and edge is analyzed (sometimes
more than once for iterative, recursive algorithms). Due to the amount of data being processed, the results are
typically not returned in real-time and for massive graphs (i.e. graphs represented across a cluster of machines),
results can take on the order of minutes or hours.
* *OLTP*: real-time, limited data accessed, random data access, sequential processing, querying
* *OLAP*: long running, entire data set accessed, sequential data access, parallel processing, batch processing
image::oltp-vs-olap.png[width=600]
The image above demonstrates the difference between Gremlin OLTP and Gremlin OLAP. With Gremlin OLTP, the graph is
walked by moving from vertex-to-vertex via incident edges. With Gremlin OLAP, all vertices are provided a
`VertexProgram`. The programs send messages to one another with the topological structure of the graph acting as the
communication network (though random message passing is possible). In many respects, the messages passed are like
the OLTP traversers moving from vertex-to-vertex. However, all messages move independently of one another, in
parallel. Once a vertex program has finished computing, TinkerPop's OLAP engine supports any number of
link:http://en.wikipedia.org/wiki/MapReduce[`MapReduce`] jobs over the resultant graph.
IMPORTANT: `GraphComputer` was designed from the start to be used within a multi-JVM, distributed environment --
in other words, a multi-machine compute cluster. As such, all the computing objects must be able to be migrated
between JVMs. The pattern promoted is to store state information in a `Configuration` object to later be regenerated
by a loading process. It is important to realize that `VertexProgram`, `MapReduce`, and numerous particular instances
rely heavily on the state of the computing classes (not the structure, but the processes) to be stored in a
`Configuration`.
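The promoted pattern can be sketched with a plain key/value map standing in for the `Configuration`. This is an illustration only: the `ConfigurableProgram` class and its key names are invented and are not part of the TinkerPop API (which uses Apache Commons Configuration objects):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the "state lives in a Configuration" pattern. The program writes
// everything it needs into a key/value configuration, which can be shipped to
// another JVM and used there to rebuild an equivalent instance.
public class ConfigurableProgram {
    int iterations;
    String property;

    void storeState(Map<String, String> config) {
        config.put("program.iterations", Integer.toString(iterations));
        config.put("program.property", property);
    }

    static ConfigurableProgram loadState(Map<String, String> config) {
        ConfigurableProgram p = new ConfigurableProgram();
        p.iterations = Integer.parseInt(config.get("program.iterations"));
        p.property = config.get("program.property");
        return p;
    }

    public static void main(String[] args) {
        ConfigurableProgram original = new ConfigurableProgram();
        original.iterations = 30;
        original.property = "pageRank";
        Map<String, String> config = new HashMap<>(); // stands in for a Configuration
        original.storeState(config);
        // In a cluster, only the config travels; the worker rebuilds the program.
        ConfigurableProgram rebuilt = ConfigurableProgram.loadState(config);
        System.out.println(rebuilt.iterations + " " + rebuilt.property); // 30 pageRank
    }
}
```

Because only the configuration is serialized, the computing classes themselves stay free of non-portable runtime state.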
[[vertexprogram]]
== VertexProgram
image:bsp-diagram.png[width=400,float=right] GraphComputer takes a `VertexProgram`. A VertexProgram can be thought of
as a piece of code that is executed at each vertex in a logically parallel manner until some termination condition is
met (e.g. a number of iterations have occurred, no more data is changing in the graph, etc.). A submitted
`VertexProgram` is copied to all the workers in the graph. A worker is not an explicit concept in the API, but is
assumed of all `GraphComputer` implementations. At minimum each vertex is a worker (though this would be inefficient
due to the fact that each vertex would maintain a VertexProgram). In practice, the workers partition the vertex set
and are responsible for the execution of the VertexProgram over all the vertices within their sphere of influence.
The workers orchestrate the execution of the `VertexProgram.execute()` method on all their vertices in an
link:http://en.wikipedia.org/wiki/Bulk_synchronous_parallel[bulk synchronous parallel] (BSP) fashion. The vertices
are able to communicate with one another via messages. There are two kinds of messages in Gremlin OLAP:
`MessageScope.Local` and `MessageScope.Global`. A local message is a message to an adjacent vertex. A global
message is a message to any arbitrary vertex in the graph. Once the VertexProgram has completed its execution,
any number of `MapReduce` jobs are evaluated. MapReduce jobs are provided by the user via `GraphComputer.mapReduce()`
or by the `VertexProgram` via `VertexProgram.getMapReducers()`.
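As an illustration of the worker loop described above (and explicitly not the TinkerPop API), the following toy Java program simulates BSP message passing for PageRank on a three-vertex graph: in each superstep every vertex sends `rank/outDegree` along its outgoing edges, the analogue of a `MessageScope.Local` message, and all vertices then update from their received messages in lockstep:

```java
import java.util.*;

// Toy bulk-synchronous-parallel PageRank. Each "superstep" every vertex sends
// rank/outDegree along its outgoing edges, then all vertices update from the
// messages they received. Not the TinkerPop API; illustration only.
public class ToyBspPageRank {
    public static Map<Integer, Double> run(Map<Integer, List<Integer>> adj, int iterations, double alpha) {
        int n = adj.size();
        Map<Integer, Double> rank = new HashMap<>();
        for (Integer v : adj.keySet()) rank.put(v, 1.0 / n);
        for (int i = 0; i < iterations; i++) {
            Map<Integer, Double> incoming = new HashMap<>(); // message "mailboxes"
            for (Integer v : adj.keySet()) incoming.put(v, 0.0);
            for (Map.Entry<Integer, List<Integer>> e : adj.entrySet()) {
                double share = rank.get(e.getKey()) / e.getValue().size();
                for (Integer dst : e.getValue())
                    incoming.merge(dst, share, Double::sum); // local message to neighbor
            }
            for (Integer v : adj.keySet()) // synchronized update = end of superstep
                rank.put(v, (1 - alpha) / n + alpha * incoming.get(v));
        }
        return rank;
    }

    public static void main(String[] args) {
        Map<Integer, List<Integer>> adj = new HashMap<>();
        adj.put(1, Arrays.asList(2, 3));
        adj.put(2, Arrays.asList(3));
        adj.put(3, Arrays.asList(1));
        System.out.println(run(adj, 30, 0.85)); // vertex 3 ends up most central
    }
}
```

The real engine distributes the vertices across workers and machines; the lockstep update between supersteps is exactly what the bulk synchronous parallel barrier guarantees.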
image::graphcomputer.png[width=500]
The example below demonstrates how to submit a VertexProgram to a graph's GraphComputer. `GraphComputer.submit()`
yields a `Future<ComputerResult>`. The `ComputerResult` has the resultant computed graph which can be a full copy
of the original graph (see <<hadoop-gremlin,Hadoop-Gremlin>>) or a view over the original graph (see
<<tinkergraph-gremlin,TinkerGraph>>). The ComputerResult also provides access to computational side-effects called `Memory`
(which includes, for example, runtime, number of iterations, results of MapReduce jobs, and VertexProgram-specific
memory manipulations).
[gremlin-groovy,modern]
----
result = graph.compute().program(PageRankVertexProgram.build().create()).submit().get()
result.memory().runtime
g = result.graph().traversal()
g.V().valueMap()
----
NOTE: This model of "vertex-centric graph computing" was made popular by Google's
link:http://googleresearch.blogspot.com/2009/06/large-scale-graph-computing-at-google.html[Pregel] graph engine.
In the open source world, this model is found in OLAP graph computing systems such as link:https://giraph.apache.org/[Giraph],
and link:https://hama.apache.org/[Hama]. TinkerPop extends the
popularized model with integrated post-processing <<mapreduce,MapReduce>> jobs over the vertex set.
[[mapreduce]]
== MapReduce
The BSP model proposed by Pregel stores the results of the computation in a distributed manner as properties on the
elements in the graph. In many situations, it is necessary to aggregate those resultant properties into a single
result set (i.e. a statistic). For instance, assume a VertexProgram that computes a nominal cluster for each vertex
(i.e. link:http://en.wikipedia.org/wiki/Community_structure[a graph clustering algorithm]). At the end of the
computation, each vertex will have a property denoting the cluster it was assigned to. TinkerPop provides the
ability to answer global questions about the clusters. For instance, in order to answer the following questions,
`MapReduce` jobs are required:
* How many vertices are in each cluster? (*presented below*)
* How many unique clusters are there? (*presented below*)
* What is the average age of each vertex in each cluster?
* What is the degree distribution of the vertices in each cluster?
A compressed representation of the `MapReduce` API in TinkerPop is provided below. The key idea is that the
`map`-stage processes all vertices to emit key/value pairs. Those values are aggregated on their respective key
for the `reduce`-stage to do its processing to ultimately yield more key/value pairs.
[source,java]
public interface MapReduce<MK, MV, RK, RV, R> {
    public void map(final Vertex vertex, final MapEmitter<MK, MV> emitter);
    public void reduce(final MK key, final Iterator<MV> values, final ReduceEmitter<RK, RV> emitter);
    // there are more methods
}
IMPORTANT: The vertex that is passed into the `MapReduce.map()` method does not contain edges. The vertex only
contains original and computed vertex properties. This reduces the amount of data required to be loaded and ensures
that MapReduce is used for post-processing computed results. All edge-based computing should be accomplished in the
`VertexProgram`.
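The map/group/reduce flow can be sketched in plain Java, independent of the TinkerPop API (the class below is invented for illustration and mirrors what a cluster-population aggregation computes): the map stage emits a `(cluster, 1)` pair per vertex, the pairs are grouped by key, and the reduce stage sums each group:

```java
import java.util.*;

// Plain-Java sketch of the map/group/reduce flow: count how many vertices
// fall into each cluster, given a precomputed "cluster" property per vertex.
public class ClusterPopulationSketch {
    public static Map<Integer, Integer> clusterPopulation(Map<String, Integer> vertexCluster) {
        // map stage: emit (cluster, 1) per vertex, grouped by key
        Map<Integer, List<Integer>> grouped = new HashMap<>();
        for (Integer cluster : vertexCluster.values())
            grouped.computeIfAbsent(cluster, k -> new ArrayList<>()).add(1);
        // reduce stage: sum the emitted values for each key
        Map<Integer, Integer> populations = new HashMap<>();
        for (Map.Entry<Integer, List<Integer>> e : grouped.entrySet()) {
            int sum = 0;
            for (int one : e.getValue()) sum += one;
            populations.put(e.getKey(), sum);
        }
        return populations;
    }

    public static void main(String[] args) {
        Map<String, Integer> vertexCluster = new HashMap<>();
        vertexCluster.put("marko", 1);
        vertexCluster.put("vadas", 1);
        vertexCluster.put("josh", 1);
        vertexCluster.put("peter", 6);
        System.out.println(clusterPopulation(vertexCluster)); // {1=3, 6=1}
    }
}
```

In the real API the incoming `Vertex` carries the computed cluster property, and the emitters hand the key/value pairs to the framework instead of local maps.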
image:mapreduce.png[width=650]
The `MapReduce` extension to GraphComputer is made explicit when examining the
<<peerpressurevertexprogram,`PeerPressureVertexProgram`>> and corresponding `ClusterPopulationMapReduce`.
In the code below, the GraphComputer result returns the resultant computed `Graph` as well as the `Memory` of the
computation (`ComputerResult`). The memory maintains the results of any MapReduce jobs. The cluster population
MapReduce result states that there are 5 vertices in cluster 1 and 1 vertex in cluster 6. This can be verified
(in a serial manner) by looking at the `PeerPressureVertexProgram.CLUSTER` property of the resultant graph. Notice
that the property is "hidden" unless it is directly accessed via name.
[gremlin-groovy,modern]
----
graph = TinkerFactory.createModern()
result = graph.compute().program(PeerPressureVertexProgram.build().create()).mapReduce(ClusterPopulationMapReduce.build().create()).submit().get()
result.memory().get('clusterPopulation')
g = result.graph().traversal()
g.V().values(PeerPressureVertexProgram.CLUSTER).groupCount().next()
g.V().valueMap()
----
If numerous statistics are desired, then it is possible to register as many MapReduce jobs as needed. For
instance, the `ClusterCountMapReduce` determines how many unique clusters were created by the peer pressure algorithm.
Below both `ClusterCountMapReduce` and `ClusterPopulationMapReduce` are computed over the resultant graph.
[gremlin-groovy,modern]
----
result = graph.compute().program(PeerPressureVertexProgram.build().create()).
         mapReduce(ClusterPopulationMapReduce.build().create()).
         mapReduce(ClusterCountMapReduce.build().create()).submit().get()
result.memory().clusterPopulation
result.memory().clusterCount
----
IMPORTANT: The MapReduce model of TinkerPop does not support MapReduce chaining. Thus, the order in which the
MapReduce jobs are executed is irrelevant. This is made apparent when realizing that the `map()`-stage takes a
`Vertex` as its input and the `reduce()`-stage yields key/value pairs. Thus, the results of `reduce()` cannot be fed back
into a `map()`.
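Outside of TinkerPop, the two-stage contract can be sketched in plain Java (a hypothetical in-memory job; none of these names are TinkerPop API): the map stage emits a `(clusterId, 1)` pair per vertex and the reduce stage sums the values per key, which mirrors what `ClusterPopulationMapReduce` computes.

```java
import java.util.*;
import java.util.stream.*;

public class MapReduceSketch {
    public static void main(String[] args) {
        // "vertices" carrying a computed cluster property (illustrative data)
        List<String> clusterIds = Arrays.asList("c1", "c1", "c6", "c1", "c1", "c1");

        // map stage: each vertex emits (clusterId, 1);
        // reduce stage: values for the same key are summed
        Map<String, Long> clusterPopulation = clusterIds.stream()
                .collect(Collectors.groupingBy(id -> id, Collectors.counting()));

        System.out.println(clusterPopulation); // population per cluster, e.g. c1 -> 5, c6 -> 1
    }
}
```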
== A Collection of VertexPrograms
TinkerPop provides a collection of VertexPrograms that implement common algorithms. This section discusses the various
implementations.
IMPORTANT: The vertex programs presented are what are provided as of TinkerPop x.y.z. Over time, with future releases,
more algorithms will be added.
[[pagerankvertexprogram]]
=== PageRankVertexProgram
image:gremlin-pagerank.png[width=400,float=right] link:http://en.wikipedia.org/wiki/PageRank[PageRank] is perhaps the
most popular OLAP-oriented graph algorithm. This link:http://en.wikipedia.org/wiki/Centrality[eigenvector centrality]
variant was developed by Brin and Page of Google. PageRank defines a centrality value for all vertices in the graph,
where centrality is defined recursively: a vertex is central if it is connected to central vertices. PageRank is
an iterative algorithm that converges to a link:http://en.wikipedia.org/wiki/Ergodicity[steady state distribution]. If
the pageRank values are normalized to 1.0, then the pageRank value of a vertex is the probability that a random walker
will be seen at that vertex in the graph at any arbitrary moment in time. To help developers understand the
methods of a `VertexProgram`, the PageRankVertexProgram code is analyzed below.
[source,java]
----
public class PageRankVertexProgram implements VertexProgram<Double> { <1>
public static final String PAGE_RANK = "gremlin.pageRankVertexProgram.pageRank";
private static final String EDGE_COUNT = "gremlin.pageRankVertexProgram.edgeCount";
private static final String PROPERTY = "gremlin.pageRankVertexProgram.property";
private static final String VERTEX_COUNT = "gremlin.pageRankVertexProgram.vertexCount";
private static final String ALPHA = "gremlin.pageRankVertexProgram.alpha";
private static final String EPSILON = "gremlin.pageRankVertexProgram.epsilon";
private static final String MAX_ITERATIONS = "gremlin.pageRankVertexProgram.maxIterations";
private static final String EDGE_TRAVERSAL = "gremlin.pageRankVertexProgram.edgeTraversal";
private static final String INITIAL_RANK_TRAVERSAL = "gremlin.pageRankVertexProgram.initialRankTraversal";
private static final String TELEPORTATION_ENERGY = "gremlin.pageRankVertexProgram.teleportationEnergy";
private static final String CONVERGENCE_ERROR = "gremlin.pageRankVertexProgram.convergenceError";
private MessageScope.Local<Double> incidentMessageScope = MessageScope.Local.of(__::outE); <2>
private MessageScope.Local<Double> countMessageScope = MessageScope.Local.of(new MessageScope.Local.ReverseTraversalSupplier(this.incidentMessageScope));
private PureTraversal<Vertex, Edge> edgeTraversal = null;
private PureTraversal<Vertex, ? extends Number> initialRankTraversal = null;
private double alpha = 0.85d;
private double epsilon = 0.00001d;
private int maxIterations = 20;
private String property = PAGE_RANK; <3>
private Set<VertexComputeKey> vertexComputeKeys;
private Set<MemoryComputeKey> memoryComputeKeys;
private PageRankVertexProgram() {
}
@Override
public void loadState(final Graph graph, final Configuration configuration) { <4>
if (configuration.containsKey(INITIAL_RANK_TRAVERSAL))
this.initialRankTraversal = PureTraversal.loadState(configuration, INITIAL_RANK_TRAVERSAL, graph);
if (configuration.containsKey(EDGE_TRAVERSAL)) {
this.edgeTraversal = PureTraversal.loadState(configuration, EDGE_TRAVERSAL, graph);
this.incidentMessageScope = MessageScope.Local.of(() -> this.edgeTraversal.get().clone());
this.countMessageScope = MessageScope.Local.of(new MessageScope.Local.ReverseTraversalSupplier(this.incidentMessageScope));
}
this.alpha = configuration.getDouble(ALPHA, this.alpha);
this.epsilon = configuration.getDouble(EPSILON, this.epsilon);
this.maxIterations = configuration.getInt(MAX_ITERATIONS, 20);
this.property = configuration.getString(PROPERTY, PAGE_RANK);
this.vertexComputeKeys = new HashSet<>(Arrays.asList(
VertexComputeKey.of(this.property, false),
VertexComputeKey.of(EDGE_COUNT, true))); <5>
this.memoryComputeKeys = new HashSet<>(Arrays.asList(
MemoryComputeKey.of(TELEPORTATION_ENERGY, Operator.sum, true, true),
MemoryComputeKey.of(VERTEX_COUNT, Operator.sum, true, true),
MemoryComputeKey.of(CONVERGENCE_ERROR, Operator.sum, false, true)));
}
@Override
public void storeState(final Configuration configuration) {
VertexProgram.super.storeState(configuration);
configuration.setProperty(ALPHA, this.alpha);
configuration.setProperty(EPSILON, this.epsilon);
configuration.setProperty(PROPERTY, this.property);
configuration.setProperty(MAX_ITERATIONS, this.maxIterations);
if (null != this.edgeTraversal)
this.edgeTraversal.storeState(configuration, EDGE_TRAVERSAL);
if (null != this.initialRankTraversal)
this.initialRankTraversal.storeState(configuration, INITIAL_RANK_TRAVERSAL);
}
@Override
public GraphComputer.ResultGraph getPreferredResultGraph() {
return GraphComputer.ResultGraph.NEW;
}
@Override
public GraphComputer.Persist getPreferredPersist() {
return GraphComputer.Persist.VERTEX_PROPERTIES;
}
@Override
public Set<VertexComputeKey> getVertexComputeKeys() { <6>
return this.vertexComputeKeys;
}
@Override
public Optional<MessageCombiner<Double>> getMessageCombiner() {
return (Optional) PageRankMessageCombiner.instance();
}
@Override
public Set<MemoryComputeKey> getMemoryComputeKeys() {
return this.memoryComputeKeys;
}
@Override
public Set<MessageScope> getMessageScopes(final Memory memory) {
final Set<MessageScope> set = new HashSet<>();
set.add(memory.isInitialIteration() ? this.countMessageScope : this.incidentMessageScope);
return set;
}
@Override
public PageRankVertexProgram clone() {
try {
final PageRankVertexProgram clone = (PageRankVertexProgram) super.clone();
if (null != this.initialRankTraversal)
clone.initialRankTraversal = this.initialRankTraversal.clone();
return clone;
} catch (final CloneNotSupportedException e) {
throw new IllegalStateException(e.getMessage(), e);
}
}
@Override
public void setup(final Memory memory) {
memory.set(TELEPORTATION_ENERGY, null == this.initialRankTraversal ? 1.0d : 0.0d);
memory.set(VERTEX_COUNT, 0.0d);
memory.set(CONVERGENCE_ERROR, 1.0d);
}
@Override
public void execute(final Vertex vertex, Messenger<Double> messenger, final Memory memory) { <7>
if (memory.isInitialIteration()) {
messenger.sendMessage(this.countMessageScope, 1.0d); <8>
memory.add(VERTEX_COUNT, 1.0d);
} else {
final double vertexCount = memory.<Double>get(VERTEX_COUNT);
final double edgeCount;
double pageRank;
if (1 == memory.getIteration()) {
edgeCount = IteratorUtils.reduce(messenger.receiveMessages(), 0.0d, (a, b) -> a + b);
vertex.property(VertexProperty.Cardinality.single, EDGE_COUNT, edgeCount);
pageRank = null == this.initialRankTraversal ?
0.0d :
TraversalUtil.apply(vertex, this.initialRankTraversal.get()).doubleValue(); <9>
} else {
edgeCount = vertex.value(EDGE_COUNT);
pageRank = IteratorUtils.reduce(messenger.receiveMessages(), 0.0d, (a, b) -> a + b); <10>
}
//////////////////////////
final double teleportationEnergy = memory.get(TELEPORTATION_ENERGY);
if (teleportationEnergy > 0.0d) {
final double localTerminalEnergy = teleportationEnergy / vertexCount;
pageRank = pageRank + localTerminalEnergy;
memory.add(TELEPORTATION_ENERGY, -localTerminalEnergy);
}
final double previousPageRank = vertex.<Double>property(this.property).orElse(0.0d);
memory.add(CONVERGENCE_ERROR, Math.abs(pageRank - previousPageRank));
vertex.property(VertexProperty.Cardinality.single, this.property, pageRank);
memory.add(TELEPORTATION_ENERGY, (1.0d - this.alpha) * pageRank);
pageRank = this.alpha * pageRank;
if (edgeCount > 0.0d)
messenger.sendMessage(this.incidentMessageScope, pageRank / edgeCount);
else
memory.add(TELEPORTATION_ENERGY, pageRank);
}
}
@Override
public boolean terminate(final Memory memory) { <11>
boolean terminate = memory.<Double>get(CONVERGENCE_ERROR) < this.epsilon || memory.getIteration() >= this.maxIterations;
memory.set(CONVERGENCE_ERROR, 0.0d);
return terminate;
}
@Override
public String toString() {
return StringFactory.vertexProgramString(this, "alpha=" + this.alpha + ", epsilon=" + this.epsilon + ", iterations=" + this.maxIterations);
}
}
----
<1> `PageRankVertexProgram` implements `VertexProgram<Double>` because the messages it sends are Java doubles.
<2> The default path of energy propagation is via outgoing edges from the current vertex.
<3> The resulting PageRank values for the vertices are stored as a vertex property.
<4> A vertex program is constructed using an Apache `Configuration` to ensure easy dissemination across a cluster of JVMs.
<5> `EDGE_COUNT` is a transient "scratch data" compute key while `PAGE_RANK` is not.
<6> A vertex program must define the "compute keys" that are the properties being operated on during the computation.
<7> The "while"-loop of the vertex program.
<8> In order to determine how to distribute the energy to neighbors, a "1"-count is used to determine how many incident vertices exist for the `MessageScope`.
<9> Initially, each vertex is provided an equal amount of energy represented as a double.
<10> Energy is aggregated, computed on according to the PageRank algorithm, and then disseminated according to the defined `MessageScope.Local`.
<11> The computation is terminated after epsilon-convergence is met or a pre-defined number of iterations have taken place.
The above `PageRankVertexProgram` is used as follows.
[gremlin-groovy,modern]
----
result = graph.compute().program(PageRankVertexProgram.build().create()).submit().get()
result.memory().runtime
g = result.graph().traversal()
g.V().valueMap()
----
Note that `GraphTraversal` provides a <<pagerank-step,`pageRank()`>>-step.
[gremlin-groovy,modern]
----
g = graph.traversal().withComputer()
g.V().pageRank().valueMap()
g.V().pageRank().by('pageRank').times(5).order().by('pageRank').valueMap()
----
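Stripped of the message-passing machinery, the update rule at the heart of the program -- each vertex keeps `(1 - alpha)/N` of teleportation energy plus `alpha` times the energy received along incident edges, iterated until epsilon-convergence -- can be sketched in plain Java on a tiny hard-coded graph (illustrative only; TinkerPop executes this as distributed message passing):

```java
import java.util.*;

public class PageRankSketch {
    public static void main(String[] args) {
        // tiny directed graph: vertex -> out-neighbors (illustrative data)
        Map<Integer, int[]> out = new HashMap<>();
        out.put(0, new int[]{1, 2});
        out.put(1, new int[]{2});
        out.put(2, new int[]{0});

        int n = out.size();
        double alpha = 0.85, epsilon = 0.00001;
        double[] rank = new double[n];
        Arrays.fill(rank, 1.0 / n); // every vertex starts with equal energy

        double error = 1.0;
        while (error > epsilon) {
            double[] next = new double[n];
            Arrays.fill(next, (1.0 - alpha) / n); // teleportation energy
            for (Map.Entry<Integer, int[]> e : out.entrySet())
                for (int v : e.getValue()) // propagate energy along out-edges
                    next[v] += alpha * rank[e.getKey()] / e.getValue().length;
            error = 0.0; // epsilon-convergence check, as in terminate()
            for (int i = 0; i < n; i++) error += Math.abs(next[i] - rank[i]);
            rank = next;
        }
        System.out.println(Arrays.toString(rank)); // ranks sum to ~1.0
    }
}
```

Every vertex in this toy graph has out-edges; a sink vertex would instead return its energy to the teleportation pool, which is what the `else` branch of `execute()` above does.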
[[peerpressurevertexprogram]]
=== PeerPressureVertexProgram
The `PeerPressureVertexProgram` is a clustering algorithm that assigns a nominal value to each vertex in the graph.
The nominal value represents the vertex's cluster. If two vertices have the same nominal value, then they are in the
same cluster. The algorithm proceeds in the following manner.
. Every vertex assigns itself to a unique cluster ID (initially, its vertex ID).
. Every vertex determines its per neighbor vote strength as 1.0d / incident edges count.
. Every vertex sends its cluster ID and vote strength to its adjacent vertices as a `Pair<Serializable,Double>`.
. Every vertex generates a vote energy distribution of received cluster IDs and changes its current cluster ID to the most frequent cluster ID.
.. If there is a tie, then the cluster with the lowest `toString()` comparison is selected.
. Steps 3 and 4 repeat until either a max number of iterations has occurred or no vertex has adjusted its cluster anymore.
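Under stated assumptions (a tiny hard-coded undirected graph, synchronous updates, plain Java rather than the TinkerPop API), the voting loop above can be sketched as:

```java
import java.util.*;

public class PeerPressureSketch {
    public static void main(String[] args) {
        // two undirected triangles (illustrative data)
        Map<Integer, List<Integer>> adj = new HashMap<>();
        adj.put(0, Arrays.asList(1, 2));
        adj.put(1, Arrays.asList(0, 2));
        adj.put(2, Arrays.asList(0, 1));
        adj.put(3, Arrays.asList(4, 5));
        adj.put(4, Arrays.asList(3, 5));
        adj.put(5, Arrays.asList(3, 4));

        // step 1: every vertex starts in its own cluster (its id)
        Map<Integer, Integer> cluster = new HashMap<>();
        for (int v : adj.keySet()) cluster.put(v, v);

        boolean changed = true;
        for (int it = 0; changed && it < 30; it++) { // max-iterations guard
            changed = false;
            Map<Integer, Integer> next = new HashMap<>(cluster);
            for (int v : adj.keySet()) {
                // steps 2-3: each neighbor votes for its cluster with strength 1/degree
                Map<Integer, Double> votes = new HashMap<>();
                for (int nb : adj.get(v))
                    votes.merge(cluster.get(nb), 1.0 / adj.get(nb).size(), Double::sum);
                // step 4: adopt the most voted cluster; ties go to the lowest id
                int best = votes.entrySet().stream()
                        .max(Comparator.<Map.Entry<Integer, Double>>comparingDouble(Map.Entry::getValue)
                                .thenComparing(e -> -e.getKey()))
                        .get().getKey();
                if (best != next.get(v).intValue()) { next.put(v, best); changed = true; }
            }
            cluster = next;
        }
        System.out.println(cluster); // {0=0, 1=0, 2=0, 3=3, 4=3, 5=3}
    }
}
```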
Note that `GraphTraversal` provides a <<peerpressure-step,`peerPressure()`>>-step.
[gremlin-groovy,modern]
----
g = graph.traversal().withComputer()
g.V().peerPressure().by('cluster').valueMap()
g.V().peerPressure().by(outE('knows')).by('cluster').valueMap()
----
[[bulkdumpervertexprogram]]
[[clonevertexprogram]]
=== CloneVertexProgram
The `CloneVertexProgram` (known in versions prior to 3.2.10 as `BulkDumperVertexProgram`) copies a whole graph from
any graph `InputFormat` to any graph `OutputFormat`. TinkerPop provides the following:
* `OutputFormat`
** `GraphSONOutputFormat`
** `GryoOutputFormat`
** `ScriptOutputFormat`
* `InputFormat`
** `GraphSONInputFormat`
** `GryoInputFormat`
** `ScriptInputFormat`
An <<clonevertexprogramusingspark,example>> is provided in the SparkGraphComputer section.
Graph Providers should consider writing their own `OutputFormat` and `InputFormat` which would allow bulk loading and
export capabilities through this `VertexProgram`. This topic is discussed further in the
link:http://tinkerpop.apache.org/docs/x.y.z/dev/provider/#bulk-import-export[Provider Documentation].
[[traversalvertexprogram]]
=== TraversalVertexProgram
image:traversal-vertex-program.png[width=250,float=left] The `TraversalVertexProgram` is a "special" VertexProgram in
that it can be executed via a `Traversal` and a `GraphComputer`. In Gremlin, it is possible to have
the same traversal executed using either the standard OLTP-engine or the `GraphComputer` OLAP-engine. The difference
being where the traversal is submitted.
NOTE: This model of graph traversal in a BSP system was first implemented by the
link:https://github.com/thinkaurelius/faunus/wiki[Faunus] graph analytics engine and originally described in
link:http://markorodriguez.com/2011/04/19/local-and-distributed-traversal-engines/[Local and Distributed Traversal Engines].
[gremlin-groovy,modern]
----
g = graph.traversal()
g.V().both().hasLabel('person').values('age').groupCount().next() // OLTP
g = graph.traversal().withComputer()
g.V().both().hasLabel('person').values('age').groupCount().next() // OLAP
----
image::olap-traversal.png[width=650]
In the OLAP example above, a `TraversalVertexProgram` is (logically) sent to each vertex in the graph. Each instance
evaluation requires (logically) 5 BSP iterations and each iteration is interpreted as such:
. `g.V()`: Put a traverser on each vertex in the graph.
. `both()`: Propagate each traverser to the vertices `both`-adjacent to its current vertex.
. `hasLabel('person')`: If the vertex is not a person, kill the traversers at that vertex.
. `values('age')`: Have all the traversers reference the integer age of their current vertex.
. `groupCount()`: Count how many times a particular age has been seen.
While 5 iterations were presented, in fact, `TraversalVertexProgram` will execute the traversal in only
2 iterations. The reason is that `g.V().both()` and `hasLabel('person').values('age').groupCount()` can be
executed in a single iteration as any message sent would simply be to the current executing vertex. Thus, a simple optimization
exists in Gremlin OLAP called "reflexive message passing" which simulates non-message-passing BSP iterations within a
single BSP iteration.
The same OLAP traversal can be executed using the standard `graph.compute()` model, though at the expense of verbosity.
`TraversalVertexProgram` provides a fluent `Builder` for constructing a `TraversalVertexProgram`. The specified
`traversal()` can be either a direct `Traversal` object or a
link:http://en.wikipedia.org/wiki/Scripting_for_the_Java_Platform[JSR-223] script that will generate a
`Traversal`. There is no benefit to using the model below. It is demonstrated to help elucidate how Gremlin OLAP traversals
are ultimately compiled for execution on a `GraphComputer`.
[gremlin-groovy,modern]
----
result = graph.compute().program(TraversalVertexProgram.build().traversal(g.V().both().hasLabel('person').values('age').groupCount('a')).create()).submit().get()
result.memory().a
result.memory().iteration
result.memory().runtime
----
[[distributed-gremlin-gotchas]]
==== Distributed Gremlin Gotchas
Gremlin OLTP is not identical to Gremlin OLAP.
IMPORTANT: There are two primary theoretical differences between Gremlin OLTP and Gremlin OLAP. First, Gremlin OLTP
(via `Traversal`) leverages a link:http://en.wikipedia.org/wiki/Depth-first_search[depth-first] execution engine.
Depth-first execution has a limited memory footprint due to link:http://en.wikipedia.org/wiki/Lazy_evaluation[lazy evaluation].
On the other hand, Gremlin OLAP (via `TraversalVertexProgram`) leverages a
link:http://en.wikipedia.org/wiki/Breadth-first_search[breadth-first] execution engine which maintains a larger memory
footprint, but a better time complexity due to vertex-local traversers being able to be "bulked." The second difference
is that Gremlin OLTP is executed in a serial/streaming fashion, while Gremlin OLAP is executed in a parallel/step-wise fashion. These two
fundamental differences lead to the behaviors enumerated below.
image::gremlin-without-a-cause.png[width=200,float=right]
. Traversal sideEffects are represented as a distributed data structure across `GraphComputer` workers. It is not
possible to get a global view of a sideEffect until after an iteration has occurred and global sideEffects are re-broadcasted to the workers.
In some situations, a "stale" local representation of the sideEffect is sufficient to ensure the intended semantics of the
traversal are respected. However, this is not generally true so be wary of traversals that require global views of a
sideEffect. To ensure a fresh global representation, use `barrier()` prior to accessing the global sideEffect. Note that this
only comes into play with custom steps and <<general-steps,lambda steps>>. The standard Gremlin step library respects OLAP semantics.
. When evaluating traversals that rely on path information (i.e. the history of the traversal), practical
computational limits can easily be reached due to the link:http://en.wikipedia.org/wiki/Combinatorial_explosion[combinatorial explosion]
of data. With path computing enabled, every traverser is unique and thus, must be enumerated as opposed to being
counted/merged. The difference is a collection of paths vs. a single 64-bit long at a single vertex. In other words,
bulking is very unlikely with traversers that maintain path information. For more
information on this concept, please see link:https://thinkaurelius.wordpress.com/2012/11/11/faunus-provides-big-graph-data-analytics/[Faunus Provides Big Graph Data].
. Steps that are concerned with the global ordering of traversers do not have a meaningful representation in
OLAP. For example, what does <<order-step,`order()`>>-step mean when all traversers are being processed in parallel?
Even if the traversers were aggregated and ordered, then at the next step they would return to being executed in
parallel and thus, in an unpredictable order. When `order()`-like steps are executed at the end of a traversal (i.e.
the final step), `TraversalVertexProgram` ensures a serial representation is ordered accordingly. Moreover, it is intelligent enough
to maintain the ordering of `g.V().hasLabel("person").order().by("age").values("name")`. However, the OLAP traversal
`g.V().hasLabel("person").order().by("age").out().values("name")` will lose the original ordering as the `out()`-step
will rebroadcast traversers across the cluster.
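The bulking trade-off from point 2 can be made concrete in plain Java (illustrative only, not TinkerPop's internal traverser representation): path-less traversers at the same location collapse into a single count, while path-carrying traversers stay distinct and must be enumerated.

```java
import java.util.*;
import java.util.stream.*;

public class BulkingSketch {
    public static void main(String[] args) {
        // 1000 traversers all arriving at vertex "v1" after some step
        List<String> arrivals = Collections.nCopies(1000, "v1");

        // without path history: traversers at the same location merge into one bulk
        Map<String, Long> bulked = arrivals.stream()
                .collect(Collectors.groupingBy(v -> v, Collectors.counting()));
        System.out.println(bulked); // {v1=1000} -- one entry backed by a 64-bit count

        // with path history: every traverser is unique and must be enumerated
        List<List<String>> withPaths = new ArrayList<>();
        for (int i = 0; i < 1000; i++)
            withPaths.add(Arrays.asList("v" + i, "v1")); // distinct histories
        System.out.println(withPaths.size()); // 1000 traversers, no merging possible
    }
}
```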
[[graph-filter]]
== Graph Filter
Most OLAP jobs do not require the entire source graph to faithfully execute their `VertexProgram`. For instance, if
`PageRankVertexProgram` is only going to compute the centrality of people in the friendship-graph, then the following
`GraphFilter` can be applied.
[source,java]
----
graph.computer().
vertices(hasLabel("person")).
edges(bothE("knows")).
program(PageRankVertexProgram...)
----
There are two methods for constructing a `GraphFilter`.
* `vertices(Traversal<Vertex,Vertex>)`: A traversal that will be used that can only analyze a vertex and its properties.
If the traversal `hasNext()`, the input `Vertex` is passed to the `GraphComputer`.
* `edges(Traversal<Vertex,Edge>)`: A traversal that will iterate all legal edges for the source vertex.
`GraphFilter` is a "push-down predicate" that providers can reason on to determine the most efficient way to provide
graph data to the `GraphComputer`.
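As a rough analogy in plain Java (a hypothetical string encoding; the real `GraphFilter` holds `Traversal` objects), the push-down amounts to predicates that a provider applies while loading, so filtered elements are never materialized:

```java
import java.util.*;
import java.util.function.Predicate;

public class GraphFilterSketch {
    public static void main(String[] args) {
        // vertex encoded as "id:label", edge as "label:from-to" (illustrative encoding)
        List<String> vertices = Arrays.asList("1:person", "2:person", "3:software");
        List<String> edges = Arrays.asList("knows:1-2", "created:1-3");

        // analogous to vertices(hasLabel("person")) and edges(bothE("knows"))
        Predicate<String> vertexFilter = v -> v.endsWith(":person");
        Predicate<String> edgeFilter = e -> e.startsWith("knows:");

        // a provider that reasons on the filter only loads what passes it
        long vCount = vertices.stream().filter(vertexFilter).count();
        long eCount = edges.stream().filter(edgeFilter).count();
        System.out.println(vCount + " vertices and " + eCount + " edge pass the filter");
    }
}
```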
IMPORTANT: Apache TinkerPop provides `GraphFilterStrategy` <<traversalstrategy,traversal strategy>> which analyzes a submitted
OLAP traversal and, if possible, creates an appropriate `GraphFilter` automatically. For instance, `g.V().count()` would
yield a `GraphFilter.edges(limit(0))`. Thus, for traversal submissions, users typically do not need to be aware of creating
graph filters explicitly. Users can use the <<explain-step,`explain()`>>-step to see the `GraphFilter` generated by `GraphFilterStrategy`.
| 57.818349 | 166 | 0.760909 |
The following loads the JDBC driver:
[source,cypher]
----
CALL apoc.load.driver("com.mysql.jdbc.Driver");
----
=== Scripting
Scripting is a feature of {brandname} Server which allows invoking server-side scripts from remote clients.
Scripting leverages the JDK's javax.script ScriptEngines, therefore allowing the use of any JVM languages which offer one.
By default, the JDK comes with Nashorn, a ScriptEngine capable of running JavaScript.
==== Installing scripts
Scripts are stored in a special script cache, named '___script_cache'.
Adding a script is therefore as simple as +put+ting it into the cache itself.
If the name of the script contains a filename extension, e.g. +myscript.js+, then that extension determines the engine that
will be used to execute it.
Alternatively the script engine can be selected using script metadata (see below).
Be aware that, when security is enabled, access to the script cache via the remote protocols requires
that the user belongs to the pass:['___script_manager'] role.
==== Script metadata
Script metadata is additional information about the script that the user can provide to the server to affect how a
script is executed.
It is contained in a specially-formatted comment on the first lines of the script.
Properties are specified as +key=value+ pairs, separated by commas.
You can use several different comment styles: The `//`, `;;`, `#` depending on the scripting language you use.
You can split metadata over multiple lines if necessary, and you can use single (') or double (") quotes to delimit your values.
The following are examples of valid metadata comments:
[source,javascript]
----
// name=test, language=javascript
// mode=local, parameters=[a,b,c]
----
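A hedged sketch of how such a metadata comment could be parsed (a hypothetical helper, not Infinispan's actual parser):

```java
import java.util.*;

public class ScriptMetadataSketch {
    // collect key=value pairs from the leading comment lines (//, ;; or #);
    // array-valued properties such as parameters=[a,b,c] would need smarter
    // splitting than the naive comma split used here
    static Map<String, String> parse(String script) {
        Map<String, String> metadata = new LinkedHashMap<>();
        for (String line : script.split("\n")) {
            String t = line.trim();
            if (!(t.startsWith("//") || t.startsWith(";;") || t.startsWith("#")))
                break; // metadata only lives in the first lines of the script
            for (String pair : t.replaceFirst("^(//|;;|#)\\s*", "").split(",")) {
                String[] kv = pair.split("=", 2);
                if (kv.length == 2)
                    metadata.put(kv[0].trim(), kv[1].trim().replaceAll("^['\"]|['\"]$", ""));
            }
        }
        return metadata;
    }

    public static void main(String[] args) {
        String script = "// name=test, language=javascript\n"
                      + "// mode=local\n"
                      + "multiplicand * multiplier\n";
        System.out.println(parse(script)); // {name=test, language=javascript, mode=local}
    }
}
```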
===== Metadata properties
The following metadata property keys are available
* mode: defines the mode of execution of a script. Can be one of the following values:
** local: the script will be executed only by the node handling the request. The script itself however can invoke clustered operations
** distributed: runs the script using the Distributed Executor Service
* language: defines the script engine that will be used to execute the script, e.g. JavaScript
* extension: an alternative method of specifying the script engine that will be used to execute the script, e.g. js
* role: a specific role which is required to execute the script
* parameters: an array of valid parameter names for this script. Invocations which specify parameter names not included in this list will cause an exception.
* datatype: optional property providing information, in the form of
Media Types (also known as MIME) about the type of the data stored in the
caches, as well as parameter and return values. Currently it only accepts a
single value which is `text/plain; charset=utf-8`, indicating that data is
String UTF-8 format. This metadata parameter is designed for remote clients
that only support a particular type of data, making it easy for them to
retrieve, store and work with parameters.
Since the execution mode is a characteristic of the script, nothing special needs to be done on the client to invoke scripts in different modes.
==== Script bindings
The script engine within {brandname} exposes several internal objects as bindings in the scope of the script execution.
These are:
* cache: the cache against which the script is being executed
* marshaller: the marshaller to use for marshalling/unmarshalling data to the cache
* cacheManager: the cacheManager for the cache
* scriptingManager: the instance of the script manager which is being used to run the script. This can be used to run other scripts from a script.
==== Script parameters
Aside from the standard bindings described above, when a script is executed it can be passed a set of named parameters which also appear as bindings.
Parameters are passed as +name,value+ pairs where +name+ is a string and +value+ can be any value that is understood by the marshaller in use.
The following is an example of a JavaScript script which takes two parameters, +multiplicand+ and +multiplier+ and multiplies them.
Because the last operation is an expression evaluation, its result is returned to the invoker.
[source,javascript]
----
// mode=local,language=javascript
multiplicand * multiplier
----
To store the script in the script cache, use the following Hot Rod code:
[source,java]
----
RemoteCache<String, String> scriptCache = cacheManager.getCache("___script_cache");
scriptCache.put("multiplication.js",
"// mode=local,language=javascript\n" +
"multiplicand * multiplier\n");
----
==== Running Scripts using the Hot Rod Java client
The following example shows how to invoke the above script by passing two named parameters.
[source,java]
----
RemoteCache<String, Integer> cache = cacheManager.getCache();
// Create the parameters for script execution
Map<String, Object> params = new HashMap<>();
params.put("multiplicand", 10);
params.put("multiplier", 20);
// Run the script on the server, passing in the parameters
Object result = cache.execute("multiplication.js", params);
----
==== Distributed execution
The following is a script which runs on all nodes.
Each node will return its address, and the results from all nodes will be collected in a List and returned to the client.
[source,javascript]
----
// mode:distributed,language=javascript
cacheManager.getAddress().toString();
----
| 49.392523 | 157 | 0.778997 |
== [[DataSource]] DataSource -- Pluggable Data Provider Framework
`DataSource` is one of the main parts of *Data Source API* in Spark SQL (together with link:spark-sql-DataFrameReader.adoc[DataFrameReader] for loading datasets, link:spark-sql-DataFrameWriter.adoc[DataFrameWriter] for saving datasets and `StreamSourceProvider` for creating streaming sources).
`DataSource` models a *pluggable data provider framework* with the <<providers, extension points>> for Spark SQL integrators to expand the list of supported external data sources in Spark SQL.
`DataSource` is <<creating-instance, created>> when:
* `DataFrameWriter` is requested to link:spark-sql-DataFrameWriter.adoc#saveToV1Source[save to a data source (per Data Source V1 contract)]
* link:spark-sql-Analyzer-FindDataSourceTable.adoc#readDataSourceTable[FindDataSourceTable] and link:spark-sql-Analyzer-ResolveSQLOnFile.adoc#apply[ResolveSQLOnFile] logical evaluation rules are executed
* link:spark-sql-LogicalPlan-CreateDataSourceTableCommand.adoc#run[CreateDataSourceTableCommand], link:spark-sql-LogicalPlan-CreateDataSourceTableAsSelectCommand.adoc#run[CreateDataSourceTableAsSelectCommand], link:spark-sql-LogicalPlan-InsertIntoDataSourceDirCommand.adoc#run[InsertIntoDataSourceDirCommand], link:spark-sql-LogicalPlan-CreateTempViewUsing.adoc#run[CreateTempViewUsing] are executed
* `HiveMetastoreCatalog` is requested to link:spark-sql-HiveMetastoreCatalog.adoc#convertToLogicalRelation[convertToLogicalRelation]
* Spark Structured Streaming's `FileStreamSource`, `DataStreamReader` and `DataStreamWriter`
[[providers]]
.DataSource's Provider (and Format) Contracts
[cols="1,3",options="header",width="100%"]
|===
| Extension Point
| Description
| link:spark-sql-CreatableRelationProvider.adoc[CreatableRelationProvider]
| [[CreatableRelationProvider]] Data source that saves the result of a structured query per save mode and returns the schema
| link:spark-sql-FileFormat.adoc[FileFormat]
a| [[FileFormat]] Used in:
* <<sourceSchema, sourceSchema>> for streamed reading
* <<write, write>> for writing a `DataFrame` to a `DataSource` (as part of creating a table as select)
| link:spark-sql-RelationProvider.adoc[RelationProvider]
| [[RelationProvider]] Data source that supports schema inference and can be accessed using SQL's `USING` clause
| link:spark-sql-SchemaRelationProvider.adoc[SchemaRelationProvider]
| [[SchemaRelationProvider]] Data source that requires a user-defined schema
| `StreamSourceProvider`
a| [[StreamSourceProvider]] Used in:
* <<sourceSchema, sourceSchema>> and <<createSource, createSource>> for streamed reading
* <<createSink, createSink>> for streamed writing
* <<resolveRelation, resolveRelation>> for resolved link:spark-sql-BaseRelation.adoc[BaseRelation].
|===
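The plug-in idea behind these extension points can be sketched in plain Java with hypothetical stand-ins for the provider interfaces (the real lookup in `DataSource` is considerably more involved, e.g. it also consults the JVM service loader):

```java
import java.util.*;

public class ProviderLookupSketch {
    // hypothetical stand-in for a data source extension point
    interface TableProvider { String name(); }
    static class CsvProvider implements TableProvider { public String name() { return "csv"; } }
    static class KafkaProvider implements TableProvider { public String name() { return "kafka"; } }

    // a short-name registry, loosely analogous to resolving the providing class
    static final Map<String, TableProvider> REGISTRY = new HashMap<>();
    static {
        REGISTRY.put("csv", new CsvProvider());
        REGISTRY.put("kafka", new KafkaProvider());
    }

    static TableProvider lookup(String format) {
        TableProvider p = REGISTRY.get(format);
        if (p == null) throw new IllegalArgumentException("Failed to find data source: " + format);
        return p; // the framework then drives reads/writes through this provider
    }

    public static void main(String[] args) {
        System.out.println(lookup("csv").name()); // csv
    }
}
```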
As a user, you interact with `DataSource` by link:spark-sql-DataFrameReader.adoc[DataFrameReader] (when you execute link:spark-sql-SparkSession.adoc#read[spark.read] or link:spark-sql-SparkSession.adoc#readStream[spark.readStream]) or SQL's `CREATE TABLE USING`.
[source, scala]
----
// Batch reading
val people: DataFrame = spark.read
.format("csv")
.load("people.csv")
// Streamed reading
val messages: DataFrame = spark.readStream
.format("kafka")
.option("subscribe", "topic")
.option("kafka.bootstrap.servers", "localhost:9092")
.load
----
`DataSource` uses a link:spark-sql-SparkSession.adoc[SparkSession], a class name, a collection of `paths`, optional user-specified link:spark-sql-schema.adoc[schema], a collection of partition columns, a bucket specification, and configuration options.
NOTE: Data source is also called a *table provider*.
[[internal-registries]]
.DataSource's Internal Properties (e.g. Registries, Counters and Flags)
[cols="1,2",options="header",width="100%"]
|===
| Name
| Description
| `providingClass`
| [[providingClass]] The Java class (`java.lang.Class`) that...FIXME
Used when...FIXME
| `sourceInfo`
| [[sourceInfo]] FIXME
Used when...FIXME
| `caseInsensitiveOptions`
| [[caseInsensitiveOptions]] FIXME
Used when...FIXME
| `equality`
| [[equality]] FIXME
Used when...FIXME
| `backwardCompatibilityMap`
| [[backwardCompatibilityMap]] FIXME
Used when...FIXME
|===
=== [[writeAndRead]] Writing Data to Data Source per Save Mode Followed by Reading Rows Back (as BaseRelation) -- `writeAndRead` Method
[source, scala]
----
writeAndRead(mode: SaveMode, data: DataFrame): BaseRelation
----
CAUTION: FIXME
NOTE: `writeAndRead` is used exclusively when link:spark-sql-LogicalPlan-CreateDataSourceTableAsSelectCommand.adoc#run[CreateDataSourceTableAsSelectCommand] logical command is executed.
=== [[write]] Writing DataFrame to Data Source Per Save Mode -- `write` Method
[source, scala]
----
write(mode: SaveMode, data: DataFrame): BaseRelation
----
`write` writes the result of executing a structured query (as link:spark-sql-DataFrame.adoc[DataFrame]) to a data source per save `mode`.
Internally, `write` <<lookupDataSource, looks up the data source>> and branches off per <<providingClass, providingClass>>.
[[write-providingClass-branches]]
.write's Branches per Supported providingClass (in execution order)
[width="100%",cols="1,2",options="header"]
|===
| providingClass
| Description
| link:spark-sql-CreatableRelationProvider.adoc[CreatableRelationProvider]
| Executes link:spark-sql-CreatableRelationProvider.adoc#createRelation[CreatableRelationProvider.createRelation]
| link:spark-sql-FileFormat.adoc[FileFormat]
| <<writeInFileFormat, writeInFileFormat>>
| _others_
| Reports a `RuntimeException`
|===
NOTE: `write` does not support the internal `CalendarIntervalType` in the link:spark-sql-schema.adoc[schema of `data` `DataFrame`] and throws an `AnalysisException` when there is one.
NOTE: `write` is used exclusively when link:spark-sql-LogicalPlan-RunnableCommand.adoc#SaveIntoDataSourceCommand[SaveIntoDataSourceCommand] is executed.
=== [[writeInFileFormat]] `writeInFileFormat` Internal Method
CAUTION: FIXME
For link:spark-sql-FileFormat.adoc[FileFormat] data sources, `write` takes all `paths` and the `path` option and makes sure that there is only one path.
NOTE: `write` uses Hadoop's https://hadoop.apache.org/docs/current/api/org/apache/hadoop/fs/Path.html[Path] to access the https://hadoop.apache.org/docs/current/api/org/apache/hadoop/fs/FileSystem.html[FileSystem] and calculate the qualified output path.
`write` requests `PartitioningUtils` to link:spark-sql-PartitioningUtils.adoc#validatePartitionColumn[validatePartitionColumn].
When appending to a table, ...FIXME
In the end, `write` (for a link:spark-sql-FileFormat.adoc[FileFormat] data source) link:spark-sql-SessionState.adoc#executePlan[prepares an `InsertIntoHadoopFsRelationCommand` logical plan] and link:spark-sql-QueryExecution.adoc#toRdd[executes] it.
CAUTION: FIXME Is `toRdd` a job execution?
=== [[createSource]] `createSource` Method
[source, scala]
----
createSource(metadataPath: String): Source
----
CAUTION: FIXME
=== [[createSink]] `createSink` Method
CAUTION: FIXME
==== [[sourceSchema]] `sourceSchema` Internal Method
[source, scala]
----
sourceSchema(): SourceInfo
----
`sourceSchema` returns the name and link:spark-sql-schema.adoc[schema] of the data source for streamed reading.
CAUTION: FIXME Why is the method called? Why does this bother with streamed reading and data sources?!
It supports two class hierarchies, i.e. link:spark-sql-FileFormat.adoc[FileFormat] and Structured Streaming's `StreamSourceProvider` data sources.
Internally, `sourceSchema` first creates an instance of the data source and...
CAUTION: FIXME Finish...
For Structured Streaming's `StreamSourceProvider` data sources, `sourceSchema` relays calls to `StreamSourceProvider.sourceSchema`.
For link:spark-sql-FileFormat.adoc[FileFormat] data sources, `sourceSchema` makes sure that `path` option was specified.
TIP: `path` is looked up in a case-insensitive way so `paTh` and `PATH` and `pAtH` are all acceptable. Use the lower-case version of `path`, though.
NOTE: `path` can use https://en.wikipedia.org/wiki/Glob_%28programming%29[glob pattern] (not regex syntax), i.e. contain any of `{}[]*?\` characters.
It checks whether the path exists if a glob pattern is not used. If it does not exist, you will see the following `AnalysisException` in the logs:
```
scala> spark.read.load("the.file.does.not.exist.parquet")
org.apache.spark.sql.AnalysisException: Path does not exist: file:/Users/jacek/dev/oss/spark/the.file.does.not.exist.parquet;
at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$12.apply(DataSource.scala:375)
at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$12.apply(DataSource.scala:364)
at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
at scala.collection.immutable.List.foreach(List.scala:381)
at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
at scala.collection.immutable.List.flatMap(List.scala:344)
at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:364)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:149)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:132)
... 48 elided
```
If link:spark-sql-properties.adoc#spark.sql.streaming.schemaInference[spark.sql.streaming.schemaInference] is disabled, the data source is not link:spark-sql-TextFileFormat.adoc[TextFileFormat], and the input `userSpecifiedSchema` is not specified, the following `IllegalArgumentException` is thrown:
[options="wrap"]
----
Schema must be specified when creating a streaming source DataFrame. If some files already exist in the directory, then depending on the file format you may be able to create a static DataFrame on that directory with 'spark.read.load(directory)' and infer schema from it.
----
CAUTION: FIXME I don't think the exception will ever happen for non-streaming sources since the schema is going to be defined earlier. When?
Eventually, it returns a `SourceInfo` with `FileSource[path]` and the schema (as calculated using the <<inferFileFormatSchema, inferFileFormatSchema>> internal method).
For any other data source, it throws an `UnsupportedOperationException`:
```
Data source [className] does not support streamed reading
```
==== [[inferFileFormatSchema]] `inferFileFormatSchema` Internal Method
[source, scala]
----
inferFileFormatSchema(format: FileFormat): StructType
----
`inferFileFormatSchema` private method computes (aka _infers_) the schema (as link:spark-sql-StructType.adoc[StructType]). It returns `userSpecifiedSchema` if specified or uses `FileFormat.inferSchema` otherwise. It throws an `AnalysisException` when it is unable to infer the schema.
It uses `path` option for the list of directory paths.
NOTE: It is used by <<sourceSchema, DataSource.sourceSchema>> and <<createSource, DataSource.createSource>> when link:spark-sql-FileFormat.adoc[FileFormat] is processed.
=== [[resolveRelation]] Resolving Relation (Creating BaseRelation) -- `resolveRelation` Method
[source, scala]
----
resolveRelation(checkFilesExist: Boolean = true): BaseRelation
----
`resolveRelation` resolves (i.e. creates) a link:spark-sql-BaseRelation.adoc[BaseRelation].
Internally, `resolveRelation` tries to create an instance of the <<providingClass, providingClass>> and branches off per its type and whether the optional <<userSpecifiedSchema, user-specified schema>> was specified or not.
.Resolving BaseRelation per Provider and User-Specified Schema
[cols="1,3",options="header",width="100%"]
|===
| Provider
| Behaviour
| link:spark-sql-SchemaRelationProvider.adoc[SchemaRelationProvider]
| Executes link:spark-sql-SchemaRelationProvider.adoc#createRelation[SchemaRelationProvider.createRelation] with the provided schema
| link:spark-sql-RelationProvider.adoc[RelationProvider]
| Executes link:spark-sql-RelationProvider.adoc#createRelation[RelationProvider.createRelation]
| link:spark-sql-FileFormat.adoc[FileFormat]
| Creates a link:spark-sql-BaseRelation.adoc#HadoopFsRelation[HadoopFsRelation]
|===
[NOTE]
====
`resolveRelation` is used when:
* `DataSource` is requested to <<writeAndRead, write and read>> the result of a structured query (only when <<providingClass, providingClass>> is a link:spark-sql-FileFormat.adoc[FileFormat])
* `DataFrameReader` is requested to link:spark-sql-DataFrameReader.adoc#load[load data from a data source that supports multiple paths]
* `TextInputCSVDataSource` and `TextInputJsonDataSource` are requested to infer schema
* `CreateDataSourceTableCommand` runnable command is link:spark-sql-LogicalPlan-CreateDataSourceTableCommand.adoc#run[executed]
* `CreateTempViewUsing` logical command is requested to <<spark-sql-LogicalPlan-CreateTempViewUsing.adoc#run, run>>
* `FindDataSourceTable` is requested to link:spark-sql-Analyzer-FindDataSourceTable.adoc#readDataSourceTable[readDataSourceTable]
* `ResolveSQLOnFile` is requested to convert a logical plan (when <<providingClass, providingClass>> is a link:spark-sql-FileFormat.adoc[FileFormat])
* `HiveMetastoreCatalog` is requested for link:spark-sql-HiveMetastoreCatalog.adoc#convertToLogicalRelation[convertToLogicalRelation]
* Structured Streaming's `FileStreamSource` creates batches of records
====
=== [[buildStorageFormatFromOptions]] `buildStorageFormatFromOptions` Method
[source, scala]
----
buildStorageFormatFromOptions(options: Map[String, String]): CatalogStorageFormat
----
`buildStorageFormatFromOptions`...FIXME
NOTE: `buildStorageFormatFromOptions` is used when...FIXME
=== [[creating-instance]][[apply]] Creating DataSource Instance
`DataSource` takes the following when created:
* [[sparkSession]] link:spark-sql-SparkSession.adoc[SparkSession]
* [[className]] Name of the provider class (aka _input data source format_)
* [[paths]] Paths to load (default: empty)
* [[userSpecifiedSchema]] (optional) User-specified link:spark-sql-StructType.adoc[schema] (default: `None`, i.e. undefined)
* [[partitionColumns]] (optional) Names of the partition columns (default: empty)
* [[bucketSpec]] Optional link:spark-sql-BucketSpec.adoc[bucketing specification] (default: undefined)
* [[options]] Options (default: empty)
* [[catalogTable]] (optional) link:spark-sql-CatalogTable.adoc[CatalogTable] (default: undefined)
`DataSource` initializes the <<internal-registries, internal registries and counters>>.
==== [[lookupDataSource]] Looking Up Class By Name Of Data Source Provider -- `lookupDataSource` Method
[source, scala]
----
lookupDataSource(provider: String, conf: SQLConf): Class[_]
----
`lookupDataSource` looks up the class name in the <<backwardCompatibilityMap, backwardCompatibilityMap>> and then replaces the class name exclusively for the `orc` provider per link:spark-sql-properties.adoc#spark.sql.orc.impl[spark.sql.orc.impl] internal configuration property:
* For `hive` (default), `lookupDataSource` uses `org.apache.spark.sql.hive.orc.OrcFileFormat`
* For `native`, `lookupDataSource` uses the canonical class name of link:spark-sql-OrcFileFormat.adoc[OrcFileFormat], i.e. `org.apache.spark.sql.execution.datasources.orc.OrcFileFormat`
With the provider's class name (aka _provider1_ internally), `lookupDataSource` assumes another name variant of the format `[provider1].DefaultSource` (aka _provider2_ internally).
`lookupDataSource` then uses Java's link:++https://docs.oracle.com/javase/8/docs/api/java/util/ServiceLoader.html#load-java.lang.Class-java.lang.ClassLoader-++[ServiceLoader] to find all link:spark-sql-DataSourceRegister.adoc[DataSourceRegister] provider classes on the CLASSPATH.
`lookupDataSource` filters out the `DataSourceRegister` provider classes (by their link:spark-sql-DataSourceRegister.adoc#shortName[alias]) that match the _provider1_ (case-insensitive), e.g. `parquet` or `kafka`.
If a single provider class was found for the alias, `lookupDataSource` simply returns the provider class.
If no `DataSourceRegister` could be found by the short name (alias), `lookupDataSource` considers the names of the format provider as the fully-qualified class names and tries to load them instead (using Java's link:++https://docs.oracle.com/javase/8/docs/api/java/lang/ClassLoader.html#loadClass-java.lang.String-++[ClassLoader.loadClass]).
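The alias-matching step can be illustrated with a short, self-contained sketch. Note that this is a simplification for illustration only, not Spark's actual implementation; the `lookup` method and the provider names are made up:

```java
import java.util.List;
import java.util.Locale;
import java.util.stream.Collectors;

public class Main {

    // Hypothetical, simplified sketch of the alias-matching step:
    // match the requested provider against registered short names
    // case-insensitively, and fall back to treating it as a class name.
    static String lookup(String provider, List<String> shortNames) {
        String wanted = provider.toLowerCase(Locale.ROOT);
        List<String> matches = shortNames.stream()
                .filter(alias -> alias.toLowerCase(Locale.ROOT).equals(wanted))
                .collect(Collectors.toList());
        if (matches.size() == 1) {
            // a single registered alias matched, e.g. "parquet" or "kafka"
            return matches.get(0);
        }
        // no alias matched: treat the provider as a fully-qualified class name
        return provider;
    }

    public static void main(String[] args) {
        List<String> registered = List.of("parquet", "kafka", "csv");
        System.out.println(lookup("Parquet", registered));
        System.out.println(lookup("org.example.DefaultSource", registered));
    }
}
```

Running it prints `parquet` (the registered alias wins) and then `org.example.DefaultSource` (the unmatched name is passed through for class loading).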
NOTE: You can reference your own custom `DataSource` in your code by the link:spark-sql-DataFrameWriter.adoc#format[DataFrameWriter.format] method, which accepts either the alias or a fully-qualified class name.
CAUTION: FIXME Describe the other cases (orc and avro)
If no provider class could be found, `lookupDataSource` throws a `RuntimeException`:
[options="wrap"]
----
java.lang.ClassNotFoundException: Failed to find data source: [provider1]. Please find packages at http://spark.apache.org/third-party-projects.html
----
If however, `lookupDataSource` found multiple registered aliases for the provider name...FIXME
=== [[planForWriting]] Creating Logical Command for Writing (for CreatableRelationProvider and FileFormat Data Sources) -- `planForWriting` Method
[source, scala]
----
planForWriting(mode: SaveMode, data: LogicalPlan): LogicalPlan
----
`planForWriting` creates an instance of the <<providingClass, providingClass>> and branches off per its type as follows:
* For a <<spark-sql-CreatableRelationProvider.adoc#, CreatableRelationProvider>>, `planForWriting` creates a <<spark-sql-LogicalPlan-SaveIntoDataSourceCommand.adoc#creating-instance, SaveIntoDataSourceCommand>> (with the input `data` and `mode`, the `CreatableRelationProvider` data source and the <<caseInsensitiveOptions, caseInsensitiveOptions>>)
* For a <<spark-sql-FileFormat.adoc#, FileFormat>>, `planForWriting` calls <<planForWritingFileFormat, planForWritingFileFormat>> (with the `FileFormat` format and the input `mode` and `data`)
* For other types, `planForWriting` simply throws a `RuntimeException`:
+
```
[providingClass] does not allow create table as select.
```
[NOTE]
====
`planForWriting` is used when:
* `DataFrameWriter` is requested to <<spark-sql-DataFrameWriter.adoc#saveToV1Source, saveToV1Source>> (when `DataFrameWriter` is requested to <<spark-sql-DataFrameWriter.adoc#save, save the result of a structured query (a DataFrame) to a data source>> for <<spark-sql-DataSourceV2.adoc#, DataSourceV2>> with no `WriteSupport` and non-``DataSourceV2`` writers)
* `InsertIntoDataSourceDirCommand` logical command is <<spark-sql-LogicalPlan-InsertIntoDataSourceDirCommand.adoc#run, executed>>
====
=== [[planForWritingFileFormat]] `planForWritingFileFormat` Internal Method
[source, scala]
----
planForWritingFileFormat(
format: FileFormat,
mode: SaveMode,
data: LogicalPlan): InsertIntoHadoopFsRelationCommand
----
`planForWritingFileFormat`...FIXME
NOTE: `planForWritingFileFormat` is used when...FIXME
// posts/writing-great-javascript.adoc (Olical/blog, Unlicense)
= Writing great JavaScript
Oliver Caldwell
2012-03-14
I probably could have named this post something like “Writing clean, validating and portable JavaScript”, but that would be nowhere near as catchy. The problem with “great” is that it means different things to different people. I am going to show you my idea of great, which may differ from many developers' views, but I hope it helps someone improve their code.
So what's the point in this? Why can't you just carry on writing JavaScript as you have been for ages? It works, doesn't it? Well, if you took the same attitude with a car and drove on 20-year-old tires that could give out any day, you would just be asking for something to go horrifically wrong. Want an example? Here you go.
== What could possibly go wrong
You are writing something that manipulates color in some way. Maybe it is a color palette tool, some form of UI selector element that allows you to set the background of your Twitter-style profile. You have some code like this in one of your scripts.
[source]
----
var color = '#FFFFFF';
colorPicker.addEvent('change', function(selected) {
color = selected;
});
userSettings.addEvent('save', function() {
this.saveSetting('profile-background', color);
});
----
So as you move your cursor across the UI element picking your color, the variable `+color+` is updated. When you hit save the contents of that variable are saved in some way, maybe by posting it back to your server in a call to `+XMLHttpRequest+` (that’s `+ActiveXObject('Microsoft.XMLHTTP')+` to silly old Internet Explorer). Now you decide that you want more color capabilities at some point in your script. So you include a micro library called “color.js” which creates the global variable `+color+`. You can see where I am going with this.
Now your color string has been replaced by a libraries object. Hello bugs and time you did not need to spend. Obviously you could fix this by renaming every occurrence of `+color+` or you could use a function wrapper to sandbox your code.
[source]
----
;(function() {
var color = '#FFFFFF';
colorPicker.addEvent('change', function(selected) {
color = selected;
});
userSettings.addEvent('save', function() {
this.saveSetting('profile-background', color);
});
// typeof color === 'string'
}());
// typeof color === 'undefined'
----
And now the color variable is kept in the scope of our anonymous function, not the global object, thus stopping the global `+color+` object conflicting with your string. You may be wondering what this mash of characters actually does; it is actually pretty simple. The initial semicolon saves you from people that miss out the last semicolon in their script when you concatenate files. Without it you may end up with a situation in which you basically wrote this.
[source]
----
var myEpicObject = {} someFunction();
----
Obviously that will throw an error: `+}+` followed by `+s+` does not make sense in JavaScript. The other parts of our wrapper, `+(function() {+` and `+}());+`, simply wrap our code in an anonymous function which is called instantly. It is pretty much the same as writing this.
[source]
----
function main() {
// YOUR CODE
}
main();
----
The only difference is that `+main+` will now be in the global namespace, whereas the other method does not pollute anything.
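You can verify the sandboxing yourself; run this in Node or a browser console and you should see that the string never escapes the wrapper:

```javascript
;(function() {
    var color = '#FFFFFF';
    console.log(typeof color); // "string" inside the wrapper
}());

console.log(typeof color); // "undefined" outside: nothing leaked
```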
== Portability
It is pretty standard for the newer JavaScript libraries to work on both browsers and servers with the same code these days. But how can you write something that will run in Chrome, Firefox and node.js in my terminal? First you place your code in the wrapper as shown above, then you simply create an alias to the global variable of your current environment.
[source]
----
;(function(exports) {
// First you define your class
function SomeClass() {
// code...
}
SomeClass.prototype.foo = function() {
// code...
};
// And then you expose it
exports.SomeClass = SomeClass;
}(this)); // <-- this = the global object is passed as exports
----
This will allow compressors such as https://github.com/mishoo/UglifyJS/[UglifyJS] to minify your code better, will keep any helper functions and variables private and will allow you to expose what you choose to the global object. So with the code above you could then use the class like so.
[source]
----
// This is only required for server side environments such as node.js
// In the browser you would use a script tag to load it
var SomeClass = require('someclass').SomeClass;
// Then you can call the class like this
var foo = new SomeClass();
----
== Validation
If you haven't already, I insist you read http://www.amazon.co.uk/JavaScript-Good-Parts-Douglas-Crockford/dp/0596517742[JavaScript: The Good Parts]. Alongside that I urge you to run all of your code through the amazing validation tool that is http://www.jshint.com/[JSHint]. The book refers to http://www.jslint.com/[JSLint] because both tools were written by the same man, http://www.crockford.com/[Douglas Crockford], but don't use JSLint; JSHint is a much better fork of it. This tool will show you any problems with your code. Some are purely stylistic, some will fix huge bugs. It will point out extra commas in arrays that will cause IE to complain and help you speed up your code.
I recommend ticking almost *every* box on the JSHint site, using your common sense with the last few on the far right (i.e. don't tick jQuery unless you are using it), and adding `+/*jshint smarttabs:true*/+` to the top of your document, if you use JSDoc style function comments that is. Now if you run your code through that, I am sure you will get at least one error, which will probably be “Missing "use strict" statement.” That is simple to fix: just add `+'use strict';+` at the top of your function wrapper like this.
[source]
----
/*jshint smarttabs:true*/
;(function(exports) {
'use strict';
// code...
}(this));
----
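As a quick illustration of what `'use strict'` buys you: assigning to an undeclared variable, which would silently create a global in sloppy mode, becomes a catchable error (the variable name is invented for the demo):

```javascript
;(function() {
    'use strict';
    try {
        undeclaredVariable = 42; // would silently create a global without strict mode
    } catch (e) {
        console.log(e instanceof ReferenceError); // true: strict mode made it an error
    }
}());
```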
If you follow the guidelines laid down by The Good Parts and JSHint you will find and fix so many errors before they bite you in the…
== Thanks
This post turned out a lot longer and wordier than I first intended. That seems to happen a lot with my posts. I hope you have learned something from it though and I hope it has helped you to write better JavaScript which is *much* less prone to errors.
Thanks for reading!
// documentation-documentation/index.adoc (IDohndorf/documentation, Apache-2.0)
:copyright: Apache-2.0 License
:description: Document how the Adoptium documentation is developed
:keywords: adoptium documentation
:orgname: Eclipse Adoptium
:lang: en
:source-highlighter: highlight.js
:icons: font
:sectids:
:sectlinks:
:hide-uri-scheme:
:sectanchors:
:url-repo: https://github.com/AdoptOpenJDK/website-adoptium-documentation
= How to contribute to this documentation?
To build a common sense of how things should be documented, the Adoptium Documentation has a, wait a minute... **Documentation**
include::eca-sign-off.adoc[]
include::asciidoc-attributes.adoc[]
// _posts/2019-08-30-Gorgeous-Cure.adoc (GorgeousCure/gorgeouscure.github.io, MIT)
= GorgeousCure
// See https://hubpress.gitbooks.io/hubpress-knowledgebase/content/ for information about the parameters.
:published_at: 2019-08-30
:hp-tags: HubPress, Blog, Open_Source,
:hp-alt-title: GorgeousCure
I've been informed that it's time to pass on some thoughts, processes, Ways of Doing Things, and the like. Not for posterity, not for ego, not because there's a drastic need for this information in the wild... but because there is one person somewhere who is looking for this information and needs it desperately. Maybe that's you. Probably not. But here we are just the same.
We're not going to talk about me today. We're going to talk about What Is To Come. What I'm sharing is not a manifesto, not a guidebook, not a set of skills enhancement exercises. What I'm sharing is more of a nudge toward a shift in paradigms. A way to orient yourself with the world now that the world has reoriented herself.
Here we are. There is you. There is me. A clean slate. Your history has been wiped. If not, wipe it and come back. My history has been wiped. We are two beings of pure energy out in the world, and we share this incredibly vast pool of knowledge between us. Face it, we know remarkable things.
What I bring to the table will look like exercises. They will feel like exercises. But they are escalators. Where you take them is up to you, but they are things you stand on which take you to higher places, and in my worldview that is an escalator.
// spring-ws-master/README.adoc (freyzou/Java_Back-end, Apache-2.0)
= Spring Web Services
image:https://circleci.com/gh/spring-projects/spring-ws.svg?style=svg["CircleCI", link="https://circleci.com/gh/spring-projects/spring-ws"]
Spring Web Services is a product of the Spring community focused on creating
document-driven Web services. Spring Web Services aims to facilitate
contract-first SOAP service development, allowing for the creation of flexible
web services using one of the many ways to manipulate XML payloads.
== Installation
Releases of Spring Web Services are available for download from Maven Central,
as well as our own repository, http://repo.spring.io/release[http://repo.spring.io/release].
Please visit https://projects.spring.io/spring-ws to get the right Maven/Gradle settings for your selected version.
== Building from Source
Spring Web Services uses a http://gradle.org[Gradle]-based build system. In
the instructions below, http://vimeo.com/34436402[`./gradlew`] is invoked
from the root of the source tree and serves as a cross-platform, self-contained
bootstrap mechanism for the build. The only prerequisites are
http://help.github.com/set-up-git-redirect[Git] and JDK 1.7+.
=== check out sources
`git clone git://github.com/spring-projects/spring-ws.git`
=== compile and test, build all jars, distribution zips and docs
`./gradlew build`
=== install all spring-* jars into your local Maven cache
`./gradlew install`
… and discover more commands with `./gradlew tasks`. See also the https://github.com/spring-projects/spring-framework/wiki/Gradle-build-and-release-FAQ[Gradle build and release FAQ].
== Documentation
See the current http://docs.spring.io/spring-ws/docs/current/api/[Javadoc] and http://docs.spring.io/spring-ws/docs/current/reference/htmlsingle/[reference docs].
== Issue Tracking
Spring Web Services uses https://jira.spring.io/browse/SWS[JIRA] for issue tracking purposes
== License
Spring Web Services is http://www.apache.org/licenses/LICENSE-2.0.html[Apache 2.0 licensed].

// src/main/docs/guide/consumer/consumerMethods/consumerParameters.adoc (mauracwarner/micronaut-rabbitmq, Apache-2.0)
The link:{apirabbit}client/Channel.html#basicConsume(java.lang.String,boolean,java.lang.String,boolean,boolean,java.util.Map,com.rabbitmq.client.Consumer)[basicConsume] method is used by the api:configuration.rabbitmq.intercept.RabbitMQConsumerAdvice[] to consume messages. Some of the options can be directly configured through annotations.
IMPORTANT: In order for the consumer method to be invoked, all arguments must be satisfied. To allow execution of the method with a null value, the argument *must* be declared as nullable. If the arguments cannot be satisfied, the message will be rejected.

// docs/src/main/asciidoc/stork-kubernetes.adoc (gilvansfilho/quarkus, Apache-2.0)
////
This guide is maintained in the main Quarkus repository
and pull requests should be submitted there:
https://github.com/quarkusio/quarkus/tree/main/docs/src/main/asciidoc
////
= Getting Started with SmallRye Stork
:extension-status: preview
include::./attributes.adoc[]
The essence of distributed systems resides in the interaction between services.
In modern architecture, you often have multiple instances of your service to share the load or improve the resilience by redundancy.
But how do you select the best instance of your service?
That's where https://smallrye.io/smallrye-stork[SmallRye Stork] helps.
Stork is going to choose the most appropriate instance.
It offers:
* Extensible service discovery mechanisms
* Built-in support for Consul and Kubernetes
* Customizable client load-balancing strategies
include::{includes}/extension-status.adoc[]
== Prerequisites
:prerequisites-docker:
include::{includes}/prerequisites.adoc[]
* Access to a Kubernetes cluster (Minikube is a viable option)
== Architecture
In this guide, we will work with a few components deployed in a Kubernetes cluster:
* A simple blue service.
* A simple red service.
* The `color-service` is the Kubernetes service which is the entry point to the Blue and Red instances.
* A client service using a REST client to call the blue or the red service. Service discovery and selection are delegated to Stork.
image::stork-kubernetes-architecture.png[Architecture of the application,width=100%, align=center]
For the sake of simplicity, everything will be deployed in the same namespace of the Kubernetes cluster.
== Solution
We recommend that you follow the instructions in the next sections and create the applications step by step.
However, you can go right to the completed example.
Clone the Git repository: `git clone {quickstarts-clone-url}`, or download an {quickstarts-archive-url}[archive].
The solution is located in the `stork-kubernetes-quickstart` {quickstarts-tree-url}/stork-kubernetes-quickstart[directory].
== Discovery and selection
Before going further, we need to discuss discovery vs. selection.
- Service discovery is the process of locating service instances.
It produces a list of service instances that is potentially empty (if no service matches the request) or contains multiple service instances.
- Service selection, also called load-balancing, chooses the best instance from the list returned by the discovery process.
The result is a single service instance or an exception when no suitable instance can be found.
Stork handles both discovery and selection.
However, it does not handle the communication with the service but only provides a service instance.
The various integrations in Quarkus extract the location of the service from that service instance.
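To make the distinction concrete, here is a conceptual sketch of selection running on top of a discovery result. This is not Stork's API; the method name and the instance addresses are invented for illustration:

```java
import java.util.List;
import java.util.Random;

public class Main {

    // Conceptual sketch only: pick one instance at random from the
    // list that discovery produced (a random "load balancer").
    static String select(List<String> instances, Random random) {
        if (instances.isEmpty()) {
            // selection fails when discovery returned no suitable instance
            throw new IllegalStateException("no service instance available");
        }
        return instances.get(random.nextInt(instances.size()));
    }

    public static void main(String[] args) {
        List<String> discovered = List.of("10.0.0.1:8080", "10.0.0.2:8080");
        String chosen = select(discovered, new Random());
        System.out.println(discovered.contains(chosen)); // true
    }
}
```

The real Stork pipeline follows the same shape: discovery yields a (possibly empty) list, and the load-balancing strategy reduces it to one instance or an error.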
== Bootstrapping the project
Create a Quarkus project importing the quarkus-rest-client-reactive and quarkus-resteasy-reactive extensions using your favorite approach:
:create-app-artifact-id: stork-kubernetes-quickstart
:create-app-extensions: quarkus-rest-client-reactive,quarkus-resteasy-reactive
include::{includes}/devtools/create-app.adoc[]
In the generated project, also add the following dependencies:
[source,xml,role="primary asciidoc-tabs-target-sync-cli asciidoc-tabs-target-sync-maven"]
.pom.xml
----
<dependency>
<groupId>io.smallrye.stork</groupId>
<artifactId>stork-service-discovery-kubernetes</artifactId>
</dependency>
<dependency>
<groupId>io.smallrye.stork</groupId>
<artifactId>stork-load-balancer-random</artifactId>
</dependency>
<dependency>
<groupId>io.quarkus</groupId>
<artifactId>quarkus-kubernetes</artifactId>
</dependency>
<dependency>
<groupId>io.quarkus</groupId>
<artifactId>quarkus-kubernetes-client</artifactId>
</dependency>
<dependency>
<groupId>io.quarkus</groupId>
<artifactId>quarkus-container-image-jib</artifactId>
</dependency>
----
[source,gradle,role="secondary asciidoc-tabs-target-sync-gradle"]
.build.gradle
----
implementation("io.smallrye.stork:stork-service-discovery-kubernetes")
implementation("io.smallrye.stork:stork-load-balancer-random")
implementation("io.quarkus:quarkus-kubernetes")
implementation("io.quarkus:quarkus-kubernetes-client")
implementation("io.quarkus:quarkus-container-image-jib")
----
`stork-service-discovery-kubernetes` provides an implementation of service discovery for Kubernetes. `stork-load-balancer-random` provides an implementation of a random load balancer. `quarkus-kubernetes` enables the generation of Kubernetes manifests each time we perform a build. The `quarkus-kubernetes-client` extension enables the use of the Fabric8 Kubernetes Client in native mode. And `quarkus-container-image-jib` enables the build of a container image using https://github.com/GoogleContainerTools/jib[Jib].
== The Blue and Red services
Let's start at the very beginning: the services we will discover, select, and call.
The Red and Blue services are two simple REST services serving an endpoint that responds with `Hello from Red!` and `Hello from Blue!` respectively. The code of both applications has been developed following the https://quarkus.io/guides/getting-started[Getting Started Guide].
As the goal of this guide is to show how to use Stork Kubernetes service discovery, we won't provide the specific steps for the Red and Blue services. Their container images are already built and available in a public registry:
* https://quay.io/repository/quarkus/blue-service[Blue service container image]
* https://quay.io/repository/quarkus/red-service[Red service container image]
== Deploy the Blue and Red services in Kubernetes
Now that we have our service container images available in a public registry, we need to deploy them into the Kubernetes cluster.
The following file contains all the Kubernetes resources needed to deploy the Blue and Red services in the cluster and make them accessible:
[source, yaml]
----
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
namespace: development
name: endpoints-reader
rules:
- apiGroups: [""] # "" indicates the core API group
resources: ["endpoints", "pods"]
verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: stork-rb
namespace: development
subjects:
- kind: ServiceAccount
    # References the Role's `metadata.name` above
    name: default
    # References the Role's `metadata.namespace` above
    namespace: development
roleRef:
kind: Role
name: endpoints-reader
apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: Service
metadata:
annotations:
app.quarkus.io/commit-id: f747f359406bedfb1a39c57392a5b5a9eaefec56
app.quarkus.io/build-timestamp: 2022-03-31 - 10:36:56 +0000
labels:
app.kubernetes.io/name: color-service
app.kubernetes.io/version: "1.0"
name: color-service //<1>
spec:
ports:
- name: http
port: 80
targetPort: 8080
selector:
app.kubernetes.io/version: "1.0"
type: color-service
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
app.quarkus.io/commit-id: f747f359406bedfb1a39c57392a5b5a9eaefec56
app.quarkus.io/build-timestamp: 2022-03-31 - 10:36:56 +0000
labels:
color: blue
type: color-service
app.kubernetes.io/name: blue-service
app.kubernetes.io/version: "1.0"
name: blue-service //<2>
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: blue-service
app.kubernetes.io/version: "1.0"
template:
metadata:
annotations:
app.quarkus.io/commit-id: f747f359406bedfb1a39c57392a5b5a9eaefec56
app.quarkus.io/build-timestamp: 2022-03-31 - 10:36:56 +0000
labels:
color: blue
type: color-service
app.kubernetes.io/name: blue-service
app.kubernetes.io/version: "1.0"
spec:
containers:
- env:
- name: KUBERNETES_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
image: quay.io/quarkus/blue-service:1.0
imagePullPolicy: Always
name: blue-service
ports:
- containerPort: 8080
name: http
protocol: TCP
---
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
app.quarkus.io/commit-id: 27be03414510f776ca70d70d859b33e134570443
app.quarkus.io/build-timestamp: 2022-03-31 - 10:38:54 +0000
labels:
color: red
type: color-service
app.kubernetes.io/version: "1.0"
app.kubernetes.io/name: red-service
name: red-service //<2>
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/version: "1.0"
app.kubernetes.io/name: red-service
template:
metadata:
annotations:
app.quarkus.io/commit-id: 27be03414510f776ca70d70d859b33e134570443
app.quarkus.io/build-timestamp: 2022-03-31 - 10:38:54 +0000
labels:
color: red
type: color-service
app.kubernetes.io/version: "1.0"
app.kubernetes.io/name: red-service
spec:
containers:
- env:
- name: KUBERNETES_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
image: quay.io/quarkus/red-service:1.0
imagePullPolicy: Always
name: red-service
ports:
- containerPort: 8080
name: http
protocol: TCP
---
apiVersion: networking.k8s.io/v1
kind: Ingress //<3>
metadata:
annotations:
app.quarkus.io/commit-id: f747f359406bedfb1a39c57392a5b5a9eaefec56
app.quarkus.io/build-timestamp: 2022-03-31 - 10:46:19 +0000
labels:
app.kubernetes.io/name: color-service
app.kubernetes.io/version: "1.0"
color: blue
type: color-service
name: color-service
spec:
rules:
- host: color-service.127.0.0.1.nip.io
http:
paths:
- backend:
service:
name: color-service
port:
name: http
path: /
pathType: Prefix
----
There are a few interesting parts in this listing:
<1> The Kubernetes Service resource, `color-service`, that Stork will discover.
<2> The Red and Blue service instances behind the `color-service` Kubernetes service.
<3> A Kubernetes Ingress resource making the `color-service` accessible from outside the cluster at the `color-service.127.0.0.1.nip.io` URL. Note that the Ingress is not needed by Stork; however, it helps to check that the architecture is in place.
Create a file named `kubernetes-setup.yml` with the content above at the root of the project and run the following commands to deploy all the resources in the Kubernetes cluster. Don't forget to create a dedicated namespace:
[source,shell script]
----
kubectl create namespace development
kubectl apply -f kubernetes-setup.yml -n=development
----
If everything went well, the Color service is accessible at http://color-service.127.0.0.1.nip.io. You should get `Hello from Red!` and `Hello from Blue!` responses randomly.
NOTE: Stork is not limited to Kubernetes and integrates with other service discovery mechanisms.
== The REST Client interface and the front end API
So far, we haven't used Stork; we just deployed the services we will be discovering, selecting, and calling.
We will call the services using the Reactive REST Client.
Create the `src/main/java/org/acme/MyService.java` file with the following content:
[source, java]
----
package org.acme;
import org.eclipse.microprofile.rest.client.inject.RegisterRestClient;
import javax.ws.rs.GET;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
/**
* The REST Client interface.
*
* Notice the `baseUri`. It uses `stork://` as URL scheme indicating that the called service uses Stork to locate and
* select the service instance. The `my-service` part is the service name. This is used to configure Stork discovery
* and selection in the `application.properties` file.
*/
@RegisterRestClient(baseUri = "stork://my-service")
public interface MyService {
@GET
@Produces(MediaType.TEXT_PLAIN)
String get();
}
----
It's a straightforward REST client interface containing a single method. However, note the `baseUri` attribute:
* the `stork://` scheme instructs the REST client to delegate the discovery and selection of the service instances to Stork,
* the `my-service` part of the URI is the service name we will be using in the application configuration.
It does not change how the REST client is used.
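Mechanically, that base URI parses like any other URI: the scheme carries the "use Stork" marker and the host position carries the service name. A plain-Java sketch of that decomposition (independent of the REST client's actual internals):

```java
import java.net.URI;

// Sketch: how a "stork://<service-name>" base URI decomposes with standard URI parsing.
class StorkUriDemo {

    static String scheme(String baseUri) {
        return URI.create(baseUri).getScheme(); // "stork" flags Stork-based resolution
    }

    static String serviceName(String baseUri) {
        return URI.create(baseUri).getHost(); // the Stork service name to look up
    }

    public static void main(String[] args) {
        String baseUri = "stork://my-service";
        System.out.println(scheme(baseUri));      // prints stork
        System.out.println(serviceName(baseUri)); // prints my-service
    }
}
```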
Create the `src/main/java/org/acme/FrontendApi.java` file with the following content:
[source, java]
----
package org.acme;
import org.eclipse.microprofile.rest.client.inject.RestClient;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
/**
* A frontend API using our REST Client (which uses Stork to locate and select the service instance on each call).
*/
@Path("/api")
public class FrontendApi {
@RestClient MyService service;
@GET
@Produces(MediaType.TEXT_PLAIN)
public String invoke() {
return service.get();
}
}
----
It injects and uses the REST client as usual.
== Stork configuration
Now we need to configure Stork for using Kubernetes to discover the red and blue instances of the service.
In the `src/main/resources/application.properties`, add:
[source, properties]
----
quarkus.stork.my-service.service-discovery.type=kubernetes
quarkus.stork.my-service.service-discovery.k8s-namespace=development
quarkus.stork.my-service.service-discovery.application=color-service
quarkus.stork.my-service.load-balancer.type=random
----
`quarkus.stork.my-service.service-discovery.type` indicates which type of service discovery we will be using to locate the `my-service` service.
In our case, it's `kubernetes`.
If your access to the Kubernetes cluster is configured via a Kube config file, you don't need to configure the access to it. Otherwise, set the proper Kubernetes API URL using the `quarkus.stork.my-service.service-discovery.k8s-host` property.
`quarkus.stork.my-service.service-discovery.application` contains the name of the Kubernetes service Stork is going to ask for. In our case, this is the `color-service` corresponding to the Kubernetes service backed by the Red and Blue instances.
Finally, `quarkus.stork.my-service.load-balancer.type` configures the service selection. In our case, we use a `random` Load Balancer.
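The `random` strategy simply picks one of the discovered instances at random on every call, which is why the Red and Blue responses alternate. A plain-Java sketch of that behavior (illustrative, not Stork's actual implementation):

```java
import java.util.List;
import java.util.Random;

// Sketch of a random load balancer: every call may return any discovered instance.
class RandomLoadBalancerDemo {

    static String pick(List<String> instances, Random random) {
        return instances.get(random.nextInt(instances.size()));
    }

    public static void main(String[] args) {
        List<String> instances = List.of("blue-service", "red-service");
        Random random = new Random();
        for (int i = 0; i < 5; i++) {
            // Each iteration independently selects blue-service or red-service.
            System.out.println(pick(instances, random));
        }
    }
}
```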
== Deploy the REST Client interface and the front end API in the Kubernetes cluster
The system is almost complete. We only need to deploy the REST Client interface and the client service to the cluster.
In the `src/main/resources/application.properties`, add:
[source, properties]
----
quarkus.container-image.registry=<public registry>
quarkus.kubernetes-client.trust-certs=true
quarkus.kubernetes.ingress.expose=true
quarkus.kubernetes.ingress.host=my-service.127.0.0.1.nip.io
----
The `quarkus.container-image.registry` contains the container registry to use.
The `quarkus.kubernetes.ingress.expose` indicates that the service will be accessible from the outside of the cluster.
The `quarkus.kubernetes.ingress.host` contains the url to access the service. We are using https://nip.io/[nip.io] wildcard for IP address mappings.
For a more customized configuration, you can check the https://quarkus.io/guides/deploying-to-kubernetes[Deploying to Kubernetes guide].
== Build and push the container image
Thanks to the extensions we are using, we can build a container image using Jib and also enable the generation of Kubernetes manifests while building the application. For example, the following command will generate a Kubernetes manifest in the `target/kubernetes/` directory and also build and push a container image for the project:
[source,shell script]
----
./mvnw package -Dquarkus.container-image.build=true -Dquarkus.container-image.push=true
----
== Deploy client service to the Kubernetes cluster
The generated manifest can be applied to the cluster from the project root using kubectl:
[source,shell script]
----
kubectl apply -f target/kubernetes/kubernetes.yml -n=development
----
We're done!
So, let's see if it works.
Open a browser and navigate to http://my-service.127.0.0.1.nip.io/api.
Or if you prefer, in another terminal, run:
[source, shell script]
----
> curl http://my-service.127.0.0.1.nip.io/api
...
> curl http://my-service.127.0.0.1.nip.io/api
...
> curl http://my-service.127.0.0.1.nip.io/api
...
----
The responses should alternate randomly between `Hello from Red!` and `Hello from Blue!`.
You can compile this application into a native executable:
include::{includes}/devtools/build-native.adoc[]
Then, you need to build a container image based on the native executable. For this, use the corresponding Dockerfile:
[source, shell script]
----
> docker build -f src/main/docker/Dockerfile.native -t quarkus/stork-kubernetes-quickstart .
----
After publishing the new image to the container registry, you can redeploy the Kubernetes manifests to the cluster.
== Going further
This guide has shown how to use SmallRye Stork to discover and select your services.
You can find more about Stork in:
- the xref:stork-reference.adoc[Stork reference guide],
- the xref:stork.adoc[Stork with Consul reference guide],
- the https://smallrye.io/smallrye-stork[SmallRye Stork website].
// file: docs/asciidoc/modules/ROOT/examples/generated-documentation/apoc.util.sha256.adoc (repo: adam-cowley/neo4j-apoc-procedures, license: Apache-2.0)
¦xref::overview/apoc.util/apoc.util.sha256.adoc[apoc.util.sha256 icon:book[]] +
`apoc.util.sha256([values]) | computes the sha256 of the concatenation of all string values of the list`
¦label:function[]
¦label:apoc-core[]
// file: docs/en-gb/modules/item/partials/position.adoc (repo: plentymarkets/plenty-manual-docs, license: MIT)
Select the attribute's position for displaying it in the attribute overview.
// file: modules/main-rules/pages/main-rules.adoc (repo: GameBrains/e40k-remastered, license: Apache-2.0)
= Main rules
This section contains the essential rules that you need to know to be able to play a game of {project-name} with vehicles and infantry.
Other sections contain rules that build on this one.
Some add giant war engines, aircraft and other things to your games.
Some describe the types of games that you can play, how to set them up, how to choose armies, and other things.
All depend upon the main rules as the foundation for your understanding.
// file: docs_src/index.asciidoc (repo: tajmone/Awesome-List-Asciidoctor, license: CC0-1.0)
= Awesome Lists AsciiDoc Repository Template
Tristano Ajmone <tajmone@gmail.com>
:revnumber: 1.1.0
:lang: en
// Sections & Numbering:
:sectanchors:
// TOC Settings:
:toc-title: Contents
:toclevels: 3
:sectnums!:
ifdef::IsHTML[]
:toc: left
endif::[]
ifdef::env-github[]
:toc: macro
:caution-caption: :fire:
:important-caption: :heavy_exclamation_mark:
:note-caption: :information_source:
:tip-caption: :bulb:
:warning-caption: :warning:
endif::[]
// Misc Settings:
:experimental: true
:reproducible: true
:icons: font
:linkattrs: true
:idprefix:
:idseparator: -
ifdef::IsHTML[]
++++
<!--
include::warn-editing.txt[]
-->
++++
endif::[]
ifdef::IsADoc[]
////
include::warn-editing.txt[]
////
endif::[]
include::preamble.adoc[]
ifdef::env-github[]
'''
toc::[]
'''
endif::[]
:leveloffset: +1
include::sample1.adoc[]
// EOF //
// file: docs/concepts-summary.adoc (repo: rocketraman/gitworkflows, license: MIT)
= Gitworkflow: Concepts Summary
:toc: macro
toc::[]
== Introduction
An earlier article described https://hackernoon.com/how-the-creators-of-git-do-branches-e6fcc57270fb[Gitworkflow]. This
document expands on some of the concepts in a shorter, more digestible form.
== Concepts
=== Topic Branches
Topic branches, sometimes called feature branches, are where almost all development happens. Topics represent something
being worked on, almost always in a branch, like a bug fix, hot fix, or new functionality.
Name topic branches according to some convention. A good one is your initials, a slash, the issue tracker ID, a dash,
and a (very) brief description in camel-case e.g.:
rg/SP-1234-NewFrobnik
The initials ease communication by immediately identifying the owner. The issue and description are short but also
provide useful context.
By branching from `maint` or `master`, we have the flexibility to merge the topic into branches like `next` or `pu`, because those branches will share the branch point as a common ancestor.
If topic B depends on another topic A that has not yet graduated to `master`, topic A may be merged into topic B. This
complicates interactive branch rebases so this should be avoided when possible, however if necessary, git should handle
this situation without too much problem.
Smaller bugfixes and features may be merged directly into `maint` or `master` without going through a stabilization
period in `next` or `pu`. Changes on `maint` are merged upwards into `master`, and changes on `master` are merged
upwards into `next` and `proposed` (though these branches are more often simply rewound and rebuilt).
=== Integration Branches
The integration branches are described here.
Stable releases are cut from `master`, beta or acceptance test releases from `next`, and alpha or preview releases from
`pu`. `maint` simply represents fixes to the previous release. The most stable up-to-date code is on `master`.
Integration branches are exactly like any other branch in git. What makes them “special” is solely in how we have
defined their use.
==== master
The `master` branch is:
* the code that is most up-to-date and stable
and it has the following properties:
* when creating a named new release, it would come from `master`
* in a continuous deployment scenario, production would always run the tip of `master`
* usually deployed to production, released to users
* at some point, the tip of `master` is tagged with vX.Y.0
* the `maint` branch is always based on some commit on `master`
* never rewound or rebased, *always safe to base topics on*
** you may want to add a check in your git server to ensure force push does not happen on this branch
* new topics are almost always based on `master`
* usually contains all of `maint` (and must before a new release, since otherwise there are changes currently in prod
that won’t be included in the new release)
==== next
The `next` branch is:
* the code currently being developed and stabilized for release
and it has the following properties:
* code merged to `next` is generally in fairly good shape, though perhaps there are regressions or other non-obvious
issues
* usually deployed to a testing environment
* at the beginning of every development cycle, rewound to `master`, but otherwise never rewound
* usually contains all of `master` i.e. usually based on the head of `master`
* when a release is made, if a topic branch currently in `next` is not stable enough promotion to `master`, it is
simply not merged to `master` -- instead, it is merged to the next iteration of `next`
* may be rewound to `master` at any time, and rebuilt with topics still in `next` -- but usually after a release
* it is *never* merged into any another branch
* new topics are not usually based on `next`
** one exception: if a topic is not expected to be stable for the next release, and the creator understands that
the branch will need to be rebased when `next` is rewound and rebuilt, this is ok and may result in fewer conflicts
during future rebase than if the topic started from `master`
==== pu
The `pu` branch is:
* “proposed” updates for temporary analysis, experimentation, or initial testing/review of one or more features
** anything else that doesn’t yet belong in `next`
and it has the following properties:
* to test / provide early warning whether the unmerged topic branches integrate cleanly -- if not, some communication
to coordinate changes on topics is necessary
* may be rewound to `master` at any time -- but usually once every day or every second day
* it is *never* merged into any another branch
* new topics are not usually based on `pu`
==== maint (and maint-X.Y)
The `maint` branch is:
* code that is the newest maintained production release
The `maint-X.Y` branches are:
* code that are older, but still maintained, production releases
and they have the following properties:
* usually deployed directly into production, perhaps with some but not extensive testing elsewhere
* after release of `vX.Y.0` is made, `maint` is set to that commit
* releases of `vX.Y.n` are made from `maint` if `X.Y` is current, or `maint-X.Y` if `X.Y` is an older maintained release
* never rewound or rebased, *always safe to base topics on*
** you may want to add a check in your git server to ensure force push does not happen on this branch, with an exception
for when the `maint` branch is moved to the new tip of `master` after a release
* hotfix topics are merged to `maint` directly
* new topics may be based on `maint` (or `maint-X.Y`) if the fix in the topic needs to be applied to that older release
* can be merged to `master` to propagate fixes forward
== The Life of a Topic Branch in Gitworkflow
Development on a topic branch might go something like the following. There are a lot of variations on this basic
structure -- this is just an example.
The following includes topic rebasing to produce a clean series of commits. However, this is not required by
gitworkflow -- just enabled by it.
. Create a topic branch starting from master
. commit - commit - commit
. Push to a remote branch as necessary to back up the work or facilitate discussions with colleagues
. `merge --no-ff` topic to the `pu` integration branch (manually or scripted and scheduled)
.. check for conflicts and do initial review/testing of `pu` build
. Cleanup topic history with an interactive rebase
. Force push to origin topic branch
. `merge --no-ff` topic to a rebuilt `pu` integration branch (manually or scripted and scheduled)
. Code review
. Fix code review comments in separate commits
.. Don't rebase, to simplify the reviewers' work so they know what you have changed, but use `--squash` and `--fixup`
liberally
. After review is completed, interactively rebase the squash and fixup commits to cleanup the topic history
.. Use `--autosquash` to apply the fixup/squash commits
. `merge --no-ff` to the `next` integration branch
. Test, validate and change the topic as necessary
. Merge topic branch to `master` (when release done), tag `master` with release version
. Remove the topic branch locally and remotely
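The branch-and-merge mechanics of steps 1, 13, and 14 can be exercised in a throwaway repository; the branch and file names below are illustrative:

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git checkout -q -b master
git config user.email demo@example.com
git config user.name "Demo User"

echo v1 > app.txt
git add app.txt
git commit -qm "initial commit on master"

# Step 1: a topic branch named by convention (initials/ISSUE-Description)
git checkout -q -b rg/SP-1234-NewFrobnik
echo frobnik > frobnik.txt
git add frobnik.txt
git commit -qm "SP-1234 add frobnik"

# Step 13: graduate the topic to master; --no-ff keeps a merge commit so the
# topic's history stays visible as a unit
git checkout -q master
git merge -q --no-ff -m "Merge branch 'rg/SP-1234-NewFrobnik'" rg/SP-1234-NewFrobnik

# Step 14 would then delete the topic branch: git branch -d rg/SP-1234-NewFrobnik
MERGES=$(git rev-list --merges --count HEAD)
echo "merge commits on master: $MERGES"
```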
// file: docs/topics/secured-spring-boot-clone-booster.adoc (repo: animuk/launcher-documentation, license: CC-BY-4.0)
= Cloning the {SpringBoot} Secured Booster
You need to clone the secured booster, which includes a submodule for the {RHSSO} portion of the booster.
[source,bash,options="nowrap",subs="attributes+"]
----
$ git clone --recurse-submodules https://github.com/snowdrop/spring-boot-http-secured-booster.git
----
// file: docs/partner_editable/product_description.adoc (repo: AutomateVersionControl/quickstart-github-enterprise, license: Apache-2.0)
// Replace the content in <>
// Briefly describe the software. Use consistent and clear branding.
// Include the benefits of using the software on AWS, and provide details on usage scenarios.
GitHub Enterprise is a development and collaboration platform that enables developers to
build and share software easily and effectively. Development teams of all sizes, from small
startups to teams of thousands, use GitHub Enterprise to facilitate their software
development and deployment tasks.
GitHub Enterprise provides the following features:
* *The GitHub Flow*: Developers can use the same asynchronous workflow created by
the open source community to collaborate on projects. This workflow encourages a
culture of experimentation without risk. For more information about the GitHub
Flow, see the GitHub Enterprise website.
* *Integrated platform*: At GitHub, we use GitHub Enterprise across the entire
development process, which enables us to release and deploy our code dozens of
times per day. This platform for continuous integration and deployment enables you
to build and ship better software faster.
* *Transparent collaboration*: Pull requests let developers interactively learn from
one another during the development process. Whether they’re discussing the whole
project or a single line of code, GitHub Enterprise displays the relevant information
in a clean, timeline-style interface.
* *Advanced monitoring*: You can use GitHub Pulse to see a snapshot of everything
that’s happened in your project repository during the past week, or visit the Activity
Dashboard to view graphs that illustrate work across projects. Advanced monitoring
can include Simple Network Management Protocol (SNMP), collectd, and log
forwarding on the appliance as well. For details, see the GitHub Enterprise
documentation.
* *Auditing and compliance*: Over time, your organization might have developed
crucial policies around permissions and security auditing. You can use the Commit
Status API in GitHub Enterprise to specify the unique merge conditions necessary
for your organization’s compliance requirements. GitHub Enterprise also provides
in-depth monitoring and auditing for administrators. For details, see the GitHub
Enterprise documentation.
* *Smarter version control*: GitHub Enterprise is built on Git, which is a distributed
version control system that supports non-linear workflows on projects of all sizes.
// file: website/content/en/status/report-2021-10-2021-12/openssh.adoc (repo: EngrRezCab/freebsd-doc, license: BSD-2-Clause)
=== Base System OpenSSH Update
Links: +
link:https://www.openssh.com/[OpenSSH] URL: link:https://www.openssh.com/[https://www.openssh.com/] +
link:https://lists.freebsd.org/pipermail/freebsd-security/2021-September/010473.html[Announcement to freebsd-security@] URL: link:https://lists.freebsd.org/pipermail/freebsd-security/2021-September/010473.html[https://lists.freebsd.org/pipermail/freebsd-security/2021-September/010473.html]
Contact: Ed Maste <emaste@freebsd.org>
OpenSSH, a suite of remote login and file transfer tools, was updated from
version 8.7p1 to 8.8p1 in the FreeBSD base system.
*NOTE*:
OpenSSH 8.8p1 disables the ssh-rsa signature scheme by default.
For more information please see the
link:https://lists.freebsd.org/pipermail/freebsd-security/2021-September/010473.html[Important
note for future FreeBSD base system OpenSSH update] mailing list post.
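If you still need to reach hosts that only offer ssh-rsa, the upstream release notes describe re-enabling the algorithm per destination in the client configuration. A sketch (the host name is a placeholder):

```
# ~/.ssh/config: opt a single legacy host back into ssh-rsa signatures
Host legacy.example.com
    HostkeyAlgorithms +ssh-rsa
    PubkeyAcceptedAlgorithms +ssh-rsa
```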
OpenSSH supports
link:https://en.wikipedia.org/wiki/FIDO2_Project[FIDO]/link:https://en.wikipedia.org/wiki/Universal_2nd_Factor[U2F]
devices, and support is now enabled in the base system.
Next steps include integrating U2F key devd rules into the base system,
and merging the updated OpenSSH and FIDO/U2F support to stable branches.
Sponsor: The FreeBSD Foundation
// file: heartbeat/docs/autodiscover-kubernetes-config.asciidoc (repo: tetianakravchenko/beats, licenses: ECL-2.0, Apache-2.0)
{beatname_uc} supports templates for modules:
["source","yaml",subs="attributes"]
-------------------------------------------------------------------------------------
heartbeat.autodiscover:
providers:
- type: kubernetes
include_annotations: ["prometheus.io.scrape"]
templates:
- condition:
contains:
kubernetes.annotations.prometheus.io/scrape: "true"
config:
- type: http
hosts: ["${data.host}:${data.port}"]
schedule: "@every 1s"
timeout: 1s
-------------------------------------------------------------------------------------
This configuration launches an `http` module for all containers of pods annotated with `prometheus.io/scrape=true`.
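
For context, a pod matched by the condition above carries the annotation in its metadata; `${data.host}` and `${data.port}` are then resolved from the pod's exposed endpoint. A minimal manifest might look like this (the pod name, image, and port are placeholders):

[source,yaml]
----
apiVersion: v1
kind: Pod
metadata:
  name: metrics-app                  # placeholder name
  annotations:
    prometheus.io/scrape: "true"     # satisfies the template condition
spec:
  containers:
    - name: app
      image: example/metrics-app:1.0 # placeholder image
      ports:
        - containerPort: 8080        # resolved as ${data.port}
----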
//////////////////////////////////////////
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
//////////////////////////////////////////
= Numbers
:gdk: http://www.groovy-lang.org/gdk.html[Groovy development kit]
Groovy supports different kinds of integral literals and decimal literals, backed by the usual `Number` types of Java.
== Integral literals
The integral literal types are the same as in Java:
* `byte`
* `char`
* `short`
* `int`
* `long`
* `java.math.BigInteger`
You can create integral numbers of those types with the following declarations:
[source,groovy]
----
include::../test/SyntaxTest.groovy[tags=int_decl,indent=0]
----
If you use optional typing by using the `def` keyword, the type of the integral number will vary:
it'll adapt to the capacity of the type that can hold that number.
For positive numbers:
[source,groovy]
----
include::../test/SyntaxTest.groovy[tags=wide_int_positive,indent=0]
----
As well as for negative numbers:
[source,groovy]
----
include::../test/SyntaxTest.groovy[tags=wide_int_negative,indent=0]
----
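
In short, a literal that no longer fits the smaller type is widened to the next one:

[source,groovy]
----
def a = 1
assert a instanceof Integer

def b = 2147483648            // Integer.MAX_VALUE + 1
assert b instanceof Long

def c = 9223372036854775808   // Long.MAX_VALUE + 1
assert c instanceof BigInteger
----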
=== Alternative non-base 10 representations
Numbers can also be represented in binary, octal, hexadecimal and decimal bases.
==== Binary literal
Binary numbers start with a `0b` prefix:
[source,groovy]
----
include::../test/SyntaxTest.groovy[tags=binary_literal_example,indent=0]
----
==== Octal literal
Octal numbers are specified in the typical format of `0` followed by octal digits.
[source,groovy]
----
include::../test/SyntaxTest.groovy[tags=octal_literal_example,indent=0]
----
==== Hexadecimal literal
Hexadecimal numbers are specified in the typical format of `0x` followed by hex digits.
[source,groovy]
----
include::../test/SyntaxTest.groovy[tags=hexadecimal_literal_example,indent=0]
----
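
All three alternative representations denote ordinary integral values:

[source,groovy]
----
assert 0b1010 == 10   // binary
assert 017 == 15      // octal
assert 0xFF == 255    // hexadecimal
----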
== Decimal literals
The decimal literal types are the same as in Java:
* `float`
* `double`
* `java.math.BigDecimal`
You can create decimal numbers of those types with the following declarations:
[source,groovy]
----
include::../test/SyntaxTest.groovy[tags=float_decl,indent=0]
----
Decimals can use exponents, with the `e` or `E` exponent letter, followed by an optional sign,
and an integral number representing the exponent:
[source,groovy]
----
include::../test/SyntaxTest.groovy[tags=float_exp,indent=0]
----
Conveniently for exact decimal number calculations, Groovy chooses `java.math.BigDecimal` as its decimal number type.
In addition, both `float` and `double` are supported, but require an explicit type declaration, type coercion or suffix.
Even if `BigDecimal` is the default for decimal numbers, such literals are accepted in methods or closures taking `float` or `double` as parameter types.
NOTE: Decimal numbers can't be represented using a binary, octal or hexadecimal representation.
== Underscore in literals
When writing long literal numbers, it’s harder on the eye to figure out how some numbers are grouped together, for example with groups of thousands, of words, etc. By allowing you to place underscore in number literals, it’s easier to spot those groups:
[source,groovy]
----
include::../test/SyntaxTest.groovy[tags=underscore_in_number_example,indent=0]
----
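
The underscores have no effect on the value — they only aid readability:

[source,groovy]
----
long creditCardNumber = 1234_5678_9012_3456L
assert creditCardNumber == 1234567890123456L

long hexBytes = 0xFF_EC_DE_5E
assert hexBytes == 0xFFECDE5E
----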
== Number type suffixes
We can force a number (including binary, octal, and hexadecimal literals) to have a specific type by giving a suffix (see table below), either uppercase or lowercase.
[cols="1,2" options="header"]
|====
|Type
|Suffix
|BigInteger
|`G` or `g`
|Long
|`L` or `l`
|Integer
|`I` or `i`
|BigDecimal
|`G` or `g`
|Double
|`D` or `d`
|Float
|`F` or `f`
|====
Examples:
[source,groovy]
----
include::../test/SyntaxTest.groovy[tags=number_type_suffixes_example,indent=0]
----
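
To spot-check a few of the suffixes from the table:

[source,groovy]
----
assert 42I  instanceof Integer
assert 123L instanceof Long
assert 456G instanceof BigInteger
assert 2.5F instanceof Float
assert 2.5D instanceof Double
assert 2.5G instanceof BigDecimal
----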
== Math operations
Although <<{core-operators}#groovy-operators,operators>> are covered in more detail elsewhere, it's important to discuss the behavior of math operations
and what their resulting types are.
Division and power binary operations aside (covered below),
* binary operations between `byte`, `char`, `short` and `int` result in `int`
* binary operations involving `long` with `byte`, `char`, `short` and `int` result in `long`
* binary operations involving `BigInteger` and any other integral type result in `BigInteger`
* binary operations involving `BigDecimal` with `byte`, `char`, `short`, `int` and `BigInteger` result in `BigDecimal`
* binary operations between `float`, `double` and `BigDecimal` result in `double`
* binary operations between two `BigDecimal` result in `BigDecimal`
The following table summarizes those rules:
[cols="10" options="header"]
|====
|
| byte
| char
| short
| int
| long
| BigInteger
| float
| double
| BigDecimal
| *byte*
| int
| int
| int
| int
| long
| BigInteger
| double
| double
| BigDecimal
| *char*
|
| int
| int
| int
| long
| BigInteger
| double
| double
| BigDecimal
| *short*
|
|
| int
| int
| long
| BigInteger
| double
| double
| BigDecimal
| *int*
|
|
|
| int
| long
| BigInteger
| double
| double
| BigDecimal
| *long*
|
|
|
|
| long
| BigInteger
| double
| double
| BigDecimal
| *BigInteger*
|
|
|
|
|
| BigInteger
| double
| double
| BigDecimal
| *float*
|
|
|
|
|
|
| double
| double
| double
| *double*
|
|
|
|
|
|
|
| double
| double
| *BigDecimal*
|
|
|
|
|
|
|
|
| BigDecimal
|====
[NOTE]
Thanks to Groovy's operator overloading, the usual arithmetic operators work as well with `BigInteger` and `BigDecimal`,
unlike in Java where you have to use explicit methods for operating on those numbers.
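
A few spot checks of the table — note that `BigDecimal` wins over the integral types, while `float` and `double` win over `BigDecimal`:

[source,groovy]
----
byte  b = 1
short s = 2
assert (b + s)    instanceof Integer     // byte + short  -> int
assert (b + 2L)   instanceof Long        // byte + long   -> long
assert (1 + 1G)   instanceof BigInteger  // int + BigInteger
assert (1 + 1.0)  instanceof BigDecimal  // int + BigDecimal
assert (1.0 + 1d) instanceof Double      // BigDecimal + double -> double
----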
[[integer_division]]
=== The case of the division operator
The division operators `/` (and `/=` for division and assignment) produce a `double` result
if either operand is a `float` or `double`, and a `BigDecimal` result otherwise
(when both operands are any combination of an integral type `short`, `char`, `byte`, `int`, `long`,
`BigInteger` or `BigDecimal`).
`BigDecimal` division is performed with the `divide()` method if the division is exact
(i.e. yielding a result that can be represented within the bounds of the same precision and scale),
or using a `MathContext` with a http://docs.oracle.com/javase/7/docs/api/java/math/BigDecimal.html#precision()[precision]
of the maximum of the two operands' precision plus an extra precision of 10,
and a http://docs.oracle.com/javase/7/docs/api/java/math/BigDecimal.html#scale()[scale]
of the maximum of 10 and the maximum of the operands' scale.
[NOTE]
For integer division like in Java, you should use the `intdiv()` method,
as Groovy doesn't provide a dedicated integer division operator symbol.
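
In other words:

[source,groovy]
----
assert 1 / 2 == 0.5
assert (1 / 2)  instanceof BigDecimal  // integral operands -> BigDecimal
assert (1 / 2d) instanceof Double      // a double operand  -> double
assert 1.intdiv(2) == 0                // Java-style integer division
----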
[[power_operator]]
=== The case of the power operator
The power operation is represented by the `**` operator, with two parameters: the base and the exponent.
The result of the power operation depends on its operands, and the result of the operation
(in particular if the result can be represented as an integral value).
The following rules are used by Groovy's power operation to determine the resulting type:
* If the exponent is a decimal value
** if the result can be represented as an `Integer`, then return an `Integer`
** else if the result can be represented as a `Long`, then return a `Long`
** otherwise return a `Double`
* If the exponent is an integral value
** if the exponent is strictly negative, then return an `Integer`, `Long` or `Double` if the result value fits in that type
** if the exponent is positive or zero
*** if the base is a `BigDecimal`, then return a `BigDecimal` result value
*** if the base is a `BigInteger`, then return a `BigInteger` result value
*** if the base is an `Integer`, then return an `Integer` if the result value fits in it, otherwise a `BigInteger`
*** if the base is a `Long`, then return a `Long` if the result value fits in it, otherwise a `BigInteger`
We can illustrate those rules with a few examples:
[source,groovy]
----
include::../test/SyntaxTest.groovy[tags=number_power,indent=0]
----
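
A few more spot checks of those rules:

[source,groovy]
----
assert 2 ** 3 == 8
assert (2 ** 3)   instanceof Integer     // fits in an Integer
assert (2 ** 31)  instanceof BigInteger  // Integer base, result no longer fits
assert 2 ** -1 == 0.5
assert (2 ** -1)  instanceof Double      // negative exponent, non-integral result
assert (2.0 ** 2) instanceof BigDecimal  // BigDecimal base, positive exponent
----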
:markup-in-source: verbatim,attributes,quotes
== Building your Bookbag Image
Create the bookbag BuildConfig and ImageStream:
[source,subs="{markup-in-source}"]
----
$ *oc process -f build-template.yaml | oc apply -f -*
----
Build your image from local source or directly from Git source.
To build from local source:
[source,subs="{markup-in-source}"]
----
$ *oc start-build bookbag --follow --from-dir=.*
----
Build your Bookbag Image from Git (make sure you have set `GIT_REPO` in your build template!):
[source,subs="{markup-in-source}"]
----
$ *oc start-build bookbag --follow*
----
== Test Deploy of the Bookbag image
. Define a variables file, `workshop-vars.json`, with the variables for testing your lab content, as discussed previously.
. Process the deploy template with your `workshop-vars.json`:
+
[source,subs="{markup-in-source}"]
----
$ *oc process -f deploy-template.yaml -p WORKSHOP_VARS="$(cat workshop-vars.json)" | oc apply -f -*
----
. Get the Bookbag's Route:
+
[source,subs="{markup-in-source}"]
----
$ *oc get route bookbag*
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
bookbag bookbag-bookbag-demo.apps.ocp.example.com bookbag 10080-tcp edge/Redirect None
----
Use the Route hostname to open the Bookbag page in your browser.
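
For a quick check from the command line instead, you can read the environment-specific hostname from the route (`-k` is only needed for self-signed certificates):

[source,subs="{markup-in-source}"]
----
$ *curl -k "https://$(oc get route bookbag -o jsonpath='{.spec.host}')"*
----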
:ref_current: https://www.elastic.co/guide/en/elasticsearch/reference/2.4
:xpack_current: https://www.elastic.co/guide/en/x-pack/2.4
:github: https://github.com/elastic/elasticsearch-net
:nuget: https://www.nuget.org/packages
////
IMPORTANT NOTE
==============
This file has been generated from https://github.com/elastic/elasticsearch-net/tree/2.x/src/Tests/ClientConcepts/HighLevel/CovariantHits/CovariantSearchResults.doc.cs.
If you wish to submit a PR for any spelling mistakes, typos or grammatical errors for this file,
please modify the original csharp file found at the link and submit the PR with that change. Thanks!
////
[[covariant-search-results]]
=== Covariant search results
NEST directly supports returning covariant result sets.
Meaning a result can be typed to an interface or base class
but the actual instance type of the result can be that of the subclass directly
Let's look at an example; Imagine we want to search over multiple types that all implement
`ISearchResult`
[source,csharp]
----
public interface ISearchResult
{
string Name { get; set; }
}
----
We have three implementations of `ISearchResult` namely `A`, `B` and `C`
[source,csharp]
----
public class A : ISearchResult
{
public string Name { get; set; }
public int PropertyOnA { get; set; }
}
public class B : ISearchResult
{
public string Name { get; set; }
public int PropertyOnB { get; set; }
}
public class C : ISearchResult
{
public string Name { get; set; }
public int PropertyOnC { get; set; }
}
----
==== Using types
The most straightforward way to search over multiple types is to
type the response to the parent interface or base class
and pass the actual types we want to search over using `.Type()`
[source,csharp]
----
var result = this._client.Search<ISearchResult>(s => s
.Type(Types.Type(typeof(A), typeof(B), typeof(C)))
.Size(100)
);
----
NEST will translate this to a search over `/index/a,b,c/_search`;
hits that have `"_type" : "a"` will be serialized to `A` and so forth
Here we assume our response is valid and that we received the 100 documents
we are expecting. Remember `result.Documents` is an `IReadOnlyCollection<ISearchResult>`
[source,csharp]
----
result.ShouldBeValid();
result.Documents.Count().Should().Be(100);
----
To prove the returned result set is covariant we filter the documents based on their
actual type and assert the returned subsets are the expected sizes
[source,csharp]
----
var aDocuments = result.Documents.OfType<A>();
var bDocuments = result.Documents.OfType<B>();
var cDocuments = result.Documents.OfType<C>();
aDocuments.Count().Should().Be(25);
bDocuments.Count().Should().Be(25);
cDocuments.Count().Should().Be(50);
----
and assume that properties that only exist on the subclass itself are properly filled
[source,csharp]
----
aDocuments.Should().OnlyContain(a => a.PropertyOnA > 0);
bDocuments.Should().OnlyContain(a => a.PropertyOnB > 0);
cDocuments.Should().OnlyContain(a => a.PropertyOnC > 0);
----
==== Using ConcreteTypeSelector
A more low-level approach is to inspect the hit yourself and determine the CLR type to deserialize to
[source,csharp]
----
var result = this._client.Search<ISearchResult>(s => s
.ConcreteTypeSelector((d, h) => h.Type == "a" ? typeof(A) : h.Type == "b" ? typeof(B) : typeof(C))
.Size(100)
);
----
here for each hit we'll call the delegate passed to `ConcreteTypeSelector` where
* `d` is a representation of the `_source` exposed as a `dynamic` type
* a typed `h` which represents the encapsulating hit of the source i.e. `Hit<dynamic>`
Here we assume our response is valid and that we received the 100 documents
we are expecting. Remember `result.Documents` is an `IReadOnlyCollection<ISearchResult>`
[source,csharp]
----
result.ShouldBeValid();
result.Documents.Count().Should().Be(100);
----
To prove the returned result set is covariant we filter the documents based on their
actual type and assert the returned subsets are the expected sizes
[source,csharp]
----
var aDocuments = result.Documents.OfType<A>();
var bDocuments = result.Documents.OfType<B>();
var cDocuments = result.Documents.OfType<C>();
aDocuments.Count().Should().Be(25);
bDocuments.Count().Should().Be(25);
cDocuments.Count().Should().Be(50);
----
and assume that properties that only exist on the subclass itself are properly filled
[source,csharp]
----
aDocuments.Should().OnlyContain(a => a.PropertyOnA > 0);
bDocuments.Should().OnlyContain(a => a.PropertyOnB > 0);
cDocuments.Should().OnlyContain(a => a.PropertyOnC > 0);
----
==== Using CovariantTypes
The Scroll API is a continuation of the previous Search example so Types() are lost.
You can hint at the types using `.CovariantTypes()`
[source,csharp]
----
var result = this._client.Scroll<ISearchResult>(TimeSpan.FromMinutes(60), "scrollId", s => s
.CovariantTypes(Types.Type(typeof(A), typeof(B), typeof(C)))
);
----
NEST will translate this to a search over `/index/a,b,c/_search`;
hits that have `"_type" : "a"` will be serialized to `A` and so forth
Here we assume our response is valid and that we received the 100 documents
we are expecting. Remember `result.Documents` is an `IReadOnlyCollection<ISearchResult>`
[source,csharp]
----
result.ShouldBeValid();
result.Documents.Count().Should().Be(100);
----
To prove the returned result set is covariant we filter the documents based on their
actual type and assert the returned subsets are the expected sizes
[source,csharp]
----
var aDocuments = result.Documents.OfType<A>();
var bDocuments = result.Documents.OfType<B>();
var cDocuments = result.Documents.OfType<C>();
aDocuments.Count().Should().Be(25);
bDocuments.Count().Should().Be(25);
cDocuments.Count().Should().Be(50);
----
and assume that properties that only exist on the subclass itself are properly filled
[source,csharp]
----
aDocuments.Should().OnlyContain(a => a.PropertyOnA > 0);
bDocuments.Should().OnlyContain(a => a.PropertyOnB > 0);
cDocuments.Should().OnlyContain(a => a.PropertyOnC > 0);
----
The more low-level concrete type selector can also be specified on scroll
[source,csharp]
----
var result = this._client.Scroll<ISearchResult>(TimeSpan.FromMinutes(1), "scrollid", s => s
.ConcreteTypeSelector((d, h) => h.Type == "a" ? typeof(A) : h.Type == "b" ? typeof(B) : typeof(C))
);
----
As before, within the delegate passed to `.ConcreteTypeSelector`
* `d` is the `_source` typed as `dynamic`
* `h` is the encapsulating typed hit
Here we assume our response is valid and that we received the 100 documents
we are expecting. Remember `result.Documents` is an `IReadOnlyCollection<ISearchResult>`
[source,csharp]
----
result.ShouldBeValid();
result.Documents.Count().Should().Be(100);
----
To prove the returned result set is covariant we filter the documents based on their
actual type and assert the returned subsets are the expected sizes
[source,csharp]
----
var aDocuments = result.Documents.OfType<A>();
var bDocuments = result.Documents.OfType<B>();
var cDocuments = result.Documents.OfType<C>();
aDocuments.Count().Should().Be(25);
bDocuments.Count().Should().Be(25);
cDocuments.Count().Should().Be(50);
----
and assume that properties that only exist on the subclass itself are properly filled
[source,csharp]
----
aDocuments.Should().OnlyContain(a => a.PropertyOnA > 0);
bDocuments.Should().OnlyContain(a => a.PropertyOnB > 0);
cDocuments.Should().OnlyContain(a => a.PropertyOnC > 0);
----
= Migrating API Proxies
ifndef::env-site,env-github[]
include::_attributes.adoc[]
endif::[]
[[performing_migration]]
== Performing a Proxy Migration
To perform a proxy migration, this example uses the Mule Migration Assistant
(MMA).
The procedure requires that you meet the https://github.com/mulesoft/mule-migration-assistant/blob/master/docs/user-docs/migration-tutorial.adoc#prerequisites[Prerequisites] for the tool. For complete user documentation on MMA, see https://github.com/mulesoft/mule-migration-assistant/blob/master/docs/migration-intro.adoc[Migration to Mule 4 (on GitHub)].
The MMA command for migrating proxies uses the standard
https://github.com/mulesoft/mule-migration-assistant/blob/master/docs/user-docs/migration-tool-procedure.adoc#options[command-line options (on GitHub)] for migrating
a Mule app:
.Command-line Invocation:
[source,console,linenums]
----
$ java -jar mule-migration-assistant-runner-0.5.1.jar \
-projectBasePath /Users/me/AnypointStudio/workspace-studio/my-mule3-proxy/ \
-muleVersion 4.1.5 \
-destinationProjectBasePath /Users/me/my-dir/my-migrated-proxy
----
Note that the MMA creates the directory for the migrated project through
the `-destinationProjectBasePath` option. The `my-migrated-proxy` must _not_
exist before you invoke the command. If you point to a folder that exists
already, the migration fails with an error like this:
`Exception: Destination folder already exists.`
When the migrator runs successfully, you see a message like this:
.Successful Migration
[source,console,linenums]
----
Executing migration...
...
========================================================
MIGRATION SUCCESS
========================================================
Total time: 11.335 s
Migration report:
/Users/me/my-dir/my-migrated-proxy/report/summary.html
----
After migration completes successfully, the destination folder contains:
* A proxy POM file.
* The `report` directory containing the
https://github.com/mulesoft/mule-migration-assistant/blob/master/docs/user-docs/migration-report.adoc[Mule Migration Report (on GitHub)] (`summary.html`).
Note that the same information provided in the report can be found as comments
in the proxy XML file.
* The `mule-artifact.json` file, with a `minMuleVersion` value that matches the
`-muleVersion` value set in the MMA command.
* The `src` directory, which contains the migrated content.
The `src` directory contains the subdirectories `main` and `test`. Inside `main`, the `mule` directory
contains the proxy XML file. At the same level as the `mule` directory, the MMA creates a
`resources` directory for any DataWeave files or other files that the migrated proxy needs.
Note that the configuration file defined in the proxy XML must be present in the `resources` directory
for the artifact to be deployed correctly. The `test` directory contains test configuration files.
After a successful migration, you need to modify the POM file as explained in
<<pom_migration>>. Once the POM file has the correct organization ID, you can
compile with `mvn clean install`. If the compilation is successful, you can
upload the migrated proxy to Exchange using `mvn clean deploy`. You can find
a more detailed explanation of uploading a custom app in
https://docs.mulesoft.com/exchange/to-publish-assets-maven[Publish and Deploy Exchange Assets Using Maven].
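
Publishing with Maven also assumes the POM declares where `deploy` should push the artifact. A sketch of the `distributionManagement` section — the repository `id` here is a placeholder and must match a `<server>` entry carrying your Anypoint credentials in `settings.xml`:

[source,xml]
----
<distributionManagement>
    <repository>
        <id>exchange-repository</id> <!-- placeholder; must match settings.xml -->
        <name>Anypoint Exchange</name>
        <url>${exchange.url}</url>
        <layout>default</layout>
    </repository>
</distributionManagement>
----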
[[pom_migration]]
=== POM Migration
The POM file migration modifies the file to include the elements necessary for
uploading the custom proxy to Exchange.
_After_ the migration:
* Replace the `{orgId}` value in the `<groupId/>` and `<exchange.url/>` elements with the organization ID found in Anypoint Platform.
For more information on how to obtain the organization ID, see
xref:api-manager::custom-policy-uploading-to-exchange.adoc[Uploading a Custom Policy to Exchange].
By default, the Exchange URL is set to the production environment.
[source,xml,linenums]
----
<groupId>{orgId}</groupId>
<properties>
<exchange.url>https://maven.anypoint.mulesoft.com/api/v1/organizations/{orgId}/maven</exchange.url>
</properties>
----
Note that for the EU version of the URL, you need to set the URL manually
after the migration:
* URL template for EU:
`+https://maven.eu1.anypoint.mulesoft.com/api/v1/organizations/{orgId}/maven+`
If the MMA does not find a POM file for the proxy, MMA will create a new POM
file. In this file, the `artifactId` for the proxy is the name of the
base directory for the proxy app. For example, for a proxy in
`mytestproxy/src/main/app/proxy.xml`, the `artifactId` is `mytestproxy`.
[[dependency_and_plugin_versions]]
=== Dependency and Plugin Versions
Dependency and plugin versions are set by default by the MMA, and you can change
them manually, as needed.
[[not_migrated_elements]]
=== Un-Migrated Elements
Several elements are not migrated from Mule 3 to Mule 4:
[cols="2,5",options="header"]
|===
|Element | Reason
| `api-platform-gw:tags` | API auto-create is disabled in Mule 4.
| `api-platform-gw:description` | API auto-create is disabled in Mule 4.
|===
For each of these elements, the child elements are removed, as well.
[[common_migration_issues]]
=== Common Migration Issues
If proxy files are not found during the migration, the MMA prints a message
like this one:
.Unsuccessful Migration
[source,console,linenums]
----
Executing migration...
...
===============================================================================
MIGRATION FAILED
===============================================================================
Total time: 3.008 s
Exception: Cannot read mule project. Is it a Mule Studio project?
com.mulesoft.tools.migration.engine.exception.MigrationJobException: Cannot read mule project. Is it a Mule Studio project?
at com.mulesoft.tools.migration.engine.project.MuleProjectFactory.getMuleProject(MuleProjectFactory.java:50)
at com.mulesoft.tools.migration.engine.MigrationJob.generateSourceApplicationModel(MigrationJob.java:116)
at com.mulesoft.tools.migration.engine.MigrationJob.execute(MigrationJob.java:80)
at com.mulesoft.tools.migration.MigrationRunner.main(MigrationRunner.java:83)
===============================================================================
----
[[pom_migration_issues]]
=== POM Migration Issues
If the MMA does not find the POM model for the proxy, the MMA will
either generate the model from an existing POM in Mule 3, or if there is no
Mule 3 POM, the MMA will create the model. If the MMA uses an existing POM, any
incorrect POM definition that the MMA encounters causes the POM model creation
process to fail. To diagnose a POM model failure, check for any preceding
error messages about the MMA steps that modify the POM model.
[[raml_proxy_migration]]
=== RAML Proxy Migration
RAML proxy elements are migrated to REST proxy elements:
[cols="2,5",options="header"]
|===
|Mule 3 | Mule 4
| `proxy:raml-proxy-config`. | `rest-validator:config`.
| `proxy:raml`. | `rest-validator:validate-request`.
| `apikit:mapping-exception-strategy`. | `error-handler` with `on-error-continue` elements in the flow containing the `rest-validator:validate-request` element.
|===
The `error-handler` element contains the following types for
`on-error-continue` elements:
[cols="2,5",options="header"]
|===
|Type | Status Code
| REST-VALIDATOR:BAD_REQUEST | 400
| REST-VALIDATOR:RESOURCE_NOT_FOUND | 404
| REST-VALIDATOR:METHOD_NOT_ALLOWED | 405
| HTTP:TIMEOUT | 504
|===
Exceptions that are not found in the Mule 3 element are autocompleted
by the Mule Migration Assistant.
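
As a sketch of the resulting structure — the handler bodies shown here are illustrative, not the literal MMA output:

[source,xml]
----
<error-handler>
    <on-error-continue type="REST-VALIDATOR:BAD_REQUEST">
        <set-variable variableName="statusCode" value="400" />
    </on-error-continue>
    <on-error-continue type="REST-VALIDATOR:RESOURCE_NOT_FOUND">
        <set-variable variableName="statusCode" value="404" />
    </on-error-continue>
    <on-error-continue type="HTTP:TIMEOUT">
        <set-variable variableName="statusCode" value="504" />
    </on-error-continue>
</error-handler>
----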
[[wsdl_proxy_migration]]
=== WSDL Proxy Migration
WSDL proxy migration consists of migrating the attribute values for WSDL properties. In Mule 3, most values are extracted using the function defined as attribute value. In Mule 4, there is no equivalent process, so most properties are migrated to a default value and require manual configuration with values from the WSDL file, which start with the `service` keyword.
Some properties are renamed to extract the value in a more transparent way, for example:
[cols="2,5",options="header"]
|===
|Mule 3 | Mule 4
| `![wsdl(p['wsdl.uri']).services[0].preferredPort.name]` | `${service.port}`
| `![wsdl(p['wsdl.uri']).targetNamespace]` | `${service.namespace}`
| `![wsdl(p['wsdl.uri']).services[0].name]` | `${service.name}`
|===
Other properties, such as the following, are migrated to a DataWeave expression that uses the WSDL Functions Extension dependency, provided by MMA, to extract the expected values:
[cols="2,5",options="header"]
|===
|Mule 3 | Mule 4
| `![wsdl(p['wsdl.uri']).services[0].preferredPort.addresses[0].port]` | `#[Wsdl::getPort('${wsdl.uri}','${service.name}','${service.port}')]`
| `![wsdl(p['wsdl.uri']).services[0].preferredPort.addresses[0].host]` | `#[Wsdl::getHost('${wsdl.uri}','${service.name}','${service.port}')]`
| `![wsdl(p['wsdl.uri']).services[0].preferredPort.addresses[0].path]` |`#[Wsdl::getPath('${wsdl.uri}','${service.name}','${service.port}')]`
|===
== See Also
xref:migration-api-gateways.adoc[Migrating API Gateways]
xref:migration-api-gateways-autodiscovery.adoc[Migrating API Gateways: Autodiscovery]
xref:migration-api-gateways-runtime-config.adoc[Migrating API Gateways: Mule Runtime Configuration]
xref:migration-core.adoc[Core Components Migration]
https://docs.mulesoft.com/exchange/to-publish-assets-maven[Publish and Deploy Exchange Assets Using Maven]
== ASP.NET Core sample application with Nowin
This is an ASP.NET Core sample application with an Angular frontend. The sample application demonstrates authenticating against the 10Duke Identity Service and querying for user info. The backend is a basic ASP.NET Core application without MVC. It uses https://github.com/Bobris/Nowin[Nowin] as an OWIN web server. Data requests from the browser frontend are handled by a simple OWIN middleware that calls the 10Duke Identity Service for querying user info.
| 126.25 | 457 | 0.815842 |
8352a03822401e4585296775a57e327211226d48 | 4,839 | asciidoc | AsciiDoc | docs/plugins.asciidoc | ohsu-computational-biology/kibana | b3e81b8ded1b15e4692f086a6f14266c162aa2d9 | [
"Apache-2.0"
] | null | null | null | docs/plugins.asciidoc | ohsu-computational-biology/kibana | b3e81b8ded1b15e4692f086a6f14266c162aa2d9 | [
"Apache-2.0"
] | null | null | null | docs/plugins.asciidoc | ohsu-computational-biology/kibana | b3e81b8ded1b15e4692f086a6f14266c162aa2d9 | [
"Apache-2.0"
] | null | null | null | [[kibana-plugins]]
== Kibana Plugins added[4.2]
Add-on functionality for Kibana is implemented with plug-in modules. You can use the `bin/kibana plugin`
command to manage these modules. You can also install a plugin manually by moving the plugin file to the
`installedPlugins` directory and unpacking the plugin files into a new directory.
[float]
=== Installing Plugins
Use the following command to install a plugin:
[source,shell]
bin/kibana plugin --install <org>/<package>/<version>
You can also use `-i` instead of `--install`, as in the following example:
[source,shell]
bin/kibana plugin -i elasticsearch/marvel/latest
Because the organization given is `elasticsearch`, the plugin management tool automatically downloads the
plugin from `download.elastic.co`.
[float]
=== Installing Plugins from GitHub
When the specified plugin is not found at `download.elastic.co`, the plugin management tool parses the element
as a GitHub user name, as in the following example:
[source,shell]
bin/kibana plugin --install github-user/sample-plugin
Installing sample-plugin
Attempting to extract from https://download.elastic.co/github-user/sample-plugin/sample-plugin-latest.tar.gz
Attempting to extract from https://github.com/github-user/sample-plugin/archive/master.tar.gz
Downloading <some number> bytes....................
Extraction complete
Optimizing and caching browser bundles...
Plugin installation complete
[float]
=== Installing Plugins from an Arbitrary URL
You can specify a URL to a plugin with the `-u` or `--url` options after the `-i` or `--install` option, as in the
following example:
[source,shell]
bin/kibana plugin -i sample-plugin -u https://some.sample.url/directory
Installing sample-plugin
Attempting to extract from https://some.sample.url/directory
Downloading <some number> bytes....................
Extraction complete
Optimizing and caching browser bundles...
Plugin installation complete
You can specify URLs that use the HTTP, HTTPS, or `file` protocols.
[float]
=== Installing Plugins to an Arbitrary Directory
Use the `-d` or `--plugin-dir` option to specify a directory for plugins, as in the following example:
[source,shell]
bin/kibana plugin -i elasticsearch/sample-plugin/latest -d <path/to/directory>
Installing sample-plugin
Attempting to extract from https://download.elastic.co/elasticsearch/sample-plugin/sample-plugin-latest.tar.gz
Downloading <some number> bytes....................
Extraction complete
Optimizing and caching browser bundles...
Plugin installation complete
NOTE: This command creates the specified directory if it does not already exist.
[float]
=== Removing Plugins
Use the `--remove` or `-r` option to remove a plugin, including any configuration information, as in the following
example:
[source,shell]
bin/kibana plugin --remove marvel
You can also remove a plugin manually by deleting the plugin's subdirectory under the `installedPlugins` directory.
[float]
=== Updating Plugins
To update a plugin, remove the current version and reinstall the plugin.
[float]
=== Configuring the Plugin Manager
By default, the plugin manager provides you with feedback on the status of the activity you've asked the plugin manager
to perform. You can control the level of feedback with the `--quiet` and `--silent` options. Use the `--quiet` option to
suppress all non-error output. Use the `--silent` option to suppress all output.
By default, plugin manager requests do not time out. Use the `--timeout` option, followed by a time, to change this
behavior, as in the following examples:
[source,shell]
.Waits for 30 seconds before failing
bin/kibana plugin --install username/sample-plugin --timeout 30s
[source,shell]
.Waits for 1 minute before failing
bin/kibana plugin --install username/sample-plugin --timeout 1m
[float]
==== Plugins and Custom Kibana Configurations
Use the `-c` or `--config` options to specify the path to the configuration file used to start Kibana. By default, Kibana
uses the configuration file `config/kibana.yml`. When you change your installed plugins, the `bin/kibana plugin` command
restarts the Kibana server. When you are using a customized configuration file, you must specify the
path to that configuration file each time you use the `bin/kibana plugin` command.
[float]
=== Plugin Manager Exit Codes
[horizontal]
0:: Success
64:: Unknown command or incorrect option parameter
74:: I/O error
70:: Other error
[[plugin-switcher]]
== Switching Plugin Functionality
The Kibana UI serves as a framework that can contain several different plugins. You can switch between these
plugins by clicking the image:images/app-button.png[Plugin Chooser] *Plugin chooser* button to display icons for the
installed plugins:
image::images/app-picker.png[]
Click a plugin's icon to switch to that plugin's functionality.
| 36.11194 | 122 | 0.771234 |
0d4318bd2b63bbe2b8b668d89ad3af7e33618a46 | 645 | adoc | AsciiDoc | content/bpm/adoc/en/quick_start/qs_data_model_creation.adoc | crontabpy/documentation | 45c57a42ff729207b1967241003bd7b747361552 | [
"CC-BY-4.0"
] | null | null | null | content/bpm/adoc/en/quick_start/qs_data_model_creation.adoc | crontabpy/documentation | 45c57a42ff729207b1967241003bd7b747361552 | [
"CC-BY-4.0"
] | null | null | null | content/bpm/adoc/en/quick_start/qs_data_model_creation.adoc | crontabpy/documentation | 45c57a42ff729207b1967241003bd7b747361552 | [
"CC-BY-4.0"
] | null | null | null | :sourcesdir: ../../../source
[[qs_data_model_creation]]
=== Creating the Data Model
Go to the *Data model* tab and press *New > Entity*. The class name is `Contract`.
.Create Contract Entity
image::CreateContractEntity.png[align="center"]
Create the following entity attributes:
* `number` (`String` type)
* `date` (`Date` type)
* `state` (`String` type)
.Contract Entity Attributes
image::ContractEntityAttributes.png[align="center"]
Go to the *Instance name* tab and add the `number` attribute to *Name pattern attributes*.
.Name Pattern
image::ContractEntityNamePattern.png[align="center"]
Press the *OK* button to save the entity.
| 23.888889 | 90 | 0.734884 |
2d1d09212c4adadd33cf791e03a7f0f0f65cd3f8 | 1,212 | adoc | AsciiDoc | modules/ROOT/pages/ts-no-T1-acks.adoc | mulesoft/docs-partner-manager | 1d66af001bf10dcad72042f5656d8ce04f0b4419 | [
"BSD-3-Clause"
] | 2 | 2019-04-14T22:43:09.000Z | 2021-03-31T09:00:25.000Z | modules/ROOT/pages/ts-no-T1-acks.adoc | mulesoft/docs-partner-manager | 1d66af001bf10dcad72042f5656d8ce04f0b4419 | [
"BSD-3-Clause"
] | 24 | 2018-09-28T19:04:14.000Z | 2022-02-10T15:10:44.000Z | modules/ROOT/pages/ts-no-T1-acks.adoc | mulesoft/docs-partner-manager | 1d66af001bf10dcad72042f5656d8ce04f0b4419 | [
"BSD-3-Clause"
] | 10 | 2018-09-27T22:05:08.000Z | 2022-02-25T10:46:18.000Z | = Missing TA1 acknowledgments for my transmissions
ifndef::env-site,env-github[]
include::_attributes.adoc[]
endif::[]
Anypoint Partner Manager sends TA1 acknowledgments for a received transmission if configured to do so.
Your trading partner might contact you to tell you that they are not receiving TA1 acknowledgments.
== Causes
Any of the following can lead to this type of error:
* The X12 settings for your trading partners are not configured to send TA1s back to the partner.
* The endpoint to which you are sending TA1s may be incorrectly configured.
== Solution
If your partner is not receiving TA1 acknowledgments, review the following configuration:
. From within your Anypoint Partner Manager environment (either Sandbox or Production), select *Partners* and *Receive from <Partner_Name>*.
. Scroll to Validation and Acknowledgment settings and click *X12*.
. Within the Acknowledgments section, ensure that the option for _Send TA1 for each transmission from <partner name>_ is selected.
. Configure the acknowledgment endpoint as in xref:endpoints.adoc[Endpoints].
. Click *Save* when done.
This enables TA1 acknowledgments for all transmissions received by the specific partner.
| 35.647059 | 140 | 0.792904 |
67ed18dbcbf3f7932c601da656fdeef394246c57 | 1,281 | adoc | AsciiDoc | docs/preliminaries.adoc | Ladicek/arquillian-cube | 99ffd1cd9b26eaf0148ae5d69ea43c5d991fc3dc | [
"Apache-2.0"
] | null | null | null | docs/preliminaries.adoc | Ladicek/arquillian-cube | 99ffd1cd9b26eaf0148ae5d69ea43c5d991fc3dc | [
"Apache-2.0"
] | null | null | null | docs/preliminaries.adoc | Ladicek/arquillian-cube | 99ffd1cd9b26eaf0148ae5d69ea43c5d991fc3dc | [
"Apache-2.0"
] | null | null | null | == Preliminaries
*Arquillian Cube* relies on https://github.com/docker-java/docker-java[docker-java] API.
To use *Arquillian Cube* you need a _Docker_ daemon running on a computer (it can be local or not), but probably it will be at local.
By default the _Docker_ server uses UNIX sockets for communicating with the _Docker_ client. *Arquillian Cube* will attempt to detect the operating system it is running on and either set _docker-java_ to use UNIX socket on _Linux_ or to <<Boot2Docker>> on _Windows_/_Mac_ as the default URI.
If you want to use TCP/IP to connect to the Docker server, you'll need to make sure that your _Docker_ server is listening on TCP port.
To allow _Docker_ server to use TCP add the following line to +/etc/default/docker+:
+DOCKER_OPTS="-H tcp://127.0.0.1:2375 -H unix:///var/run/docker.sock"+
After restarting the _Docker_ daemon you need to make sure that _Docker_ is up and listening on TCP.
[source, terminal]
----
$ docker -H tcp://127.0.0.1:2375 version
Client version: 0.8.0
Go version (client): go1.2
Git commit (client): cc3a8c8
Server version: 1.2.0
Git commit (server): fa7b24f
Go version (server): go1.3.1
----
If you cannot see the client and server versions then it means that something is wrong with the _Docker_ installation.
| 44.172414 | 291 | 0.760343 |
e3a4d7687c3e504d342e233ab45b6e319177eb49 | 8,509 | adoc | AsciiDoc | doc_source/basics-async.adoc | djKianoosh/aws-java-developer-guide | 63fc7d90f192e2b2caed6288feb9c54d4aba3f09 | [
"MIT-0"
] | 112 | 2016-02-23T21:22:56.000Z | 2022-01-27T06:56:57.000Z | doc_source/basics-async.adoc | djKianoosh/aws-java-developer-guide | 63fc7d90f192e2b2caed6288feb9c54d4aba3f09 | [
"MIT-0"
] | 36 | 2016-03-07T12:25:35.000Z | 2022-03-16T00:19:07.000Z | doc_source/basics-async.adoc | djKianoosh/aws-java-developer-guide | 63fc7d90f192e2b2caed6288feb9c54d4aba3f09 | [
"MIT-0"
] | 110 | 2016-03-04T18:10:20.000Z | 2022-03-31T07:10:21.000Z | //!!NODE_ROOT <section>
include::../../includes.txt[]
:java-oracle-future: https://docs.oracle.com/javase/8/docs/api/index.html?java/util/concurrent/Future.html
:java-oracle-executorservice: https://docs.oracle.com/javase/8/docs/api/index.html?java/util/concurrent/ExecutorService.html
:java-oracle-threadfactory: https://docs.oracle.com/javase/8/docs/api/index.html?java/util/concurrent/ThreadFactory.html
[."topic"]
[[basics-async,basics-async.title]]
= [[asynchronous-programming, Asynchronous Programming]]Asynchronous Programming
:info_doctype: section
:info_title: Asynchronous Programming
:info_abstract: How asynchronous programming works in the {sdk-java} and best practices for \
handling exceptions
[abstract]
--
How asynchronous programming works in the {sdk-java} and best practices for handling exceptions
--
You can use either _synchronous_ or _asynchronous_ methods to call operations on {AWS-services}.
Synchronous methods block your thread's execution until the client receives a response from the
service. Asynchronous methods return immediately, giving control back to the calling thread
without waiting for a response.
Because an asynchronous method returns before a response is available, you need a way to get the
response when it's ready. The {sdk-java} provides two ways: _Future objects_ and __callback
methods__.
[[basics-async-future,basics-async-future.title]]
== Java Futures
Asynchronous methods in the {sdk-java} return a
{java-oracle-future}[Future]
object that contains the results of the asynchronous operation __in the future__.
Call the `Future` ``isDone()`` method to see if the service has provided a response object yet.
When the response is ready, you can get the response object by calling the `Future` ``get()``
method. You can use this mechanism to periodically poll for the asynchronous operation's results
while your application continues to work on other things.
Here is an example of an asynchronous operation that calls a {LAM} function, receiving a
`Future` that can hold an
link:sdk-for-java/v1/reference/com/amazonaws/services/lambda/model/InvokeResult.html["InvokeResult", type="documentation"]
object. The `InvokeResult` object is retrieved only after `isDone()` is ``true``.
[source,java]
----
import com.amazonaws.services.lambda.AWSLambdaAsync;
import com.amazonaws.services.lambda.AWSLambdaAsyncClientBuilder;
import com.amazonaws.services.lambda.model.InvokeRequest;
import com.amazonaws.services.lambda.model.InvokeResult;
import java.nio.ByteBuffer;
import java.util.concurrent.Future;
import java.util.concurrent.ExecutionException;
public class InvokeLambdaFunctionAsync
{
public static void main(String[] args)
{
String function_name = "HelloFunction";
String function_input = "{\"who\":\"SDK for Java\"}";
AWSLambdaAsync lambda = AWSLambdaAsyncClientBuilder.defaultClient();
InvokeRequest req = new InvokeRequest()
.withFunctionName(function_name)
.withPayload(ByteBuffer.wrap(function_input.getBytes()));
Future<InvokeResult> future_res = lambda.invokeAsync(req);
System.out.print("Waiting for future");
while (future_res.isDone() == false) {
System.out.print(".");
try {
Thread.sleep(1000);
}
catch (InterruptedException e) {
System.err.println("\nThread.sleep() was interrupted!");
System.exit(1);
}
}
try {
InvokeResult res = future_res.get();
if (res.getStatusCode() == 200) {
System.out.println("\nLambda function returned:");
ByteBuffer response_payload = res.getPayload();
System.out.println(new String(response_payload.array()));
}
else {
                System.out.format("Received a non-OK response from AWS: %d\n",
                        res.getStatusCode());
}
}
catch (InterruptedException | ExecutionException e) {
System.err.println(e.getMessage());
System.exit(1);
}
System.exit(0);
}
}
----
[[basics-async-callback,basics-async-callback.title]]
== Asynchronous Callbacks
In addition to using the Java `Future` object to monitor the status of asynchronous requests,
the SDK also enables you to implement a class that uses the
link:sdk-for-java/v1/reference/com/amazonaws/handlers/AsyncHandler.html["AsyncHandler", type="documentation"]
interface. `AsyncHandler` provides two methods that are called depending on how the request
completed: `onSuccess` and ``onError``.
The major advantage of the callback interface approach is that it frees you from having to poll
the `Future` object to find out when the request has completed. Instead, your code can
immediately start its next activity, and rely on the SDK to call your handler at the right time.
[source,java]
----
import com.amazonaws.services.lambda.AWSLambdaAsync;
import com.amazonaws.services.lambda.AWSLambdaAsyncClientBuilder;
import com.amazonaws.services.lambda.model.InvokeRequest;
import com.amazonaws.services.lambda.model.InvokeResult;
import com.amazonaws.handlers.AsyncHandler;
import java.nio.ByteBuffer;
import java.util.concurrent.Future;
public class InvokeLambdaFunctionCallback
{
    private static class AsyncLambdaHandler implements AsyncHandler<InvokeRequest, InvokeResult>
{
public void onSuccess(InvokeRequest req, InvokeResult res) {
System.out.println("\nLambda function returned:");
ByteBuffer response_payload = res.getPayload();
System.out.println(new String(response_payload.array()));
System.exit(0);
}
public void onError(Exception e) {
System.out.println(e.getMessage());
System.exit(1);
}
}
public static void main(String[] args)
{
String function_name = "HelloFunction";
String function_input = "{\"who\":\"SDK for Java\"}";
AWSLambdaAsync lambda = AWSLambdaAsyncClientBuilder.defaultClient();
InvokeRequest req = new InvokeRequest()
.withFunctionName(function_name)
.withPayload(ByteBuffer.wrap(function_input.getBytes()));
Future<InvokeResult> future_res = lambda.invokeAsync(req, new AsyncLambdaHandler());
System.out.print("Waiting for async callback");
while (!future_res.isDone() && !future_res.isCancelled()) {
// perform some other tasks...
try {
Thread.sleep(1000);
}
catch (InterruptedException e) {
System.err.println("Thread.sleep() was interrupted!");
System.exit(0);
}
System.out.print(".");
}
}
}
----
[[basics-async-tips,basics-async-tips.title]]
== Best Practices
[[callback-execution,callback-execution.title]]
=== Callback Execution
Your implementation of `AsyncHandler` is executed inside the thread pool owned by the
asynchronous client. Short, quickly executed code is most appropriate inside your `AsyncHandler`
implementation. Long-running or blocking code inside your handler methods can cause contention
for the thread pool used by the asynchronous client, and can prevent the client from executing
requests. If you have a long-running task that needs to begin from a callback, have the callback
run its task in a new thread or in a thread pool managed by your application.
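The hand-off described above can be sketched with plain `java.util.concurrent` types. Nothing in this snippet is part of the AWS SDK: the class name, the `handle` method standing in for `AsyncHandler.onSuccess`, and the simulated delay are all illustrative.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch of keeping an SDK callback short: the callback body does the minimum
// and submits the long-running work to a pool owned by the application, so the
// async client's own threads are freed immediately.
public class CallbackOffload {

    // Stand-in for AsyncHandler.onSuccess: hand the slow work to appPool
    // instead of running it on the SDK's callback thread.
    static Future<String> handle(ExecutorService appPool, String response) {
        return appPool.submit(() -> {
            Thread.sleep(100); // simulated long-running post-processing
            return "processed:" + response;
        });
    }

    public static void main(String[] args) throws Exception {
        ExecutorService appPool = Executors.newFixedThreadPool(2);
        Future<String> result = handle(appPool, "payload");
        System.out.println(result.get()); // prints processed:payload
        appPool.shutdown();
    }
}
```

The caller returns as soon as `submit` is invoked; the slow work runs and completes on the application's own pool rather than on the thread that delivered the callback.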
[[thread-pool-configuration,thread-pool-configuration.title]]
=== Thread Pool Configuration
The asynchronous clients in the {sdk-java} provide a default thread pool that should work for
most applications. You can implement a custom
{java-oracle-executorservice}[ExecutorService]
and pass it to {sdk-java} asynchronous clients for more control over how the thread pools are
managed.
For example, you could provide an `ExecutorService` implementation that uses a custom
{java-oracle-threadfactory}[ThreadFactory]
to control how threads in the pool are named, or to log additional information about thread usage.
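As a sketch of the naming idea, the following `ThreadFactory` gives each pool thread a recognizable name using only `java.util.concurrent` (no AWS classes). In the 1.x SDK, an `ExecutorService` built this way would typically be supplied through the async client builder's executor-factory hook; verify the exact method name against your SDK version before relying on it.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicInteger;

// A ThreadFactory that names pool threads, which makes an async client's
// threads easy to identify in logs and thread dumps.
public class NamedThreadFactory implements ThreadFactory {
    private final String prefix;
    private final AtomicInteger counter = new AtomicInteger(1);

    public NamedThreadFactory(String prefix) {
        this.prefix = prefix;
    }

    @Override
    public Thread newThread(Runnable task) {
        // Name pattern: <prefix>-<n>, e.g. "lambda-async-1"
        return new Thread(task, prefix + "-" + counter.getAndIncrement());
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool =
                Executors.newFixedThreadPool(2, new NamedThreadFactory("lambda-async"));
        // Threads created by the pool carry the custom name.
        String name = pool.submit(() -> Thread.currentThread().getName()).get();
        System.out.println(name); // prints lambda-async-1
        pool.shutdown();
    }
}
```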
[[s3-asynchronous-access,s3-asynchronous-access.title]]
=== Asynchronous Access
The
link:sdk-for-java/v1/reference/com/amazonaws/services/s3/transfer/TransferManager.html["TransferManager", type="documentation"]
class in the SDK offers asynchronous support for working with {S3}. `TransferManager` manages
asynchronous uploads and downloads, provides detailed progress reporting on transfers, and
supports callbacks into different events.
| 41.305825 | 127 | 0.718651 |
869f6fbd5f07bf251681a596314c610f9576fc2c | 3,202 | adoc | AsciiDoc | docs/modules/cli/pages/cbcli/couchbase-cli-setting-rebalance.adoc | mgroves/couchbase-cli | 1d3f0133a909168a28848923104719044298e3e8 | [
"Apache-2.0"
] | 2 | 2020-02-28T03:26:29.000Z | 2020-02-28T03:32:05.000Z | docs/modules/cli/pages/cbcli/couchbase-cli-setting-rebalance.adoc | ScienceLogic/couchbase-cli | eb2384ac7db2b902afe21988d08dc8f2dd6a2724 | [
"Apache-2.0"
] | null | null | null | docs/modules/cli/pages/cbcli/couchbase-cli-setting-rebalance.adoc | ScienceLogic/couchbase-cli | eb2384ac7db2b902afe21988d08dc8f2dd6a2724 | [
"Apache-2.0"
] | null | null | null | = couchbase-cli-setting-rebalance(1)
ifndef::doctype-manpage[:doctitle: setting-rebalance]
ifdef::doctype-manpage[]
== NAME
couchbase-cli-setting-rebalance -
endif::[]
Modifies rebalance retry settings
== SYNOPSIS
[verse]
_couchbase-cli setting-rebalance_ [--cluster <url>] [--username <user>]
[--set] [--get] [--cancel] [--pending-info] [--enable <1|0>]
[--wait-for <sec>] [--max-attempts <num>] [--rebalance-id <id>]
== DESCRIPTION
This command allows configuring and retrieving automatic rebalance retry
settings as well as canceling and retrieving information of pending rebalance
retries.
== OPTIONS
include::{partialsdir}/cbcli/part-common-options.adoc[]
--set::
Specify to configure the automatic rebalance retry settings.
--get::
Specify to retrieve the automatic rebalance retry settings.
--cancel::
  Specify to cancel a pending rebalance retry; use --rebalance-id together
  with this option to provide the rebalance id.
--pending-info::
Specify to retrieve information of pending rebalance retries.
--enable <1|0>::
Enable (1) or disable (0) automatic rebalance retry. This flag is required
when using --set. By default automatic rebalance retry is disabled.
--wait-for <sec>::
Specify the amount of time to wait after a failed rebalance before retrying.
  Time must be a value between 5 and 3600 seconds. By default the wait time is
300 seconds.
--max-attempts <num>::
  Specify the number of times a failed rebalance will be retried. The value
  provided must be between 1 and 3; the default is 1.
--rebalance-id <id>::
Specify the rebalance id of a failed rebalance. Use together with --cancel,
to cancel a pending retry.
include::{partialsdir}/cbcli/part-host-formats.adoc[]
== EXAMPLES
To retrieve the current automatic rebalance retry configuration use:
$ couchbase-cli setting-rebalance -c 127.0.0.1:8091 -u Administrator \
-p password --get
To enable automatic rebalance retry, use the command below.
  $ couchbase-cli setting-rebalance -c 127.0.0.1:8091 -u Administrator \
   -p password --set --enable 1
You can also set the wait period and the maximum number of retries. The command
below enables automatic rebalance retry, sets the wait time before retrying to
60 seconds, and sets the maximum number of retries to 2.
  $ couchbase-cli setting-rebalance -c 127.0.0.1:8091 -u Administrator \
   -p password --set --enable 1 --wait-for 60 --max-attempts 2
To retrieve information about pending rebalance retries, run the command below.
$ couchbase-cli setting-rebalance -c 127.0.0.1:8091 -u Administrator \
-p password --pending-info
To cancel a pending rebalance retry, run the command below, where
`4198f4b1564a800223271af76edd4f98` is the rebalance id; this id can be retrieved
using the `--pending-info` flag above.
  $ couchbase-cli setting-rebalance -c 127.0.0.1:8091 -u Administrator \
   -p password --cancel --rebalance-id 4198f4b1564a800223271af76edd4f98
== ENVIRONMENT AND CONFIGURATION VARIABLES
include::{partialsdir}/cbcli/part-common-env.adoc[]
== SEE ALSO
man:couchbase-cli-rebalance[1],
man:couchbase-cli-rebalance-status[1]
include::{partialsdir}/cbcli/part-footer.adoc[]
| 31.392157 | 83 | 0.746721 |
f0c93383bebfd53c954c4914d86cce5a483d77fa | 101 | asciidoc | AsciiDoc | processors/tidb_slow_query/docs/tidb_slow_query.asciidoc | SabaPing/filebeat-tidb-plugin | f0eb028141b790a24bf8045ea570499ba2608b7a | [
"Apache-2.0"
] | null | null | null | processors/tidb_slow_query/docs/tidb_slow_query.asciidoc | SabaPing/filebeat-tidb-plugin | f0eb028141b790a24bf8045ea570499ba2608b7a | [
"Apache-2.0"
] | 1 | 2021-07-16T04:05:20.000Z | 2021-09-10T03:28:13.000Z | libbeat/processors/tidb_slow_query/docs/tidb_slow_query.asciidoc | tidbcloud/beats | cf6cf4972def0f51ed0c80b81e9d1655b5ec5c10 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | [[tidb_slow_query]]
=== TiDB Slow Query Parser
++++
<titleabbrev>tidb_slow_query</titleabbrev>
++++
| 14.428571 | 42 | 0.70297 |
92741c5b0a717602b34242c22c6b9fe1a8d764cb | 493 | adoc | AsciiDoc | documentation/modules/ref-zookeeper-replicas.adoc | codemagicprotege/strimzi-kafka-operator | 23548c30aeb00b4d2a2df07592661b9fa9f399f1 | [
"Apache-2.0"
] | 2,978 | 2018-06-09T18:20:00.000Z | 2022-03-31T03:33:27.000Z | documentation/modules/ref-zookeeper-replicas.adoc | codemagicprotege/strimzi-kafka-operator | 23548c30aeb00b4d2a2df07592661b9fa9f399f1 | [
"Apache-2.0"
] | 4,066 | 2018-06-09T23:08:28.000Z | 2022-03-31T22:40:29.000Z | documentation/modules/ref-zookeeper-replicas.adoc | codemagicprotege/strimzi-kafka-operator | 23548c30aeb00b4d2a2df07592661b9fa9f399f1 | [
"Apache-2.0"
] | 895 | 2018-06-13T18:03:22.000Z | 2022-03-31T11:22:11.000Z | // Module included in the following assemblies:
//
// assembly-zookeeper-replicas.adoc
[id='ref-zookeeper-replicas-{context}']
= Number of ZooKeeper nodes
The number of ZooKeeper nodes can be configured using the `replicas` property in `Kafka.spec.zookeeper`.
.An example showing replicas configuration
[source,yaml,subs="attributes+"]
----
apiVersion: {KafkaApiVersion}
kind: Kafka
metadata:
name: my-cluster
spec:
kafka:
# ...
zookeeper:
# ...
replicas: 3
# ...
---- | 20.541667 | 104 | 0.699797 |
7376a7ef10a19507b9b61f9b43892351a4162528 | 796 | adoc | AsciiDoc | genie-web/src/docs/asciidoc/api/clusters/_deleteTagsForCluster.adoc | irontable/genie | 5ebaf842bb04ee32fe8647dbca01c01adf809ba7 | [
"Apache-2.0"
] | 1 | 2021-07-29T01:59:35.000Z | 2021-07-29T01:59:35.000Z | genie-web/src/docs/asciidoc/api/clusters/_deleteTagsForCluster.adoc | irontable/genie | 5ebaf842bb04ee32fe8647dbca01c01adf809ba7 | [
"Apache-2.0"
] | 8 | 2018-05-24T17:22:26.000Z | 2018-06-18T18:01:03.000Z | genie-web/src/docs/asciidoc/api/clusters/_deleteTagsForCluster.adoc | irontable/genie | 5ebaf842bb04ee32fe8647dbca01c01adf809ba7 | [
"Apache-2.0"
] | 1 | 2022-01-09T07:43:40.000Z | 2022-01-09T07:43:40.000Z | === Remove All Tags From Cluster
==== Description
Remove all the tags for an existing cluster
IMPORTANT: The `genie.id:{id}` and `genie.name:{name}` tags will **NOT** be removed by this operation
==== Endpoint
`DELETE /api/v3/clusters/{id}/tags`
:snippet-base: {snippets}/cluster-rest-controller-integration-tests/can-delete-tags-for-cluster/3
:id-base: remove-all-tags-from-cluster
:!request-headers:
:request-path-params: {snippet-base}/path-parameters.adoc
:!request-query-params:
:!request-fields:
:curl-request: {snippet-base}/curl-request.adoc
:httpie-request: {snippet-base}/httpie-request.adoc
:!response-headers:
:!response-fields:
:!response-links:
:http-request: {snippet-base}/http-request.adoc
:http-response: {snippet-base}/http-response.adoc
include::../_apiTemplate.adoc[]
| 28.428571 | 101 | 0.748744 |
4eb18dd859e491e41d6db6e8f6b6c590ad0f5ad6 | 1,207 | adoc | AsciiDoc | docs/modules/ROOT/pages/getting-started/assembly-installing-registry-storage-openshift.adoc | obabec/apicurio-registry | 6963705fa46a0433fde3b08c4faa9be2b859b1d5 | [
"Apache-2.0"
] | 2 | 2021-09-10T15:34:08.000Z | 2021-09-11T04:29:59.000Z | docs/modules/ROOT/pages/getting-started/assembly-installing-registry-storage-openshift.adoc | obabec/apicurio-registry | 6963705fa46a0433fde3b08c4faa9be2b859b1d5 | [
"Apache-2.0"
] | 83 | 2020-11-16T17:21:38.000Z | 2022-03-31T09:13:41.000Z | docs/modules/ROOT/pages/getting-started/assembly-installing-registry-storage-openshift.adoc | jsenko/apicurio-registry | b56353ee4113792b0a83430b619f1da24fbe63cf | [
"Apache-2.0"
] | 1 | 2020-03-17T20:23:18.000Z | 2020-03-17T20:23:18.000Z | // Metadata created by nebel
include::{mod-loc}shared/all-attributes.adoc[]
[id="installing-registry-storage"]
= Installing {registry} storage on OpenShift
[role="_abstract"]
This chapter explains how to install and configure your chosen registry storage option:
.Kafka storage
* xref:installing-kafka-streams-operatorhub_{context}[]
* xref:setting-up-kafka-streams-storage_{context}[]
* xref:registry-kafka-topic-names_{context}[]
.PostgreSQL database storage
* xref:installing-postgresql-operatorhub_{context}[]
* xref:setting-up-postgresql-storage_{context}[]
.Prerequisites
* {installing-the-registry-openshift}
//INCLUDES
//include::{mod-loc}getting-started/proc_installing-registry-kafka-streams-template-storage.adoc[leveloffset=+1]
include::{mod-loc}getting-started/proc-installing-kafka-streams-operatorhub.adoc[leveloffset=+1]
include::{mod-loc}getting-started/proc-setting-up-kafka-streams-storage.adoc[leveloffset=+1]
include::{mod-loc}getting-started/ref-registry-kafka-topic-names.adoc[leveloffset=+2]
include::{mod-loc}getting-started/proc-installing-postgresql-operatorhub.adoc[leveloffset=+1]
include::{mod-loc}getting-started/proc-setting-up-postgresql-storage.adoc[leveloffset=+1]
| 41.62069 | 112 | 0.799503 |
9363b4a55509588c375b07fd8f56f051afc0fa78 | 421 | adoc | AsciiDoc | CONTRIBUTING.adoc | acco32/Operations-Research | 805f319a0983d5803c038ca613cdcd0cdfc0076a | [
"MIT"
] | 17 | 2020-02-15T22:27:43.000Z | 2022-02-27T11:08:53.000Z | CONTRIBUTING.adoc | acco32/Operations-Research | 805f319a0983d5803c038ca613cdcd0cdfc0076a | [
"MIT"
] | null | null | null | CONTRIBUTING.adoc | acco32/Operations-Research | 805f319a0983d5803c038ca613cdcd0cdfc0076a | [
"MIT"
] | 1 | 2021-11-28T04:16:33.000Z | 2021-11-28T04:16:33.000Z | = Contributing
. Fork it
. Download your fork
. Create your feature branch (`git checkout -b my-new-feature`)
. Create your changelog entry (`tools/changelog add new --author [me] --title my-new-feature`)
. Make changes and add them (`git add .`) (include the unfinished folder)
. Commit your changes (`git commit -m 'Add some feature'`)
. Push to the branch (`git push origin my-new-feature`)
. Create new pull request
| 38.272727 | 94 | 0.72209 |
090f55ca584507ad7a9b56fca79f27f96a453468 | 428 | asciidoc | AsciiDoc | documentation/build-and-deployment.asciidoc | sujith-mn/scm | 1840b55e73226ecbecfd9bedd641928e2505c5d0 | [
"Apache-2.0"
] | null | null | null | documentation/build-and-deployment.asciidoc | sujith-mn/scm | 1840b55e73226ecbecfd9bedd641928e2505c5d0 | [
"Apache-2.0"
] | null | null | null | documentation/build-and-deployment.asciidoc | sujith-mn/scm | 1840b55e73226ecbecfd9bedd641928e2505c5d0 | [
"Apache-2.0"
] | 2 | 2021-08-31T13:03:36.000Z | 2022-01-04T14:32:25.000Z | :toc: macro
toc::[]
= Build & Deployment
_Build & deployment is the link:scm.asciidoc[SCM] domain of building a complete software product out of its source artefacts and deploying the product to the corresponding target environments._
== Disciplines
The build & deployment domain consists of the following disciplines:
* link:build-management.asciidoc[build-management]
* link:deployment-management.asciidoc[deployment-management]
| 30.571429 | 185 | 0.799065 |
fa5297eb983156c49f4cefca550475e5c678b9ca | 115 | adoc | AsciiDoc | docs/jbpm-docs/src/main/asciidoc/TaskService/LDAPIntegration-section.adoc | jlheard/kie-docs | 577d09ec2a57d77d1eb8e60a49c44fe6f620b910 | [
"Apache-2.0"
] | null | null | null | docs/jbpm-docs/src/main/asciidoc/TaskService/LDAPIntegration-section.adoc | jlheard/kie-docs | 577d09ec2a57d77d1eb8e60a49c44fe6f620b910 | [
"Apache-2.0"
] | 4 | 2018-03-23T08:29:35.000Z | 2018-05-14T13:40:17.000Z | docs/jbpm-docs/src/main/asciidoc/TaskService/LDAPIntegration-section.adoc | kspokas/kie-docs | 6a0fe2a9d436e211f3087021f7f7b8cb75ba18be | [
"Apache-2.0"
] | null | null | null | [[_jbpmtaskldap]]
= LDAP Integration
:imagesdir: ..
TBD
image::TaskService/WSHT-lifecycle.png[align="center"]
| 11.5 | 53 | 0.721739 |
dfb29b52827d74f74fed60b35dcf7375bcdda0cb | 3,499 | adoc | AsciiDoc | content/manual/adoc/en/framework/gui_framework/gui_vcl/gui_components/gui_DatePicker.adoc | phoenix110/documentation | 8d8eb7c394de8b92777726992957d604b495fb24 | [
"CC-BY-4.0"
] | null | null | null | content/manual/adoc/en/framework/gui_framework/gui_vcl/gui_components/gui_DatePicker.adoc | phoenix110/documentation | 8d8eb7c394de8b92777726992957d604b495fb24 | [
"CC-BY-4.0"
] | null | null | null | content/manual/adoc/en/framework/gui_framework/gui_vcl/gui_components/gui_DatePicker.adoc | phoenix110/documentation | 8d8eb7c394de8b92777726992957d604b495fb24 | [
"CC-BY-4.0"
] | null | null | null | :sourcesdir: ../../../../../../source
[[gui_DatePicker]]
====== DatePicker
++++
<div class="manual-live-demo-container">
<a href="https://demo.cuba-platform.com/sampler/open?screen=simple-datepicker" class="live-demo-btn" target="_blank">LIVE DEMO</a>
</div>
++++
++++
<div class="manual-live-demo-container">
<a href="http://files.cuba-platform.com/javadoc/cuba/7.0/com/haulmont/cuba/gui/components/DatePicker.html" class="api-docs-btn" target="_blank">API DOCS</a>
</div>
++++
`DatePicker` is a field to display and choose a date. It has the same view as the drop-down calendar in <<gui_DateField,DateField>>.
image::gui_datepicker_mini.png[align="center"]
XML name of the component: `datePicker`.
The `DatePicker` component is implemented for *Web Client*.
* To create a date picker associated with data, you should use the <<gui_attr_dataContainer,dataContainer>>/<<gui_attr_datasource,datasource>> and <<gui_attr_property,property>> attributes:
+
[source, xml]
----
include::{sourcesdir}/gui_vcl/datepicker_1.xml[]
----
+
In the example above, the screen has the `orderDc` data container for the `Order` entity, which has the `date` property. The reference to the data container is specified in the <<gui_attr_dataContainer,dataContainer>> attribute of the `datePicker` component; the name of the entity attribute whose value should be displayed in the field is specified in the <<gui_attr_property,property>> attribute.
[[gui_DatePicker_range]]
* You can specify available dates to select by using `rangeStart` and `rangeEnd` attributes. If you set them, all the dates that are outside the range will be disabled.
+
[source, xml]
----
include::{sourcesdir}/gui_vcl/datepicker_4.xml[]
----
+
image::gui_datepicker_month_range.png[align="center"]
[[gui_DatePicker_resolution]]
* Date accuracy can be defined using the `resolution` attribute. The attribute value should match the `DatePicker.Resolution` enumeration: `DAY`, `MONTH`, `YEAR`. The default resolution is `DAY`.
+
[source, xml]
----
include::{sourcesdir}/gui_vcl/datepicker_2.xml[]
----
+
image::gui_datepicker_month_resolution.png[align="center"]
+
[source, xml]
----
include::{sourcesdir}/gui_vcl/datepicker_3.xml[]
----
+
image::gui_datepicker_year_resolution.png[align="center"]
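+
The included snippets are not shown here; a sketch of a month-resolution picker consistent with the description above (attribute values are illustrative) might be:
+
[source,xml]
----
<datePicker dataContainer="orderDc"
            property="date"
            resolution="MONTH"/>
----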
* Today's date in the calendar is determined by the current timestamp in the user's web browser, which depends on the operating system's time zone settings. The user's <<timeZone,time zone>> setting does not affect this behaviour.
'''
Attributes of datePicker::
<<gui_attr_align,align>> -
<<gui_attr_caption,caption>> -
<<gui_attr_captionAsHtml,captionAsHtml>> -
<<gui_attr_contextHelpText,contextHelpText>> -
<<gui_attr_contextHelpTextHtmlEnabled,contextHelpTextHtmlEnabled>> -
<<gui_attr_css,css>> -
<<gui_attr_dataContainer,dataContainer>> -
<<gui_attr_datasource,datasource>> -
<<gui_DateField_datatype,datatype>> -
<<gui_attr_description,description>> -
<<gui_attr_descriptionAsHtml,descriptionAsHtml>> -
<<gui_attr_editable,editable>> -
<<gui_attr_enable,enable>> -
<<gui_attr_expandRatio,box.expandRatio>> -
<<gui_attr_height,height>> -
<<gui_attr_id,id>> -
<<gui_attr_property,property>> -
<<gui_DatePicker_range,rangeEnd>> -
<<gui_DatePicker_range,rangeStart>> -
<<gui_DatePicker_resolution,resolution>> -
<<gui_attr_stylename,stylename>> -
<<gui_attr_tabIndex,tabIndex>> -
<<gui_attr_visible,visible>> -
<<gui_attr_width,width>>
API::
<<gui_api_addValueChangeListener,addValueChangeListener>> -
<<gui_api_contextHelp,setContextHelpIconClickHandler>>
'''
// ---- book/03-sharing-objects.adoc (diguage/java-concurrency-notes, Apache-2.0) ----

[[sharing-objects]]
== Sharing Objects

The key to writing correct concurrent programs lies in properly managing access to shared, mutable state.
Synchronized blocks and synchronized methods can ensure that operations execute atomically.
A common misconception is that the keyword `synchronized` is only about atomicity or about demarcating a "critical section (Critical Section)". Synchronization also has another significant aspect: memory visibility (Memory Visibility).
We want not only to prevent one thread from modifying an object's state while another thread is using it, but also to ensure that when a thread modifies the state, other threads can actually see the change. Without synchronization, this cannot be guaranteed.
Even stranger, `NoVisibility` could print 0: the reader thread may see the write to `ready` but not the earlier write to `number`, a phenomenon known as "reordering (Reordering)".
In the absence of synchronization, the compiler, the processor, and the runtime may all reorder operations in surprising ways. In insufficiently synchronized multithreaded programs, attempts to reason about the order in which memory operations happen will almost never reach correct conclusions.
Reasoning about the execution of insufficiently synchronized concurrent programs is extremely difficult.
There is a simple way to avoid all of these complex problems: whenever data is shared across threads, use proper synchronization.
Stale data: when the reader thread examines the `ready` variable, it may see a value that is already out of date. Unless synchronization is used on every access to a variable, it is possible to see a stale value for it. Worse, stale values need not appear consistently: a thread may see an up-to-date value of one variable and a stale value of another.
Out-of-thin-air safety.
Out-of-thin-air safety applies to the vast majority of variables, with one exception: 64-bit numeric variables (`double` and `long`) not declared `volatile`.
The Java Memory Model requires reads and writes of variables to be atomic, but for nonvolatile `long` and `double` variables, the JVM is allowed to split a 64-bit read or write into two 32-bit operations.
Even setting the stale-data problem aside, using shared mutable variables of types such as `long` and `double` in multithreaded programs is unsafe, unless they are declared `volatile` or guarded by a lock.
Intrinsic locks can be used to guarantee that one thread sees the results of another thread's execution in a predictable manner.
Requiring all threads to synchronize on the same lock when accessing a shared mutable variable ensures that the values written to it by one thread are visible to all other threads. Otherwise, a thread that reads the variable without holding the correct lock may see a stale value.
The meaning of locking is not limited to mutual exclusion; it also covers memory visibility. To ensure that all threads see the most recent value of a shared variable, every thread that reads or writes it must synchronize on the same lock.
The Java language also provides a somewhat weaker synchronization mechanism, volatile variables, to ensure that updates to a variable are propagated to other threads.
Volatile variables are not cached in registers or in places invisible to other processors, so reading a volatile variable always returns the most recently written value.
The visibility effects of volatile variables reach beyond the value of the volatile variable itself, and are even more important than it.
Relying too heavily on the visibility guarantees provided by volatile variables is not recommended.
Use volatile variables only when they simplify the implementation of your code and the verification of your synchronization policy.
If verifying correctness requires complicated reasoning about visibility, do not use volatile variables.
Correct uses of volatile variables include: ensuring the visibility of their own state, ensuring the visibility of the state of the objects they refer to, and flagging important lifecycle events in the program.
Volatile variables are commonly used as flags indicating completion, interruption, or some other status.
The semantics of volatile are not strong enough to make an increment operation (`count++`) atomic, unless you can guarantee that only a single thread ever writes to the variable.
Locking can guarantee both visibility and atomicity; volatile variables can guarantee only visibility.
Use volatile variables only when all of the following criteria are met: writes to the variable do not depend on its current value, or you can ensure that only a single thread ever updates the value; the variable does not participate in invariants together with other state variables; and locking is not required for any other reason while the variable is being accessed.
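As an illustration of these criteria (a minimal sketch, not a listing from the book), a volatile boolean works well as a shutdown flag: the write does not depend on the current value, and the flag participates in no invariants.

```java
public class TaskRunner {
    private volatile boolean shutdownRequested = false;

    // A plain write with no read-modify-write, so volatile alone is sufficient.
    public void requestShutdown() {
        shutdownRequested = true;
    }

    // Threads polling the flag are guaranteed to see the write promptly.
    public int runUntilShutdown() {
        int iterations = 0;
        while (!shutdownRequested && iterations < 1_000_000) {
            iterations++; // stand-in for real work; bounded for this sketch
        }
        return iterations;
    }
}
```

If `requestShutdown()` has already been called, the loop condition is false on the first check and the loop body never runs.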
"Publishing (Publish)" an object means making it available to code outside of its current scope.
Publishing internal state can compromise encapsulation and make it difficult to preserve invariants. For example, publishing an object before its construction is complete compromises thread safety.
An object that is published when it should not have been is said to have "escaped (Escape)".
The simplest way to publish an object is to store a reference to it in a public static field, where any class and any thread can see it.
When you publish an object, you also publish any objects referred to by its nonprivate fields.
More generally, any object that is reachable from a published object through some chain of nonprivate field references and method calls is also published.
Passing an object to an external method also amounts to publishing that object.
Once an object has escaped, you must assume that some class or thread may misuse it. This is precisely the most compelling reason to use encapsulation: it makes it possible to analyze a program for correctness, and makes it harder to accidentally violate design constraints.
A final mechanism by which an object or its internal state can be published is to publish an instance of an inner class:

[source,java]
----
public class ThisEscape {
    public ThisEscape(EventSource source) {
        source.registerListener(
            new EventListener() {
                public void onEvent(Event e) {
                    doSomething(e);
                }
            });
    }
}
----

Note: how can the `ThisEscape` object be reached from the `EventListener` object? The anonymous inner class instance holds an implicit reference to its enclosing instance (accessible as `ThisEscape.this`), so publishing the listener implicitly publishes the enclosing object before its constructor has finished.
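One standard remedy is a private constructor plus a factory method, so the listener is registered only after construction completes. The sketch below uses hypothetical stub interfaces, since the `EventSource` types are not defined in these notes:

```java
interface Event {}
interface EventListener { void onEvent(Event e); }
interface EventSource { void registerListener(EventListener listener); }

public class SafeListener {
    private final EventListener listener;

    private SafeListener() {
        listener = new EventListener() {
            public void onEvent(Event e) {
                doSomething(e); // implicitly uses SafeListener.this, but only after publication
            }
        };
    }

    private void doSomething(Event e) {
        // handle the event
    }

    public static SafeListener newInstance(EventSource source) {
        SafeListener safe = new SafeListener();  // construction completes first...
        source.registerListener(safe.listener);  // ...and only then does the listener escape
        return safe;
    }
}
```

The constructor never lets `this` (or the inner instance) escape; publication happens in the factory method, after the object is fully constructed.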
Another common application of thread confinement is the JDBC (Java Database Connectivity) `Connection` object. The JDBC specification does not require `Connection` objects to be thread-safe. In a typical server application, a thread acquires a `Connection` from a connection pool, uses it to process a request, and then returns it to the pool. Since most requests (such as servlet requests or EJB calls) are processed synchronously by a single thread, and the pool does not hand the same `Connection` to another thread until it has been returned, this pattern of connection management implicitly confines the `Connection` to the thread for the duration of the request.
Stack confinement is a special case of thread confinement in which an object can only be reached through local variables.
One of the intrinsic properties of local variables is that they are confined to the executing thread: they live on the executing thread's stack, which other threads cannot access. Stack confinement (also called within-thread or thread-local usage, not to be confused with the `ThreadLocal` class in the core libraries) is easier to maintain and more robust than ad-hoc thread confinement.
Even an object that is not thread-safe remains thread-safe if it is used only in a within-thread context.
A more formal means of maintaining thread confinement is `ThreadLocal`, a class that lets you associate a per-thread value with a value-holding object. `ThreadLocal` provides accessor methods such as `get` and `set` that maintain a separate copy of the value for each thread that uses the variable, so `get` always returns the most recent value passed to `set` from the currently executing thread.
`ThreadLocal` objects are often used to prevent sharing of mutable singletons (Singleton) or global variables.
This technique can also be used when a frequently executed operation needs a temporary object, such as a buffer, and you want to avoid reallocating that temporary object on every invocation.
When a thread calls `ThreadLocal.get` for the first time, `initialValue` is invoked to provide the initial value.
`ThreadLocal` is used heavily in the implementation of application frameworks.
`ThreadLocal` variables are similar to global variables: they can reduce code reusability and introduce hidden couplings between classes, so use them with care.
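A minimal sketch (hypothetical class name, not from the book) of per-thread state with `ThreadLocal`: each thread that reads the variable gets its own independently initialized copy.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class ThreadId {
    private static final AtomicInteger nextId = new AtomicInteger(0);

    // withInitial supplies the initial value on a thread's first get().
    private static final ThreadLocal<Integer> currentId =
            ThreadLocal.withInitial(nextId::incrementAndGet);

    public static int get() {
        return currentId.get(); // stable for the lifetime of the calling thread
    }
}
```

Repeated calls from the same thread return the same id; a different thread would receive its own, freshly initialized id.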
Another way to satisfy the need for synchronization is to use immutable objects (Immutable Object) [EJ Item 13].
If an object's state cannot be modified, then these problems and complexities simply disappear.
An object whose state cannot be modified after it is created is called an immutable object. Thread safety is an inherent property of immutable objects: their invariants are established by the constructor, and as long as their state never changes, those invariants always hold.
Immutable objects are always thread-safe.
Immutable objects are simple.
One of the most difficult aspects of program design is reasoning about the possible states of a complex object.
Immutability is not the same as declaring all of an object's fields `final`. Even an object whose fields are all `final` may still be mutable, because a `final` field can hold a reference to a mutable object.
An object is immutable only when all of the following conditions hold: its state cannot be modified after construction; all of its fields are `final`; and it is properly constructed (the `this` reference does not escape during construction).
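A small example of these rules (a sketch, not a listing from the book): the class is final, all fields are final and primitive, and "modification" returns a new instance instead of changing the existing one.

```java
public final class Point {
    private final int x;
    private final int y;

    public Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    public int x() { return x; }
    public int y() { return y; }

    // Instead of mutating this object, produce a new immutable instance.
    public Point translate(int dx, int dy) {
        return new Point(x + dx, y + dy);
    }
}
```

Because no method can change an existing instance, a `Point` can be freely shared across threads without synchronization.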
There is a difference between an "immutable object" and an "immutable object reference". Program state held in immutable objects can still be updated, by "replacing" the existing immutable object with a new instance holding the new state.
A `final` field cannot be modified (although if the object it refers to is mutable, that referenced object can still be modified). In the Java Memory Model, however, `final` fields also have special semantics: they make it possible to guarantee initialization safety, which allows immutable objects to be freely accessed and shared without synchronization.
Just as "declare all fields private unless they need greater visibility" [EJ Item 12] is a good programming practice, so is "declare all fields final unless they need to be mutable".
In some situations we want to share objects across threads, and then we must make sure the sharing is done safely.
You cannot count on a partially constructed object to have integrity. An observing thread may see the object in an inconsistent state, and then see its state suddenly change, even though the object has not been modified since it was published.
Because no synchronization was used to make the `Holder` object visible to other threads, we say the `Holder` was "not properly published".
Two problems arise with improperly published objects. First, threads other than the publishing thread may see a stale value for the `Holder` field, and thus see a null reference or an older value. Far worse, however, a thread may see an up-to-date value for the `Holder` reference while seeing stale values for the `Holder`'s state.
To make things even less predictable, a thread may see a stale value the first time it reads a field and a more up-to-date value the next time, which is why `assertSanity` can throw an `AssertionError`.
The Java Memory Model offers a special guarantee of initialization safety for sharing immutable objects.
To guarantee a consistent view of an object's state, synchronization must be used.
Even when no synchronization is used in publishing a reference to an immutable object, the object can still be accessed safely. For this initialization-safety guarantee to hold, all of the requirements of immutability must be satisfied: unmodifiable state, all fields `final`, and proper construction.
Any thread can safely access an immutable object without additional synchronization, even when no synchronization was used to publish it.
This guarantee also extends to all `final` fields of properly constructed objects; `final` fields can be accessed safely without additional synchronization. However, if a `final` field refers to a mutable object, synchronization is still required when accessing the state of the object it refers to.
Mutable objects must be published safely, which usually means using synchronization both in the thread that publishes the object and in the threads that use it.
To publish an object safely, the reference to the object and the object's state must be made visible to other threads at the same time. A properly constructed object can be safely published by: initializing the object reference from a static initializer; storing a reference to it in a `volatile` field or an `AtomicReference`; storing a reference to it in a `final` field of a properly constructed object; or storing a reference to it in a field that is properly guarded by a lock.
The internal synchronization of thread-safe collections means that placing an object in one of them, for example a `Vector` or a `synchronizedList`, satisfies the last of these requirements.
Placing a key or value in a `Hashtable`, `synchronizedMap`, or `ConcurrentMap` safely publishes it to any thread that retrieves it from the container (whether directly or via an iterator). Placing an element in a `Vector`, `CopyOnWriteArrayList`, `CopyOnWriteArraySet`, `synchronizedList`, or `synchronizedSet` safely publishes it to any thread that retrieves it from the collection. Placing an element on a `BlockingQueue` or a `ConcurrentLinkedQueue` safely publishes it to any thread that retrieves it from the queue.
Often, the easiest and safest way to publish a statically constructed object is to use a static initializer:

[source,java]
----
public static Holder holder = new Holder(42);
----

Static initializers are executed by the JVM during the class initialization phase; because of the synchronization internal to the JVM, any object initialized in this way is safely published [JLS 12.4.2].
All of the safe-publication mechanisms guarantee that the as-published state of an object is visible to all threads that can see its reference, and if that state is never going to change afterwards, this alone is enough to make any access safe.
Objects that are technically mutable but whose state will not be changed after publication are called "effectively immutable objects (Effectively Immutable Object)".
Using effectively immutable objects not only simplifies development but can also improve performance by reducing the amount of synchronization required.
Any thread can safely use a safely published effectively immutable object without additional synchronization.
If an object may be modified after construction, safe publication only guarantees the visibility of its "as-published" state. For mutable objects, synchronization must be used not only when publishing the object but also on every access, to ensure the visibility of subsequent modifications. To share mutable objects safely, they must be safely published and must be either thread-safe or guarded by a lock.
The publication requirements for an object depend on its mutability: immutable objects can be published through any mechanism; effectively immutable objects must be safely published; mutable objects must be safely published and must be either thread-safe or guarded by a lock.
Many concurrency errors are caused by failing to understand these "rules of engagement" for shared objects. When you publish an object, you must document clearly how the object may be accessed.
A number of useful policies exist for using and sharing objects in a concurrent program: thread confinement (a thread-confined object is owned exclusively by a single thread, is confined within that thread, and may be modified only by that thread); read-only sharing (a shared read-only object may be accessed concurrently by multiple threads without additional synchronization, but no thread may modify it; shared read-only objects include immutable objects and effectively immutable objects); thread-safe sharing (a thread-safe object performs synchronization internally, so multiple threads may access it through its public interface without further synchronization); and guarded objects (a guarded object may be accessed only while a specific lock is held; guarded objects include objects encapsulated within other thread-safe objects and published objects that are known to be guarded by a specific lock).
// ---- doc-content/jbpm-docs/src/main/asciidoc/Persistence/transaction-cmt-proc.adoc (tkobayas/kie-docs, Apache-2.0) ----

[id='transaction-cmt-proc_{context}']
= Configuring container-managed transactions
If you embed the {PROCESS_ENGINE} in an application that executes in container-managed transaction (CMT) mode, for example, EJB beans, you must complete additional configuration. This configuration is especially important if the application runs on an application server that does not allow a CMT application to access a `UserTransaction` instance from JNDI, for example, WebSphere Application Server.
The default transaction manager implementation in the {PROCESS_ENGINE} relies on `UserTransaction` to query transaction status and then uses the status to determine whether to start a transaction. In environments that prevent access to a `UserTransaction` instance, this implementation fails.
To enable proper execution in CMT environments, the {PROCESS_ENGINE} provides a dedicated transaction manager implementation:
`org.jbpm.persistence.jta.ContainerManagedTransactionManager`. This transaction manager expects that the transaction is active and always returns `ACTIVE` when the `getStatus()` method is invoked. Operations such as `begin`, `commit`, and `rollback` are no-op methods, because the transaction manager cannot affect these operations in container-managed transaction mode.
[NOTE]
====
During process execution your code must propagate any exceptions thrown by the engine to the container to ensure that the container rolls transactions back when necessary.
====
To configure this transaction manager, complete the steps in this procedure.
.Procedure
. In your code, insert the transaction manager and persistence context manager into the environment before creating or loading a session:
+
.Inserting the transaction manager and persistence context manager into the environment
[source,java]
----
Environment env = EnvironmentFactory.newEnvironment();
env.set(EnvironmentName.ENTITY_MANAGER_FACTORY, emf);
env.set(EnvironmentName.TRANSACTION_MANAGER, new ContainerManagedTransactionManager());
env.set(EnvironmentName.PERSISTENCE_CONTEXT_MANAGER, new JpaProcessPersistenceContextManager(env));
env.set(EnvironmentName.TASK_PERSISTENCE_CONTEXT_MANAGER, new JPATaskPersistenceContextManager(env));
----
+
. In the `persistence.xml` file, configure the JPA provider. The following example uses `hibernate` and WebSphere Application Server.
+
.Configuring the JPA provider in the `persistence.xml` file
[source,xml]
----
<property name="hibernate.transaction.factory_class" value="org.hibernate.transaction.CMTTransactionFactory"/>
<property name="hibernate.transaction.jta.platform" value="org.hibernate.service.jta.platform.internal.WebSphereJtaPlatform"/>
----
+
. To dispose a KIE session, do not dispose it directly. Instead, execute the `org.jbpm.persistence.jta.ContainerManagedTransactionDisposeCommand` command. This command ensures that the session is disposed at the completion of the current transaction. In the following example, `ksession` is the `KieSession` object that you want to dispose.
+
.Disposing a KIE session using the `ContainerManagedTransactionDisposeCommand` command
[source,java]
----
ksession.execute(new ContainerManagedTransactionDisposeCommand());
----
+
Directly disposing the session causes an exception at the completion of the transaction, because the {PROCESS_ENGINE} registers transaction synchronization to clean up the session state.
// ---- docs/team/lowginwee.adoc (peiying98/main, MIT) ----

= Low Gin Wee - Project Portfolio
:site-section: AboutUs
:imagesDir: ../images
:stylesDir: ../stylesheets
== PROJECT: CorpPro
---
== Overview
*CorpPro* is a desktop address book application which targets Corporate Users. It helps its users better manage
their information to increase efficiency and to produce effective results. *CorpPro*’s features aid users to find relevant
details quickly, in addition to being able to create a schedule to plan and set goals. Users can create and update entries
in *CorpPro* through the command-line interface (CLI), as well as having an uncluttered user interface (UI) to display corresponding
information. It is made using Java, a widely used programming platform.
== Summary of contributions
* *Major enhancement*: Added *additional attributes* to each contact in the address book
** What it does: The following attributes have been added:
*** Position/rank
*** Tag priority
*** Key Performance Index (KPI)
*** Note/Description
** Justification: As a corporate user with many contacts, there is a need have additional fields and attributes to catagorize or
search for them quickly. Users can use tags to assign an importance level to each contact. The Key Performing Index (KPI)
is also included in the attributes to enable managers or supervisors to rate or rank their employees.
** Highlights: This feature required the creation of additional *application programming interfaces* (API) to facilitate other features
such as listing or finding contacts via these additional attributes.
* *Major enhancement*: *Added schedule*
** What it does: Users are able to create activities or tasks and add them to their schedule in *CorpPro*. These activities
are sorted by date and can be edited or deleted when completed. The schedule is also saved when the user closes the application
** Justification: Corporate users may need to organise their assignments in a schedule to maintain a methodical work style
to increase efficiency and not to neglect any important tasks.
** Highlights: This feature required the creation of additional commands and the implementation of exporting the schedule to save
its contents after the user has exit the application. In addition, a unique data structure had to be used to maintain a sorted schedule (by date).
* *Minor enhancement*: The *add command* automatically parses the contact's name to capitalize the first letter of each word and remove additional spaces. This is to standardize contacts in the address book for a quick and easy reference.
* *Minor enhancement*: Enhanced the *edit command* to edit all listed entries in one command. This enables users to quickly update contacts, listed by a similar category, all at once, saving time and increasing
efficiency.
* *Code contributed*: [https://nuscs2113-ay1819s1.github.io/dashboard/#=undefined&search=lowginwee[RepoSense]] [https://github.com/CS2113-AY1819S1-W12-3/main/blob/master/collated/functional/LowGinWee.md[Functional code]] [https://github.com/CS2113-AY1819S1-W12-3/main/blob/master/collated/test/LowGinWee.md[Test code]] [https://github.com/CS2113-AY1819S1-W12-3/main/pulls?q=is%3Apr+is%3Aclosed+author%3ALowGinWee[Pull Requests]]
* *Other contributions*:
** Project management:
*** Managed releases `v1.2` - `v1.4` (3 releases) on GitHub
** Enhancements to existing features:
**** Updated the colour scheme, made cosmetic tweaks of UI, icon and labels (Pull Requests: https://github.com/CS2113-AY1819S1-W12-3/main/pull/118[#118], https://github.com/CS2113-AY1819S1-W12-3/main/pull/120[#120], https://github.com/CS2113-AY1819S1-W12-3/main/pull/130[#130], https://github.com/CS2113-AY1819S1-W12-3/main/pull/138[#138])
** Issues resolved: https://github.com/CS2113-AY1819S1-W12-3/main/issues/143[1], https://github.com/CS2113-AY1819S1-W12-3/main/issues/148[2], https://github.com/CS2113-AY1819S1-W12-3/main/issues/153[3]
** Community:
*** PRs reviewed (with non-trivial review comments): (Pull Requests: https://github.com/CS2113-AY1819S1-W12-3/main/pull/70[#70], https://github.com/CS2113-AY1819S1-W12-3/main/pull/86[#86])
*** Reported bugs and suggestions for other teams in the class (examples: https://github.com/CS2113-AY1819S1-T16-2/main/issues/105[1], https://github.com/CS2113-AY1819S1-T16-2/main/issues/101[2])
== Contributions to the User Guide
|===
|_Given below are sections I contributed to the User Guide. They showcase my ability to write documentation targeting end-users._
|===
include::../UserGuide.adoc[tag=schedule]
== Contributions to the Developer Guide
|===
|_Given below are sections I contributed to the Developer Guide. They showcase my ability to write technical documentation and the technical depth of my contributions to the project._
|===
include::../DeveloperGuide.adoc[tag=schedule]
// ---- workshop/content/exercises/deploy-sample-app.adoc (volaka/lab-tekton-pipelines, Apache-2.0) ----

For this tutorial, you're going to use a simple Node.js application that interacts with a
MongoDB database. The workshop is configured to provide you a pre-created OpenShift project
(i.e. Kubernetes namespace) where you will set up supplementary resources for your
application for its eventual deployment.
We can verify the name of the project with:
[source,bash,role=execute-1]
----
oc project -q
----
This project name corresponds to the section of the OpenShift image registry discussed
in the last section.
You will use the link:https://github.com/sclorg/nodejs-ex[nodejs-ex] sample application
during this workshop (i.e. the Node.js application).
To prepare for `nodejs-ex's` eventual deployment, you will create Kubernetes objects that
are supplementary to the application, such as a route (i.e. url). The deployment will not
complete since there are no container images built for the `nodejs-ex` application yet.
You will build those images in the following sections through a CI/CD pipeline.
Create the supplementary Kubernetes objects by running the command below:
[source,bash,role=execute-1]
----
oc create -f sampleapp/sampleapp.yaml
----
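The contents of `sampleapp/sampleapp.yaml` are not shown in this workshop. As a hedged illustration only (not the actual file), one of the supplementary objects it creates, a route exposing the application, could look like this, assuming the service is named `nodejs-ex`:

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: nodejs-ex        # hypothetical name; the real file may differ
spec:
  to:
    kind: Service
    name: nodejs-ex      # routes external traffic to the application's service
```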
`nodejs-ex` also needs a MongoDB database. You can deploy a container with MongoDB
to your OpenShift project by running the following command:
[source,bash,role=execute-1]
----
oc new-app centos/mongodb-36-centos7 -e MONGODB_USER=admin MONGODB_DATABASE=mongodb MONGODB_PASSWORD=secret MONGODB_ADMIN_PASSWORD=super-secret
----
You should see `-> Success` in the output of the command, which verifies the successful
deployment of the container image.
The command above uses a container image with a CentOS 7 operating system and MongoDB 3.6
installed. It also sets environment variables using the `-e` option. These environment
variables are needed by MongoDB for its deployment, such as the username, database name,
password, and the admin password.
The last step is to set an environment variable for `nodejs-ex` that will provide the
application the url to connect to the MongoDB. This will use the service name of the
MongoDB (i.e. `mongodb-36-centos7`) stored as part of an environment variable.
A service is an abstract way to expose an application running on a set of pods as a network
service. Using this service name will allow `nodejs-ex` to reference a consistent endpoint in
the event the pod hosting your MongoDB container is updated from events such as scaling
pods up or down or re-deploying your MongoDB container image with updates.
You can see all the services, including the one for `nodejs-ex` in your OpenShift project
by running the following command:
[source,bash,role=execute-1]
----
oc get services
----
Now that you are familiar with Kubernetes services, go ahead and connect `nodejs-ex` to
the MongoDB. To do this, run the following command:
[source,bash,role=execute-1]
----
oc set env dc/nodejs-ex MONGO_URL="mongodb://admin:secret@mongodb-36-centos7:27017/mongodb"
----
To verify the resources needed to support `nodejs-ex` and the MongoDB have been created,
you can head out to the OpenShift web console.
**NOTE:** An error message may appear when you first visit the console because your workshop user does not have
permission to view projects other than your own. The web console
may also start on a project other than the one you are using for this workshop, but all you will need
to do is follow the instructions below to navigate to your project.
You can make your way to the web console by clicking on the **Console** tab next to the
**Terminal** tab at the center top of the workshop in your browser.
Make sure the **Developer** option from the dropdown in the top left corner of the web console
is selected as shown below:
image:../images/developer-view.png[Developer View]
Next, select the Project dropdown menu shown below and choose the project namespace you have
been working with. As a reminder, the project you are using is **%project_namespace%** and this
will differ from what is shown in the photo below:
image:../images/web-console-project.png[Web Console Project]
Next, click on the **Topology** tab on the left side of the web console if you don't
see what's shown in the image below. Once in the **Topology** view, you can see the deployment
config for the `nodejs-ex` application and the MongoDB, which will look similar to what
is shown in the image below:
image:../images/topology-view.png[Topology View]
You'll notice the white circle around the `nodejs-ex` deployment config. This means
that `nodejs-ex` isn't running yet. More specifically, no container hosting the `nodejs-ex`
application has been created, built, and deployed yet.
The `mongodb-36-centos7` deployment config has a dark blue circle around it, meaning that
a pod is running with a MongoDB container on it. The MongoDB should be all set
to support the `nodejs-ex` application at this point.
In the next section, you'll learn how to use Tekton tasks. Clear your terminal before continuing.
Running the command below will also return you to the terminal:
[source,bash,role=execute-1]
----
clear
----
// ---- subprojects/building-spring-boot-2-projects-with-gradle/contents/index.adoc (kathirsv/guides, Apache-2.0) ----

= Building Spring Boot 2 Applications with Gradle
This guide shows how to build a new Gradle project for Spring Boot 2.0.
First we show some noteworthy features of Spring Boot and its Gradle plugin.
Next we’ll setup the Gradle project, apply the Spring Boot plugin, use the Gradle BOM support to define the dependencies and create an example project.
== Noteworthy Spring Boot 2 features
As Spring Boot uses the Spring Framework 5.x, the minimum Java version was bumped to 8 with support for Java 9.
With this release Spring also includes support for Kotlin 1.2.x.
In addition to that, it now fully supports Reactive Spring with which you are able to build reactive applications.
The whole autoconfiguration mechanism provided by Spring Boot has been enriched as well with several new reactive versions of for example MongoDB, Redis and others.
The Spring Boot Gradle plugin went through a major overhaul with the following improvements:
* To build executable jars and wars, the `bootRepackage` task has been replaced with `bootJar` and `bootWar` respectively.
* The plugin itself no longer automatically applies the Spring Dependency Management plugin.
Instead, it reacts to the Spring Dependency Management plugin being applied and configured with the `spring-boot-dependencies` BOM (bill of materials; we will go into more detail about BOM support later in this guide).
== What you'll need
* About +++<span class="time-to-complete-text"></span>+++
* A text editor or IDE
* The Java Development Kit (JDK), version 1.8 or higher
* A https://gradle.org/install[Gradle distribution], version 4.6 or better
== Initializing the Gradle project
First we need to initialize the Gradle project.
For that we use Gradle’s `init` task which creates a template project with an empty build file.
The generated project includes the Gradle wrapper out of the box such that you can easily share the project with users that do not have Gradle locally installed.
It also adds the default source directories, test dependencies and JCenter as default dependency repository.
Please have a look at its link:{user-manual}build_init_plugin.html[documentation] to read more about the `init` task.
First we need to create the sample project folder in our home directory and initialize the project:
[listing.terminal.sample-command]
----
$ mkdir gradle-spring-boot-project
$ cd gradle-spring-boot-project
$ gradle init --type java-application
> Task :wrapper
Select build script DSL:
1: Groovy
2: Kotlin
Enter selection (default: Groovy) [1..2]
Select test framework:
1: JUnit 4
2: TestNG
3: Spock
4: JUnit Jupiter
Enter selection (default: JUnit 4) [1..4]
Project name (default: gradle-spring-boot-project):
Source package (default: gradle.spring.boot.project):
> Task :init
Get more help with your project: https://docs.gradle.org/6.0.1/userguide/tutorial_java_projects.html
BUILD SUCCESSFUL
2 actionable tasks: 2 executed
----
The generated project has the following structure:
[listing]
----
gradle-spring-boot-project
├── build.gradle
├── gradle
│ └── wrapper
│ ├── gradle-wrapper.jar
│ └── gradle-wrapper.properties
├── gradlew
├── gradlew.bat
├── settings.gradle
└── src
├── main
│ └── java
│ └── App.java
└── test
└── java
└── AppTest.java
----
Next we need to apply the Spring Boot plugin and define the dependencies.
== Applying the Spring Boot plugin and configuring the dependencies
Spring provides a standalone Spring Boot Gradle plugin which adds some tasks and configurations to ease the work with Spring Boot based projects.
To start off we first need to apply the plugin.
For that open the `build.gradle` file and adapt the `plugin` block such that it looks like the following snippet:
.build.gradle
[source,groovy]
----
include::{samplescodedir}/sample-project/build.gradle[tags=plugins]
----
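The included build script fragment is not reproduced in this guide's source. A sketch of a plugins block consistent with the surrounding text (the exact Spring Boot version is an assumption based on the 2.0.x links used elsewhere in this guide) could be:

```groovy
plugins {
    id 'java'
    id 'org.springframework.boot' version '2.0.3.RELEASE'
}
```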
Next we need to add the dependencies needed to compile and run our example as we are not using Spring’s dependency management plugin.
For that we use the Gradle's BOM support and load the Spring Boot BOM file to be able to resolve all required dependencies with the proper version.
NOTE: If you’d like to read more about Gradle’s BOM support please visit link:{user-manual}managing_transitive_dependencies.html#sec:bom_import[this page].
To define the dependencies adapt the `dependencies` block as shown below.
This snippet will add the Spring Boot BOM file as first dependency with the specified Spring Boot version.
The other dependencies do not need to have a specific version as these are implicitly defined in the BOM file.
.build.gradle
[source,groovy]
----
include::{samplescodedir}/sample-project/build.gradle[tags=dependencies]
----
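The included dependencies block is not shown here. A hedged sketch of what it might contain (coordinates and version are assumptions based on the text; with Gradle's BOM support enabled, the starter dependencies need no explicit versions) could be:

```groovy
dependencies {
    // Importing the BOM pins the versions of all Spring Boot managed dependencies.
    implementation 'org.springframework.boot:spring-boot-dependencies:2.0.3.RELEASE'
    implementation 'org.springframework.boot:spring-boot-starter-web'
    testImplementation 'junit:junit'
}
```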
To comply with the Spring Boot BOM, the `components` block is needed to force the `snakeyaml` dependency to version `1.19`, because the `spring-beans` dependency pulls in version `1.20` as a transitive dependency.
If you are using a Gradle version before 5.0, you need to enable Gradle's BOM support by adding the following line to the `settings.gradle` file in the root of the project:
.settings.gradle
[source,groovy]
----
include::{samplescodedir}/sample-project/settings.gradle[]
----
If you would like to explore the versions of the dependencies used, which transitive dependencies are included or see where you have conflicts you can find this information in a https://gradle.com/build-scans[build scan].
The following screenshot shows an example of the dependencies section of the build scan:
image::dependencies.png[]
== Creating a "Hello Gradle" sample application
For the example application we create a simple "Hello Gradle" application.
First we need to move the `App` and `AppTest` classes to a `hello` package to facilitate Spring’s component scan.
For that create the `src/main/java/hello` and `src/test/java/hello` directories, move the respective classes to the folders.
Next adapt the `App` class located in the `src/main/java/hello` folder and replace its content with the following:
.App.java
[source,java]
----
include::{samplescodedir}/sample-project/src/main/java/hello/App.java[]
----
.HelloGradleController.java
[source,java]
----
include::{samplescodedir}/sample-project/src/main/java/hello/HelloGradleController.java[]
----
In the above snippets we create a new Spring Boot application and a `HelloGradleController` which returns `Hello Gradle!` when a `GET` request is processed on the root path of the application.
To test this functionality we need to create an integration test.
For that adapt the `AppTest` class located in the `src/test/java/hello` folder and replace its content with the following:
.AppTest.java
[source,java]
----
include::{samplescodedir}/sample-project/src/test/java/hello/AppTest.java[]
----
The `helloGradle` test method spins up the `App` Spring Boot application and asserts the returned content when doing a `GET` request on the root path.
As a last step we need to define the main class name for the Spring Boot jar file.
For that we need to define the `mainClassName` attribute on the `bootJar` configuration closure.
Add the following snippet to your `build.gradle` and then we are ready to run the Spring Boot application.
.build.gradle
[source,groovy]
----
include::{samplescodedir}/sample-project/build.gradle[tags=mainClassName]
----
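If you are not viewing the rendered sample, the tagged snippet typically boils down to something like the following (the class name `hello.App` is an assumption based on the package layout used above):

[source,groovy]
----
// Tell the Spring Boot Gradle plugin which class contains the main() method
bootJar {
    mainClassName = 'hello.App'
}
----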
== Building and running the Spring Boot application
To build the executable jar you can execute the following command:
[listing]
----
$ ./gradlew bootJar
----
The executable jar is located in the `build/libs` directory and you can run it by executing the following command:
[listing]
----
$ java -jar build/libs/gradle-spring-boot-project.jar
----
Another way to run the application is by executing the following Gradle command:
[listing]
----
$ ./gradlew bootRun
----
This command will run the Spring Boot application on the default port `8080` directly.
After a successful startup you can open your browser and access ++http://localhost:8080++ and you should see the `Hello Gradle!` message in the browser window.
== Migrate from an existing Spring Boot 1.5 project
If you already have an existing 1.5.x Spring Boot project and want to migrate to the newer 2.x version you can follow https://github.com/spring-projects/spring-boot/wiki/Spring-Boot-2.0-Migration-Guide#spring-boot-gradle-plugin[this guide].
Please read the upgrade notes carefully to successfully upgrade to the newest Spring Boot Gradle plugin.
== Next Steps
Now that you know the basics of the new Spring Boot Gradle plugin, you can read https://docs.spring.io/spring-boot/docs/2.0.3.RELEASE/gradle-plugin/reference/html[its documentation] for further details.
Please also have a look at https://gradle.com[Gradle Enterprise] if you are interested in build scans and even more metrics and tools for your builds on premise.
include::contribute[repo-path="gradle-guides/building-spring-boot-2-projects-with-gradle"]
[id='project-deploying-con_{context}']
= Deploying a project
The deployment process can vary based on the requirements of your infrastructure.
In a simple deployment of {PRODUCT}, you have one {CENTRAL} and one {KIE_SERVER}. You can use {CENTRAL} to develop your business assets and services and also to manage the {KIE_SERVER}. You can build your project in {CENTRAL} and deploy it automatically onto the {KIE_SERVER}. To enable automatic deployment, {CENTRAL} includes a built-in Maven repository. You can use {CENTRAL} to manage the {KIE_SERVER}, deploying, removing, starting, and stopping any of the services and their project versions that you have built.
You can also connect several {KIE_SERVERS} to the same {CENTRAL} and group them into different server configurations (in *Menu* -> *Deploy* -> *Execution Servers*). Servers belonging to the same server configuration run the same services, but you can deploy different projects or different versions of projects on different configurations. For example, you could have test servers in the `Test` configuration and production servers in a `Production` configuration. As you develop business assets and services in a project, deploy the project on the `Test` server configuration and then, when a version of the project is sufficiently tested, you can deploy it on the `Production` server configuration.
In this case, to keep developing the project, change the version in the project settings. Then the new version and the old version are seen as different artifacts in the built-in Maven repository. You can deploy the new version on the `Test` server configuration and keep running the old version on the `Production` server configuration. This deployment process is simple but has significant limitations. Notably, there is not enough access control: a developer can deploy a project directly into production.
If you require a proper integration process, you can use an external Maven repository (for example, Nexus). You can configure {CENTRAL} to push project files into the external repository instead of the built-in repository. You can still use {CENTRAL} to deploy projects on a test {KIE_SERVER}. However, in this process, other {KIE_SERVERS} (for example, staging and production) are not connected to {CENTRAL}. Instead, they retrieve the project KJAR files and any necessary dependencies from your Maven repository. You can progress the KJAR versions through your repository as necessary, in line with the integration process that you want to implement.
When you set up a {KIE_SERVER}, you can configure access to a Maven repository and then you can use the REST API of the server to load and start a KIE container (deployment unit) with the services from the project. If you deploy a {KIE_SERVER} on OpenShift, you can configure it to load and start a service from a Maven repository automatically. A Maven project and a Java client library for automating the API calls are available.
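For illustration, loading and starting a KIE container through the {KIE_SERVER} REST API can look like the following sketch. The host, credentials, container ID, and Maven coordinates are placeholders; check the {KIE_SERVER} REST API documentation for the exact payload supported by your version.

[source]
----
curl -X PUT -H "Content-Type: application/json" -u user:password \
  "http://kie-server-host:8080/kie-server/services/rest/server/containers/my-container" \
  -d '{"container-id" : "my-container",
       "release-id" : {
           "group-id" : "com.example",
           "artifact-id" : "my-project",
           "version" : "1.0.0"
       }}'
----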
=== Document Title XXX
== Level 1 Section Title TODO
TODO
[cols="3*"]
|===
|TODO
|Cell in column 2, row 1
|Cell in column 3, row 1
|Cell in column 1, row 2
|Cell in column 2, row 2
|Cell in column 3, XXX
|===
* level 1 TODO
* level 2
* level 3 TODO
== Abundance of architect roles
For the last 10 years or so, different architect roles have emerged: enterprise architects, platform architects, infrastructure architects, integration architects, security architects, data architects ... Behind this trend was the idea that you needed to specialize in a given area and master it from top to bottom. We noticed three main consequences:
1. It's no surprise that this siloed approach ended up with many architects trying to collaborate on a project. From a delivery team's perspective, a swarm of architects descended on them, each with a different perspective, skill set, and objectives. Many of the teams we interviewed told us it could be a challenge to align everyone on a common vision.
2. We lost the perspective of the overall system. This highly specialized approach did not help to consider all the stacks on which products are built: from deep roots within infrastructure to highly functional layers.
3. Last but not least, architects were mostly staffed outside of teams in a transverse position, making it difficult to be really involved within teams.
With Continuous Architecture, we try to make the life of delivery teams easier and ensure a fullstack design. We ended up with 3 roles: Enterprise architect, Product architect and Fullstack architect.
[cols=3*]
|===
|Enterprise Architect
|Product Architect
|Fullstack Architect
| image:./img/EA_role.png[Click to enlarge]
| image:./img/PA_role.png[Click to enlarge]
| image:./img/FA_role.png[Click to enlarge]
|===
== Why only describing three roles?
First, let it be clear: by no means are we saying that all other architect roles no longer exist, but they were not serving our purpose. Let's take several examples:

* network architects: it's an interesting case, as the network itself can be considered a product (a big one, though) and as such could have a product architect. But each team nowadays is building distributed systems and as such should consider the network when designing its product. Network architects can be consulted in specific cases, when a question exceeds the skills of the team.
* integration or data architects: while they can be useful to help teams on these tough topics, they are not staffed within the teams, creating a dependency and preventing the teams from taking ownership.
* security architects: no one can say security does not matter. So what's wrong with having security architects? It's the same problem as with integration architects: you can always rely on an external security architect to deal with security questions, but we prefer to have teams empowered on this matter, relying on security architects when a question goes beyond their knowledge or expertise.

Do you see a pattern emerging? Specialized architects are still useful, either to manage their own product (network, data centers, integration middlewares ...) or to help teams when they reach the limits of their knowledge and expertise.

While we accept that these different architect roles still exist and are useful, we wanted to insist on three different horizons:
* linking the business strategy & architecture with the Information System: that's the purpose of Enterprise architects.
* designing products to meet end-user expectations and fix their problems. The idea here is to focus on the value created by products and their usage by end users. It implies real proximity between the Product Architect, the end users, and the product owners
* designing the product as a distributed system. It's the purpose of the Fullstack architect, and here we drew our inspiration from the fullstack developer.
== Staff the teams with appropriate skills
Product delivery teams get asked to do a lot of different things, each of which requires a different skillset. And as you understand by now, we're asking the team to perform architecture activities too.
Roles, job positions and persons are different things:
* A role is a set of activities that needs to be done in a team.
* A person can have one or more roles depending on the person's skill sets and appetence. We've seen several cases where both Product Architect and Fullstack Architect were filled by the same person.
* It's perfectly acceptable to have some architecture activities performed by team members.
* The roles a person takes define that person's job position.
---
title: OpenShift Remove Stuck ServiceInstance
date: 2018-12-15
categories: ["openshift"]
tags: ["Red Hat","container","serviceinstance","service catalog","servicebinding","kubernetes"]
language: en
slug: openshift-remove-stuck-serviceinstance
---
== OpenShift Remove Stuck ServiceInstance
To delete a stuck serviceinstance where a project namespace no longer exists:
[source]
----
$ oc get serviceinstance --all-namespaces -o wide
NAMESPACE NAME CLASS PLAN STATUS AGE
test cakephp-mysql-example-vfzkq ClusterServiceClass/cakephp-mysql-example default Failed 113d
test cakephp-mysql-persistent-f75gl ClusterServiceClass/cakephp-mysql-persistent default Failed 113d
webconsole-extensions httpd-example-6fxx5 ClusterServiceClass/httpd-example default DeprovisionCallFailed 10d
----
. Create the project namespace again
$ oc new-project test
. Now delete the serviceinstance
$ oc delete serviceinstance cakephp-mysql-example-vfzkq -n test
. If that doesn't delete it, then remove the finalizer
$ oc edit serviceinstance cakephp-mysql-example-vfzkq -n test
Delete:
finalizers:
- kubernetes-incubator/service-catalog
. If that doesn't delete it, or you are told you cannot write without further changes such as:
# serviceinstances.servicecatalog.k8s.io "cakephp-mysql-example-vfzkq" was not valid:
# * status.inProgressProperties: Required value: inProgressProperties is required when currentOperation is "Provision", "Update" or "Deprovision"
Then wait for my update, when I figure out how to get rid of it. I suspect that this serviceinstance object was built prior to later changes and restrictions from API updates, causing this problem saving the object.
I tried hacking it by placing this block where it asked, copied from a different serviceinstance block, but it didn't quite work:
[source]
----
inProgressProperties: "none"
clusterServicePlanExternalID: e8628b24-2157-11e8-97ea-001a4a16015f
clusterServicePlanExternalName: dev
parameterChecksum: 0e9965e95b0127174b3a349ade9ec80a5e98cc9c4ea4938ebd2e947d6ee297ef
parameters:
DATABASE_SERVICE_NAME: <redacted>
MEMORY_LIMIT: <redacted>
MONGODB_DATABASE: <redacted>
MONGODB_VERSION: <redacted>
NAMESPACE: <redacted>
VOLUME_CAPACITY: <redacted>
userInfo:
extra:
scopes.authorization.openshift.io:
- user:full
groups:
- system:authenticated:oauth
- system:authenticated
uid: ""
username: admin
----
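Side note: instead of `oc edit`, the finalizer can also be cleared non-interactively with `oc patch` (untested here, and it may run into the same validation error as above):

[source]
----
$ oc patch serviceinstance cakephp-mysql-example-vfzkq -n test \
    --type=merge -p '{"metadata":{"finalizers":null}}'
----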
= Alert rule: CephOSDDiskUnavailable
include::partial$runbooks/contribution_note.adoc[]
== icon:glasses[] Overview
This alert fires if an OSD has been removed from the cluster with `ceph osd out` and its associated OSD pod isn't running.
== icon:bug[] Steps for debugging
// Add detailed steps to debug and resolve the issue
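Until detailed steps are contributed, a reasonable starting point (assuming the standard Rook namespace `rook-ceph` and the `rook-ceph-tools` toolbox deployment) is:

[source,shell]
----
# Check overall cluster health and find OSDs that are down or out
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph status
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph osd tree

# Inspect the OSD pods and the events of the affected one
kubectl -n rook-ceph get pods -l app=rook-ceph-osd
kubectl -n rook-ceph describe pod <affected-osd-pod>
----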
[[rsocket]]
= RSocket Security
Spring Security's RSocket support relies on a `SocketAcceptorInterceptor`.
The main entry point into security is in `PayloadSocketAcceptorInterceptor`, which adapts the RSocket APIs to allow intercepting a `PayloadExchange` with `PayloadInterceptor` implementations.
Complete sample applications that demonstrate RSocket security are available:
* Hello RSocket {gh-samples-url}/reactive/rsocket/hello-security[hellorsocket]
* https://github.com/rwinch/spring-flights/tree/security[Spring Flights]
== Minimal RSocket Security Configuration
You can find a minimal RSocket Security configuration below:
====
.Java
[source,java,role="primary"]
----
@Configuration
@EnableRSocketSecurity
public class HelloRSocketSecurityConfig {
@Bean
public MapReactiveUserDetailsService userDetailsService() {
UserDetails user = User.withDefaultPasswordEncoder()
.username("user")
.password("user")
.roles("USER")
.build();
return new MapReactiveUserDetailsService(user);
}
}
----
.Kotlin
[source,kotlin,role="secondary"]
----
@Configuration
@EnableRSocketSecurity
open class HelloRSocketSecurityConfig {
@Bean
open fun userDetailsService(): MapReactiveUserDetailsService {
val user = User.withDefaultPasswordEncoder()
.username("user")
.password("user")
.roles("USER")
.build()
return MapReactiveUserDetailsService(user)
}
}
----
====
This configuration enables <<rsocket-authentication-simple,simple authentication>> and sets up <<rsocket-authorization,rsocket-authorization>> to require an authenticated user for any request.
== Adding SecuritySocketAcceptorInterceptor
For Spring Security to work, we need to apply `SecuritySocketAcceptorInterceptor` to the `ServerRSocketFactory`.
Doing so connects our `PayloadSocketAcceptorInterceptor` with the RSocket infrastructure.
In a Spring Boot application, you can do this automatically by using `RSocketSecurityAutoConfiguration` with the following code:
====
.Java
[source,java,role="primary"]
----
@Bean
RSocketServerCustomizer springSecurityRSocketSecurity(SecuritySocketAcceptorInterceptor interceptor) {
return (server) -> server.interceptors((registry) -> registry.forSocketAcceptor(interceptor));
}
----
.Kotlin
[source,kotlin,role="secondary"]
----
@Bean
fun springSecurityRSocketSecurity(interceptor: SecuritySocketAcceptorInterceptor): RSocketServerCustomizer {
return RSocketServerCustomizer { server ->
server.interceptors { registry ->
registry.forSocketAcceptor(interceptor)
}
}
}
----
====
[[rsocket-authentication]]
== RSocket Authentication
RSocket authentication is performed with `AuthenticationPayloadInterceptor`, which acts as a controller to invoke a `ReactiveAuthenticationManager` instance.
[[rsocket-authentication-setup-vs-request]]
=== Authentication at Setup versus Request Time
Generally, authentication can occur at setup time or at request time or both.
Authentication at setup time makes sense in a few scenarios.
A common scenarios is when a single user (such as a mobile connection) uses an RSocket connection.
In this case, only a single user uses the connection, so authentication can be done once at connection time.
In a scenario where the RSocket connection is shared, it makes sense to send credentials on each request.
For example, a web application that connects to an RSocket server as a downstream service would make a single connection that all users use.
In this case, if the RSocket server needs to perform authorization based on the web application's users credentials, authentication for each request makes sense.
In some scenarios, authentication at both setup and for each request makes sense.
Consider a web application, as described previously.
If we need to restrict the connection to the web application itself, we can provide a credential with a `SETUP` authority at connection time.
Then each user can have different authorities but not the `SETUP` authority.
This means that individual users can make requests but not make additional connections.
[[rsocket-authentication-simple]]
=== Simple Authentication
Spring Security has support for the https://github.com/rsocket/rsocket/blob/5920ed374d008abb712cb1fd7c9d91778b2f4a68/Extensions/Security/Simple.md[Simple Authentication Metadata Extension].
[NOTE]
====
Basic Authentication evolved into Simple Authentication and is only supported for backward compatibility.
See `RSocketSecurity.basicAuthentication(Customizer)` for setting it up.
====
The RSocket receiver can decode the credentials by using `AuthenticationPayloadExchangeConverter`, which is automatically setup by using the `simpleAuthentication` portion of the DSL.
The following example shows an explicit configuration:
====
.Java
[source,java,role="primary"]
----
@Bean
PayloadSocketAcceptorInterceptor rsocketInterceptor(RSocketSecurity rsocket) {
rsocket
.authorizePayload(authorize ->
authorize
.anyRequest().authenticated()
.anyExchange().permitAll()
)
.simpleAuthentication(Customizer.withDefaults());
return rsocket.build();
}
----
.Kotlin
[source,kotlin,role="secondary"]
----
@Bean
open fun rsocketInterceptor(rsocket: RSocketSecurity): PayloadSocketAcceptorInterceptor {
rsocket
.authorizePayload { authorize -> authorize
.anyRequest().authenticated()
.anyExchange().permitAll()
}
.simpleAuthentication(withDefaults())
return rsocket.build()
}
----
====
The RSocket sender can send credentials by using `SimpleAuthenticationEncoder`, which you can add to Spring's `RSocketStrategies`.
====
.Java
[source,java,role="primary"]
----
RSocketStrategies.Builder strategies = ...;
strategies.encoder(new SimpleAuthenticationEncoder());
----
.Kotlin
[source,kotlin,role="secondary"]
----
var strategies: RSocketStrategies.Builder = ...
strategies.encoder(SimpleAuthenticationEncoder())
----
====
You can then use it to send a username and password to the receiver in the setup:
====
.Java
[source,java,role="primary"]
----
MimeType authenticationMimeType =
MimeTypeUtils.parseMimeType(WellKnownMimeType.MESSAGE_RSOCKET_AUTHENTICATION.getString());
UsernamePasswordMetadata credentials = new UsernamePasswordMetadata("user", "password");
Mono<RSocketRequester> requester = RSocketRequester.builder()
.setupMetadata(credentials, authenticationMimeType)
.rsocketStrategies(strategies.build())
.connectTcp(host, port);
----
.Kotlin
[source,kotlin,role="secondary"]
----
val authenticationMimeType: MimeType =
MimeTypeUtils.parseMimeType(WellKnownMimeType.MESSAGE_RSOCKET_AUTHENTICATION.string)
val credentials = UsernamePasswordMetadata("user", "password")
val requester: Mono<RSocketRequester> = RSocketRequester.builder()
.setupMetadata(credentials, authenticationMimeType)
.rsocketStrategies(strategies.build())
.connectTcp(host, port)
----
====
Alternatively or additionally, a username and password can be sent in a request.
====
.Java
[source,java,role="primary"]
----
Mono<RSocketRequester> requester;
UsernamePasswordMetadata credentials = new UsernamePasswordMetadata("user", "password");
public Mono<AirportLocation> findRadar(String code) {
return this.requester.flatMap(req ->
req.route("find.radar.{code}", code)
.metadata(credentials, authenticationMimeType)
.retrieveMono(AirportLocation.class)
);
}
----
.Kotlin
[source,kotlin,role="secondary"]
----
import org.springframework.messaging.rsocket.retrieveMono
// ...
var requester: Mono<RSocketRequester>? = null
var credentials = UsernamePasswordMetadata("user", "password")
open fun findRadar(code: String): Mono<AirportLocation> {
return requester!!.flatMap { req ->
req.route("find.radar.{code}", code)
.metadata(credentials, authenticationMimeType)
.retrieveMono<AirportLocation>()
}
}
----
====
[[rsocket-authentication-jwt]]
=== JWT
Spring Security has support for the https://github.com/rsocket/rsocket/blob/5920ed374d008abb712cb1fd7c9d91778b2f4a68/Extensions/Security/Bearer.md[Bearer Token Authentication Metadata Extension].
The support comes in the form of authenticating a JWT (determining that the JWT is valid) and then using the JWT to make authorization decisions.
The RSocket receiver can decode the credentials by using `BearerPayloadExchangeConverter`, which is automatically setup by using the `jwt` portion of the DSL.
The following listing shows an example configuration:
====
.Java
[source,java,role="primary"]
----
@Bean
PayloadSocketAcceptorInterceptor rsocketInterceptor(RSocketSecurity rsocket) {
rsocket
.authorizePayload(authorize ->
authorize
.anyRequest().authenticated()
.anyExchange().permitAll()
)
.jwt(Customizer.withDefaults());
return rsocket.build();
}
----
.Kotlin
[source,kotlin,role="secondary"]
----
@Bean
fun rsocketInterceptor(rsocket: RSocketSecurity): PayloadSocketAcceptorInterceptor {
rsocket
.authorizePayload { authorize -> authorize
.anyRequest().authenticated()
.anyExchange().permitAll()
}
.jwt(withDefaults())
return rsocket.build()
}
----
====
The configuration above relies on the existence of a `ReactiveJwtDecoder` `@Bean` being present.
An example of creating one from the issuer can be found below:
====
.Java
[source,java,role="primary"]
----
@Bean
ReactiveJwtDecoder jwtDecoder() {
return ReactiveJwtDecoders
.fromIssuerLocation("https://example.com/auth/realms/demo");
}
----
.Kotlin
[source,kotlin,role="secondary"]
----
@Bean
fun jwtDecoder(): ReactiveJwtDecoder {
return ReactiveJwtDecoders
.fromIssuerLocation("https://example.com/auth/realms/demo")
}
----
====
The RSocket sender does not need to do anything special to send the token, because the value is a simple `String`.
The following example sends the token at setup time:
====
.Java
[source,java,role="primary"]
----
MimeType authenticationMimeType =
MimeTypeUtils.parseMimeType(WellKnownMimeType.MESSAGE_RSOCKET_AUTHENTICATION.getString());
BearerTokenMetadata token = ...;
Mono<RSocketRequester> requester = RSocketRequester.builder()
.setupMetadata(token, authenticationMimeType)
.connectTcp(host, port);
----
.Kotlin
[source,kotlin,role="secondary"]
----
val authenticationMimeType: MimeType =
MimeTypeUtils.parseMimeType(WellKnownMimeType.MESSAGE_RSOCKET_AUTHENTICATION.string)
val token: BearerTokenMetadata = ...
val requester = RSocketRequester.builder()
.setupMetadata(token, authenticationMimeType)
.connectTcp(host, port)
----
====
Alternatively or additionally, you can send the token in a request:
====
.Java
[source,java,role="primary"]
----
MimeType authenticationMimeType =
MimeTypeUtils.parseMimeType(WellKnownMimeType.MESSAGE_RSOCKET_AUTHENTICATION.getString());
Mono<RSocketRequester> requester;
BearerTokenMetadata token = ...;
public Mono<AirportLocation> findRadar(String code) {
return this.requester.flatMap(req ->
req.route("find.radar.{code}", code)
.metadata(token, authenticationMimeType)
.retrieveMono(AirportLocation.class)
);
}
----
.Kotlin
[source,kotlin,role="secondary"]
----
val authenticationMimeType: MimeType =
MimeTypeUtils.parseMimeType(WellKnownMimeType.MESSAGE_RSOCKET_AUTHENTICATION.string)
var requester: Mono<RSocketRequester>? = null
val token: BearerTokenMetadata = ...
open fun findRadar(code: String): Mono<AirportLocation> {
return this.requester!!.flatMap { req ->
req.route("find.radar.{code}", code)
.metadata(token, authenticationMimeType)
.retrieveMono<AirportLocation>()
}
}
----
====
[[rsocket-authorization]]
== RSocket Authorization
RSocket authorization is performed with `AuthorizationPayloadInterceptor`, which acts as a controller to invoke a `ReactiveAuthorizationManager` instance.
You can use the DSL to set up authorization rules based upon the `PayloadExchange`.
The following listing shows an example configuration:
====
.Java
[source,java,role="primary"]
----
rsocket
.authorizePayload(authz ->
authz
.setup().hasRole("SETUP") // <1>
.route("fetch.profile.me").authenticated() // <2>
.matcher(payloadExchange -> isMatch(payloadExchange)) // <3>
.hasRole("CUSTOM")
.route("fetch.profile.{username}") // <4>
.access((authentication, context) -> checkFriends(authentication, context))
.anyRequest().authenticated() // <5>
.anyExchange().permitAll() // <6>
);
----
.Kotlin
[source,kotlin,role="secondary"]
----
rsocket
.authorizePayload { authz ->
authz
.setup().hasRole("SETUP") // <1>
.route("fetch.profile.me").authenticated() // <2>
.matcher { payloadExchange -> isMatch(payloadExchange) } // <3>
.hasRole("CUSTOM")
.route("fetch.profile.{username}") // <4>
.access { authentication, context -> checkFriends(authentication, context) }
.anyRequest().authenticated() // <5>
.anyExchange().permitAll()
} // <6>
----
<1> Setting up a connection requires the `ROLE_SETUP` authority.
<2> If the route is `fetch.profile.me`, authorization only requires the user to be authenticated.
<3> In this rule, we set up a custom matcher, where authorization requires the user to have the `ROLE_CUSTOM` authority.
<4> This rule uses custom authorization.
The matcher expresses a variable with a name of `username` that is made available in the `context`.
A custom authorization rule is exposed in the `checkFriends` method.
<5> This rule ensures that a request that does not already have a rule requires the user to be authenticated.
A request is where the metadata is included.
It would not include additional payloads.
<6> This rule ensures that any exchange that does not already have a rule is allowed for anyone.
In this example, it means that payloads that have no metadata also have no authorization rules.
====
Note that authorization rules are performed in order.
Only the first authorization rule that matches is invoked.
# Igor Balancing Robot MATLAB Demo
HEBI Robotics
March 2019
Matlab 2013b (or later)
## Requirements
### Controller
The demo can be run using the Mobile IO app connected to the robot's wireless network or to the computer running the demo.
* Mobile IO (Requires HEBI's Mobile IO app on an Android or iOS device)
### Software Requirements
* http://docs.hebi.us/tools.html#matlab-api[HEBI MATLAB API]
### Firmware
The demo is tuned to run using actuators that have http://docs.hebi.us/downloads_changelogs.html#firmware-changelog[firmware version] 15.2.0 or greater. You can update actuators to the most recent version of firmware using http://docs.hebi.us/tools.html#scope-gui[Scope].
## Running
To run the main demo code for Igor from Matlab:
- Open Matlab
- Navigate to this folder hebi-matlab-examples/kits/igor/
- Run startup.m
- Run igor2StartupIntegrated.m
Note: By default, the demo will look for a Mobile IO device with family `**Igor**` and name `**mobileIO**`.
## Controls
The demo provides default mappings for the supported controller. You can modify them, if needed, by editing the `components/configuration.py` file directly.
### Mobile IO
The default Mobile IO mappings are as follows.
NOTE: The layout of the application may appear different on your device than what is shown, but the buttons and axes are guaranteed across any device.
image::resources/mobile_io_igor.png[mobile io igor]
Almost all of the control code for Igor is contained within __**igor2DemoIntegrated.m**__.
The function will look for modules, look for a controller, and then start the demo by pressing the B3 Button.
After starting the demo with B3, the robot will stand up partially and start balancing. Move slider A3 up to stand up the rest of the way. Return the slider to the center to stop sending commands. There is a deadzone of 20% in each direction for the sliders.
HOLD DOWN THE B4 BUTTON TO END THE DEMO. The robot will squat down and exit the main loop when it hits the bottom of the leg travel. The robot will then wait for the B3 Button to restart the demo.
NOTE: If the Mobile IO app closes or disconnects while the demo is running, the last commands sent will continue to be sent. That is, the robot will either maintain its current balancing position OR continue moving in the commanded manner.
## Autostart
To run the main demo automatically, there are scripts provided:
- igorStart.sh - A shell script for Linux
- igorStart.bat - A batch file for Windows
Both of these scripts may need to be modified so that the paths to Matlab or the Igor code directory match the path on your machine.
[[server_tasks]]
= Server Tasks
Server tasks are server-side scripts defined in Java language.
== Implementing Server Tasks
To develop a server task, define a class that implements the
link:{javadocroot}/org/infinispan/tasks/ServerTask.html[`org.infinispan.tasks.ServerTask`]
interface, defined in the `infinispan-tasks-api` module.
A typical server task implementation would implement these methods:
* link:{javadocroot}/org/infinispan/tasks/ServerTask.html#setTaskContext-org.infinispan.tasks.TaskContext-[`setTaskContext`]
allows server tasks implementors to access execution context information.
This includes task parameters, the cache reference on which the task is executed, and so on.
Normally, implementors would store this information locally and use it when the task is actually executed.
* link:{javadocroot}/org/infinispan/tasks/Task.html#getName--[`getName`]
should return a unique name for the task.
The client will use this name to invoke the task.
* link:{javadocroot}/org/infinispan/tasks/Task.html#getExecutionMode--[`getExecutionMode`]
is used to decide whether to invoke the task in 1 node in a cluster of N nodes or invoke it in N nodes.
For example, server tasks that invoke stream processing are only required to be executed in 1 node in the cluster.
This is because stream processing itself makes sure processing is distributed to all nodes in cluster.
* http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/Callable.html?is-external=true#call--[`call`]
is the method that's invoked when the user invokes the server task.
Here's an example of a hello task that takes as a parameter the name of the person to greet.
[source,java]
----
package example;
import org.infinispan.tasks.ServerTask;
import org.infinispan.tasks.TaskContext;
public class HelloTask implements ServerTask<String> {
private TaskContext ctx;
@Override
public void setTaskContext(TaskContext ctx) {
this.ctx = ctx;
}
@Override
public String call() throws Exception {
String name = (String) ctx.getParameters().get().get("name");
return "Hello " + name;
}
@Override
public String getName() {
return "hello-task";
}
}
----
Once the task has been implemented, it needs to be wrapped inside a jar.
The jar is then deployed to the {brandname} Server and from then on it can be invoked.
The {brandname} Server uses
https://docs.oracle.com/javase/8/docs/api/java/util/ServiceLoader.html[service loader pattern]
to load the task, so implementations need to adhere to these requirements.
For example, server task implementations must have a zero-argument constructor.
Moreover, the jar must contain a
`META-INF/services/org.infinispan.tasks.ServerTask`
file containing the fully qualified name(s) of the server tasks included in the jar.
For example:
[source]
----
example.HelloTask
----
With the jar packaged, the next step is to push the jar to the {brandname} Server.
The server is powered by WildFly Application Server, so if you use Maven,
https://docs.jboss.org/wildfly/plugins/maven/latest/index.html[Wildfly's Maven plugin]
can be used for this:
[source,xml,options="nowrap",subs=attributes+]
----
include::dependencies_maven/wildfly_maven_plugin.xml[]
----
Then call the following from command line:
[source, bash]
----
$ mvn package wildfly:deploy
----
Alternative ways of deploying jar files to WildFly Application Server are explained
https://docs.jboss.org/author/display/WFLY10/Application+deployment[here].
Executing the task can be done using the following code:
[source, java]
----
// Create a configuration for a locally-running server
ConfigurationBuilder builder = new ConfigurationBuilder();
builder.addServer().host("127.0.0.1").port(11222);
// Connect to the server
RemoteCacheManager cacheManager = new RemoteCacheManager(builder.build());
// Obtain the remote cache
RemoteCache<String, String> cache = cacheManager.getCache();
// Create task parameters
Map<String, String> parameters = new HashMap<>();
parameters.put("name", "developer");
// Execute task
String greet = cache.execute("hello-task", parameters);
System.out.println(greet);
----
== FHIR Authorization and Transaction Example - Spring Boot
=== Introduction
This is an example application of the `camel-fhir` component. We'll be using `camel-spring-boot` as well for an easy setup.
The Camel route is located in the `MyCamelRouter` class.
This example will read patients stored in csv files from a directory and convert them to FHIR DSTU3 patients and upload them to a configured FHIR server. Each file is uploaded in a new transaction.
The example assumes you have a running FHIR server at your disposal, which is configured for basic authentication.
You may use link:https://github.com/rkorytkowski/hapi-fhir/tree/basic-auth/hapi-fhir-jpaserver-example[hapi-fhir-jpaserver-example]. You can start it up by running `mvn jetty:run`.
By default, the example uses `http://localhost:8080/hapi-fhir-jpaserver-example/baseDstu3` as the FHIR server URL, DSTU3 as the FHIR version, BASIC authentication (`admin` as username and `Admin123` as password) and `target/work/fhir/input`
as the directory to look for csv patients.
However, you can edit the `application.properties` file to change the defaults and provide your own configuration.
There is an example of a test in the `MyCamelApplicationTest` class, which mocks out the FHIR server, thus can be run without the FHIR server.
=== Build
You can build this example using:
```sh
$ mvn package
```
=== Run
You can run this example using:
```sh
$ mvn spring-boot:run
```
When the Camel application runs, you should see a folder created under `target/work/fhir/input`. Copy the file `hl7v2.patient`
located in the `src/main/data` folder into it. You should see the following output:
```
2018-07-24 11:52:51.615 INFO 30666 --- [work/fhir/input] fhir-example: Converting hl7v2.patient
2018-07-24 11:52:52.700 INFO 30666 --- [work/fhir/input] fhir-example: Inserting Patient: {"resourceType":"Patient","id":"100005056","name":[{"family":"Freeman","given":["Vincent"]}]}
2018-07-24 11:52:56.995 INFO 30666 --- [ #2 - CamelFhir] fhir-example: Patient created successfully: ca.uhn.fhir.rest.api.MethodOutcome@270f03f1
```
The Camel application can be stopped by pressing <kbd>ctrl</kbd>+<kbd>c</kbd> in the shell.
=== To get health check
To show a summary of spring boot health check
----
curl -XGET -s http://localhost:8080/actuator/health
----
=== To get info about the routes
To show a summary of all the routes
----
curl -XGET -s http://localhost:8080/actuator/camelroutes
----
To show detailed information for a specific route
----
curl -XGET -s http://localhost:8080/actuator/camelroutes/{id}/detail
----
=== Forum, Help, etc
If you hit any problems please let us know on the Camel Forums
<http://camel.apache.org/discussion-forums.html>
Please help us make Apache Camel better - we appreciate any feedback you may have. Enjoy!
The Camel riders!
////
/**
*@@@ START COPYRIGHT @@@
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
* @@@ END COPYRIGHT @@@
*/
////
<<<
[[commands]]
= Commands
TrafCI supports these commands in the command-line interface or in script files that you run from the command-line interface.
[cols="20%l,50%,30%",options="header"]
|===
| Command | Description | Documentation
| @ | Runs the SQL statements and interface commands contained in a specified script file. | <<cmd_at_sign, @ Command>>
| / | Runs the previously executed SQL statement. | <<cmd_slash, / Command>>
| ALIAS | Maps a string to any interface or SQL command. | <<cmd_alias, ALIAS Command>>
| CLEAR | Clears the command console so that only the prompt appears at the top of the screen. | <<cmd_clear, CLEAR Command>>
| CONNECT | Creates a new connection to the {project-name} database from a current or existing TrafCI session. | <<cmd_connect, CONNECT Command>>
| DELAY | Allows the TrafCI session to be in sleep mode for the specified interval. | <<cmd_delay, DELAY Command>>
| DISCONNECT | Terminates the connection to the {project-name} database. | <<cmd_disconnect, DISCONNECT Command>>
| ENV | Displays attributes of the current TrafCI session. | <<cmd_env, ENV Command>>
| EXIT | Disconnects from and exits the command-line interface. | <<cmd_exit, EXIT Command>>
| FC | Edits and re-executes a previous command. This command is restricted to the command-line
interface and is disallowed in script files. | <<cmd_fc, FC Command>>
| GET STATISTICS | Returns formatted statistics for the last executed SQL statement. | <<cmd_get_statistics, GET STATISTICS Command>>
| GOTO | Jumps to a point in the command history specified by the <<cmd_label, LABEL Command>>. | <<cmd_goto, GOTO Command>>
| HELP | Displays help text for the interface commands. | <<cmd_help, HELP Command>>
| HISTORY | Displays recently executed commands. | <<cmd_history, HISTORY Command>>
| IF…THEN | Allows the conditional execution of actions specified within the `IF…THEN` conditional statement. | <<cmd_if_then, IF…THEN Command>>
| LABEL | Marks a point in the command history that you can jump to by using the <<cmd_goto, GOTO Command>>. | <<cmd_label, LABEL Command>>
| LOCALHOST | Executes client machine commands. | <<cmd_localhost, LOCALHOST Command>>
| LOG | Logs commands and output from TrafCI to a log file. | <<cmd_log, LOG Command>>
| OBEY | Runs the SQL statements and interface commands contained in a specified script file. | <<cmd_obey, OBEY Command>>
| PRUN | Runs script files in parallel. | <<cmd_prun, PRUN Command>>
| QUIT | Disconnects from and exits TrafCI. | <<cmd_quit, QUIT Command>>
| RECONNECT | Creates a new connection to the {project-name} database using the login credentials of the last
successful connection. | <<cmd_reconnect, RECONNECT Command>>
| REPEAT | Re-executes a command. | <<cmd_repeat, REPEAT Command>>
| RESET LASTERROR | Resets the last error code to `0`. | <<cmd_reset_lasterror, RESET LASTERROR Command>>
| RESET PARAM | Clears all parameter values or a specified parameter value in the current session. | <<cmd_reset_param, RESET PARAM Command>>
| RUN | Runs the previously executed SQL statement. | <<cmd_run, RUN Command>>
| SAVEHIST | Saves the session history in a user-specified file. | <<cmd_savehist, SAVEHIST Command>>
| SESSION | Displays attributes of the current TrafCI session. | <<cmd_session, SESSION Command>>
| SET COLSEP | Sets the column separator and allows you to control the formatting of the result displayed for SQL queries. | <<cmd_set_colsep, SET COLSEP Command>>
| SET FETCHSIZE | Changes the default fetchsize used by JDBC. | <<cmd_set_fetchsize, SET FETCHSIZE Command>>
| SET HISTOPT | Sets the history option and controls how commands are added to the history buffer. | <<cmd_set_histopt, SET HISTOPT Command>>
| SET IDLETIMEOUT | Sets the idle timeout value for the current session. | <<cmd_set_idletimeout, SET IDLETIMEOUT>>
| SET LIST_COUNT | Sets the maximum number of rows to be returned by `SELECT` statements that are executed after this command. | <<cmd_set_list_count, SET LIST_COUNT Command>>
| SET MARKUP | Sets the markup format and controls how results are displayed by TrafCI. | <<cmd_set_markup, SET MARKUP Command>>
| SET PARAM | Sets a parameter value in the current session. | <<cmd_set_param, SET PARAM Command>>
| SET PROMPT | Sets the prompt of the current session to a specified string or to a session variable. | <<cmd_set_prompt, SET PROMPT Command>>
| SET SQLPROMPT | Sets the SQL prompt of the current session to a specified string. The default is `SQL`. | <<cmd_set_sqlprompt, SET SQLPROMPT Command>>
| SET SQLTERMINATOR | Sets the SQL statement terminator of the current session to a specified string.
The default is a semicolon (`;`). | <<cmd_set_sqlterminator, SET SQLTERMINATOR Command>>
| SET STATISTICS | Automatically retrieves the statistics information for a query being executed. | <<cmd_set_statistics, SET STATISTICS Command>>
| SET TIME | Causes the local time of the client workstation to be displayed as part of the interface prompt. | <<cmd_set_time, SET TIME Command>>
| SET TIMING | Causes the elapsed time to be displayed after each SQL statement executes. | <<cmd_set_timing, SET TIMING Command>>
| SHOW ACTIVITYCOUNT | Functions as an alias of <<cmd_show_reccount, SHOW RECCOUNT Command>>. | <<cmd_show_activitycount, SHOW ACTIVITYCOUNT Command>>
| SHOW ALIAS | Displays all or a set of aliases available in the current TrafCI session. | <<cmd_show_alias, SHOW ALIAS Command>>
| SHOW ALIASES | Displays all the aliases available in the current TrafCI session. | <<cmd_show_aliases, SHOW ALIASES Command>>
| SHOW CATALOG | Displays the current catalog of the TrafCI session. | <<cmd_show_catalog, SHOW CATALOG Command>>
| SHOW COLSEP | Displays the value of the column separator for the current TrafCI session. | <<cmd_show_colsep, SHOW COLSEP Command>>
| SHOW ERRORCODE | Functions as an alias for the <<cmd_show_lasterror, SHOW LASTERROR Command>>. | <<cmd_show_errorcode, SHOW ERRORCODE Command>>
| SHOW FETCHSIZE | Displays the fetch size value for the current TrafCI session. | <<cmd_show_fetchsize, SHOW FETCHSIZE Command>>
| SHOW HISTOPT | Displays the value that has been set for the history option of the current setting. | <<cmd_show_histopt, SHOW HISTOPT Command>>
| SHOW IDLETIMEOUT | Displays the idle timeout value of the current session. | <<cmd_show_idletimeout, SHOW IDLETIMEOUT Command>>
| SHOW LASTERROR | Displays the last error of the statement that was executed. | <<cmd_show_lasterror, SHOW LASTERROR Command>>
| SHOW LIST_COUNT | Displays the maximum number of rows to be returned by `SELECT` statements in the current session. | <<cmd_show_list_count, SHOW LIST_COUNT Command>>
| SHOW MARKUP | Displays the value that has been set for the markup option for the current TrafCI session. | <<cmd_show_markup, SHOW MARKUP Command>>
| SHOW PARAM | Displays the parameters that are set in the current session. | <<cmd_show_param, SHOW PARAM Command>>
| SHOW PREPARED | Displays the prepared statements in the current TrafCI session. | <<cmd_show_prepared, SHOW PREPARED Command>>
| SHOW RECCOUNT | Displays the record count of the previous executed SQL statement. | <<cmd_show_reccount, SHOW RECCOUNT Command>>
| SHOW REMOTEPROCESS | Displays the process name of the DCS server that is handling the current connection. | <<cmd_show_remoteprocess, SHOW REMOTEPROCESS Command>>
| SHOW SCHEMA | Displays the current schema of the TrafCI session. | <<cmd_show_schema, SHOW SCHEMA Command>>
| SHOW SESSION | Displays attributes of the current TrafCI session. | <<cmd_show_session, SHOW SESSION Command>>
| SHOW SQLPROMPT | Displays the value of the SQL prompt for the current session. | <<cmd_show_sqlprompt, SHOW SQLPROMPT Command>>
| SHOW SQLTERMINATOR | Displays the SQL statement terminator of the current session. | <<cmd_show_sqlterminator, SHOW SQLTERMINATOR Command>>
| SHOW STATISTICS | Displays if statistics has been enabled or disabled for the current session. | <<cmd_show_statistics, SHOW STATISTICS Command>>
| SHOW TIME | Displays the setting for the local time in the SQL prompt. | <<cmd_show_time, SHOW TIME Command>>
| SHOW TIMING | Displays the setting for the elapsed time. | <<cmd_show_timing, SHOW TIMING Command>>
| SPOOL | Logs commands and output from TrafCI to a log file. | <<cmd_spool, SPOOL Command>>
| VERSION | Displays the build versions of the platform, database connectivity services, JDBC Type 4 Driver, and TrafCI.| <<cmd_version, VERSION Command>>
|===
<<<
[[cmd_at_sign]]
== @ Command
The `@` command executes the SQL statements and interface commands contained in a specified script file. The `@` command is
executed the same as the `OBEY` command. For more information on syntax and considerations, <<cmd_obey, OBEY Command>>.
=== Syntax
```
@{script-file | wild-card-pattern} [(section-name)]
```
* `_script-file_`
+
is the name of an ASCII text file that contains SQL statements, interface commands, and comments. If the script file exists outside the
local directory where you launch TrafCI (by default, the `bin` directory) specify the full directory path of the script file.
* `_wild-card-pattern_`
+
is a character string used to search for script files with names that match the character string. `_wild-card-pattern_` matches a string
with case-sensitivity that depends on the operating system, unless you enclose it within double quotes. To look for similar values, specify
only part of the characters of `_wild-card-pattern_` combined with these wild-card characters:
+
[cols="10%,90%"]
|===
| `*` | Use an asterisk (`*`) to indicate zero or more characters of any type. For example, `*art*` matches `SMART`, `ARTIFICIAL`, and `PARTICULAR`.
| `?` | Use a question mark (`?`) to indicate any single character. For example, `boo?` matches `BOOK` and `BOOT` but not `BOO` or `BOOTS`.
|===
* `(_section-name_)`
+
is the name of a section within the `_script-file_` to execute. If you specify `_section-name_`, the `@` command executes the commands between
the header line for the specified section and the header line for the next section (or the end of the script file).
If you omit `_section-name_`, the `@` command executes the entire script file. For more information, see <<script_section_headers, Section Headers>>.
<<<
=== Considerations
* You must enter the command on one line. The command does not require an SQL terminator.
* Space is disallowed between the `@` sign and the first character of the script name.
* For additional considerations, see the <<cmd_obey, OBEY Command>>.
=== Examples
* This `@` command runs the script file from the local directory (the same directory where you are running TrafCI):
+
```
SQL> @ddl.sql
```
* This `@` command runs the script file in the specified directory on a Windows workstation:
+
```
SQL> @c:\my_files\ddl.sql
```
* This `@` command runs the script file in the specified directory on a Linux or UNIX workstation:
+
```
SQL> @./my_files/ddl.sql
```
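* This `@` command runs only one section of a script file, using the `(_section-name_)` syntax described above (the script and section names here are illustrative):
+
```
SQL> @ddl.sql(create_tables)
```
* This `@` command uses a wild-card pattern to match script files in the local directory whose names begin with `ddl`:
+
```
SQL> @ddl*.sql
```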
<<<
[[cmd_slash]]
== / Command
The `/` command executes the previously executed SQL statement. This command does not repeat an interface command.
=== Syntax
```
/
```
=== Considerations
* You must enter the command on one line.
* The command does not require an SQL terminator.
=== Example
This `/` command executes the previously executed `SELECT` statement:
```
SQL> SELECT COUNT(*) FROM persnl.employee;
(EXPR)
--------------------
62
--- 1 row(s) selected.
SQL> /
(EXPR)
--------------------
62
--- 1 row(s) selected.
SQL>
```
<<<
[[cmd_alias]]
== ALIAS Command
The `ALIAS` command allows you to map a string to any interface or SQL command. The syntax of the interface or SQL command
is checked only when the mapped string is executed. This command replaces only the first token of a command string, which allows
the rest of the tokens to be treated as parameters.
=== Syntax
```
ALIAS value AS command SQL-terminator
```
* `_value_`
+
is a case-insensitive string without spaces. `_Value_` cannot be a command.
* `_command_`
+
is an command or SQL command.
* `_SQL-terminator_`
+
is the default terminator (`;`) or a string value defined for the statement terminator by the
<<cmd_set_sqlterminator, SET SQLTERMINATOR Command>>. For more information, see
<<interactive_set_show_terminator, Set and Show the SQL Terminator>>.
=== Considerations
* You must enter the command on one line. The command does not require an SQL terminator.
* The `ALIAS` command lasts only for the duration of the session.
* An alias on an alias is not supported.
<<<
=== Examples
* This command creates an alias named `.OS` to perform the `LOCALHOST (LH)` command:
+
```
SQL> ALIAS .OS AS LH;
```
* This command executes the new `ALIAS` with the `ls` option:
+
```
SQL> .OS ls
trafci-perl.pl trafci-python.py trafci.cmd trafci.pl trafci.py trafci.sh
```
* This command creates an alias named `.GOTO` to perform the `GOTO` command:
+
```
SQL> ALIAS .GOTO AS GOTO;
SQL> .GOTO mylabel
```
+
The `GOTO` statement executes, ignoring all commands until a `LABEL MYLABEL` command is encountered.
* This command creates an alias named USE to perform the `SET SCHEMA` operation, uses the alias to set the schema to
`TRAFODION.USR`, and checks the current schema to verify that the alias worked correctly:
+
```
SQL> ALIAS use AS "SET SCHEMA";
SQL> use TRAFODION.USR;
SQL> SHOW SCHEMA
SCHEMA USR
```
<<<
[[cmd_clear]]
== CLEAR Command
The `CLEAR` command clears the interface window so that only the prompt appears at the top of the window. `CLEAR` does not clear the log file or
reset the settings of the session.
=== Syntax
```
CLEAR
```
=== Considerations
* You must enter the command on one line.
* The `CLEAR` command does not require an SQL terminator.
=== Example
This CLEAR command clears the interface window:
```
SQL> CLEAR
```
After the CLEAR command executes, the interface window appears with only the prompt showing:
```
SQL>
```
<<<
[[cmd_connect]]
== CONNECT Command
The `CONNECT` command creates a new connection to the database from the current or existing TrafCI session.
=== Syntax
```
CONNECT [ username [ /password ][@hostname]]
```
* `_username_`
+
specifies the user name for logging in to the database platform.
+
** If the user name is not specified, then TrafCI prompts for the user name.
** If the user name contains spaces or special characters, such as a period (`.`), hyphen (`-`), or underscore (`_`),
then put the name within double quotes. For example: *"sq.user-1"*.
* `_/password_`
+
specifies the password of the user for logging in to the database platform.
+
** If the password is not specified, then TrafCI prompts for the password.
** If the password contains spaces or special characters, such as `@` or a single quote (`'`), then put the password
within double quotes. For example: *"Tr@f0d!0n"*.
* `_@hostname_`
+
specifies the host name or IP address of the database platform to which you want the client to connect.
+
** If the hostname is not specified, then the value is automatically used from the current TrafCI session.
** If TrafCI was invoked with the `-noconnect` launch parameter, then you are prompted for a `_hostname_` value.
=== Considerations
* You must enter the command on one line. The command does not require an SQL terminator.
* If TrafCI was invoked with the `-noconnect` launch parameter, then TrafCI prompts you for the values.
* If the user name or password contains space or special characters, then you must put the name or password within double quotes.
=== Examples
* This command creates a new connection to the {project-name} database from the current or existing TrafCI session:
+
```
SQL> CONNECT
User Name: user1
Password:
Connected to Trafodion
```
* This command creates a new connection to the {project-name} database from the current or existing TrafCI session:
+
```
SQL> CONNECT user1/password
Connected to Trafodion
```
* This command creates a new connection to the {project-name} database from the current or existing TrafCI session:
+
```
SQL> CONNECT user1/password@host0101
Connected to Trafodion
```
* This command creates a new connection to the {project-name} database from the current or existing TrafCI session:
+
```
SQL> CONNECT user2
Password:
Connected to Trafodion
```
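* When the user name or password contains spaces or special characters, put it within double quotes, as described in the considerations above (the credentials here are illustrative):
+
```
SQL> CONNECT "sq.user-1"/"Tr@f0d!0n"@host0101
Connected to Trafodion
```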
<<<
[[cmd_delay]]
== DELAY Command
The `DELAY` command allows the TrafCI session to be in sleep mode for the specified interval.
=== Syntax
```
DELAY time [sec[ond][s] | min[ute][s]]
```
* `_time_`
+
is an integer.
=== Considerations
* If `seconds` or `minutes` are not specified, then the default is `seconds`.
* The maximum delay limit is 3600 seconds. You can override this value by setting `trafci.maxDelayLimit` in `_JAVA_OPTIONS`.
The unit is seconds for `trafci.maxDelayLimit`.
* This command does not require an SQL terminator.
=== Examples
* This DELAY command puts the TrafCI session to sleep for 5 seconds before executing the next command:
+
```
SQL> DELAY 5 secs
SQL> SHOW VIEWS
```
* This DELAY command puts the TrafCI session to sleep for 5 minutes before executing the next command, which is to exit the session:
+
```
SQL> DELAY 5 mins
SQL> EXIT
```
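* As noted in the considerations above, the maximum delay limit can be raised by setting the `trafci.maxDelayLimit` system property in `_JAVA_OPTIONS` before launching TrafCI (a sketch using the standard JVM `-D` property syntax; the value of 7200 seconds is illustrative):
+
```shell
# Raise the maximum DELAY limit to 7200 seconds for TrafCI sessions
# launched from this shell. (The value is illustrative.)
export _JAVA_OPTIONS="-Dtrafci.maxDelayLimit=7200"
```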
<<<
[[cmd_disconnect]]
== DISCONNECT Command
The `DISCONNECT` command terminates the connection from the database, not from TrafCI.
=== Syntax
```
DISCONNECT [WITH] [status] [IF {condition}]
```
* _status_
+
is any 1-byte integer. `_status_` is a shell return value, and the range of allowable values is platform dependent.
* _condition_
+
is the same as the condition parameter defined for the <<cmd_if_then, IF…THEN Command>>. See <<cmd_conditional_parameters, Condition Parameter>>.
=== Considerations
* You must enter the command on one line. The command does not require an SQL terminator.
* After you disconnect from the {project-name} database, you can still run these interface commands:
+
[cols="15%,20%,28%,32%"]
|===
| ALIAS | HELP | SAVEHIST | SET/SHOW SQLTERMINATOR
| CLEAR | HISTORY | SESSION | SET/SHOW TIME
| CONNECT | LABEL | SET/SHOW COLSEP | SET/SHOW TIMING
| DELAY | LOCALHOST | SET/SHOW HISTOPT | SHOW ALIAS/ALIASES
| DISCONNECT | LOG | SET/SHOW IDLETIMEOUT | SHOW SESSION
| ENV | QUIT | SET/SHOW MARKUP | SPOOL
| EXIT | REPEAT | SET/SHOW PARAM | VERSION
| FC | RESET LASTERROR | SET PROMPT | GOTO
| RESET PARAM | SET/SHOW SQLPROMPT
|===
<<<
=== Examples
This command terminates the connection to the {project-name} database. You can connect to the {project-name} database by using the `CONNECT`
and `RECONNECT` commands:
```
SQL> DISCONNECT
Session Disconnected. Please connect to the database by using
connect/reconnect command.
```
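In a script file, the conditional form of the command disconnects only when a specified condition is met, in the same way as the conditional `EXIT` (the error code here is illustrative):
```
SELECT * FROM employee;
DISCONNECT IF errorcode=4082
```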
<<<
[[cmd_env]]
== ENV Command
`ENV` displays attributes of the current TrafCI session. You can also use the `SESSION` and `SHOW SESSION` commands to perform the same function.
=== Syntax
```
ENV
```
=== Considerations
* You must enter the command on one line. The command does not require an SQL terminator.
* ENV displays these attributes:
[cols="15%,85%",options="header"]
|===
| Attribute | Description
| `COLSEP` | Current column separator, which is used to control how query results are displayed. For more information, see <<cmd_set_colsep, SET COLSEP Command>>.
| `HISTOPT` | Current history options, which controls how the commands are added to the history buffer. For more information, see <<cmd_set_histopt, SET HISTOPT Command>>.
| `IDLETIMEOUT` | Current idle timeout value, which determines when the session expires after a period of inactivity. By default, the idle timeout is `30 minutes`.
For more information, see <<interactive_idle_timeout, Set and Show Session Idle Timeout Value>> and <<cmd_set_idletimeout, SET IDLETIMEOUT Command>>.
| `LIST_COUNT` | Current list count, which is the maximum number of rows that can be returned by SELECT statements. By default, the list count is all rows.
For more information, see <<cmd_set_list_count, SET LIST_COUNT Command>>.
| `LOG FILE` | Current log file and the directory containing the log file. By default, logging during a session is turned `off`.
For more information, see <<interactive_log_output, Log Output>>, and <<cmd_log, LOG Command>> or <<cmd_spool, SPOOL Command>>.
| `LOG OPTIONS` | Current logging options. By default, logging during a session is turned `off`, and this attribute does not appear in the output.
For more information, see the <<cmd_log, LOG Command>> or <<cmd_spool, SPOOL Command>>.
| `MARKUP` | Current markup option selected for the session. The default option is `RAW`. For more information, see <<cmd_set_markup, SET MARKUP Command>>.
| `PROMPT` | Current prompt for the session. For example, the default is `SQL>`.
For more information, see <<interactive_customize_prompt,Customize the Standard Prompt>> and <<cmd_set_prompt, SET PROMPT Command>>.
| `SCHEMA` | Current schema. The default is `USR`. For more information, see <<interactive_set_show_current_schema, Set and Show the Current Schema>>.
| `SERVER` | Host name and port number that you entered when logging in to the database platform. For more information, see <<trafci_login, Log In to Database Platform>>.
| `SQLTERMINATOR` | Current SQL statement terminator. The default is a semicolon (`;`).
For more information, see <<interactive_set_show_terminator, Set and Show the SQL Terminator>> and <<cmd_show_sqlterminator, SHOW SQLTERMINATOR Command>>.
| `STATISTICS` | Current setting (`on` or `off`) of statistics. For more information, see the <<cmd_set_statistics, SET STATISTICS Command>>.
| `TIME` | Current setting (`on` or `off`) of the local time as part of the prompt. When this command is set to `on`, military time is displayed.
By default, the local time is `off`. For more information, see <<interactive_customize_prompt,Customize the Standard Prompt>> and <<cmd_set_time, SET TIME Command>>.
| `TIMING` | Current setting (`on` or `off`) of the elapsed time. By default, the elapsed time is `off`.
For more information, see <<interactive_display_elapsed_time, Display the Elapsed Time>> and <<cmd_set_timing, SET TIMING Command>>.
| `USER` | User name that you entered when logging in to the database platform.
For more information, see <<trafci_login, Log In to Database Platform>>.
|===
=== Examples
* This `ENV` command displays the attributes of the current session:
+
```
SQL> ENV
COLSEP " "
HISTOPT DEFAULT [No expansion of script files]
IDLETIMEOUT 0 min(s) [Never Expires]
LIST_COUNT 0 [All Rows]
LOG FILE c:\session.txt
LOG OPTIONS APPEND,CMDTEXT ON
MARKUP RAW
PROMPT SQL>
SCHEMA SEABASE
SERVER sqws135.houston.host.com:23400
SQLTERMINATOR ;
STATISTICS OFF
TIME OFF
TIMING OFF
USER user1
```
<<<
* This `ENV` command shows the effect of setting various session attributes:
+
```
4:16:43 PM > ENV
COLSEP " "
HISTOPT DEFAULT [No expansion of script files]
IDLETIMEOUT 30 min(s)
LIST_COUNT 0 [All Rows]
LOG OFF
MARKUP RAW
PROMPT SQL>
SCHEMA SEABASE
SERVER sqws135.houston.host.com:23400
SQLTERMINATOR ;
STATISTICS OFF
TIME OFF
TIMING OFF
USER user1
4:16:49 PM >
```
<<<
[[cmd_exit]]
== EXIT Command
The `EXIT` command disconnects from and exits TrafCI. `EXIT` can return a status code.
If no status code is specified, then `0` (zero) is returned by default. In addition, a conditional statement
can be appended to the command.
=== Syntax
```
EXIT [WITH] [status] [IF {condition}]
```
* `_status_`
+
is any 1-byte integer. `_status_` is a shell return value, and the range of allowable values is platform dependent.
* `_condition_`
+
is the same as the condition parameter defined for the <<cmd_if_then, IF…THEN Command>>.
See <<cmd_condition_parameter, Condition Parameter>>.
=== Considerations
You must enter the command on one line. The command does not require an SQL terminator.
=== Examples
* This command disconnects from and exits TrafCI, which disappears from the screen:
+
```
SQL> EXIT
```
<<<
* In a script file, the conditional exit command causes the script file to quit running and disconnect from
and exit TrafCI when the previously run command returns error code `4082`:
+
```
LOG c:\errorCode.log
SELECT * FROM employee;
EXIT IF errorcode=4082
LOG OFF
```
+
These results are logged when error code 4082 occurs:
+
```
SQL> SELECT * FROM employee;
**** ERROR[4082] Table, view or stored procedure TRAFODION.USR.EMPLOYEE does not exist or is inaccessible.
SQL> EXIT IF errorcode=4082
```
* The following two examples are equivalent:
+
```
SQL> EXIT -1 IF LASTERROR <> 0
SQL> EXIT WITH -1 IF LASTERROR != 0
```
* This example exits TrafCI if the last error code is equal to `4082`:
+
```
SQL> EXIT WITH 82 IF LASTERROR == 4082
SQL> EXIT -- default status is 0
```
<<<
[[cmd_fc]]
== FC Command
The `FC` command allows you to edit and reissue a command in the history buffer of a TrafCI session.
You can display the commands in the history buffer by using the `HISTORY` command. For information about the history
buffer, see the <<cmd_history,HISTORY Command>>.
=== Syntax
```
FC [text | [-]number]
```
* `_text_`
+
is the beginning text of a command in the history buffer. Case is not significant in matching the text to a command.
* `[-]_number_`
+
is either a positive integer that is the ordinal number of a command in the history buffer or a negative integer that indicates the position of
a command relative to the most recent command.
Without text or number, `FC` retrieves the most recent command.
<<<
=== Considerations
* You must enter the command on one line. The command does not require an SQL terminator.
* You cannot execute this command in a script file. You can execute this command only at a command prompt.
* As each line of the command is displayed, you can modify the line by entering these editing commands (in uppercase or lowercase letters) on
the line below the displayed command line:
[cols="20%,80%",options="header"]
|===
| Edit Command | Description
| `D` | Deletes the character immediately above the letter `D`. Repeat to delete more characters.
| `I`_characters_ | Inserts characters in front of the character immediately above the letter `I`.
| `R`_characters_ | Replaces existing characters one-for-one with characters, beginning with the character immediately above the letter `R`.
| _characters_ | Replaces existing characters one-for-one with characters, beginning with the first character immediately above characters.
`_characters_` must begin with a non-blank character.
|===
To specify more than one editing command on a line, separate the editing commands with a double slash (`//`). The end of a line terminates an
editing command or a set of editing commands.
After you edit a line of the command, TrafCI displays the line again and allows you to edit it again. Press *Enter* without specifying editing
commands to stop editing the line. If that line is the last line of the command, pressing *Enter* executes the command.
To terminate a command without saving changes to the command, use the double slash (`//`), and then press *Enter*.
=== Examples
* Re-execute the most recent command that begins with SH:
+
```
SQL> FC SH
SQL> SHOW SCHEMA
....
```
+
Pressing *Enter* executes the `SHOW SCHEMA` command and displays the current schema, `PERSNL`:
+
```
SQL> FC SH
SQL> SHOW SCHEMA
....
SCHEMA PERSNL
SQL>
```
* Correct an SQL statement that you entered incorrectly by using the delete (`D`) editing command:
+
```
SQL> SELECCT * FROM persnl.employee;
*** ERROR[15001] A syntax error occurred at or before:
SELECCT * FROM persnl.employee;
^
SQL> FC
SQL> SELECCT * FROM persnl.employee;
.... d
SQL>SELECT * FROM persnl.employee;
....
```
+
Pressing *Enter* executes the corrected `SELECT` statement.
* Correct an SQL statement that you entered incorrectly by using more than one editing command:
+
```
SQL> SELT * FROMM persnl.employee;
*** ERROR[15001] A syntax error occurred at or before:
SELT * FROMM persnl.employee;
^
SQL> FC
SQL> SELT * FROMM persnl.employee;
.... iEX// d
SQL> SELECT * FROM persnl.employee;
....
```
+
Pressing *Enter* executes the corrected `SELECT` statement.
<<<
* Modify a previously executed statement by replacing a value in the `WHERE` clause with another value:
+
```
SQL> SELECT first_name, last_name
+> FROM persnl.employee
+> WHERE jobcode=111;
--- 0 row(s) selected.
SQL> FC
SQL> SELECT first_name, last_name
....
SQL> FROM persnl.employee
....
SQL> WHERE jobcode=111;
450
....
SQL> WHERE jobcode=450;
....
```
+
Pressing *Enter* lists the first and last names of all of the employees whose job code is `450`.
* Modify a previously executed statement by replacing a column name in the select list with another column name:
+
```
SQL> SELECT first_name, last_name
+> FROM persnl.employee
+> WHERE jobcode=450;
FIRST_NAME LAST_NAME
--------------- --------------------
MANFRED CONRAD
WALTER LANCASTER
JOHN JONES
KARL HELMSTED
THOMAS SPINNER
--- 5 row(s) selected.
SQL> FC
SQL> SELECT first_name, last_name
.... R empnum,
SQL> SELECT empnum, last_name
....
SQL> FROM persnl.employee
....
SQL> WHERE jobcode=450;
....
```
+
<<<
+
Pressing *Enter* lists the employee number and last names of all employees whose job code is `450`:
+
```
EMPNUM LAST_NAME
------ --------------------
180 CONRAD
215 LANCASTER
216 JONES
225 HELMSTED
232 SPINNER
--- 5 row(s) selected.
SQL>
```
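* Terminate an edit without saving changes. This is a hedged sketch (not one of the original examples; the statement shown is illustrative). Entering the double slash (`//`) and pressing *Enter* abandons the edit and returns to the prompt without executing the command:
+
```
SQL> FC
SQL> SELECT * FROM persnl.employee;
.... //
SQL>
```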
<<<
[[cmd_get_statistics]]
== GET STATISTICS Command
The `GET STATISTICS` command returns formatted statistics for the last executed SQL statement.
=== Syntax
```
GET STATISTICS
```
=== Description of Returned Values
[cols="30%l,70%",options="header"]
|===
| Value | Description
| Records Accessed | Number of rows returned by disk process to `EID` (Executor In Disk process).
| Records Used | Number of rows returned by `EID` after selection.
| Disk IOs | Number of actual disk IOs done by disk process.
| Message Count | Number of messages sent/received between file system and disk process.
| Message Bytes | Number of message bytes sent/received between file system and disk process.
| Lock Escl | Number of lock escalations.
| Lock Wait | Number of lock waits.
| Disk Process Busy Time | CPU time for disk process processes for the specified table.
|===
=== Considerations
The command requires an SQL terminator.
<<<
=== Examples
```
SQL> SELECT * FROM job;
JOBCODE JOBDESC
------- ------------------
100 MANAGER
1234 ENGINEER
450 PROGRAMMER
900 SECRETARY
300 SALESREP
500 ACCOUNTANT
400 SYSTEM ANALYST
250 ASSEMBLER
420 ENGINEER
600 ADMINISTRATOR
200 PRODUCTION SUPV
--- 11 row(s) selected.
SQL> GET STATISTICS;
Start Time 21:45:34.082329
End Time 21:45:34.300265
Elapsed Time 00:00:00.217936
Compile Time 00:00:00.002423
Execution Time 00:00:00.218750
Table Name Records Records Disk Message Message Lock Lock Disk Process
Accessed Used I/Os Count Bytes Escl Wait Busy Time
TRAFODION.TOI.JOB 2 2 0 4 15232 0 0 363
--- SQL operation complete.
```
<<<
[[cmd_goto]]
== GOTO Command
The `GOTO` command allows you to jump to a designated point in the command history. The point in the command history is designated
by a `LABEL` command. All commands entered after a `GOTO` statement are ignored until the specified label is reached. To set a label,
use the <<cmd_label, LABEL Command>>.
=== Syntax
```
GOTO {label}
```
* `_label_`
+
is a string of characters without quotes and spaces, or a quoted string.
=== Considerations
* You must enter the command on one line.
* The `GOTO` command cannot currently jump back in the command history; it is a forward-only command.
=== Examples
These examples show the use of the `GOTO` and `LABEL` commands:
```
SQL> GOTO ViewManagers
SQL> SELECT * FROM Employees; -- skipped
SQL> SHOW RECCOUNT; -- skipped
SQL> LABEL ViewManagers
SQL> SELECT * FROM Managers;
SQL> GOTO "View Customers"
SQL> SELECT * FROM Invoices; -- skipped
SQL> LABEL "View Customers"
SQL> SELECT * FROM Customers;
```
<<<
[[cmd_help]]
== HELP Command
The `HELP` command displays help text for the commands. See <<commands, Commands>> for descriptions of the commands.
=== Syntax
```
HELP [command-name]
```
* `_command-name_`
+
is the name of a command.
* If you do not specify a command, then TrafCI returns a list of all commands.
* If you specify `SET`, then TrafCI returns a list of all SET commands.
* If you specify `SHOW`, then TrafCI returns a list of all `SHOW` commands.
=== Considerations
You must enter the command on one line. The command does not require an SQL terminator.
<<<
=== Examples
* This `HELP` command lists all the interface commands that are supported:
+
```
SQL> HELP
```
* This `HELP` command lists all the `SET` commands that are supported:
+
```
SQL> HELP SET
```
* This `HELP` command lists all the `SHOW` commands that are supported:
+
```
SQL> HELP SHOW
```
* This `HELP` command shows help text for `SET IDLETIMEOUT`:
+
```
SQL> HELP SET IDLETIMEOUT
```
<<<
[[cmd_history]]
== HISTORY Command
The `HISTORY` command displays recently executed commands, identifying each command by a number that you can use
to re-execute or edit the command.
=== Syntax
```
HISTORY [number]
```
* `_number_`
+
is the number of commands to display. The default number is `10`. The maximum number is `100`.
=== Considerations
* You must enter the command on one line. The command does not require an SQL terminator.
* You can use the `FC` command to edit and re-execute a command in the history buffer, or use the
`REPEAT` command to re-execute a command without modifying it. See <<cmd_fc,FC Command>> or
<<cmd_repeat,REPEAT Command>>.
=== Example
Display the three most recent commands and use `FC` to redisplay one:
```
SQL> HISTORY 3
14> SET SCHEMA SALES;
15> SHOW TABLES
16> SHOW VIEWS
SQL> FC 14
SQL> SET SCHEMA sales
....
```
Now you can use the edit capabilities of `FC` to modify and execute a different `SET SCHEMA` statement.
<<<
[[cmd_if_then]]
== IF…THEN Command
`IF…THEN` statements allow for the conditional execution of actions. If the condition is met, the action
is executed; otherwise, no action is taken.
=== Syntax
```
IF {condition} THEN {action} {SQL-terminator}
```
[[cmd_condition_parameter]]
* `_condition_`
+
The condition parameter (`_condition_`) is a Boolean statement structured as follows:
+
```
{variable-name | value} {operator} {variable-name | value}
```
* `_variable-name_`
+
is one of:
+
```
{ LASTERROR
| RECCOUNT
| ACTIVITYCOUNT
| ERRORCODE
| [%]any ENV variable | any SQL parameter
}
```
* `_value_`
+
is any integer or a quoted string, where the quoted string can contain any non-quote characters. The backslash (`\`) is the optional escape character.
<<<
* `_operator_`
+
is one of:
+
[cols="30%l,70%",options="header"]
|===
| Operator | Meaning
| == \| = | equal to
| <> \| != \| ~= \| ^= | not equal to
| > | greater than
| >= | greater than or equal to
| < | less than
| <= | less than or equal to
|===
* `_action_`
+
The action parameter (`_action_`) is any interface or SQL command.
* `_SQL Terminator_`
+
The SQL terminator (`_SQL-terminator_`) is the default terminator (`;`) or a string value defined for the statement
terminator by the <<cmd_set_sqlterminator, SET SQLTERMINATOR Command>>.
See <<interactive_set_show_terminator, Set and Show the SQL Terminator>>.
=== Considerations
* `IF…THEN` is itself an action. Thus, nested `IF…THEN` statements are allowed.
* An action must end with the SQL terminator, even if the action is an interface command.
<<<
=== Examples
These commands show multiple examples of `IF…THEN` statements:
```
SQL> INVOKE employees
SQL> -- ERROR 4082 means the table does not exist
SQL> IF ERRORCODE != 4082 THEN GOTO BeginPrepare
SQL> CREATE TABLE employees(ssn INT PRIMARY KEY NOT NULL NOT DROPPABLE, fname VARCHAR(50), lname VARCHAR(50), hiredate DATE DEFAULT CURRENT_DATE);
SQL> LABEL beginprepare
SQL> PREPARE empSelect FROM
+> SELECT * FROM
+> employees
+> WHERE SSN=?empssn;
SQL> IF user == "alice" THEN SET PARAM ?empssn 987654321;
SQL> IF %user == "bob" THEN SET PARAM ?empssn 123456789;
SQL> EXECUTE empselect
SQL> IF user == "alice" THEN
+> IF activitycount == 0 THEN GOTO insertalice;
SQL> IF user == "bob" THEN IF activitycount == 0 THEN GOTO insertbob;
SQL> EXIT
SQL> LABEL insertalice
SQL> INSERT INTO employees(ssn, fname, lname) VALUES(987654321, 'Alice', 'Smith');
SQL> EXIT
SQL> LABEL insertbob
SQL> INSERT INTO employees(ssn, fname, lname) VALUES(123456789, 'Bob', 'Smith');
SQL> EXIT
```
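As an additional minimal sketch (the log file name `results.log` is illustrative, not from the original), a single `IF…THEN` statement can guard one interface command; note that the action ends with the SQL terminator even though it is an interface command:
```
SQL> IF ACTIVITYCOUNT > 0 THEN LOG results.log;
```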
<<<
[[cmd_label]]
== LABEL Command
The `LABEL` command marks a point in the command history that you can jump to by using the `GOTO` command.
For more information, see the <<cmd_goto, GOTO Command>>.
=== Syntax
```
LABEL {label}
```
* `_label_`
+
is a string of characters without quotes and spaces, or a quoted string.
=== Considerations
You must enter the command on one line.
=== Examples
* This command creates a label using a string of characters:
+
```
SQL> LABEL MyNewLabel
```
* This command creates a label using a quoted string:
+
```
SQL> LABEL "Trafodion Label"
```
<<<
[[cmd_localhost]]
== LOCALHOST Command
The `LOCALHOST` command allows you to execute client machine commands.
=== Syntax
```
{LOCALHOST | LH} client-machine-command
```
=== Considerations
* You must enter the command on one line. The command does not require an SQL terminator.
* The `LOCALHOST` command has a limitation. When input is entered for operating system commands
(for example, `date`, `time`, and `cmd`), the input is not visible until you press the *Enter* key.
* If `SET TIMING` is set to `ON`, the elapsed time information is displayed.
=== Examples
* If you are using a Windows system, `LOCALHOST dir` lists the contents of the current directory. Similarly, if you are on a UNIX system, enter
`LOCALHOST ls` to display the contents of the directory.
+
```
SQL> LOCALHOST dir
Volume in drive C is E-Client
Volume Serial Number is DC4F-5B3B
Directory of c:\Program Files (x86)\Apache Software Foundation\Trafodion Command Interface\bin
05/11/2015 01:17 PM <DIR>
05/11/2015 01:17 PM <DIR>
05/16/2015 09:47 AM 1,042 trafci-perl.pl
05/16/2015 09:47 AM 1,017 trafci-python.pl
05/16/2015 09:47 AM 752 trafci.cmd
05/16/2015 09:47 AM 1,416 trafci.pl
05/16/2015 09:47 AM 2,388 trafci.py
05/16/2015 09:47 AM 3,003 trafci.sh
6 File(s) 19,491 bytes
2 Dir(s) 57,686,646,784 bytes free
SQL> LH mkdir c:\trafci -- Will create a directory c:\trafci on your local machine.
```
* This command displays the elapsed time information because the `SET TIMING` command is set to `ON`:
+
```
SQL> SET TIMING ON
SQL> LOCALHOST ls
trafci-perl.pl
trafci-python.py
trafci.cmd
trafci.pl
trafci.py
trafci.sh
Elapsed :00:00:00.078
```
<<<
[[cmd_log]]
== LOG Command
The `LOG` command logs the entered commands and their output from TrafCI to a log file.
If commands are read from an obey script file, the command text from the script file is also shown on the console.
=== Syntax
```
LOG { ON [CLEAR, QUIET, CMDTEXT {ON | OFF}]
| log-file [CLEAR, QUIET, CMDTEXT {ON | OFF}]
| OFF
}
```
* `ON`
+
starts the logging process and records information in the `sqlspool.lst` file in the `bin` directory.
* `CLEAR`
+
instructs TrafCI to clear the contents of the `sqlspool.lst` file before logging new information to the file.
* `QUIET`
+
specifies that the command text is displayed on the screen, but the results of the command are written only to the log file and not to the screen.
* `CMDTEXT ON`
+
specifies that the command text and the log header are displayed in the log file.
* `CMDTEXT OFF`
+
specifies that the command text and the log header are not displayed in the log file.
* `_log-file_`
+
is the name of a log file into which TrafCI records the entered commands and their output. If you want the log file to exist outside the local
directory where you launch TrafCI (by default, the `bin` directory), specify the full directory path of the log file. The log file does not
need to exist, but the specified directory must exist before you execute the `LOG` command.
<<<
* `_log-file_ CLEAR`
+
instructs TrafCI to clear the contents of the specified `_log-file_` before logging new information to the file.
* `OFF`
+
stops the logging process.
=== Considerations
* You must enter the command on one line. The command does not require an SQL terminator.
* Use a unique name for each log file to avoid writing information from different TrafCI sessions into the same log file.
<<<
=== Examples
* This command starts the logging process and records information to the `sqlspool.lst` file in the `bin` directory:
+
```
SQL> LOG ON
```
* This command starts the logging process and appends new information to an existing log file, `persnl_updates.log`,
in the local directory (the same directory where you are running TrafCI):
+
```
SQL> LOG persnl_updates.log
```
* This command starts the logging process and appends new information to a log file,
`sales_updates.log`, in the specified directory on a Windows workstation:
+
```
SQL> LOG c:\log_files\sales_updates.log
```
* This command starts the logging process and appends new information to a log file,
`sales_updates.log`, in the specified directory on a Linux or UNIX workstation:
+
```
SQL> LOG ./log_files/sales_updates.log
```
* This command starts the logging process and clears existing information from the log file before
logging new information to the file:
+
```
SQL> LOG persnl_ddl.log CLEAR
```
<<<
* This command starts the logging process, clears existing information from the log file, and specifies
that the command text and log header are not displayed in the log file:
+
```
SQL> LOG c:\temp\a.txt clear, CMDTEXT OFF
SQL> SELECT * FROM trafodion.toi.job
+>;
JOBCODE JOBDESC
------- ------------------
100 MANAGER
450 PROGRAMMER
900 SECRETARY
300 SALESREP
500 ACCOUNTANT
400 SYSTEM ANALYST
250 ASSEMBLER
420 ENGINEER
600 ADMINISTRATOR
200 PRODUCTION SUPV
--- 10 row(s) selected.
SQL> log off
Output of c:\temp\a.txt
JOBCODE JOBDESC
------- ------------------
100 MANAGER
450 PROGRAMMER
900 SECRETARY
300 SALESREP
500 ACCOUNTANT
400 SYSTEM ANALYST
250 ASSEMBLER
420 ENGINEER
600 ADMINISTRATOR
200 PRODUCTION SUPV
--- 10 row(s) selected
```
<<<
* This command starts the logging process, clears existing information from the log file, and enables the quiet option so that
no output appears on the console window:
+
```
SQL> LOG c:\temp\b.txt CLEAR, CMDTEXT OFF, QUIET
SQL> SELECT *
+> FROM trafodion.toi.job;
SQL> LOG OFF
Output of c:\temp\b.txt
JOBCODE JOBDESC
------- ------------------
100 MANAGER
450 PROGRAMMER
900 SECRETARY
300 SALESREP
500 ACCOUNTANT
400 SYSTEM ANALYST
250 ASSEMBLER
420 ENGINEER
600 ADMINISTRATOR
200 PRODUCTION SUPV
--- 10 row(s) selected
```
+
This command stops the logging process:
+
```
SQL> LOG OFF
```
For more information, see <<interactive_log_output, Log Output>>.
<<<
[[cmd_obey]]
== OBEY Command
The `OBEY` command executes the SQL statements and interface commands of a specified script file or an
entire directory. This command accepts a single filename or a filename with a wild-card pattern specified.
Executing the `OBEY` command without optional parameters prompts you to enter a filename. If a filename is
not specified, then `*.sql` is used.
=== Syntax
```
OBEY {script-file | wild-card-pattern} [(section-name)]
```
* `_script-file_`
+
is the name of an ASCII text file that contains SQL statements, interface commands, and comments. If the script file
exists outside the local directory where you launch TrafCI (by default, the `bin` directory), specify the full directory
path of the script file.
* `_wild-card-pattern_`
+
is a character string used to search for script files with names that match the character string. `_wild-card-pattern_`
matches a string, depending on the operating system for case-sensitivity, unless you enclose it within double quotes.
To look for similar values, specify only part of the characters of `_wild-card-pattern_` combined with these
wild-card characters: the asterisk (`*`), which matches zero or more characters, and the question mark (`?`), which matches exactly one character.
* `(_section-name_)`
+
is the name of a section within the `_script-file_` to execute. If you specify `_section-name_`, the `OBEY` command
executes the commands between the header line for the specified section and the header line for the next section
(or the end of the script file). If you omit `_section-name_`, the `OBEY` command executes the entire script file.
For more information, see <<script_section_headers, Section Headers>>.
<<<
=== Considerations
* You must enter the command on one line. The command does not require an SQL terminator.
* Put a space between `OBEY` and the first character of the file name.
* You can execute this command in a script file.
* Before putting dependent SQL statements across multiple files, consider the order of the file execution. If a directory
is not passed to the `OBEY` command, the file or wild card is assumed to be in the current working directory.
* If the asterisk (`*`) is used in the `OBEY` command, all files in the current directory are executed. Some of the files in
the directory could be binary files. The `OBEY` command tries to read those binary files, and junk or invalid characters are
displayed on the console. For example, this command causes invalid characters to be displayed on the console:
+
```
SQL> OBEY C:\trafci\bin\
```
* `OBEY` detects recursive obey files (for example, a script file that calls `OBEY` on itself) and prevents infinite loops using
a max depth environment variable. If no variable is passed to the JVM, the default depth is set to `10`. To change this depth
(for example to a value of `20`), pass a Java environment variable as follows:
+
```
-Dtrafci.obeydepth=20
```
<<<
=== Examples
* This `OBEY` command runs the script file from the local directory (the same directory where you are running TrafCI):
+
```
SQL> OBEY ddl.sql
```
* This `OBEY` command runs the script file in the specified directory on Windows.
+
```
SQL> OBEY c:\my_files\ddl.sql
```
<<<
* This `OBEY` command runs the script file in the specified directory on a Linux or UNIX workstation:
+
```
SQL> OBEY ./my_files/ddl.sql
```
* This sample file contains sections to be used in conjunction with the `OBEY` command:
+
```
?section droptable
DROP TABLE course ;
?section create
CREATE TABLE course ( cno VARCHAR(3) NOT NULL
, cname VARCHAR(22) NOT NULL
, cdescp VARCHAR(25) NOT NULL
, cred INT
, clabfee NUMERIC(5,2)
, cdept VARCHAR(4) NOT NULL
, PRIMARY KEY (cno)
) ;
?section insert
INSERT INTO course VALUES ('C11', 'Intro to CS','for Rookies',3, 100, 'CIS') ;
INSERT INTO course VALUES ('C22', 'Data Structures','Very Useful',3, 50, 'CIS') ;
INSERT INTO course VALUES ('C33', 'Discrete Mathematics', 'Absolutely Necessary',3, 0,'CIS') ;
?section select
SELECT * FROM course ;
?section delete
PURGEDATA course;
```
+
<<<
+
To run only the commands in section `create`, execute the following:
+
```
SQL> OBEY C:\Command Interfaces\course.sql (create)
SQL> ?section create
SQL> CREATE TABLE course
+>(
+> cno VARCHAR(3) NOT NULL,
+> cname VARCHAR(22) NOT NULL,
+> cdescp VARCHAR(25) NOT NULL,
+> cred INT,
+> clabfee NUMERIC(5,2),
+> cdept VARCHAR(4) NOT NULL,
+> PRIMARY KEY (cno)
+>) ;
--- SQL Operation complete.
```
+
To run only the commands in the `insert` section, execute the following:
+
```
SQL> OBEY C:\Command Interfaces\course.sql (insert)
SQL> ?section insert
SQL> INSERT INTO course VALUES
+> ('C11', 'Intro to CS','For Rookies',3, 100, 'CIS');
--- 1 row(s) inserted.
SQL> INSERT INTO course VALUES
+> ('C22', 'Data Structures','Very Useful',3, 50, 'CIS');
--- 1 row(s) inserted.
SQL> INSERT INTO course VALUES
+> ('C33', 'Discrete Mathematics', 'Absolutely Necessary',3, 0, 'CIS');
--- 1 row(s) inserted.
```
<<<
* This command executes all files with `.sql` extension:
+
```
SQL> OBEY c:\trafci\*.sql;
SQL> OBEY c:\trafci
```
* This command executes all files beginning with the word `"script"`, containing one character after the word `"script"`,
and ending with the `.sql` extension. For example: `script1.sql`, `script2.sql`, `scriptZ.sql`, and so on.
+
```
SQL> OBEY C:\trafci\script?.sql
```
* This command executes all files that contain the word `"test"`. This includes the files that do not end with `.sql` extension.
+
```
SQL> OBEY C:\trafci\test
```
* This command executes all files that begin with the word `"script"` and contains one character after the word `"script"` and
ends with an extension prefixed by a dot. For example: `script1.sql`, `script2.bat`, `scriptZ.txt`, and so on.
+
```
SQL> OBEY C:\trafci\script?.
```
* This command executes all files that have `.txt` extension in the current directory, the directory in which the command interface was launched.
+
```
SQL> OBEY *.txt;
```
* This command prompts the user to enter the script filename or a pattern. The default value is `*.sql`.
+
```
SQL> OBEY;
Enter the script filename [*.sql]:
```
<<<
[[cmd_prun]]
== PRUN Command
The `PRUN` command runs script files in parallel.
=== Syntax
```
PRUN { -d | -defaults }
PRUN
[ { -sd | -scriptsdir } scripts-directory ]
[ { -e | -extension } file-extension ]
[ { -ld | -logsdir } log-directory ]
[ { -o | -overwrite } {Y | N} ]
[ { -c | -connections } num ]
```
* `-d | -defaults`
+
Specify this option to have PRUN use these default settings:
+
[cols="30%,70%", options="header"]
|===
| Parameter | Default Setting
| `-sd \| -scriptsdir` | `PRUN` searches for the script files in the same directory as the `trafci.sh` or `trafci.cmd` file (`_trafci-installation-directory_/trafci/bin` or
`_trafci-installation-directory_\trafci\bin`).
| `-e \| -extension` | The file extension is `.sql`.
| `-ld \| -logsdir` | `PRUN` places the log files in the same directory as the script files.
| `-o \| -overwrite` | No overwriting occurs. `PRUN` keeps the original information in the log files and appends new information at the end of each file.
| `-c \| -connections` | `PRUN` uses two connections.
|===
* `{-sd | -scriptsdir} _scripts-directory_`
+
In this directory, `PRUN` processes every file with the specified file extension. If you do not specify a directory or if you specify an
invalid directory, an error message occurs, and you are prompted to reenter the directory. Before running `PRUN`, verify that this directory
contains valid script files.
* `{-e | -extension} _file-extension_`
+
Specify the file extension of the script files. The default is `.sql`.
<<<
* `{-ld | -logsdir} _log-directory_`
+
In this directory, `PRUN` creates a log file for each script file by appending the `.log` extension to the name of the script file. If you do
not specify a log file directory, `PRUN` places the log files in the same directory as the script files.
* `{-o | -overwrite} {y | n}`
+
If you specify `y`, `PRUN` overwrites the contents of existing log files. By default, `PRUN` keeps the original information in the log files and
appends new information at the end of each file.
* `{-c | -connections} _num_`
+
Enter a number for the maximum number of connections. If you do not specify the maximum number of connections, `PRUN` uses two connections.
=== Considerations
* You must enter the command on one line. The command does not require an SQL terminator.
* If you execute the `PRUN` command without any arguments, then TrafCI prompts you for the `PRUN` arguments. If you specify one or more options,
then the `PRUN` command runs without prompting you for more input. In the non-interactive mode, if any options are not specified, `PRUN` uses the default values.
* The `-d` or `-defaults` option cannot be specified with any other option.
* The `PRUN` log files also contain the log end time.
* `PRUN` does not support the `SPOOL` or `LOG` commands. Those commands are ignored in `PRUN` script files.
* The environment values from the main session (which are available through the `SET` commands) are propagated to new sessions started via
`PRUN`. However, prepared statements and parameters are bound only to the main user session.
* For a summary of all errors and warnings that occurred during the `PRUN` operation, go to the error subdirectory in the same directory as the log
files (for example, `C:\log\error`) and open the `prun.err.log` summary file.
* For details about the errors that occurred during the execution of a script file, open each individual log file (`_script-file_.sql.log`).
<<<
=== Examples
* To use `PRUN`, enter the `PRUN` command in the TrafCI session:
+
```
SQL> PRUN
```
+
```
Enter as input to stop the current prun session
--------------------------------------------------
Enter the scripts directory : c:\ddl_scripts
Enter the script file extension[sql] :
Enter the logs directory[scripts dir] : c:\log
Overwrite the log files (y/n)[n]? : y
Enter the number of connections(2-248)[2]: 3
```
+
After you enter the number of connections, `PRUN` starts to process the script files and displays this status:
+
```
Status: In Progress.......
```
+
<<<
+
After executing all the script files, `PRUN` returns a summary of the operation:
+
```
__________________________________________________
PARALLELRUN(PRUN) SUMMARY
__________________________________________________
Total files present............................. 3
Total files processed........................... 3
Total queries processed........................ 40
Total errors.................................... 4
Total warnings.................................. 0
Total successes................................ 36
Total connections............................... 5
Total connection failures....................... 0
Please verify the error log file c:\log\error\prun.err.log
SQL>
```
+
NOTE: In the `PRUN` summary, the `Total queries processed` is the total number of commands that `PRUN` processes.
Those commands can include SQL statements and commands. The total `errors`, `warnings`, and `successes` also
include commands other than SQL statements.
<<<
* These `PRUN` commands initiate parallel run operations, first with the `-d` option and then with explicit options:
+
```
SQL> PRUN -d
SQL> PRUN -scriptsdir ./prun/sql -e sql -ld ./prun/logs -o y -connections 5
PRUN options are -scriptsdir c:/_trafci/prun
-logsdir c:/_trafci/prun/logs
-extension sql
-overwrite y
-connections 5
Status: Complete
__________________________________________________
PARALLELRUN(PRUN) SUMMARY
__________________________________________________
Total files present............................ 99
Total files processed.......................... 99
Total queries processed....................... 198
Total errors.................................... 0
Total warnings.................................. 0
Total successes............................... 198
Total connections............................... 5
Total connection failures....................... 0
===========================================================================
PRUN completed at May 20, 2105 9:33:21 AM
===========================================================================
```
* PRUN can be started in non-interactive mode using the `-q` parameter of `trafci.cmd` or
`trafci.sh`, thus requiring no input:
+
```
trafci.cmd -h 16.123.456.78
-u user1 -p host1
-q "PRUN -sd c:/_trafci/prun -o y -c 3"
```
<<<
* `PRUN` can be started in non-interactive mode from an `OBEY` file:
+
```
SQL> OBEY startPrun.txt
SQL> PRUN -sd c:/_trafci/prun -ld c:/_trafci/prun/logs -e sql -o y -c 5
PRUN options are -scriptsdir c:/_trafci/prun
-logsdir c:/_trafci/prun/logs
-extension sql
-overwrite yes
-connections 5
Status: Complete
```
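Because the `PRUN` summary has a fixed `Total ...` layout, a script that drives TrafCI non-interactively can scrape pass/fail counts from a captured log. The following is a minimal Python sketch, not part of TrafCI; the function name and regular expression are illustrative assumptions:

```python
import re

def parse_prun_summary(text):
    # Pull each "Total <label>.... <n>" counter out of a captured PRUN summary.
    counters = {}
    for label, value in re.findall(r"Total ([A-Za-z ]+?)\.{2,}\s*(\d+)", text):
        counters[label.strip().lower()] = int(value)
    return counters

summary = """
Total files processed........................... 3
Total queries processed........................ 40
Total errors.................................... 4
Total successes................................ 36
"""
counts = parse_prun_summary(summary)
print(counts["errors"], counts["successes"])
```

A wrapper can then fail a build, for example, whenever the `errors` counter is nonzero.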
<<<
[[cmd_quit]]
== QUIT Command
The `QUIT` command disconnects from and exits TrafCI.
=== Syntax
```
QUIT [WITH] [status] [IF {condition}]
```
* `_status_`
+
is any 1-byte integer. `_status_` is a shell return value, and the range of allowable values is platform dependent.
* `_condition_`
+
is the same as the condition parameter defined for the <<cmd_if_then, IF…THEN Command>>.
See <<cmd_conditional_parameters, Condition Parameters>>.
=== Considerations
You must enter the command on one line. The command does not require an SQL terminator.
=== Examples
* This command disconnects from and exits TrafCI, which disappears from the screen:
+
```
SQL> QUIT
```
* In a script file, the conditional exit command stops the script, disconnects from the database, and exits
TrafCI when the previously run command returns error code `4082`:
+
```
SQL> LOG c:\errorCode.log
SQL> SELECT * FROM employee;
SQL> QUIT IF errorcode=4082
SQL> LOG OFF
```
+
<<<
These results are logged when error code `4082` occurs:
+
```
SQL> SELECT * FROM employee;
**** ERROR[4082] Table, view or stored procedure TRAFODION.USR.EMPLOYEE does not exist or is inaccessible.
SQL> QUIT IF errorcode=4082
```
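Because `_status_` becomes the shell return value, a driving script can branch on the status set by `QUIT WITH _status_` when TrafCI runs non-interactively. The sketch below is hypothetical: it uses a Python interpreter exiting with status 4 as a stand-in for a `trafci` invocation, since the launcher path and arguments depend on your installation:

```python
import subprocess
import sys

# Stand-in for something like:
#   subprocess.run(["trafci.sh", "-q", "OBEY myscript.sql"])
# where myscript.sql contains: QUIT WITH 4 IF errorcode=4082
result = subprocess.run([sys.executable, "-c", "raise SystemExit(4)"], check=False)

if result.returncode == 4:
    print("script quit with status 4")
```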
<<<
[[cmd_reconnect]]
== RECONNECT Command
The `RECONNECT` command creates a new connection to the {project-name} database using the login credentials of the last successful connection.
=== Syntax
```
RECONNECT
```
=== Considerations
The host name (or IP address), port number, and credentials (user name and password) are taken from the information previously entered,
either at launch or when the last `CONNECT` command was executed.
If TrafCI was invoked with the `-noconnect` launch parameter, TrafCI prompts you for the values.
=== Examples
* This command creates a new connection to the {project-name} database using the login credentials of the last successful connection:
+
```
SQL> RECONNECT
Connected to Trafodion
```
<<<
[[cmd_repeat]]
== REPEAT Command
The `REPEAT` command re-executes a previous command.
=== Syntax
```
REPEAT [text | [-]number ]
```
* `_text_`
+
specifies the beginning of the text of a previously executed command; the most recent command that begins with `_text_`
is repeated. `_text_` need be only as many characters as necessary to identify the command. TrafCI ignores leading blanks.
* `[-]_number_`
+
is an integer that identifies a command in the history buffer. If number is negative, it indicates the position of the
command in the history buffer relative to the current command; if number is positive, it is the ordinal number of a
command in the history buffer.
The HISTORY command displays the commands or statements in the history buffer. See the <<cmd_history,HISTORY Command>>.
=== Considerations
* You must enter the command on one line. The command does not require an SQL terminator.
* To re-execute the immediately preceding command, enter `REPEAT` without specifying a number. If you enter more than one
command on a line, then the `REPEAT` command re-executes only the last command on the line.
* When a command is selected for repeat, and the SQL terminator value has changed since the execution of that command,
then TrafCI replaces the SQL terminator in the command with the current SQL terminator value and executes the command.
<<<
=== Examples
* Display the previously executed commands and re-execute the second to the last command:
+
```
SQL> HISTORY
1> SET IDLETIMEOUT 0
2> LOG ON
3> SET SCHEMA persnl;
4> SELECT * FROM employee;
5> SHOW TABLES
6> SELECT * FROM dept;
7> SHOW VIEWS
8> SELECT * FROM emplist;
SQL>
SQL> REPEAT -2
SHOW VIEWS
VIEW NAMES
-------------------------------------------------------------
EMPLIST MGRLIST
SQL>
```
<<<
* Re-execute the fifth command in the history buffer:
+
```
SQL> REPEAT 5
SHOW TABLES
TABLE NAMES
-------------------------------------------------------------
DEPT EMPLOYEE JOB PROJECT
SQL>
```
* Re-execute the `SHOW TABLES` command:
+
```
SQL> REPEAT SHOW
SHOW TABLES
TABLE NAMES
-------------------------------------------------------------
DEPT EMPLOYEE JOB PROJECT
SQL>
```
<<<
[[cmd_reset_lasterror]]
== RESET LASTERROR Command
The `RESET LASTERROR` command resets the last error code to 0.
=== Syntax
```
RESET LASTERROR
```
=== Considerations
You must enter the command on one line. The command does not require an SQL terminator.
=== Examples
* This command resets the last error in the current session:
+
```
SQL> SELECT * FROM emp;
**** ERROR[4082]Object TRAFODION.SCH.EMP does not exist or is inaccessible.
SQL> SHOW LASTERROR
LASTERROR 4082
SQL> RESET LASTERROR
SQL> SHOW LASTERROR
LASTERROR 0
```
<<<
[[cmd_reset_param]]
== RESET PARAM Command
The RESET PARAM command clears all parameter values or a specified parameter value in the current session.
=== Syntax
```
RESET PARAM [param-name]
```
* `_param-name_`
+
is the name of the parameter for which you specified a value. Parameter names are case-sensitive. For example,
the parameter `?pn` is not equivalent to the parameter `?PN`. `_param-name_` can be preceded by a
question mark (`?`), such as `?_param-name_`.
+
If you do not specify a parameter name, all of the parameter values in the current session are cleared.
=== Considerations
* You must enter the command on one line. The command does not require an SQL terminator.
* To clear several parameter values but not all, you must use a separate `RESET PARAM` command for each parameter.
=== Example
* This command clears the setting of the `?sal` (`salary`) parameter, and the `SET PARAM` command resets it to a new value:
+
```
SQL> RESET PARAM ?sal
SQL> SET PARAM ?sal 80000.00
```
For more information, see <<interactive_reset_parameters,Reset the Parameters>>.
<<<
[[cmd_run]]
== RUN Command
The `RUN` command executes the previously executed SQL statement. This command does not repeat an interface command.
=== Syntax
```
RUN
```
=== Considerations
* You must enter the command on one line.
* The command does not require an SQL terminator.
=== Example
* This command executes the previously executed SELECT statement:
+
```
SQL> SELECT COUNT(*) FROM persnl.employee;
(EXPR)
--------------------
62
--- 1 row(s) selected.
SQL> RUN
(EXPR)
--------------------
62
--- 1 row(s) selected.
SQL>
```
<<<
[[cmd_savehist]]
== SAVEHIST Command
The `SAVEHIST` command saves the session history in a user-specified file. The session history consists of a list of the commands that were
executed in the TrafCI session before the SAVEHIST command.
=== Syntax
```
SAVEHIST file-name [CLEAR]
```
* `_file-name_`
+
is the name of a file into which TrafCI stores the session history. If you want the history file to exist outside the local directory where you
launch TrafCI (by default, the `bin` directory), specify the full directory path of the history file. The specified directory must exist
before you execute the `SAVEHIST` command.
* `CLEAR`
+
instructs TrafCI to clear the contents of the specified file before adding the session history to the file.
=== Considerations
* You must enter the command on one line. The command does not require an SQL terminator.
* If the specified file already exists, TrafCI appends newer session-history information to the file.
=== Examples
* This command clears the contents of an existing file named `history.txt` in the local directory (the same directory where you are running TrafCI)
and saves the session history in the file:
+
```
SQL> SAVEHIST history.txt CLEAR
SQL>
```
* This command saves the session history in a file named `hist.txt` in the specified directory on a Windows workstation:
+
```
SQL> SAVEHIST c:\log_files\hist.txt
SQL>
```
<<<
* This command saves the session history in a file named `hist.txt` in the specified directory on a Linux or UNIX workstation:
+
```
SQL> SAVEHIST ./log_files/hist.txt
SQL>
```
For more information, see <<interactive_history,Display Executed Commands>>.
<<<
[[cmd_set_colsep]]
== SET COLSEP Command
The `SET COLSEP` command sets the column separator, which controls the formatting of results displayed for
SQL queries. The `SET COLSEP` command specifies a delimiter value to use for separating columns in each row of the results.
The default delimiter is a single space (" ").
=== Syntax
```
SET COLSEP [separator]
```
=== Considerations
* You must enter the command on one line.
* The `SET COLSEP` command has no effect if the markup is set to `HTML`, `XML`, or `CSV`.
=== Examples
* This command specifies the separator as a pipe ("`|`"):
+
```
SQL> SET COLSEP |
SQL> SHOW COLSEP
COLSEP "|"
SQL> SELECT * FROM employee;
EMPNUM|EMPNAME       |REGNUM|BRANCHNUM|JOB
------|--------------|------|---------|--------
     1|ROGER GREEN   |    99|        1|MANAGER
    23|JERRY HOWARD  |     2|        1|MANAGER
    29|JACK RAYMOND  |     1|        1|MANAGER
    32|THOMAS RUDLOFF|     5|        3|MANAGER
    39|KLAUS SAFFERT |     5|        2|MANAGER
--- 5 row(s) selected.
```
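Choosing a separator that cannot occur in the data makes the output easy to split downstream. A small Python sketch, assuming the pipe separator set above and one captured line of output:

```python
# One captured data line from a pipe-separated result.
line = "    23|JERRY HOWARD  |     2|        1|MANAGER "

fields = [field.strip() for field in line.split("|")]
print(fields)  # ['23', 'JERRY HOWARD', '2', '1', 'MANAGER']
```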
<<<
[[cmd_set_fetchsize]]
== SET FETCHSIZE Command
The `SET FETCHSIZE` command changes the default fetch size used by the JDBC driver. Setting the value to `0`
restores the driver's default fetch size.
=== Syntax
```
SET FETCHSIZE value
```
* `_value_`
+
is an integer representing the fetch size as a number of rows. Zero (`0`) represents the default value of fetch size set in JDBC.
=== Considerations
* You must enter the command on one line.
* The command does not require an SQL terminator.
=== Examples
* This command sets the fetchsize to `1`:
+
```
SQL> SET FETCHSIZE 1
SQL> SHOW FETCHSIZE
FETCHSIZE 1
SQL> SELECT * FROM stream(t1);
C1 C2 C3
------- ------- -------
TEST1 TEST2 TEST3
AAA BBB CCC
```
<<<
[[set_histopt]]
== SET HISTOPT Command
The `SET HISTOPT` command sets the history option and controls how commands are added to the history buffer.
By default, commands within a script file are not added to history. If the history option is set to `ALL`,
then all the commands in the script file are added to the history buffer. If no options are specified,
`DEFAULT` is used.
=== Syntax
```
SET HISTOPT [ ALL | DEFAULT ]
```
=== Considerations
You must enter the command on one line.
<<<
=== Examples
* This example shows that, by default, only the `OBEY` command itself is added to the history buffer:
+
```
SQL> SHOW HISTOPT
HISTOPT DEFAULT [No expansion of script files]
SQL> OBEY e:\scripts\nobey\insert2.sql
SQL> ?SECTION insert
SQL> SET SCHEMA trafodion.sch;
--- SQL operation complete.
SQL> INSERT INTO course1 VALUES
+> ('C11', 'Intro to CS','For Rookies',3, 100,'CIS');
--- 1 row(s) inserted.
SQL> INSERT INTO course1 VALUES
+> ('C55', 'Computer Arch.','VON Neumann''S Mach.',3, 100, 'CIS');
--- 1 row(s) inserted.
```
<<<
```
SQL> HISTORY;
1> SHOW HISTOPT
2> OBEY e:\scripts\nobey\insert2.sql
```
* This example shows that, with `SET HISTOPT ALL`, all commands executed from the script file are added to the history buffer:
+
```
SQL> SET HISTOPT ALL
SQL> OBEY e:\scripts\nobey\insert2.sql
?SECTION insert
SQL> set schema trafodion.sch;
--- SQL operation complete.
SQL> INSERT INTO course1 VALUES
+> ('C11','Intro to CS','For Rookies',3, 100, 'CIS');
---1 row(s) inserted.
SQL> INSERT INTO course1 VALUES
+> ('C55','Computer Arch.','Von Neumann''s Mach.',3,100, 'CIS');
---1 row(s) inserted.
SQL> HISTORY;
1> SHOW HISTOPT
2> OBEY e:\scripts\nobey\insert2.sql
3> HISTORY;
4> SET HISTOPT ALL
5> SET SCHEMA trafodion.sch;
6> INSERT INTO course1 VALUES
('C11','Intro to CS','For Rookies',3, 100, 'CIS');
7> INSERT INTO course1 VALUES
('C55','Computer Arch.','Von Neumann''s MACH.',3,100, 'CIS');
```
<<<
[[cmd_set_idletimeout]]
== SET IDLETIMEOUT Command
The `SET IDLETIMEOUT` command sets the idle timeout value for the current session. The idle timeout value
of a session determines when the session expires after a period of inactivity. The default is 30 minutes.
=== Syntax
```
SET IDLETIMEOUT value
```
* `_value_`
+
is an integer representing the idle timeout value in minutes. Zero represents an infinite amount of time, meaning that
the session never expires.
=== Considerations
* You must enter the command on one line. The command does not require an SQL terminator.
* If you execute this command in a script file, it affects the session in which the script file runs. You can specify
this command in `PRUN` script files. However, running this command from a `PRUN` script file does not affect the idle
timeout value for the current session.
* To reset the default timeout value, enter this command:
+
```
SET IDLETIMEOUT 30
```
<<<
=== Examples
* This command sets the idle timeout value to four hours:
+
```
SQL> SET IDLETIMEOUT 240
```
* This command sets the idle timeout value to an infinite amount of time so that the session never expires:
+
```
SQL> SET IDLETIMEOUT 0
```
<<<
* To reset the idle timeout to the default, enter this command:
+
```
SQL> SET IDLETIMEOUT 30
SQL>
```
For more information, see <<interactive_idle_timeout, Set and Show Session Idle Timeout Value>>.
<<<
[[cmd_set_list_count]]
== SET LIST_COUNT Command
The `SET LIST_COUNT` command sets the maximum number of rows to be returned by `SELECT` statements that are executed
after this command. The default is zero, which means that all rows are returned.
=== Syntax
```
SET LIST_COUNT num-rows
```
* `_num-rows_`
+
is a positive integer that specifies the maximum number of rows of data to be displayed by `SELECT` statements that
are executed after this command. Zero means that all rows of data are returned.
=== Considerations
* You must enter the command on one line. The command does not require an SQL terminator.
* To reset the number of displayed rows, enter this command:
+
```
SET LIST_COUNT 0
```
=== Examples
* This command specifies that the number of rows to be displayed by `SELECT` statements is five:
+
```
SQL> SET LIST_COUNT 5
SQL> SELECT empnum, first_name, last_name FROM persnl.employee ORDER BY empnum;
EMPNUM FIRST_NAME LAST_NAME
------ --------------- --------------------
1 ROGER GREEN
23 JERRY HOWARD
29 JANE RAYMOND
32 THOMAS RUDLOFF
39 KLAUS SAFFERT
--- 5 row(s) selected. LIST_COUNT was reached.
SQL>
```
<<<
* This command resets the number of displayed rows to all rows:
+
```
SQL> SET LIST_COUNT 0
SQL> SELECT empnum, first_name, last_name
+> FROM persnl.employee
+> ORDER BY empnum;
EMPNUM FIRST_NAME LAST_NAME
------ --------------- --------------------
1 ROGER GREEN
23 JERRY HOWARD
29 JANE RAYMOND
32 THOMAS RUDLOFF
39 KLAUS SAFFERT
43 PAUL WINTER
65 RACHEL MCKAY
...
995 Walt Farley
--- 62 row(s) selected.
SQL>
```
<<<
[[cmd_set_markup]]
== SET MARKUP Command
The `SET MARKUP` command sets the markup format and controls how results are displayed by TrafCI.
=== Syntax
```
SET MARKUP [ RAW | HTML | XML | CSV | COLSEP ]
```
The supported options enable results to be displayed in `XML`, `HTML`, `CSV` (Comma Separated Values), and `COLSEP` format.
The default format is `RAW`.
=== Considerations
* You must enter the command on one line.
* If the `MARKUP` format is `CSV` or `COLSEP`, the column header information and status messages are not displayed.
* For the `XML` and `HTML` markup formats, syntax and interface errors are also displayed as consistent `XML`
and `HTML` markup.
* For `XML` markup, any occurrence of `]]>` that appears in an error message or invalid query is replaced with the
escaped sequence `]]&gt;`.
* When error messages are output as `HTML` markup, both the `>` (greater than) and `<` (less than) symbols are
replaced with their escaped versions, `&gt;` and `&lt;`, respectively. An example of a formatted error message is shown below.
<<<
=== Examples
* This command specifies results be displayed in `HTML`:
+
```
SQL> SET MARKUP HTML
SQL> SELECT c.custnum, c.custname, ordernum, order_date
+> FROM customer c, orders o where c.custnum=o.custnum;
<TABLE>
<!--SELECT c.custnum, c.custname,ordernum,order_date
FROM customer c, orders o where c.custnum=o.custnum;-->
<tr>
<th>CUSTNUM</th>
<th>CUSTNAME</th>
<th>ORDERNUM</th>
<th>ORDER_DATE</th>
</tr>
<tr>
<td>143</td>
<td>STEVENS SUPPLY</td>
<td>700510</td>
<td>2105-05-01</td>
</tr>
<tr>
<td>3333</td>
<td>NATIONAL UTILITIES</td>
<td>600480</td>
<td>2105-05-12</td>
</tr>
<tr>
<td>7777</td>
<td>SLEEP WELL HOTELS</td>
<td>100250</td>
<td>2105-01-23</td>
</tr>
<!-- --- 3 row(s) selected.-->
</TABLE>
```
<<<
```
SQL> SELECT c.custnum, c.custname,ordernum,order_date,
+> FROM customer c, orders o where c.custnum=o.custnum;
<TABLE>
<!-- SELECT c.custnum, c.custname,ordernum,order_date,
FROM customer c, orders o where c.custnum=o.custnum;-->
<tr>
<th>Error Id</th>
<th>Error Code</th>
<th>Error Message</th>
</tr>
<tr>
<td>1</td>
<td>4082</td>
<td>Object TRAFODION.NVS.CUSTOMER does not exist or is inaccessible.</td>
</tr>
</TABLE>
```
* To set the application to format output as `XML`:
+
```
SQL> SET MARKUP XML
```
+
XML formatted error message example:
+
```
SQL> SET MARKUP <invalid>
<?xml version="1.0"?>
<Results>
<Query>
<![CDATA[set markup <invalid> ]]>
</Query>
<ErrorList>
<Error id="1">
<ErrorCode>NVCI001</ErrorCode>
<ErrorMsg> <![CDATA[
ERROR: A syntax error occurred at or before:
set markup <invalid>
                   ^ ]]>
</ErrorMsg>
</Error>
</ErrorList>
</Results>
```
<<<
* This command specifies results be displayed in `CSV`:
+
```
SQL> SET MARKUP CSV
SQL> SELECT c.custnum, c.custname, ordernum, order_date
+> FROM customer c,orders o where c.custnum=o.custnum;
143,STEVENS SUPPLY ,700510,2105-05-01
3333,NATIONAL UTILITIES,600480,2105-05-12
7777,SLEEPWELL HOTELS ,100250,2105-01-23
324,PREMIER INSURANCE ,500450,2105-04-20
926,METALL-AG. ,200300,2105-02-06
123,BROWN MEDICAL CO ,200490,2105-03-19
123,BROWN MEDICAL CO ,300380,2105-03-19
543,FRESNO STATE BANK ,300350,2105-03-03
5635,ROYAL CHEMICALS ,101220,2105-05-21
21,CENTRAL UNIVERSITY,200320,2105-02-17
1234,DATASPEED ,100210,2105-04-10
3210,BESTFOOD MARKETS ,800660,2105-05-09
```
<<<
* This command specifies results be displayed in `XML`:
+
```
SQL> SET MARKUP XML
SQL> SELECT * FROM author;
<?xml version="1.0"?>
<Results>
<Query>
<![CDATA[select * from author;]]>
</Query>
<row id="1">
<AUTHORID>91111</AUTHORID>
<AUTHORNAME>Bjarne Stroustrup</AUTHORNAME>
</row>
<row id="2">
<AUTHORID>444444</AUTHORID>
<AUTHORNAME>John Steinbeck</AUTHORNAME>
</row>
<row id="3">
<AUTHORID>2323423</AUTHORID>
<AUTHORNAME>Irwin Shaw</AUTHORNAME>
</row>
<row id="4">
<AUTHORID>93333</AUTHORID>
<AUTHORNAME>Martin Fowler</AUTHORNAME>
</row>
<row id="5">
<AUTHORID>92222</AUTHORID>
<AUTHORNAME>Grady Booch</AUTHORNAME>
</row>
<row id="6">
<AUTHORID>84758345</AUTHORID>
<AUTHORNAME>Judy Blume</AUTHORNAME>
</row>
<row id="7">
<AUTHORID>89832473</AUTHORID>
<AUTHORNAME>Barbara Kingsolver</AUTHORNAME>
</row>
<Status> <![CDATA[--- 7 row(s) selected.]]></Status>
</Results>
```
<<<
* To set the application to format output as `XML`:
+
```
SQL> SET MARKUP XML
```
+
`XML` formatted error message examples:
+
```
SQL> SET MARKUP <]]>
<?xml version="1.0"?>
<Results>
<Query>
<![CDATA[set markup <]]&gt; ]]>
</Query>
<ErrorList>
<Error id="1">
<ErrorCode>UNKNOWN ERROR CODE</ErrorCode>
<ErrorMessage> <![CDATA[
ERROR: A syntax error occurred at or before:
set markup <]]&gt;
           ^ ]]>
</ErrorMessage>
</Error>
</ErrorList>
</Results>
```
* This command displays `CSV` like output using the `COLSEP` value as a separator.
+
```
SQL> SET COLSEP |
SQL> SET MARKUP COLSEP
SQL> SELECT * FROM employee;
32|THOMAS |RUDLOFF |2000|100|138000.40
39|KLAUS |SAFFERT |3200|100|75000.00
89|PETER |SMITH |3300|300|37000.40
29|JANE |RAYMOND |3000|100|136000.00
65|RACHEL |MCKAY |4000|100|118000.00
75|TIM |WALKER |3000|300|320000.00
11|ROGER |GREEN |9000|100|175500.00
93|DONALD |TAYLOR |3100|300|33000.00
```
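Because `CSV` markup suppresses column headers and status messages, captured output can be handed straight to standard CSV tooling. A Python sketch; the column names and sample text below are assumptions based on the earlier `CSV` example:

```python
import csv
import io

# Output captured from a query run under SET MARKUP CSV.
captured = """\
143,STEVENS SUPPLY    ,700510,2105-05-01
3333,NATIONAL UTILITIES,600480,2105-05-12
7777,SLEEPWELL HOTELS  ,100250,2105-01-23
"""

# CSV markup emits no header row, so supply the column names by hand.
columns = ("custnum", "custname", "ordernum", "order_date")
rows = [dict(zip(columns, row)) for row in csv.reader(io.StringIO(captured))]
print(len(rows), rows[1]["custname"].strip())
```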
<<<
[[cmd_set_param]]
== SET PARAM Command
The `SET PARAM` command associates a parameter name with a parameter value in the current session.
The parameter name and value are associated with one of these parameter types:
* Named parameter (represented by `?_param-name_`) in a DML statement or in a prepared SQL statement
* Unnamed parameter (represented by `?`) in a prepared SQL statement only
A prepared statement is one that you compile by using the PREPARE statement.
For more information about PREPARE, see the
{docs-url}/sql_reference/index.html[_{project-name} SQL Reference Manual_].
After running `SET PARAM` commands in the session:
* You can specify named parameters (`?_param-name_`) in a DML statement.
* You can execute a prepared statement with named parameters by using the `EXECUTE` statement without a `USING` clause.
* You can execute a prepared statement with unnamed parameters by using the `EXECUTE` statement with a `USING` clause
that contains literal values and/or a list of the named parameters set by `SET PARAM`.
The `EXECUTE` statement substitutes parameter values for the parameters in the prepared statement. For more information about `EXECUTE`, see the
{docs-url}/sql_reference/index.html[_{project-name} SQL Reference Manual_].
<<<
=== Syntax
```
SET PARAM param-name [UTF8] param-value
```
* `_param-name_`
+
is the name of the parameter for which a value is specified. Parameter names are case-sensitive.
For example, the parameter `?pn` is not equivalent to the parameter `?PN`. `_param-name_` can be
preceded by a question mark (`?`), such as `?_param-name_`.
* `UTF8`
+
specifies that a character string specified for the parameter value, `_param-value_`, uses the
`UTF8` character set. If the character string is in `UTF8` format, it must be prefixed by `UTF8`.
* `_param-value_`
+
is a numeric or character literal that specifies the value for the parameter. If you do not specify a value,
TrafCI returns an error.
+
If `_param-value_` is a character literal and the target column type is a character string, you do not have
to enclose the value in single quotation marks. Its data type is determined from the data type of the column
to which the literal is assigned. Character strings specified as parameter values are always case-sensitive
even if they are not enclosed in quotation marks. If the character string is in `UTF8` format, it must
be prefixed by `UTF8`.
=== Considerations
* You must enter the command on one line. The command does not require an SQL terminator.
* Use separate `SET PARAM` commands to name and assign values to each unique parameter in a prepared SQL
statement before running the `EXECUTE` statement.
* Parameter names are case-sensitive. If you specify a parameter name in lowercase in the `SET PARAM` command,
you must specify it in lowercase in other statements, such as DML statements or `EXECUTE`.
* The name of a named parameter (`?_param-name_`) in a DML statement must be identical to the parameter name
(`_param-name_`) that you specify in a `SET PARAM` command.
<<<
=== Examples
* This command sets a value for the `?sal` (`salary`) parameter:
+
```
SQL> SET PARAM ?sal 40000.00
```
* This command sets a character string value, `GREEN`, for the `?lastname` parameter:
+
```
SQL> SET PARAM ?lastname GREEN
```
* These commands set values for named parameters in a subsequent `SELECT` statement:
+
```
SQL> SET PARAM ?sal 80000.00
SQL> SET PARAM ?job 100
SQL> SELECT * FROM persnl.employee WHERE salary = ?sal AND jobcode = ?job;
EMPNUM FIRST_NAME LAST_NAME DEPTNUM JOBCODE SALARY
------ --------------- -------------------- ------- ------- ----------
72 GLENN THOMAS 3300 100 80000.00
--- 1 row(s) selected.
SQL>
```
+
NOTE: The names of the named parameters, `?sal` and `?job`, in the `SELECT` statement are
identical to the parameter names, `sal` and `job`, in the `SET PARAM` commands.
* This command sets a character string value, `Peña`, which is in `UTF8` format,
for the `?lastname` parameter:
+
```
SQL> SET PARAM ?lastname UTF8'Peña'
```
* This command sets a character string value, which uses the `UTF8` character set and is in
hexadecimal notation, for the `?lastname` parameter:
+
```
SQL> SET PARAM ?lastname UTF8x'5065C3B161'
```
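The hexadecimal form of a `UTF8` parameter value is simply the UTF-8 encoding of its characters, so it can be generated mechanically. A Python sketch (the exact quoting accepted by your TrafCI version should be verified against the syntax above):

```python
# Produce the hexadecimal UTF8 notation for a parameter value such as 'Peña'.
value = "Peña"
hex_digits = value.encode("utf-8").hex()
print(f"SET PARAM ?lastname UTF8x'{hex_digits}'")
```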
For more information, see <<interactive_set_parameters,Set Parameters>>.
<<<
[[cmd_set_prompt]]
== SET PROMPT Command
The `SET PROMPT` command sets the prompt of the current session to a specified string and/or to the session variables,
which start with `%`. The default prompt is `SQL>`.
=== Syntax
```
SET PROMPT [string] [%USER] [%SERVER] [%SCHEMA]
```
* `_string_`
+
is a string value to be displayed as the prompt. The string may contain any characters. Spaces are allowed if you enclose
the string in double quotes (`"`). If you do not enclose the string in double quotes, the prompt is displayed in uppercase.
* `%USER`
+
displays the session user name as the prompt.
* `%SERVER`
+
displays the session host name and port number as the prompt.
* `%SCHEMA`
+
displays the session schema as the prompt.
=== Considerations
* You must enter the command on one line. The command does not require an SQL terminator.
* To reset the default prompt, enter this command:
+
```
SET PROMPT
```
<<<
=== Examples
* This `SET PROMPT` command sets the SQL prompt to `ENTER>`:
+
```
SQL> SET PROMPT Enter>
ENTER>
```
* To reset the SQL prompt to the default, enter this `SET PROMPT` command:
+
```
ENTER> SET PROMPT
SQL>
```
* This command displays the session user name for the prompt:
+
```
SQL> SET PROMPT %user>
user1>
```
* This command displays the session host name and port number for the prompt:
+
```
SQL> SET PROMPT %server>
sqws135.houston.host.com:22900>
```
* This command displays the session schema for the prompt:
+
```
SQL> SET PROMPT "Schema %schema:"
Schema USR:
```
* This command displays multiple session variables:
+
```
SQL> SET PROMPT %USER@%SCHEMA>
user1@USR> SET PROMPT %SERVER:%USER>
sqws135.houston.host.com:22900:user1>
sqws135.houston.host.com:22900:user1> SET PROMPT "%schema CI> "
USR CI>
```
For more information, see <<interactive_customize_prompt, Customize Standard Prompt>>.
<<<
[[cmd_set_sqlprompt]]
== SET SQLPROMPT Command
The `SET SQLPROMPT` command sets the SQL prompt of the current session to
a specified string. The default is `SQL>`.
=== Syntax
```
SET SQLPROMPT [string] [%USER] [%SERVER] [%SCHEMA]
```
* `_string_`
+
is a string value to be displayed as the SQL prompt. The string may contain any characters.
Spaces are allowed if you enclose the string in double quotes. If you do not enclose the string
in double quotes (`"`), the prompt is displayed in uppercase.
* `%USER`
+
displays the session user name as the prompt.
* `%SERVER`
+
displays the session host name and port number as the prompt.
* `%SCHEMA`
+
displays the session schema as the prompt.
=== Considerations
* You must enter the command on one line. The command does not require an SQL terminator.
* To reset the default SQL prompt, enter this command:
+
```
SET SQLPROMPT
```
<<<
=== Examples
* This command sets the SQL prompt to `ENTER>`:
+
```
SQL> SET SQLPROMPT Enter>
ENTER>
```
* To reset the SQL prompt to the default, enter this command:
+
```
ENTER> SET SQLPROMPT
SQL>
```
* This command displays the session user name for the prompt:
+
```
SQL> SET SQLPROMPT %user>
user1>
```
* This command displays the session host name and port number for the prompt:
+
```
SQL> SET SQLPROMPT %server>
sqws135.houston.host.com:22900>
```
* This command displays the session schema for the prompt:
+
```
SQL> SET SQLPROMPT "Schema %schema:"
Schema USR:
```
* This command displays multiple session variables:
+
```
SQL> SET SQLPROMPT %USER@%SCHEMA>
user1@USR>
SQL> SET SQLPROMPT %SERVER:%USER>
sqws135.houston.host.com:22900:user1>
sqws135.houston.host.com:22900:user1> SET SQLPROMPT "%schema CI> "
USR CI>
```
For more information, see <<interactive_customize_prompt, Customize Standard Prompt>>.
<<<
[[cmd_set_sqlterminator]]
== SET SQLTERMINATOR Command
The `SET SQLTERMINATOR` command sets the SQL statement terminator of the current session.
The default is a semicolon (`;`).
=== Syntax
```
SET SQLTERMINATOR string
```
* `_string_`
+
is a string value for the SQL terminator. The string may contain any characters except spaces.
Spaces are disallowed even if you enclose the string in double quotes. Lowercase and uppercase
characters are accepted, but the SQL terminator is always shown in uppercase.
=== Considerations
* You must enter the command on one line. The command does not require an SQL terminator.
* Do not include a reserved word as an SQL terminator.
* If you execute this command in a script file, it affects not only the SQL statements in the script
file but all subsequent SQL statements that are run in the current session. If you set the SQL terminator
in a script file, reset the default terminator at the end of the script file.
* To reset the default SQL terminator (`;`), enter this command:
+
```
SET SQLTERMINATOR ;
```
<<<
=== Examples
* This command sets the SQL terminator to a period (`.`):
+
```
SQL> SET SQLTERMINATOR .
```
* This command sets the SQL terminator to a word, `go`:
+
```
SQL> SET SQLTERMINATOR go
```
+
This query ends with the new terminator, `go`:
+
```
SQL> SELECT * FROM persnl.employee go
```
* To reset the SQL terminator to the default, enter this command:
+
```
SQL> SET SQLTERMINATOR ;
```
For more information, <<interactive_set_show_terminator, Set and Show the SQL Terminator>>.
<<<
[[cmd_set_statistics]]
== SET STATISTICS Command
The `SET STATISTICS` command automatically retrieves the statistics information for a query being executed.
The results returned are the same as would have been returned if the `GET STATISTICS` command was executed.
The default is `OFF`, which means that statistics information is not automatically printed for any queries.
=== Syntax
```
SET STATISTICS { ON | OFF }
```
=== Considerations
You must enter the command on one line.
<<<
=== Examples
* This command displays statistics in the default `PERTABLE` output format:
+
```
SQL> SET STATISTICS ON
SQL> SELECT * FROM job;
JOBCODE JOBDESC
------- ------------------
100 MANAGER
450 PROGRAMMER
900 SECRETARY
300 SALESREP
500 ACCOUNTANT
400 SYSTEM ANALYST
250 ASSEMBLER
420 ENGINEER
600 ADMINISTRATOR
200 PRODUCTION SUPV
--- 11 row(s) selected.
Start Time 2105/05/18 21:45:34.082329
End Time 2105/05/18 21:45:34.300265
Elapsed Time 00:00:00.217936
Compile Time 00:00:00.002423
Execution Time 00:00:00.218750
Table Name Records Records Disk Message Message Lock Lock Disk Process
Accessed Used I/Os Count Bytes Escl Wait Busy Time
TRAFODION.TOI.JOB
2 2 0 4 15232 0 0 363
SQL>
```
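The timing lines in the statistics output use an `hh:mm:ss.ffffff` layout, which a monitoring script can convert to seconds for comparison or alerting. A minimal Python sketch (the helper name is an illustrative assumption):

```python
def elapsed_seconds(stamp):
    # Convert an "hh:mm:ss.ffffff" timing value to seconds.
    hours, minutes, seconds = stamp.split(":")
    return int(hours) * 3600 + int(minutes) * 60 + float(seconds)

print(elapsed_seconds("00:00:00.217936"))  # the Elapsed Time value above
```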
For more information on the STATISTICS command, see the
{docs-url}/sql_reference/index.html[_{project-name} SQL Reference Manual_].
<<<
[[cmd_set_time]]
== SET TIME Command
The `SET TIME` command causes the local time of the client workstation to be displayed as part of the
interface prompt. By default, the local time is not displayed in the interface prompt.
=== Syntax
```
SET TIME { ON [12H] | OFF }
```
* `ON`
+
specifies that the local time be displayed as part of the prompt.
* `OFF`
+
specifies that the local time not be displayed as part of the prompt. `OFF` is the default.
=== Considerations
* You must enter the command on one line. The command does not require an SQL terminator.
* The default is a 24-hour military-style display. The additional `12H` argument displays the
time in a 12-hour AM/PM style.
<<<
=== Examples
* This command causes the local time to be displayed in the SQL prompt:
+
```
SQL> SET TIME ON
14:17:17 SQL>
```
* This command causes the local time to be displayed in 12-hour AM/PM style in the SQL prompt:
+
```
SQL> SET TIME ON 12H
2:17:17 PM SQL>
```
* This command turns off the local time in the SQL prompt:
+
```
2:17:17 PM SQL> SET TIME OFF
SQL>
```
For more information, see <<interactive_customize_prompt,Customize the Standard Prompt>>.
<<<
[[cmd_set_timing]]
== SET TIMING Command
The `SET TIMING` command causes the elapsed time to be displayed after each SQL statement executes.
This command does not cause the elapsed time of interface commands to be displayed. By default, the
elapsed time is `off`.
=== Syntax
```
SET TIMING { ON | OFF }
```
* `ON`
+
specifies the elapsed time be displayed after each SQL statement executes.
* `OFF`
+
specifies that the elapsed time not be displayed after each SQL statement executes. `OFF` is the default.
=== Considerations
* You must enter the command on one line. The command does not require an SQL terminator.
* The elapsed time value includes compile and execution time plus any network I/O time and client-side processing time.
=== Examples
* This command displays the elapsed time of SQL statements:
+
```
SQL> SET TIMING ON
```
* This command turns off the elapsed time:
+
```
SQL> SET TIMING OFF
```
For more information, see <<interactive_display_elapsed_time,Display the Elapsed Time>>.
<<<
[[cmd_show_activitycount]]
== SHOW ACTIVITYCOUNT Command
The `SHOW ACTIVITYCOUNT` command is an alias for the `SHOW RECCOUNT` command.
For more information, see the <<cmd_show_reccount,SHOW RECCOUNT Command>>.
=== Syntax
```
SHOW ACTIVITYCOUNT
```
=== Examples
* This command shows the record count of the previously executed SQL statement:
+
```
SQL> SHOW ACTIVITYCOUNT
ACTIVITYCOUNT 0
```
<<<
[[cmd_show_alias]]
== SHOW ALIAS Command
The `SHOW ALIAS` command displays all or a set of aliases available in the current TrafCI session. If a pattern is specified,
then all aliases matching the pattern are displayed. By default, all aliases in the current session are displayed.
=== Syntax
```
SHOW ALIAS [ alias-name | wild-card-pattern ]
```
* `_alias-name_`
+
is any alias name that is used with the `ALIAS` command. See <<cmd_alias, ALIAS Command>>.
* `_wild-card-pattern_`
+
is a character string used to search for and display aliases with names that match the character string. `_wild-card-pattern_`
matches an uppercase string unless you enclose it within double quotes. To look for similar values, specify only part of the
characters of `_wild-card-pattern_` combined with these wild-card characters.
+
[cols="10%,90%"]
|===
| `%` | Use a percent sign (`%`) to indicate zero or more characters of any type. +
+
For example, `%art%` matches `SMART`, `ARTIFICIAL`, and `PARTICULAR` but not `smart` or `Hearts`. `"%art%"` matches `smart` and `Hearts`
but not `SMART`, `ARTIFICIAL`, or `PARTICULAR`.
| `*` | Use an asterisk (`*`) to indicate zero or more characters of any type. +
+
For example, `*art*` matches `SMART`, `ARTIFICIAL`, and `PARTICULAR` but not `smart` or `Hearts`.
`"*art*"` matches `smart` and `Hearts` but not `SMART`, `ARTIFICIAL`, or `PARTICULAR`.
| `_` | Use an underscore (`_`) to indicate any single character. +
+
For example, `boo_` matches `BOOK` and `BOOT` but not `BOO` or `BOOTS`. `"boo_"` matches `book` and `boot` but not `boo` or `boots`.
| `?` | Use a question mark (`?`) to indicate any single character. +
+
For example, `boo?` matches `BOOK` and `BOOT` but not `BOO` or `BOOTS`. `"boo?"` matches `book` and `boot` but not `boo` or `boots`.
|===
=== Considerations
You must enter the command on one line. The command does not require an SQL terminator.
<<<
=== Examples
* This command displays a list of the available aliases:
+
```
SQL> SHOW ALIAS
.OS AS LH
.GOTO AS GOTO
USE AS SET SCHEMA
```
* This command displays the `.GOTO` alias:
+
```
SQL> SHOW ALIAS .GOTO
.GOTO AS GOTO
```
* This command displays the `.FOO` alias:
+
```
SQL> SHOW ALIAS .FOO
No aliases found.
```
* This command displays all aliases beginning with the letter `S`:
+
```
SQL> SHOW ALIAS S*
SEL AS SELECT
SHOWTIME AS SHOW TIME
ST AS SHOW TABLES
```
<<<
[[cmd_show_aliases]]
== SHOW ALIASES Command
The `SHOW ALIASES` command displays all the aliases available in the current TrafCI session.
=== Syntax
```
SHOW ALIASES
```
=== Considerations
You must enter the command on one line. The command does not require an SQL terminator.
=== Examples
* This command displays all the aliases in the current TrafCI session:
+
```
SQL> SHOW ALIASES
.OS AS LH
.GOTO AS GOTO
USE AS SET SCHEMA
```
<<<
[[cmd_show_catalog]]
== SHOW CATALOG Command
The `SHOW CATALOG` command displays the current catalog of the TrafCI session.
=== Syntax
```
SHOW CATALOG
```
=== Considerations
* You must enter the command on one line. The command does not require an SQL terminator.
* If the `SET TIMING` command is set to `ON`, the elapsed time information is displayed.
=== Example
* This command shows that the current catalog of the session is TRAFODION:
+
```
SQL> SHOW CATALOG
CATALOG TRAFODION
```
<<<
[[cmd_show_colsep]]
== SHOW COLSEP Command
The `SHOW COLSEP` command displays the value of the column separator for the current TrafCI session.
=== Syntax
```
SHOW COLSEP
```
=== Considerations
* You must enter the command on one line. The command does not require an SQL terminator.
* If the `SET TIMING` command is set to `ON`, the elapsed time information is displayed.
=== Examples
* This command displays the column separator:
+
```
SQL> SHOW COLSEP
COLSEP " "
SQL> SET COLSEP
SQL> SHOW COLSEP
COLSEP ""
```
<<<
[[cmd_show_errorcode]]
== SHOW ERRORCODE Command
The `SHOW ERRORCODE` command is an alias for the `SHOW LASTERROR` command. `ERRORCODE` is an alias for `LASTERROR`. For more information, see
<<cmd_show_lasterror,SHOW LASTERROR Command>>.
=== Syntax
```
SHOW ERRORCODE
```
=== Examples
* This command displays the error of the last SQL statement that was executed:
+
```
SQL> SHOW ERRORCODE
ERRORCODE 29481
```
<<<
[[cmd_show_fetchsize]]
== SHOW FETCHSIZE Command
The `SHOW FETCHSIZE` command displays the fetch size value for the current TrafCI session.
=== Syntax
```
SHOW FETCHSIZE
```
=== Considerations
You must enter the command on one line.
=== Examples
* These commands display the fetch size in the current TrafCI session, set the fetch size to a new value, and then redisplay the fetch size:
+
```
SQL> SHOW FETCHSIZE
FETCHSIZE 0 [Default]
SQL> SET FETCHSIZE 1
SQL> SHOW FETCHSIZE
FETCHSIZE 1
```
<<<
[[cmd_show_histopt]]
== SHOW HISTOPT Command
The `SHOW HISTOPT` command displays the value that has been set for the history option.
=== Syntax
```
SHOW HISTOPT
```
=== Considerations
* You must enter the command on one line.
* If the `SET TIMING` command is set to `ON`, the elapsed time information is displayed.
=== Examples
* This command displays the value set for the history option:
+
```
SQL> SHOW HISTOPT
HISTOPT DEFAULT [No expansion of script files]
SQL> SET HISTOPT ALL
SQL> SHOW HISTOPT
HISTOPT ALL
```
<<<
[[cmd_show_idletimeout]]
== SHOW IDLETIMEOUT Command
The `SHOW IDLETIMEOUT` command displays the idle timeout value of the current TrafCI session. The idle timeout
value of a session determines when the session expires after a period of inactivity.
The default is `30 minutes`.
=== Syntax
```
SHOW IDLETIMEOUT
```
=== Considerations
* You must enter the command on one line. The command does not require an SQL terminator.
* If the `SET TIMING` command is set to `ON`, the elapsed time information is displayed.
<<<
=== Examples
* This command shows that the idle timeout value of the session is 30 minutes, which is the default:
+
```
SQL> SHOW IDLETIMEOUT
IDLETIMEOUT 30 min(s)
Elapsed time:00:00:00:078
```
* This command shows that the idle timeout value of the session is four hours:
+
```
SQL> SHOW IDLETIMEOUT
IDLETIMEOUT 240 min(s)
```
* This command shows that the idle timeout value is an infinite amount of time, meaning that the session never expires:
+
```
SQL> SHOW IDLETIMEOUT
IDLETIMEOUT 0 min(s) [Never Expires]
```
* This command displays the elapsed time information because the `SET TIMING` command is enabled:
+
```
SQL> SET TIMING ON
SQL> SHOW IDLETIMEOUT
IDLETIMEOUT 0 min(s) [Never Expires]
Elapsed time:00:00:00:078
```
For more information, see <<interactive_idle_timeout, Set and Show Session Idle Timeout Value>>.
<<<
[[cmd_show_lasterror]]
== SHOW LASTERROR Command
The `SHOW LASTERROR` command displays the error of the last SQL statement that was executed.
If the query was successful, then `0` is returned; otherwise an SQL error code is returned.
=== Syntax
```
SHOW LASTERROR
```
=== Considerations
* You must enter the command on one line. The command does not require an SQL terminator.
* If the `SET TIMING` command is set to `ON`, the elapsed time information is displayed.
=== Examples
* This command shows the last error in the current session:
+
```
SQL> SELECT * FROM emp;
**** ERROR[4082]Object TRAFODION.SCH.EMP does not exist or is inaccessible.
SQL> SHOW LASTERROR
LASTERROR 4082
```
<<<
[[cmd_show_list_count]]
== SHOW LIST_COUNT Command
The `SHOW LIST_COUNT` command displays the maximum number of rows to be returned by `SELECT` statements in the
current TrafCI session. The default is `zero`, which means that all rows are returned.
=== Syntax
```
SHOW LIST_COUNT
```
=== Considerations
* You must enter the command on one line. The command does not require an SQL terminator.
* If the `SET TIMING` command is set to `ON`, the elapsed time information is displayed.
=== Examples
* This command shows that `SELECT` statements return all rows in the current session:
+
```
SQL> SHOW LIST_COUNT
LISTCOUNT 0 [All Rows]
Elapsed time:00:00:00:078
```
* This command shows that the maximum number of rows to be displayed by `SELECT` statements in the session is five:
+
```
SQL> SET LIST_COUNT 5
SQL> SHOW LIST_COUNT
LIST_COUNT 5
Elapsed time:00:00:00:078
```
<<<
[[cmd_show_markup]]
== SHOW MARKUP Command
The `SHOW MARKUP` command displays the value set for the markup option.
=== Syntax
```
SHOW MARKUP
```
=== Considerations
* You must enter the command on one line.
* If the `SET TIMING` command is set to `ON`, the elapsed time information is displayed.
=== Examples
* This command displays the value set for the markup option:
+
```
SQL> SHOW MARKUP
MARKUP RAW
Elapsed time:00:00:00:078
```
<<<
[[cmd_show_param]]
== SHOW PARAM Command
The `SHOW PARAM` command displays the parameters that are set in the current TrafCI session.
=== Syntax
```
SHOW PARAM
```
=== Considerations
* You must enter the command on one line. The command does not require an SQL terminator.
* If the `SET TIMING` command is set to `ON`, the elapsed time information is displayed.
=== Example
* This command shows the parameters that are set for the current session:
+
```
SQL> SHOW PARAM
lastname GREEN
dn 1500
sal 40000.00
```
* When no parameters exist, the `SHOW PARAM` command displays a message:
+
```
SQL> SHOW PARAM
No parameters found.
```
For more information, see <<interactive_display_session_parameters, Display Session Parameters>>.
<<<
[[cmd_show_prepared]]
== SHOW PREPARED Command
The `SHOW PREPARED` command displays the prepared statements in the current TrafCI session.
If a pattern is specified, then all prepared statements matching the prepared statement name
pattern are displayed. By default, all prepared statements in the current session are displayed.
=== Syntax
```
SHOW PREPARED [ prepared-statement-name-pattern ]
```
=== Considerations
* You must enter the command on one line. The command does not require an SQL terminator.
* If the `SET TIMING` command is set to `ON`, the elapsed time information is displayed.
=== Examples
* This command shows all the prepared statements, by default:
+
```
SQL> SHOW PREPARED
S1
SELECT * FROM t1
S2
SELECT * FROM student
T1
SELECT * FROM test123
SQL> SHOW PREPARED s%
S1
SELECT * FROM t1
S2
SELECT * FROM student
SQL> SHOW PREPARED t%
T1
SELECT * FROM test123
```
<<<
[[cmd_show_reccount]]
== SHOW RECCOUNT Command
The `SHOW RECCOUNT` command displays the record count of the previously executed SQL statement. If the previously
executed command was an interface command, then TrafCI returns zero.
=== Syntax
```
SHOW RECCOUNT
```
=== Considerations
* You must enter the command on one line. The command does not need an SQL terminator.
* If the `SET TIMING` command is set to `ON`, the elapsed time information is displayed.
=== Examples
* This command displays the record count of the SQL statement that was executed last:
+
```
SQL> SELECT * FROM employee;
SQL> SHOW RECCOUNT
RECCOUNT 62
```
<<<
[[cmd_show_remoteprocess]]
== SHOW REMOTEPROCESS Command
The `SHOW REMOTEPROCESS` command displays the process name of the DCS server that is handling the current connection.
=== Syntax
```
SHOW REMOTEPROCESS
```
=== Considerations
* You must enter the command on one line. The command does not need an SQL terminator.
=== Example
* This command displays the process name, `\g4t3028.houston.host.com:0.$Z0000M2`, of the DCS server that is handling
the current connection:
+
```
SQL> SHOW REMOTEPROCESS
REMOTE PROCESS \g4t3028.houston.host.com:0.$Z0000M2
SQL>
```
<<<
[[cmd_show_schema]]
== SHOW SCHEMA Command
The `SHOW SCHEMA` command displays the current schema of the TrafCI session.
=== Syntax
```
SHOW SCHEMA
```
=== Considerations
* You must enter the command on one line. The command does not require an SQL terminator.
* If the `SET TIMING` command is set to `ON`, the elapsed time information is displayed.
=== Example
* This command shows that the current schema of the session is `PERSNL`:
+
```
SQL> SHOW SCHEMA
SCHEMA PERSNL
```
For more information, see <<interactive_set_show_current_schema, Set and Show the Current Schema>>.
<<<
[[cmd_show_session]]
== SHOW SESSION Command
`SHOW SESSION` or `SESSION` displays attributes of the current TrafCI session.
You can also use the `ENV` command to perform the same function.
=== Syntax
```
[SHOW] SESSION
```
=== Considerations
* You must enter the command on one line. The command does not require an SQL terminator.
* If the `SET TIMING` command is set to `ON`, the elapsed time information is displayed.
* `SHOW SESSION` or `SESSION` displays these attributes:
+
[cols="20%,80%",options="header"]
|===
| Attribute | Description
| `COLSEP` | Current column separator, which is used to control how query results are displayed. +
+
For more information, see the <<cmd_set_colsep,SET COLSEP Command>>.
| `HISTOPT` | Current history options, which controls how the commands are added to the history buffer. +
+
For more information, see <<cmd_set_histopt, SET HISTOPT Command>>.
| `IDLETIMEOUT` | Current idle timeout value, which determines when the session expires after a period of inactivity.
By default, the idle timeout is `30 minutes`. +
+
For more information, see <<interactive_idle_timeout, Set and Show Session Idle Timeout Value>> and
<<cmd_set_idletimeout, SET IDLETIMEOUT Command>>.
| `LIST_COUNT` | Current list count, which is the maximum number of rows that can be returned by SELECT statements.
By default, the list count is all rows. +
+
For more information, see <<cmd_set_list_count,SET LIST_COUNT Command>>.
| `LOG FILE` | Current log file and the directory containing the log file. By default, logging during a session is turned off. +
+
For more information, see <<interactive_log_output, Log Output>>, and <<cmd_log, LOG Command>>.
| `LOG OPTIONS` | Current logging options. By default, logging during a session is turned off, and this attribute does not appear in the output. +
+
For more information, see the <<cmd_log, LOG Command>> or <<cmd_spool, SPOOL Command>>.
| `MARKUP` | Current markup option selected for the session. The default option is RAW. +
+
For more information, see <<cmd_set_markup,SET MARKUP Command>>.
| `PROMPT` | Current prompt for the session. For example, the default is `SQL>`. +
+
For more information, see <<interactive_customize_prompt,Customize the Standard Prompt>> and <<cmd_set_prompt, SET PROMPT Command>>.
| `SCHEMA` | Current schema. The default is `USR`. +
+
For more information, see <<interactive_set_show_current_schema, Set and Show the Current Schema>>.
| `SERVER` | Host name and port number that you entered when logging in to the database platform. +
+
For more information, see <<trafci_login, Log In to Database Platform>>.
| `SQLTERMINATOR` | Current SQL statement terminator. The default is a semicolon (`;`). +
+
For more information, see <<interactive_set_show_terminator, Set and Show the SQL Terminator>> and
<<cmd_show_sqlterminator,SHOW SQLTERMINATOR Command>>.
| `STATISTICS` | Current setting (`on` or `off`) of statistics. +
+
For more information, see the <<cmd_set_statistics, SET STATISTICS Command>>.
| `TIME` | Current setting (`on` or `off`) of the local time as part of the prompt. When this command is set to `on`,
military time is displayed. By default, the local time is `off`. +
+
For more information, see <<interactive_customize_prompt,Customize the Standard Prompt>> and <<cmd_set_time, SET TIME Command>>.
| `TIMING` | Current setting (`on` or `off`) of the elapsed time. By default, the elapsed time is `off`. +
+
For more information, see <<interactive_display_elapsed_time, Display the Elapsed Time>> and <<cmd_set_timing, SET TIMING Command>>.
| `USER` | User name that you entered when logging in to the database platform. +
+
For more information, see <<trafci_login, Log In to Database Platform>>.
|===
<<<
=== Examples
* This SHOW SESSION command displays the attributes of the current session:
+
```
SQL> SHOW SESSION
COLSEP " "
HISTOPT DEFAULT [No expansion of script files]
IDLETIMEOUT 0 min(s) [Never Expires]
LIST_COUNT 0 [All Rows]
LOG FILE c:\session.txt
LOG OPTIONS APPEND,CMDTEXT ON
MARKUP RAW
PROMPT SQL>
SCHEMA SEABASE
SERVER sqws135.houston.host.com:23400
SQLTERMINATOR ;
STATISTICS OFF
TIME OFF
TIMING OFF
USER user1
```
* This `SESSION` command shows the effect of setting various session attributes:
+
```
SQL> SESSION
COLSEP " "
HISTOPT DEFAULT [No expansion of script files]
IDLETIMEOUT 30 min(s)
LIST_COUNT 0 [All Rows]
LOG OFF
MARKUP RAW
PROMPT SQL>
SCHEMA SEABASE
SERVER sqws135.houston.host.com:23400
SQLTERMINATOR ;
STATISTICS OFF
TIME OFF
TIMING OFF
USER user1
SQL>
```
<<<
[[cmd_show_sqlprompt]]
== SHOW SQLPROMPT Command
The `SHOW SQLPROMPT` command displays the value of the SQL prompt for the current TrafCI session.
=== Syntax
```
SHOW SQLPROMPT
```
=== Considerations
* You must enter the command on one line. The command does not require an SQL terminator.
* If the `SET TIMING` command is set to `ON`, the elapsed time information is displayed.
=== Example
* This command shows that the SQL prompt for the current session is `SQL>`:
+
```
SQL> SHOW SQLPROMPT
SQLPROMPT SQL>
```
<<<
[[cmd_show_sqlterminator]]
== SHOW SQLTERMINATOR Command
The `SHOW SQLTERMINATOR` command displays the SQL statement terminator of the current TrafCI session.
=== Syntax
```
SHOW SQLTERMINATOR
```
=== Considerations
* You must enter the command on one line. The command does not require an SQL terminator.
* If the `SET TIMING` command is set to `ON`, the elapsed time information is displayed.
=== Example
* This command shows that the SQL terminator for the current session is a period (`.`):
+
```
SQL> SHOW SQLTERMINATOR
SQLTERMINATOR .
```
For more information, see <<interactive_set_show_terminator, Set and Show the SQL Terminator>>.
<<<
[[cmd_show_statistics]]
== SHOW STATISTICS Command
The `SHOW STATISTICS` command displays whether statistics are enabled or disabled for the current session.
=== Syntax
```
SHOW STATISTICS
```
=== Considerations
* You must enter the command on one line. The command does not require an SQL terminator.
* If the `SET TIMING` command is set to `ON`, the elapsed time information is displayed.
=== Example
* This example shows statistics disabled and then enabled:
+
```
SQL> SHOW STATISTICS
STATISTICS OFF
SQL> SET STATISTICS ON
SQL> SHOW STATISTICS
STATISTICS ON
```
<<<
[[cmd_show_time]]
== SHOW TIME Command
The `SHOW TIME` command displays whether the setting for the local time in the interface prompt is `ON` or `OFF`.
=== Syntax
```
SHOW TIME
```
=== Considerations
* You must enter the command on one line. The command does not require an SQL terminator.
* If the `SET TIMING` command is set to `ON`, the elapsed time information is displayed.
=== Example
* This command shows that the setting for the local time in the SQL prompt is `OFF`:
+
```
SQL> SHOW TIME
TIME OFF
```
<<<
[[cmd_show_timing]]
== SHOW TIMING Command
The `SHOW TIMING` command displays whether the setting for the elapsed time is `ON` or `OFF`.
=== Syntax
```
SHOW TIMING
```
=== Considerations
* You must enter the command on one line. The command does not require an SQL terminator.
* If the `SET TIMING` command is set to `ON`, the elapsed time information is displayed.
=== Example
* This command displays the elapsed time information because the `SET TIMING` command is enabled:
+
```
SQL> SET TIMING ON
SQL> SHOW TIME
TIME OFF
Elapsed :00:00:00.000
```
<<<
[[cmd_spool]]
== SPOOL Command
The `SPOOL` command logs the entered commands and their output from TrafCI to a log file.
=== Syntax
```
SPOOL { ON [ CLEAR, QUIET, CMDTEXT { ON | OFF } ]
| log-file [ CLEAR, QUIET, CMDTEXT { ON | OFF } ]
| OFF
}
```
* `ON`
+
starts the logging process and records information in the `sqlspool.lst` file in the `bin` directory.
* `ON CLEAR`
+
instructs TrafCI to clear the contents of the `sqlspool.lst` file before logging new information to the file.
* `QUIET`
+
specifies that the command text is displayed on the screen, but the results of the command are written only to
the log file and not to the screen.
* `CMDTEXT ON`
+
specifies that the command text and the log header are displayed in the log file.
* `CMDTEXT OFF`
+
specifies that the command text and the log header are not displayed in the log file.
* `_log-file_`
+
is the name of a log file into which TrafCI records the entered commands and their output. If you want the log file
to exist outside the local directory where you launch TrafCI (by default, the `bin` directory), then specify the
full directory path of the log file. The log file does not need to exist, but the specified directory must exist
before you execute the `SPOOL` command.
* `_log-file_ CLEAR`
+
instructs TrafCI to clear the contents of the specified `_log-file_` before logging new information to the file.
* `OFF`
+
stops the logging process.
=== Considerations
* You must enter the command on one line. The command does not require an SQL terminator.
* Use a unique name for each log file to avoid writing information from different TrafCI sessions into the same log file.
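One way to guarantee unique log-file names is to generate them outside TrafCI before the session starts. The wrapper below is a sketch under stated assumptions: the file-name scheme and the `log_files` directory are hypothetical, not part of TrafCI itself.

```shell
#!/bin/sh
# Build a per-session spool file name so that two concurrent TrafCI
# sessions never write to the same log file.
stamp=$(date +%Y%m%d_%H%M%S_$$)        # timestamp plus PID for uniqueness
logfile="trafci_${stamp}.log"

# SPOOL does not create missing directories, so create the target
# directory before the session starts.
mkdir -p ./log_files

# The session would then begin with: SPOOL ./log_files/${logfile}
echo "Spooling to ./log_files/${logfile}"
```

The printed `SPOOL` target can then be entered (or scripted) at the start of the TrafCI session.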
=== Examples
* This command starts the logging process and records information to the `sqlspool.lst` file in the `bin` directory:
+
```
SQL> SPOOL ON
```
* This command starts the logging process and appends new information to an existing log file, `persnl_updates.log`,
in the local directory (the same directory where you are running TrafCI):
+
```
SQL> SPOOL persnl_updates.log
```
* This command starts the logging process and appends new information to a log file, `sales_updates.log`, in the
specified directory on a Windows workstation:
+
```
SQL> SPOOL c:\log_files\sales_updates.log
```
* This command starts the logging process and appends new information to a log file, `sales_updates.log`,
in the specified directory on a Linux or UNIX workstation:
+
```
SQL> SPOOL ./log_files/sales_updates.log
```
* This command starts the logging process and clears existing information from the log file before logging
new information to the file:
+
```
SQL> SPOOL persnl_ddl.log CLEAR
```
<<<
* This command starts the logging process and records information to the `sqlspool.lst` file in the `bin` directory:
+
```
SQL> LOG ON
```
* This command starts the logging process and appends new information to an existing log file, `persnl_updates.log`,
in the local directory (the same directory where you are running TrafCI):
+
```
SQL> LOG persnl_updates.log
```
* This command starts the logging process and appends new information to a log file, `sales_updates.log`,
in the specified directory on a Windows workstation:
+
```
SQL> LOG c:\log_files\sales_updates.log
```
* This command starts the logging process and appends new information to a log file, `sales_updates.log`,
in the specified directory on a Linux or UNIX workstation:
+
```
SQL> LOG ./log_files/sales_updates.log
```
* This command starts the logging process and clears existing information from the log file before logging new
information to the file:
+
```
SQL> LOG persnl_ddl.log CLEAR
```
<<<
* This command starts the logging process, clears existing information from the log file, and specifies that the
command text and log header are not displayed in the log file:
+
```
SQL> LOG c:\temp\a.txt clear, CMDTEXT OFF
SQL> SELECT * FROM trafodion.toi.job
+>;
JOBCODE JOBDESC
------- ------------------
100 MANAGER
450 PROGRAMMER
900 SECRETARY
300 SALESREP
500 ACCOUNTANT
400 SYSTEM ANALYST
250 ASSEMBLER
420 ENGINEER
600 ADMINISTRATOR
200 PRODUCTION SUPV
--- 10 row(s) selected.
SQL> LOG OFF
```
+
Output of `c:\temp\a.txt`
+
```
JOBCODE JOBDESC
------- ------------------
100 MANAGER
450 PROGRAMMER
900 SECRETARY
300 SALESREP
500 ACCOUNTANT
400 SYSTEM ANALYST
250 ASSEMBLER
420 ENGINEER
600 ADMINISTRATOR
200 PRODUCTION SUPV
--- 10 row(s) selected
```
<<<
* This command starts the logging process, clears existing information from the log file, and specifies that no output
appears on the console window:
+
```
SQL> LOG c:\temp\b.txt CLEAR, CMDTEXT OFF, QUIET
SQL> SELECT *
+>FROM trafodion.toi.job;
SQL> LOG OFF
```
+
Output of `c:\temp\b.txt`
+
```
====================
JOBCODE JOBDESC
------- ------------------
100 MANAGER
450 PROGRAMMER
900 SECRETARY
300 SALESREP
500 ACCOUNTANT
400 SYSTEM ANALYST
250 ASSEMBLER
420 ENGINEER
600 ADMINISTRATOR
200 PRODUCTION SUPV
--- 10 row(s) selected
```
* This command stops the logging process:
+
```
SQL> LOG OFF
```
For more information, see <<interactive_log_output, Log Output>>.
<<<
[[cmd_version]]
== VERSION Command
The `VERSION` command displays the build versions of the {project-name} database, {project-name} Connectivity Service,
{project-name} JDBC Type 4 Driver, and TrafCI.
=== Syntax
```
VERSION
```
=== Considerations
You must enter the command on one line. The command does not require an SQL terminator.
=== Example
* This command shows versions of the {project-name} database, {project-name} Connectivity Service, {project-name} JDBC Type 4 Driver, and TrafCI:
+
```
SQL> VERSION
Trafodion Platform : Release 0.8.0
Trafodion Connectivity Services : Version 1.0.0 Release 0.8.0
Trafodion JDBC Type 4 Driver : Traf_JDBC_Type4_Build_40646
Trafodion Command Interface : TrafCI_Build_40646
SQL>
```
<<<
* If TrafCI is started with the `-noconnect` parameter, the `VERSION` command displays only TrafCI and the
{project-name} JDBC Type 4 Driver versions.
+
```
C:\Program Files (x86)\Apache Software Foundation\Trafodion Command Interface\bin> TRAFCI -noconnect
Welcome to Trafodion Command Interface
Copyright(C) 2013-2015 Apache Software Foundation
SQL> VERSION
Trafodion Platform : Information not available.
Trafodion Connectivity Services : Information not available.
Trafodion JDBC Type 4 Driver : Traf_JDBC_Type4_Build_40646
Trafodion Command Interface : TrafCI_Build_40646
```
| 28.394115 | 189 | 0.682527 |
eb0a02b64aaa1567f0072374728bb89903dbcca6 | 1,130 | adoc | AsciiDoc | nodes/scheduling/nodes-scheduler-taints-tolerations.adoc | pittar/openshift-docs | 0c65dc626e592a5073d4dc6daef4167ebf3ed009 | [
"Apache-2.0"
] | 2 | 2021-04-12T10:18:10.000Z | 2022-03-25T09:15:15.000Z | nodes/scheduling/nodes-scheduler-taints-tolerations.adoc | pittar/openshift-docs | 0c65dc626e592a5073d4dc6daef4167ebf3ed009 | [
"Apache-2.0"
] | 2 | 2020-04-02T18:04:12.000Z | 2020-09-28T18:23:02.000Z | nodes/scheduling/nodes-scheduler-taints-tolerations.adoc | pittar/openshift-docs | 0c65dc626e592a5073d4dc6daef4167ebf3ed009 | [
"Apache-2.0"
] | 4 | 2020-05-07T15:07:11.000Z | 2022-01-07T07:39:21.000Z | :context: nodes-scheduler-taints-tolerations
[id="nodes-scheduler-taints-tolerations"]
= Controlling pod placement using node taints
include::modules/common-attributes.adoc[]
toc::[]
Taints and tolerations allow a node to control which pods should (or should not) be scheduled on it.
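As a minimal illustration of the concept (the node name, taint key, and values below are hypothetical, not taken from the included modules), an administrator taints a node from the CLI and a pod opts back in with a matching toleration:

```yaml
# Applied from the CLI first (hypothetical node name):
#   oc adm taint nodes node1 dedicated=experimental:NoSchedule
#
# Pod spec fragment tolerating that taint, so the pod may still be
# scheduled onto node1:
tolerations:
- key: "dedicated"
  operator: "Equal"
  value: "experimental"
  effect: "NoSchedule"
```

Pods without a matching toleration are not scheduled onto the tainted node.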
// The following include statements pull in the module files that comprise
// the assembly. Include any combination of concept, procedure, or reference
// modules required to cover the user story. You can also include other
// assemblies.
include::modules/nodes-scheduler-taints-tolerations-about.adoc[leveloffset=+1]
include::modules/nodes-scheduler-taints-tolerations-adding.adoc[leveloffset=+1]
include::modules/nodes-scheduler-taints-tolerations-dedicating.adoc[leveloffset=+2]
include::modules/nodes-scheduler-taints-tolerations-binding.adoc[leveloffset=+2]
include::modules/nodes-scheduler-taints-tolerations-special.adoc[leveloffset=+2]
include::modules/nodes-scheduler-taints-tolerations-removing.adoc[leveloffset=+1]
//Removed per upstream docs modules/nodes-scheduler-taints-tolerations-evictions.adoc[leveloffset=+1]
| 35.3125 | 104 | 0.80708 |
ff86c674637cf3b34c42f1fda603e7a310dd73cd | 3,269 | adoc | AsciiDoc | readme.adoc | fekir/native-windows-hardening | fdf48eb37d3bf96e040e85f90b2e48d8f1ed435c | [
"0BSD"
] | 1 | 2022-03-23T22:58:28.000Z | 2022-03-23T22:58:28.000Z | readme.adoc | fekir/native-windows-hardening | fdf48eb37d3bf96e040e85f90b2e48d8f1ed435c | [
"0BSD"
] | null | null | null | readme.adoc | fekir/native-windows-hardening | fdf48eb37d3bf96e040e85f90b2e48d8f1ed435c | [
"0BSD"
] | null | null | null |
This is a work in progress.
There are lots of incomplete parts, and some might even be incorrect.
Pull requests, clarifications, additional information, and ideas are all welcome.
= What does native mean?
There are many products for hardening a Microsoft Windows System.
Unfortunately, many rely on installing third-party programs and having processes running in the background all the time, like an antivirus.
Also mixing different security suites can have unintended side-effects, because they might use different approaches, or might be incompatible, like using two antiviruses on the same computer.
This is what native means for me: the ideal solution
* Does not require external programs
* Does not need any programs running in the background all the time
* Has no performance drawbacks
* Has every setting and side effect explained/documented
Last but not least, the approach is proactive, i.e. avoiding infecting the system in the first place instead of repairing it.
Of course, third party software can still be used, and incompatibilities might arise.
It should be easier to find the culprit, as there are fewer resource contentions, as there are no running programs.
There are no known incompatibilities between all enlisted hardening techniques unless stated otherwise.
== Why does this repo exist
I initiated this repository after changing the default file type association of scripts and executables.
The system is usable, there are some drawbacks and issues, but most of them can be resolved.
On the other hand, some programs do not work correctly when changing these settings, and I needed a place where to document those, and eventually how to circumvent the issues.
So this repo is mainly for collecting information about this novel approach, in particular
* what programs break
* if a program gets fixed (which version and so on)
* if the program can be made working again without an official fix/how to circumvent eventual bugs
* what components of windows are problematic
* point authors of offending programs where to find more information
Other resources, like snippets of code or script for helping to debug, changing hardening settings, and so on, might also be added to this repository.
== Want to participate?
My usage of the Windows operating system is probably completely different than yours.
The best way to help is making bug reports to programs that do not work correctly (even better would be to fix them), and testing as many programs/workflows as possible.
It would be nice to open a ticket here so that other people can track those pieces of information, as most closed-source software does not have an open/public bug tracker.
Of course, if there are ways to avoid issues with specific tasks or programs, it would be great to add that information to the repository too.
Also, as I do not always use Windows for work, I might not have noticed some major drawbacks that currently make some approaches impossible to use in practice.
== Other information
Mostly work in progress and links:
* link:/docs/double-extensions.adoc[double extensions]
* link:/docs/file-associations.adoc[file associations], the main reason I've created this repository
* link:/docs/official-mitigations.adoc[official mitigations]
| 50.292308 | 191 | 0.795656 |
fb80f3798c78ff89567ec45a1a0520eb601b89fd | 6,033 | adoc | AsciiDoc | content/02/02.04/script.ruby.adoc | gyanesh-mahana/bdd-with-cucumber | 4b0c446e7e6eb20b288d1e94df130e633b4c03cf | [
"CC-BY-4.0"
] | 44 | 2020-08-06T21:33:33.000Z | 2022-01-17T10:31:01.000Z | content/02/02.04/script.ruby.adoc | gyanesh-mahana/bdd-with-cucumber | 4b0c446e7e6eb20b288d1e94df130e633b4c03cf | [
"CC-BY-4.0"
] | 34 | 2020-02-19T03:48:53.000Z | 2022-01-24T18:22:24.000Z | content/bdd-with-cucumber/02/02.04/script.ruby.adoc | cucumber-school/fundamentals-of-bdd | 464b69c921041f237fea495d0aafd1c258d2e438 | [
"CC-BY-4.0"
] | 112 | 2020-08-11T13:03:18.000Z | 2022-01-24T18:35:31.000Z | include::./title.adoc[]
Let's create our first feature file. Call the file `hear_shout.feature` shot::[1]
[source,bash]
----
$ touch features/hear_shout.feature
----
shot::[2]
All feature files start with the keyword `Feature:`
shot::[3]
followed by a name.
It’s a good convention to give it a name that matches the file name.
shot::[4]
Now let’s write out our first scenario.
.hear_shout.feature
[source,gherkin]
----
Feature: Hear shout
Scenario: Listener is within range
Given Lucy is located 15m from Sean
When Sean shouts "free bagels at Sean's"
    Then Lucy hears Sean's message
----
Save the file, switch back to the command-prompt shot::[5] and run
`cucumber`. shot::[6]
You’ll see Cucumber has found our feature file and read it back to us. shot::[7] We can
see a summary of the test results below the scenario: shot::[8] one scenario, shot::[9] three
steps - all undefined. shot::[10]
[source,bash]
----
$ bundle exec cucumber
Feature: Hear shout
Scenario: Listener is within range # features/hear_shout.feature:2
Given Lucy is located 15m from Sean # features/hear_shout.feature:3
When Sean shouts "free bagels at Sean's" # features/hear_shout.feature:4
Then Lucy hears Sean's message # features/hear_shout.feature:5
1 scenario (1 undefined)
3 steps (3 undefined)
0m0.051s
You can implement step definitions for undefined steps with these snippets:
Given("Lucy is located {int}m from Sean") do |int|
pending # Write code here that turns the phrase above into concrete actions
end
When("Sean shouts {string}") do |string|
pending # Write code here that turns the phrase above into concrete actions
end
Then("Lucy hears Sean's message") do
pending # Write code here that turns the phrase above into concrete actions
end
----
shot::[11]
_Undefined_ means Cucumber doesn’t know what to do for any of the three steps we wrote in our Gherkin scenario. It needs us to provide some _step definitions_.
shot::[11.1, "02.04.animation.mp4"]
Step definitions translate from the plain language you use in Gherkin into Ruby code.
When Cucumber runs a step, it looks for a step definition that matches the text in the Gherkin step. If it finds one, then it executes the code in the step definition.
If it doesn’t find one… well, you’ve just seen what happens. Cucumber helpfully prints out some code snippets that we can use as a basis for new step definitions.
shot::[12]
Let’s copy those to create our first step definitions.
shot::[13] shot::[14]
We’ll paste them into a Ruby file under the `step_definitions` directory, inside the `features` directory. We'll just call it `steps.rb`.
.steps.rb
[source,ruby]
----
Given("Lucy is located {int}m from Sean") do |int|
pending # Write code here that turns the phrase above into concrete actions
end
When("Sean shouts {string}") do |string|
pending # Write code here that turns the phrase above into concrete actions
end
Then("Lucy hears Sean's message") do
pending # Write code here that turns the phrase above into concrete actions
end
----
shot::[15]
Now run Cucumber again.
This time the output is a little different. None of the steps are undefined anymore. We now have a pending step shot::[16] and two skipped ones. shot::[17] This means Cucumber found all our step definitions, and executed the first one.
shot::[18]
But that first step definition raises `Cucumber::Pending`, which causes Cucumber to stop, skip the rest of the steps, and mark the scenario as pending.
[source,bash]
----
$ bundle exec cucumber
Feature: Hear shout
Scenario: Listener is within range # features/hear_shout.feature:2
Given Lucy is located 15m from Sean # features/step_definitions/steps.rb:1
TODO (Cucumber::Pending)
./features/step_definitions/steps.rb:2:in `"Lucy is located {int}m from Sean"'
features/hear_shout.feature:3:in `Given Lucy is located 15m from Sean'
When Sean shouts "free bagels at Sean's" # features/step_definitions/steps.rb:5
Then Lucy hears Sean's message # features/step_definitions/steps.rb:9
1 scenario (1 pending)
3 steps (2 skipped, 1 pending)
0m0.008s
----
Now that we've wired up our step definitions to the Gherkin steps, it's almost time to start working on our solution. First though, let's tidy up the generated code.
shot::[19]
We'll rename the `int` parameter to something that better reflects its meaning. We’ll call it `distance`.
shot::[20]
We can print it to the terminal to see what's happening.
[source,ruby]
----
Given("Lucy is located {int}m from Sean") do |distance|
puts distance
pending # Write code here that turns the phrase above into concrete actions
end
When("Sean shouts {string}") do |string|
pending # Write code here that turns the phrase above into concrete actions
end
Then("Lucy hears Sean's message") do
pending # Write code here that turns the phrase above into concrete actions
end
----
If we run `cucumber` again in our terminal, shot::[21] we can see the number 15 pop up in the output. shot::[22]
[source,bash]
----
$ bundle exec cucumber
Feature: Hear shout
Scenario: Listener is within range # features/hear_shout.feature:2
Given Lucy is located 15m from Sean # features/step_definitions/steps.rb:1
15
TODO (Cucumber::Pending)
./features/step_definitions/steps.rb:3:in `"Lucy is located {int}m from Sean"'
features/hear_shout.feature:3:in `Given Lucy is located 15m from Sean'
When Sean shouts "free bagels at Sean's" # features/step_definitions/steps.rb:6
Then Lucy hears Sean's message # features/step_definitions/steps.rb:10
1 scenario (1 pending)
3 steps (2 skipped, 1 pending)
0m0.005s
----
Notice that the number 15 does not appear anywhere in our Ruby code. The value is automatically passed from the Gherkin step to the step definition. If you're curious, that’s the shot::[23]`{int}` in the step definition pattern or _cucumber expression_. We’ll explain these patterns in detail in a future lesson.
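As a quick aside on what `{int}` is doing: conceptually, the cucumber expression is compiled into a pattern that captures the typed parameter and converts it before calling the block. The sketch below is a hypothetical illustration in plain Ruby with a hand-written regex, not Cucumber's actual implementation:

```ruby
# Roughly what matching "Lucy is located {int}m from Sean" amounts to:
# capture the digits where {int} appears, then convert them to an Integer.
step_text = "Lucy is located 15m from Sean"
pattern   = /\ALucy is located (-?\d+)m from Sean\z/

match    = pattern.match(step_text)
distance = Integer(match[1])   # this is the value the block receives as `distance`

puts distance   # prints 15
```

Cucumber also ships with other built-in parameter types such as `{string}`, `{float}` and `{word}`, and lets you define custom ones.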
| 35.488235 | 312 | 0.734792 |
c7b19641187a4af77713d8af7389ca49b7e819a0 | 1,355 | adoc | AsciiDoc | antora/components/system/modules/generated/pages/index/applib/services/iactn/InteractionContext.adoc | sample-maintenance-course/isis | 8b3db02eb34fe5a16c6ebd5d0c2cbeb175fdfe11 | [
"Apache-2.0"
] | null | null | null | antora/components/system/modules/generated/pages/index/applib/services/iactn/InteractionContext.adoc | sample-maintenance-course/isis | 8b3db02eb34fe5a16c6ebd5d0c2cbeb175fdfe11 | [
"Apache-2.0"
] | null | null | null | antora/components/system/modules/generated/pages/index/applib/services/iactn/InteractionContext.adoc | sample-maintenance-course/isis | 8b3db02eb34fe5a16c6ebd5d0c2cbeb175fdfe11 | [
"Apache-2.0"
] | 1 | 2020-12-22T03:26:18.000Z | 2020-12-22T03:26:18.000Z | = InteractionContext : _interface_
:Notice: Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to you under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at. http://www.apache.org/licenses/LICENSE-2.0 . Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
Provides the current thread's xref:system:generated:index/applib/services/iactn/Interaction.adoc[Interaction] .
.Java Sources
[source,java]
----
interface InteractionContext {
Optional<Interaction> currentInteraction() // <.>
Interaction currentInteractionElseFail()
}
----
<.> `[teal]#*currentInteraction*#()` : `Optional<xref:system:generated:index/applib/services/iactn/Interaction.adoc[Interaction]>`
+
--
Optionally, the currently active xref:system:generated:index/applib/services/iactn/Interaction.adoc[Interaction] for the calling thread.
--
| 58.913043 | 759 | 0.791882 |
6433b6c05a5763c2c29f67ee1d317db82fc9c9fc | 857 | adoc | AsciiDoc | nodes/containers/nodes-containers-projected-volumes.adoc | tradej/openshift-docs | 584ce30d22ccc79d822da825559fb7752b41b1f8 | [
"Apache-2.0"
] | 625 | 2015-01-07T02:53:02.000Z | 2022-03-29T06:07:57.000Z | nodes/containers/nodes-containers-projected-volumes.adoc | tradej/openshift-docs | 584ce30d22ccc79d822da825559fb7752b41b1f8 | [
"Apache-2.0"
] | 21,851 | 2015-01-05T15:17:19.000Z | 2022-03-31T22:14:25.000Z | nodes/containers/nodes-containers-projected-volumes.adoc | tradej/openshift-docs | 584ce30d22ccc79d822da825559fb7752b41b1f8 | [
"Apache-2.0"
] | 1,681 | 2015-01-06T21:10:24.000Z | 2022-03-28T06:44:50.000Z | :context: nodes-containers-projected-volumes
[id="nodes-containers-projected-volumes"]
= Mapping volumes using projected volumes
include::modules/common-attributes.adoc[]
toc::[]
A _projected volume_ maps several existing volume sources into the same directory.
The following types of volume sources can be projected:
* Secrets
* Config Maps
* Downward API
[NOTE]
====
All sources are required to be in the same namespace as the pod.
====
// The following include statements pull in the module files that comprise
// the assembly. Include any combination of concept, procedure, or reference
// modules required to cover the user story. You can also include other
// assemblies.
include::modules/nodes-containers-projected-volumes-about.adoc[leveloffset=+1]
include::modules/nodes-containers-projected-volumes-creating.adoc[leveloffset=+1]
| 22.552632 | 82 | 0.775963 |
ae3949036bfca70ffcbe2a3e4ea70f04988137ba | 462 | adoc | AsciiDoc | ontrack-dsl/src/main/resources/net.nemerosa.ontrack.dsl.properties.ProjectProperties/git.adoc | pavelerofeev/ontrack | a71e281890e831fc56d4eb26ee68ed59bd157f2f | [
"MIT"
] | 102 | 2015-01-22T14:34:53.000Z | 2022-03-25T09:58:04.000Z | ontrack-dsl/src/main/resources/net.nemerosa.ontrack.dsl.properties.ProjectProperties/git.adoc | pavelerofeev/ontrack | a71e281890e831fc56d4eb26ee68ed59bd157f2f | [
"MIT"
] | 736 | 2015-01-01T13:27:13.000Z | 2022-03-24T18:39:39.000Z | ontrack-dsl/src/main/resources/net.nemerosa.ontrack.dsl.properties.ProjectProperties/git.adoc | pavelerofeev/ontrack | a71e281890e831fc56d4eb26ee68ed59bd157f2f | [
"MIT"
] | 35 | 2015-02-05T22:12:38.000Z | 2021-06-24T08:11:19.000Z | Associates a project with a <<usage-git,Git configuration>>.
`def git(String name)`
Gets the associated Git configuration:
`def getGit()`
Example:
[source,groovy]
----
ontrack.configure {
git 'ontrack', remote: 'https://github.com/nemerosa/ontrack.git', user: 'test', password: 'secret'
}
ontrack.project('project') {
config {
git 'ontrack'
}
}
def cfg = ontrack.project('project').config.git
assert cfg.configuration.name == 'ontrack'
----
| 19.25 | 101 | 0.683983 |
9e6d950578fc93cec2c51abff32bc337c03c6f3a | 1,270 | adoc | AsciiDoc | docs/quick-start-guide.adoc | bimanaev/ggr | 29e82c6f36f660852e3e6b25a7571b4dc69c8a45 | [
"Apache-2.0"
] | 274 | 2017-04-14T20:38:18.000Z | 2022-03-04T22:34:26.000Z | docs/quick-start-guide.adoc | bimanaev/ggr | 29e82c6f36f660852e3e6b25a7571b4dc69c8a45 | [
"Apache-2.0"
] | 296 | 2017-04-13T12:05:12.000Z | 2022-03-18T11:36:23.000Z | docs/quick-start-guide.adoc | bimanaev/ggr | 29e82c6f36f660852e3e6b25a7571b4dc69c8a45 | [
"Apache-2.0"
] | 69 | 2017-04-15T09:43:38.000Z | 2022-03-08T16:10:15.000Z | == Quick Start Guide
To use Go Grid Router, do the following:
. Install http://docker.com/[Docker] on the host
. Create configuration directory:
+
----
$ mkdir -p /etc/grid-router/quota
----
. Create ```users.htpasswd``` file:
+
----
$ htpasswd -bc /etc/grid-router/users.htpasswd test test-password
----
. Start https://aerokube.com/selenoid/latest/[Selenoid] on host `selenoid.example.com` and port `4444`.
. Create a quota file (using the correct browser name and version):
+
----
$ cat /etc/grid-router/quota/test.xml
<qa:browsers xmlns:qa="urn:config.gridrouter.qatools.ru">
<browser name="firefox" defaultVersion="88.0">
<version number="88.0">
<region name="1">
<host name="selenoid.example.com" port="4444" count="1"/>
</region>
</version>
</browser>
</qa:browsers>
----
+
NOTE: The file name should correspond to the user name you added to the `htpasswd` file.
For the user `test` we added in the previous steps, you should create `test.xml`.
. Start Ggr container:
+
----
# docker run -d --name ggr \
    -v /etc/grid-router/:/etc/grid-router:ro \
    --net host aerokube/ggr:latest-release
----
. Access Ggr on port 4444 in the same way you would a Selenium Hub, but using the following URL:
+
----
http://test:test-password@localhost:4444/wd/hub
----
| 24.901961 | 103 | 0.672441 |
66d2bbcb1c100bbec2b708f9b4d0d06beb7a99e3 | 4,189 | aj | AspectJ | src/main/java/org/esupportail/sgc/services/crous/CrousErrorLog_Roo_Finder.aj | hagneva1/esup-sgc | 9c5fbe3be1e0787314666ef49e1335016a9bc495 | [
"Apache-2.0"
] | 6 | 2017-11-16T08:19:23.000Z | 2018-10-15T13:14:34.000Z | src/main/java/org/esupportail/sgc/services/crous/CrousErrorLog_Roo_Finder.aj | hagneva1/esup-sgc | 9c5fbe3be1e0787314666ef49e1335016a9bc495 | [
"Apache-2.0"
] | 11 | 2020-03-04T22:07:24.000Z | 2022-02-16T00:55:14.000Z | src/main/java/org/esupportail/sgc/services/crous/CrousErrorLog_Roo_Finder.aj | hagneva1/esup-sgc | 9c5fbe3be1e0787314666ef49e1335016a9bc495 | [
"Apache-2.0"
] | 1 | 2018-03-14T10:12:11.000Z | 2018-03-14T10:12:11.000Z | // WARNING: DO NOT EDIT THIS FILE. THIS FILE IS MANAGED BY SPRING ROO.
// You may push code into the target .java compilation unit if you wish to edit any member(s).
package org.esupportail.sgc.services.crous;
import javax.persistence.EntityManager;
import javax.persistence.TypedQuery;
import org.esupportail.sgc.domain.Card;
import org.esupportail.sgc.domain.User;
import org.esupportail.sgc.services.crous.CrousErrorLog;
privileged aspect CrousErrorLog_Roo_Finder {
public static Long CrousErrorLog.countFindCrousErrorLogsByCard(Card card) {
if (card == null) throw new IllegalArgumentException("The card argument is required");
EntityManager em = CrousErrorLog.entityManager();
TypedQuery q = em.createQuery("SELECT COUNT(o) FROM CrousErrorLog AS o WHERE o.card = :card", Long.class);
q.setParameter("card", card);
return ((Long) q.getSingleResult());
}
public static Long CrousErrorLog.countFindCrousErrorLogsByUserAccount(User userAccount) {
if (userAccount == null) throw new IllegalArgumentException("The userAccount argument is required");
EntityManager em = CrousErrorLog.entityManager();
TypedQuery q = em.createQuery("SELECT COUNT(o) FROM CrousErrorLog AS o WHERE o.userAccount = :userAccount", Long.class);
q.setParameter("userAccount", userAccount);
return ((Long) q.getSingleResult());
}
public static TypedQuery<CrousErrorLog> CrousErrorLog.findCrousErrorLogsByCard(Card card) {
if (card == null) throw new IllegalArgumentException("The card argument is required");
EntityManager em = CrousErrorLog.entityManager();
TypedQuery<CrousErrorLog> q = em.createQuery("SELECT o FROM CrousErrorLog AS o WHERE o.card = :card", CrousErrorLog.class);
q.setParameter("card", card);
return q;
}
public static TypedQuery<CrousErrorLog> CrousErrorLog.findCrousErrorLogsByCard(Card card, String sortFieldName, String sortOrder) {
if (card == null) throw new IllegalArgumentException("The card argument is required");
EntityManager em = CrousErrorLog.entityManager();
StringBuilder queryBuilder = new StringBuilder("SELECT o FROM CrousErrorLog AS o WHERE o.card = :card");
if (fieldNames4OrderClauseFilter.contains(sortFieldName)) {
queryBuilder.append(" ORDER BY ").append(sortFieldName);
if ("ASC".equalsIgnoreCase(sortOrder) || "DESC".equalsIgnoreCase(sortOrder)) {
queryBuilder.append(" ").append(sortOrder);
}
}
TypedQuery<CrousErrorLog> q = em.createQuery(queryBuilder.toString(), CrousErrorLog.class);
q.setParameter("card", card);
return q;
}
public static TypedQuery<CrousErrorLog> CrousErrorLog.findCrousErrorLogsByUserAccount(User userAccount) {
if (userAccount == null) throw new IllegalArgumentException("The userAccount argument is required");
EntityManager em = CrousErrorLog.entityManager();
TypedQuery<CrousErrorLog> q = em.createQuery("SELECT o FROM CrousErrorLog AS o WHERE o.userAccount = :userAccount", CrousErrorLog.class);
q.setParameter("userAccount", userAccount);
return q;
}
public static TypedQuery<CrousErrorLog> CrousErrorLog.findCrousErrorLogsByUserAccount(User userAccount, String sortFieldName, String sortOrder) {
if (userAccount == null) throw new IllegalArgumentException("The userAccount argument is required");
EntityManager em = CrousErrorLog.entityManager();
StringBuilder queryBuilder = new StringBuilder("SELECT o FROM CrousErrorLog AS o WHERE o.userAccount = :userAccount");
if (fieldNames4OrderClauseFilter.contains(sortFieldName)) {
queryBuilder.append(" ORDER BY ").append(sortFieldName);
if ("ASC".equalsIgnoreCase(sortOrder) || "DESC".equalsIgnoreCase(sortOrder)) {
queryBuilder.append(" ").append(sortOrder);
}
}
TypedQuery<CrousErrorLog> q = em.createQuery(queryBuilder.toString(), CrousErrorLog.class);
q.setParameter("userAccount", userAccount);
return q;
}
}
| 54.402597 | 149 | 0.70709 |
573db5cb0e5e0da66aaefb7fedc6157a83938564 | 557 | aj | AspectJ | src/main/java/fr/univrouen/poste/domain/AppliVersion_Roo_JavaBean.aj | EsupPortail/esup-dematec | 75a9ad4db07c408645ac76613e5b187c3cd8910c | [
"Apache-2.0"
] | 4 | 2015-02-08T21:49:11.000Z | 2021-12-20T14:40:15.000Z | src/main/java/fr/univrouen/poste/domain/AppliVersion_Roo_JavaBean.aj | EsupPortail/esup-dematec | 75a9ad4db07c408645ac76613e5b187c3cd8910c | [
"Apache-2.0"
] | 17 | 2015-01-26T16:32:01.000Z | 2022-02-16T00:55:15.000Z | src/main/java/fr/univrouen/poste/domain/AppliVersion_Roo_JavaBean.aj | EsupPortail/esup-dematec | 75a9ad4db07c408645ac76613e5b187c3cd8910c | [
"Apache-2.0"
] | 3 | 2015-10-07T22:13:24.000Z | 2017-04-26T12:55:08.000Z | // WARNING: DO NOT EDIT THIS FILE. THIS FILE IS MANAGED BY SPRING ROO.
// You may push code into the target .java compilation unit if you wish to edit any member(s).
package fr.univrouen.poste.domain;
import fr.univrouen.poste.domain.AppliVersion;
privileged aspect AppliVersion_Roo_JavaBean {
public String AppliVersion.getEsupDematEcVersion() {
return this.esupDematEcVersion;
}
public void AppliVersion.setEsupDematEcVersion(String esupDematEcVersion) {
this.esupDematEcVersion = esupDematEcVersion;
}
}
| 29.315789 | 94 | 0.741472 |
4313efa5d169693ec0ecaae8b025d2f40d65d627 | 2,811 | aj | AspectJ | src/main/java/com/etshost/msu/entity/Entity_Roo_Jpa_ActiveRecord.aj | jintrone/FlintEatsServer | 142b9717e1c62cdaf83df64c48678c182a634c3e | [
"MIT"
] | null | null | null | src/main/java/com/etshost/msu/entity/Entity_Roo_Jpa_ActiveRecord.aj | jintrone/FlintEatsServer | 142b9717e1c62cdaf83df64c48678c182a634c3e | [
"MIT"
] | null | null | null | src/main/java/com/etshost/msu/entity/Entity_Roo_Jpa_ActiveRecord.aj | jintrone/FlintEatsServer | 142b9717e1c62cdaf83df64c48678c182a634c3e | [
"MIT"
] | 1 | 2019-05-30T18:56:58.000Z | 2019-05-30T18:56:58.000Z | // WARNING: DO NOT EDIT THIS FILE. THIS FILE IS MANAGED BY SPRING ROO.
// You may push code into the target .java compilation unit if you wish to edit any member(s).
package com.etshost.msu.entity;
import com.etshost.msu.entity.Entity;
import java.util.List;
import javax.persistence.EntityManager;
privileged aspect Entity_Roo_Jpa_ActiveRecord {
public static final List<String> Entity.fieldNames4OrderClauseFilter = java.util.Arrays.asList("logger", "entityManager", "id", "created", "modified", "status", "version", "tags", "comments");
public static final EntityManager Entity.entityManager() {
EntityManager em = new Entity() {
}.entityManager;
if (em == null) throw new IllegalStateException("Entity manager has not been injected (is the Spring Aspects JAR configured as an AJC/AJDT aspects library?)");
return em;
}
public static long Entity.countEntitys() {
return entityManager().createQuery("SELECT COUNT(o) FROM Entity o", Long.class).getSingleResult();
}
public static List<Entity> Entity.findAllEntitys() {
return entityManager().createQuery("SELECT o FROM Entity o", Entity.class).getResultList();
}
public static List<Entity> Entity.findAllEntitys(String sortFieldName, String sortOrder) {
String jpaQuery = "SELECT o FROM Entity o";
if (fieldNames4OrderClauseFilter.contains(sortFieldName)) {
jpaQuery = jpaQuery + " ORDER BY " + sortFieldName;
if ("ASC".equalsIgnoreCase(sortOrder) || "DESC".equalsIgnoreCase(sortOrder)) {
jpaQuery = jpaQuery + " " + sortOrder;
}
}
return entityManager().createQuery(jpaQuery, Entity.class).getResultList();
}
public static Entity Entity.findEntity(Long id) {
if (id == null) return null;
return entityManager().find(Entity.class, id);
}
public static List<Entity> Entity.findEntityEntries(int firstResult, int maxResults) {
return entityManager().createQuery("SELECT o FROM Entity o", Entity.class).setFirstResult(firstResult).setMaxResults(maxResults).getResultList();
}
public static List<Entity> Entity.findEntityEntries(int firstResult, int maxResults, String sortFieldName, String sortOrder) {
String jpaQuery = "SELECT o FROM Entity o";
if (fieldNames4OrderClauseFilter.contains(sortFieldName)) {
jpaQuery = jpaQuery + " ORDER BY " + sortFieldName;
if ("ASC".equalsIgnoreCase(sortOrder) || "DESC".equalsIgnoreCase(sortOrder)) {
jpaQuery = jpaQuery + " " + sortOrder;
}
}
return entityManager().createQuery(jpaQuery, Entity.class).setFirstResult(firstResult).setMaxResults(maxResults).getResultList();
}
}
| 46.081967 | 196 | 0.678406 |
56eb9a4c272fbd6c99d230236a3a31f7cbce625e | 1,342 | aj | AspectJ | java-api/properties/java/io/Reader_ManipulateAfterCloseMonitorAspect.aj | owolabileg/property-db | ccbe0137ba949d474303a7ba48c9360dc37be888 | [
"MIT"
] | null | null | null | java-api/properties/java/io/Reader_ManipulateAfterCloseMonitorAspect.aj | owolabileg/property-db | ccbe0137ba949d474303a7ba48c9360dc37be888 | [
"MIT"
] | null | null | null | java-api/properties/java/io/Reader_ManipulateAfterCloseMonitorAspect.aj | owolabileg/property-db | ccbe0137ba949d474303a7ba48c9360dc37be888 | [
"MIT"
] | null | null | null | package mop;
import java.io.*;
import rvmonitorrt.MOPLogging;
import rvmonitorrt.MOPLogging.Level;
import java.util.concurrent.*;
import java.util.concurrent.locks.*;
import java.util.*;
import rvmonitorrt.*;
import java.lang.ref.*;
import org.aspectj.lang.*;
public aspect Reader_ManipulateAfterCloseMonitorAspect implements rvmonitorrt.RVMObject {
public Reader_ManipulateAfterCloseMonitorAspect(){
}
// Declarations for the Lock
static ReentrantLock Reader_ManipulateAfterClose_MOPLock = new ReentrantLock();
static Condition Reader_ManipulateAfterClose_MOPLock_cond = Reader_ManipulateAfterClose_MOPLock.newCondition();
pointcut MOP_CommonPointCut() : !within(rvmonitorrt.RVMObject+) && !adviceexecution();
pointcut Reader_ManipulateAfterClose_close(Reader r) : (call(* Reader+.close(..)) && target(r)) && MOP_CommonPointCut();
before (Reader r) : Reader_ManipulateAfterClose_close(r) {
Reader_ManipulateAfterCloseRuntimeMonitor.closeEvent(r);
}
pointcut Reader_ManipulateAfterClose_manipulate(Reader r) : ((call(* Reader+.read(..)) || call(* Reader+.ready(..)) || call(* Reader+.mark(..)) || call(* Reader+.reset(..)) || call(* Reader+.skip(..))) && target(r)) && MOP_CommonPointCut();
before (Reader r) : Reader_ManipulateAfterClose_manipulate(r) {
Reader_ManipulateAfterCloseRuntimeMonitor.manipulateEvent(r);
}
}
| 41.9375 | 241 | 0.772727 |
dbe4ca8fbef05108b4cde4f0906750171f5dbb07 | 1,500 | aj | AspectJ | java-api/properties/java/util/Map_UnsynchronizedAddAllMonitorAspect.aj | owolabileg/property-db | ccbe0137ba949d474303a7ba48c9360dc37be888 | [
"MIT"
] | null | null | null | java-api/properties/java/util/Map_UnsynchronizedAddAllMonitorAspect.aj | owolabileg/property-db | ccbe0137ba949d474303a7ba48c9360dc37be888 | [
"MIT"
] | null | null | null | java-api/properties/java/util/Map_UnsynchronizedAddAllMonitorAspect.aj | owolabileg/property-db | ccbe0137ba949d474303a7ba48c9360dc37be888 | [
"MIT"
] | null | null | null | package mop;
import java.util.*;
import rvmonitorrt.MOPLogging;
import rvmonitorrt.MOPLogging.Level;
import java.util.concurrent.*;
import java.util.concurrent.locks.*;
import rvmonitorrt.*;
import java.lang.ref.*;
import org.aspectj.lang.*;
public aspect Map_UnsynchronizedAddAllMonitorAspect implements rvmonitorrt.RVMObject {
public Map_UnsynchronizedAddAllMonitorAspect(){
}
// Declarations for the Lock
static ReentrantLock Map_UnsynchronizedAddAll_MOPLock = new ReentrantLock();
static Condition Map_UnsynchronizedAddAll_MOPLock_cond = Map_UnsynchronizedAddAll_MOPLock.newCondition();
pointcut MOP_CommonPointCut() : !within(rvmonitorrt.RVMObject+) && !adviceexecution();
pointcut Map_UnsynchronizedAddAll_modify(Map s) : ((call(* Map+.clear(..)) || call(* Map+.put*(..)) || call(* Map+.remove*(..))) && target(s)) && MOP_CommonPointCut();
before (Map s) : Map_UnsynchronizedAddAll_modify(s) {
Map_UnsynchronizedAddAllRuntimeMonitor.modifyEvent(s);
}
pointcut Map_UnsynchronizedAddAll_enter(Map t, Map s) : (call(boolean Map+.putAll(..)) && target(t) && args(s)) && MOP_CommonPointCut();
before (Map t, Map s) : Map_UnsynchronizedAddAll_enter(t, s) {
Map_UnsynchronizedAddAllRuntimeMonitor.enterEvent(t, s);
}
pointcut Map_UnsynchronizedAddAll_leave(Map t, Map s) : (call(void Map+.putAll(..)) && target(t) && args(s)) && MOP_CommonPointCut();
after (Map t, Map s) : Map_UnsynchronizedAddAll_leave(t, s) {
Map_UnsynchronizedAddAllRuntimeMonitor.leaveEvent(t, s);
}
}
| 41.666667 | 168 | 0.759333 |
4ee99bdfdf89c7bc4517b3f0401db01416c26dca | 147 | asm | Assembly | other.7z/SFC.7z/SFC/ソースデータ/ゼルダの伝説神々のトライフォース/英語_PAL/pal_asm/zel_msge3.asm | prismotizm/gigaleak | d082854866186a05fec4e2fdf1def0199e7f3098 | [
"MIT"
] | null | null | null | other.7z/SFC.7z/SFC/ソースデータ/ゼルダの伝説神々のトライフォース/英語_PAL/pal_asm/zel_msge3.asm | prismotizm/gigaleak | d082854866186a05fec4e2fdf1def0199e7f3098 | [
"MIT"
] | null | null | null | other.7z/SFC.7z/SFC/ソースデータ/ゼルダの伝説神々のトライフォース/英語_PAL/pal_asm/zel_msge3.asm | prismotizm/gigaleak | d082854866186a05fec4e2fdf1def0199e7f3098 | [
"MIT"
] | null | null | null | Name: zel_msge3.asm
Type: file
Size: 23837
Last-Modified: '2016-05-13T04:25:37Z'
SHA-1: D63F772A5BFDB21D66BCCECF53553909DF672783
Description: null
| 21 | 47 | 0.816327 |
be2048a071545c3233f775617dc828a592b960a6 | 575 | asm | Assembly | code/file/load-sheets.asm | abekermsx/skooted | ea0eb5c0c2703c45807477bfdcda0ad1ad9119d8 | [
"MIT"
] | 3 | 2021-10-06T20:52:11.000Z | 2021-11-29T11:31:55.000Z | code/file/load-sheets.asm | abekermsx/skooted | ea0eb5c0c2703c45807477bfdcda0ad1ad9119d8 | [
"MIT"
] | null | null | null | code/file/load-sheets.asm | abekermsx/skooted | ea0eb5c0c2703c45807477bfdcda0ad1ad9119d8 | [
"MIT"
] | null | null | null |
; Load the three 2048-byte sheet blocks from disk into SKOOTER.SHEETS.
load_sheets:
	call open_file
	ret nz				; bail out if the file could not be opened
	ld de,SKOOTER.SHEETS		; destination pointer, advanced by ldir below
	ld b,3				; number of 2048-byte blocks to read
load_sheets_loop:
	push bc
	push de
	ld de,load_buffer
	ld c,_SETDTA			; point the disk transfer address at load_buffer
	call BDOSBAS
	ld de,fcb
	ld hl,2048
	ld c,_RDBLK			; read 2048 bytes of the file into load_buffer
	call bdos_wrapper
	pop de
	pop bc
	or a
	ret nz				; bail out on a read error
	push bc
	ld hl,load_buffer
	ld bc,2048
	ldir				; copy the block to its destination, advancing de
	pop bc
	djnz load_sheets_loop
	call close_file
	ret
load_buffer: equ $c000
| 14.375 | 29 | 0.46087 |
3747e58148f22798cba38751916317727eed521b | 464 | asm | Assembly | programs/oeis/171/A171784.asm | karttu/loda | 9c3b0fc57b810302220c044a9d17db733c76a598 | [
"Apache-2.0"
] | 1 | 2021-03-15T11:38:20.000Z | 2021-03-15T11:38:20.000Z | programs/oeis/171/A171784.asm | karttu/loda | 9c3b0fc57b810302220c044a9d17db733c76a598 | [
"Apache-2.0"
] | null | null | null | programs/oeis/171/A171784.asm | karttu/loda | 9c3b0fc57b810302220c044a9d17db733c76a598 | [
"Apache-2.0"
] | null | null | null | ; A171784: Fourth smallest divisor of smallest number having exactly n divisors.
; 6,8,4,8,4,4,4,8,4,8,4,4,4,8,4,8,4,4,4,8,4,4,4,4,4,8,4,8,4,4,4,4,4,8,4,4,4,8,4,8,4,4,4,8,4,4,4,4,4,8,4,4,4,4,4,8,4,8,4,4,4,4,4,8,4,4,4,8,4,8,4,4,4,4,4,8,4,4,4,8,4,4,4,4,4,8,4,4,4,4,4,4,4,8,4,4,4,8,4,8,4,4,4,8,4
mov $1,$0
mov $2,$0
cmp $2,0
add $0,$2
div $1,$0
add $0,3
add $1,1
cal $0,10051 ; Characteristic function of primes: 1 if n is prime, else 0.
mul $1,$0
mul $1,2
add $1,4
| 30.933333 | 211 | 0.594828 |
d25ff1eaeb9f4c1ab3fe0af13e7a657757066173 | 380 | asm | Assembly | programs/oeis/113/A113909.asm | jmorken/loda | 99c09d2641e858b074f6344a352d13bc55601571 | [
"Apache-2.0"
] | 1 | 2021-03-15T11:38:20.000Z | 2021-03-15T11:38:20.000Z | programs/oeis/113/A113909.asm | jmorken/loda | 99c09d2641e858b074f6344a352d13bc55601571 | [
"Apache-2.0"
] | null | null | null | programs/oeis/113/A113909.asm | jmorken/loda | 99c09d2641e858b074f6344a352d13bc55601571 | [
"Apache-2.0"
] | null | null | null | ; A113909: Square table of odd numbers which are neither squares nor one less than squares, read by antidiagonals.
; 5,7,11,13,17,19,21,23,27,29,31,33,37,39,41,43,45,47,51,53,55,57,59,61,65,67,69,71,73,75,77,79,83,85,87,89,91,93,95,97,101,103,105,107,109,111,113,115,117,119,123,125,127,129,131
mov $1,$0
lpb $0
add $1,1
add $2,1
sub $0,$2
sub $0,1
lpe
mul $1,2
add $1,5
| 29.230769 | 179 | 0.676316 |
40cafaae417d6c157e6b390998d05294dc600869 | 5,673 | asm | Assembly | Transynther/x86/_processed/NONE/_xt_/i3-7100_9_0xca_notsx.log_21829_1235.asm | ljhsiun2/medusa | 67d769b8a2fb42c538f10287abaf0e6dbb463f0c | [
"MIT"
] | 9 | 2020-08-13T19:41:58.000Z | 2022-03-30T12:22:51.000Z | Transynther/x86/_processed/NONE/_xt_/i3-7100_9_0xca_notsx.log_21829_1235.asm | ljhsiun2/medusa | 67d769b8a2fb42c538f10287abaf0e6dbb463f0c | [
"MIT"
] | 1 | 2021-04-29T06:29:35.000Z | 2021-05-13T21:02:30.000Z | Transynther/x86/_processed/NONE/_xt_/i3-7100_9_0xca_notsx.log_21829_1235.asm | ljhsiun2/medusa | 67d769b8a2fb42c538f10287abaf0e6dbb463f0c | [
"MIT"
] | 3 | 2020-07-14T17:07:07.000Z | 2022-03-21T01:12:22.000Z | .global s_prepare_buffers
s_prepare_buffers:
push %r10
push %r13
push %r14
push %r15
push %rcx
push %rdi
push %rsi
lea addresses_normal_ht+0x95fc, %rsi
lea addresses_D_ht+0xa9fc, %rdi
nop
nop
nop
nop
cmp $39041, %r10
mov $93, %rcx
rep movsl
nop
cmp $58053, %rdi
lea addresses_UC_ht+0x5234, %r14
nop
nop
nop
nop
nop
inc %r15
movups (%r14), %xmm0
vpextrq $1, %xmm0, %r10
nop
nop
nop
nop
nop
and $54481, %r10
lea addresses_A_ht+0xddfc, %r15
nop
sub $15959, %r13
mov (%r15), %ecx
nop
nop
nop
nop
nop
cmp %r13, %r13
lea addresses_WC_ht+0x39fc, %r10
clflush (%r10)
nop
nop
nop
nop
sub $59616, %rdi
mov (%r10), %r15d
nop
nop
nop
cmp $33826, %rcx
lea addresses_UC_ht+0x1dc4e, %rcx
nop
nop
nop
sub %r14, %r14
and $0xffffffffffffffc0, %rcx
vmovaps (%rcx), %ymm7
vextracti128 $0, %ymm7, %xmm7
vpextrq $0, %xmm7, %rsi
add $50671, %rcx
lea addresses_A_ht+0x9acc, %rcx
nop
nop
nop
cmp $54338, %r14
mov (%rcx), %r15w
nop
nop
nop
nop
nop
cmp %r14, %r14
lea addresses_D_ht+0x1c94, %rcx
xor $19008, %rsi
mov (%rcx), %r13d
nop
nop
nop
sub $41038, %r14
pop %rsi
pop %rdi
pop %rcx
pop %r15
pop %r14
pop %r13
pop %r10
ret
.global s_faulty_load
s_faulty_load:
push %r11
push %r13
push %rax
push %rbp
push %rdi
push %rdx
// Faulty Load
lea addresses_UC+0xa9fc, %rax
nop
nop
add %rdx, %rdx
movups (%rax), %xmm6
vpextrq $0, %xmm6, %rbp
lea oracles, %rax
and $0xff, %rbp
shlq $12, %rbp
mov (%rax,%rbp,1), %rbp
pop %rdx
pop %rdi
pop %rbp
pop %rax
pop %r13
pop %r11
ret
/*
<gen_faulty_load>
[REF]
{'src': {'same': False, 'congruent': 0, 'NT': False, 'type': 'addresses_UC', 'size': 1, 'AVXalign': False}, 'OP': 'LOAD'}
[Faulty Load]
{'src': {'same': True, 'congruent': 0, 'NT': False, 'type': 'addresses_UC', 'size': 16, 'AVXalign': False}, 'OP': 'LOAD'}
<gen_prepare_buffer>
{'src': {'type': 'addresses_normal_ht', 'congruent': 10, 'same': False}, 'OP': 'REPM', 'dst': {'type': 'addresses_D_ht', 'congruent': 11, 'same': False}}
{'src': {'same': False, 'congruent': 3, 'NT': False, 'type': 'addresses_UC_ht', 'size': 16, 'AVXalign': False}, 'OP': 'LOAD'}
{'src': {'same': False, 'congruent': 8, 'NT': False, 'type': 'addresses_A_ht', 'size': 4, 'AVXalign': False}, 'OP': 'LOAD'}
{'src': {'same': False, 'congruent': 11, 'NT': False, 'type': 'addresses_WC_ht', 'size': 4, 'AVXalign': False}, 'OP': 'LOAD'}
{'src': {'same': False, 'congruent': 1, 'NT': False, 'type': 'addresses_UC_ht', 'size': 32, 'AVXalign': True}, 'OP': 'LOAD'}
{'src': {'same': False, 'congruent': 4, 'NT': False, 'type': 'addresses_A_ht', 'size': 2, 'AVXalign': False}, 'OP': 'LOAD'}
{'src': {'same': False, 'congruent': 3, 'NT': False, 'type': 'addresses_D_ht', 'size': 4, 'AVXalign': True}, 'OP': 'LOAD'}
{'37': 21829}
37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 
37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37
*/
| 40.234043 | 2,999 | 0.658558 |
5ffbf9958a5996698a7f73dbd200bcf251008d7f | 1,181 | asm | Assembly | _build/dispatcher/jmp_ippsHashDuplicate_rmf_d64fa443.asm | zyktrcn/ippcp | b0bbe9bbb750a7cf4af5914dd8e6776a8d544466 | [
"Apache-2.0"
] | 1 | 2021-10-04T10:21:54.000Z | 2021-10-04T10:21:54.000Z | _build/dispatcher/jmp_ippsHashDuplicate_rmf_d64fa443.asm | zyktrcn/ippcp | b0bbe9bbb750a7cf4af5914dd8e6776a8d544466 | [
"Apache-2.0"
] | null | null | null | _build/dispatcher/jmp_ippsHashDuplicate_rmf_d64fa443.asm | zyktrcn/ippcp | b0bbe9bbb750a7cf4af5914dd8e6776a8d544466 | [
"Apache-2.0"
] | null | null | null | extern m7_ippsHashDuplicate_rmf:function
extern n8_ippsHashDuplicate_rmf:function
extern y8_ippsHashDuplicate_rmf:function
extern e9_ippsHashDuplicate_rmf:function
extern l9_ippsHashDuplicate_rmf:function
extern n0_ippsHashDuplicate_rmf:function
extern k0_ippsHashDuplicate_rmf:function
extern ippcpJumpIndexForMergedLibs
extern ippcpSafeInit:function
segment .data
align 8
dq .Lin_ippsHashDuplicate_rmf
.Larraddr_ippsHashDuplicate_rmf:
dq m7_ippsHashDuplicate_rmf
dq n8_ippsHashDuplicate_rmf
dq y8_ippsHashDuplicate_rmf
dq e9_ippsHashDuplicate_rmf
dq l9_ippsHashDuplicate_rmf
dq n0_ippsHashDuplicate_rmf
dq k0_ippsHashDuplicate_rmf
segment .text
global ippsHashDuplicate_rmf:function (ippsHashDuplicate_rmf.LEndippsHashDuplicate_rmf - ippsHashDuplicate_rmf)
.Lin_ippsHashDuplicate_rmf:
db 0xf3, 0x0f, 0x1e, 0xfa
call ippcpSafeInit wrt ..plt
align 16
ippsHashDuplicate_rmf:
db 0xf3, 0x0f, 0x1e, 0xfa
mov rax, qword [rel ippcpJumpIndexForMergedLibs wrt ..gotpc]
movsxd rax, dword [rax]
lea r11, [rel .Larraddr_ippsHashDuplicate_rmf]
mov r11, qword [r11+rax*8]
jmp r11
.LEndippsHashDuplicate_rmf:
| 30.282051 | 111 | 0.813717 |
260da25c4ae35430d4f9cf9974a50725b5313758 | 81 | asm | Assembly | src/main/fragment/mos6502-common/pwsz1_derefidx_vbuyy_gt_vwsc1_then_la1.asm | jbrandwood/kickc | d4b68806f84f8650d51b0e3ef254e40f38b0ffad | [
"MIT"
] | 2 | 2022-03-01T02:21:14.000Z | 2022-03-01T04:33:35.000Z | src/main/fragment/mos6502-common/pwsz1_derefidx_vbuyy_gt_vwsc1_then_la1.asm | jbrandwood/kickc | d4b68806f84f8650d51b0e3ef254e40f38b0ffad | [
"MIT"
] | null | null | null | src/main/fragment/mos6502-common/pwsz1_derefidx_vbuyy_gt_vwsc1_then_la1.asm | jbrandwood/kickc | d4b68806f84f8650d51b0e3ef254e40f38b0ffad | [
"MIT"
] | null | null | null | lda #<{c1}
cmp ({z1}),y
iny
lda #>{c1}
sbc ({z1}),y
bvc !+
eor #$80
!:
bmi {la1}
| 8.1 | 12 | 0.481481 |
0e293f663b63808dcc7a3d2c43102bcabc1a5af0 | 1,886 | asm | Assembly | software/profi/net-tools/src/uGophy/dos/ochkodos.asm | solegstar/karabas-pro | 0ff5f234eee10cd2b0ed0eb9286c47dd33e1f599 | [
"MIT"
] | 26 | 2020-07-25T15:00:32.000Z | 2022-03-22T19:30:04.000Z | software/profi/net-tools/src/uGophy/dos/ochkodos.asm | solegstar/karabas-pro | 0ff5f234eee10cd2b0ed0eb9286c47dd33e1f599 | [
"MIT"
] | 42 | 2020-07-29T14:29:18.000Z | 2022-03-22T11:34:28.000Z | software/profi/net-tools/src/uGophy/dos/ochkodos.asm | solegstar/karabas-pro | 0ff5f234eee10cd2b0ed0eb9286c47dd33e1f599 | [
"MIT"
] | 7 | 2020-09-07T14:21:31.000Z | 2022-01-24T17:18:56.000Z |
module Dos
page = 1
include "ff.equ"
init:
di
ld a, page : call changeBank
ld hl, bin, de, #c000 : call sauk
mount:
ld l,1 ; Check for valid
push hl
ld bc,drpath_zsd : ld de,ffs
call FF_MOUNT
pop hl
or a
jp nz, error
ei
sub a : call changeBank
ret
; DE - ASCIZ path
cwd:
ld a, page : call changeBank
call FF_CHDIR
push af
sub a : call changeBank
pop af
ret
; DE - ASCIZ path
mkdir:
ld a, page : call changeBank
call FF_MKDIR
push af
sub a : call changeBank
pop af
ret
; L - file mode:
; BC - filename
fopen:
push bc : ld a, page : call changeBank : pop bc
push hl
ld de, file
call FF_OPEN
pop hl
push af : sub a : call changeBank : pop af
ret
; BC - buffer
; DE - buffer size
fread:
push bc : ld a, page : call changeBank : pop bc
ld hl, rwres
push hl
push de
ld de, file
call FF_READ
pop hl
pop hl
ret
; BC - buffer
; DE - buffer size
fwrite:
push bc : ld a, page : call changeBank : pop bc
ld hl, rwres
push hl
push de
ld de, file
call FF_WRITE
pop hl
pop hl
push af : sub a : call changeBank : pop af
ret
fclose:
push bc : ld a, page : call changeBank : pop bc
ld de, file
call FF_CLOSE
push af : sub a : call changeBank : pop af
ret
error:
add '0'
call putC
ld hl, .msg : call putStringZ
jr $
.msg db " - can't init SD Card or FAT!",13,"Computer halted!",0
ffs defs FATFS_SIZE
file defs FIL_SIZE
rwres defw 0
drpath_zsd defb '0:',0 ; device path to mount. In this case Z-SD
dir defs DIR_SIZE
finfo FILINFO
IFNDEF NOIMAGE
bin:
incbin "fatfs.skv"
bin_size = $ - bin
include "decompressor.asm"
DISPLAY "fatfs size: ", $ - bin
display "FAT FS ENDS: ", $
ENDIF
endmodule
| 15.983051 | 76 | 0.591198 |
cae150afcf056a0993dbbd0cf0409c9df341b385 | 5,914 | asm | Assembly | Transynther/x86/_processed/NONE/_xt_/i9-9900K_12_0xca_notsx.log_21829_1234.asm | ljhsiun2/medusa | 67d769b8a2fb42c538f10287abaf0e6dbb463f0c | [
"MIT"
] | 9 | 2020-08-13T19:41:58.000Z | 2022-03-30T12:22:51.000Z | Transynther/x86/_processed/NONE/_xt_/i9-9900K_12_0xca_notsx.log_21829_1234.asm | ljhsiun2/medusa | 67d769b8a2fb42c538f10287abaf0e6dbb463f0c | [
"MIT"
] | 1 | 2021-04-29T06:29:35.000Z | 2021-05-13T21:02:30.000Z | Transynther/x86/_processed/NONE/_xt_/i9-9900K_12_0xca_notsx.log_21829_1234.asm | ljhsiun2/medusa | 67d769b8a2fb42c538f10287abaf0e6dbb463f0c | [
"MIT"
] | 3 | 2020-07-14T17:07:07.000Z | 2022-03-21T01:12:22.000Z | .global s_prepare_buffers
s_prepare_buffers:
push %r9
push %rax
push %rbp
push %rcx
push %rdi
push %rdx
push %rsi
lea addresses_WT_ht+0x57d6, %rdx
nop
nop
add $13180, %rsi
vmovups (%rdx), %ymm0
vextracti128 $1, %ymm0, %xmm0
vpextrq $0, %xmm0, %rbp
and $65024, %rax
lea addresses_normal_ht+0xd9e, %rsi
lea addresses_A_ht+0x7786, %rdi
add $11727, %r9
mov $16, %rcx
rep movsb
nop
nop
nop
nop
xor $28928, %rax
lea addresses_D_ht+0xa51a, %r9
nop
dec %rcx
movw $0x6162, (%r9)
nop
sub $4545, %r9
lea addresses_normal_ht+0x146e6, %rdi
clflush (%rdi)
xor %rdx, %rdx
mov (%rdi), %rsi
nop
sub $26646, %rsi
lea addresses_WT_ht+0x15e76, %r9
nop
nop
nop
nop
add %rsi, %rsi
movb $0x61, (%r9)
nop
nop
cmp %rax, %rax
lea addresses_WT_ht+0x1bed6, %rbp
nop
nop
nop
nop
nop
add $18849, %rdx
mov $0x6162636465666768, %rdi
movq %rdi, %xmm5
movups %xmm5, (%rbp)
nop
nop
nop
nop
nop
add %rbp, %rbp
lea addresses_WT_ht+0x19cc6, %rsi
lea addresses_A_ht+0x8bd6, %rdi
clflush (%rdi)
nop
nop
nop
nop
nop
add $26784, %rax
mov $79, %rcx
rep movsq
nop
dec %rbp
pop %rsi
pop %rdx
pop %rdi
pop %rcx
pop %rbp
pop %rax
pop %r9
ret
.global s_faulty_load
s_faulty_load:
push %r10
push %r11
push %r14
push %r8
push %rax
push %rbx
// Store
lea addresses_WC+0x17bd6, %r14
nop
dec %r8
mov $0x5152535455565758, %r11
movq %r11, (%r14)
nop
nop
nop
cmp %r10, %r10
// Faulty Load
lea addresses_WC+0x153d6, %r14
nop
nop
nop
dec %r10
mov (%r14), %rbx
lea oracles, %r11
and $0xff, %rbx
shlq $12, %rbx
mov (%r11,%rbx,1), %rbx
pop %rbx
pop %rax
pop %r8
pop %r14
pop %r11
pop %r10
ret
/*
<gen_faulty_load>
[REF]
{'OP': 'LOAD', 'src': {'same': False, 'type': 'addresses_WC', 'NT': False, 'AVXalign': False, 'size': 8, 'congruent': 0}}
{'OP': 'STOR', 'dst': {'same': False, 'type': 'addresses_WC', 'NT': True, 'AVXalign': False, 'size': 8, 'congruent': 11}}
[Faulty Load]
{'OP': 'LOAD', 'src': {'same': True, 'type': 'addresses_WC', 'NT': False, 'AVXalign': False, 'size': 8, 'congruent': 0}}
<gen_prepare_buffer>
{'OP': 'LOAD', 'src': {'same': False, 'type': 'addresses_WT_ht', 'NT': False, 'AVXalign': False, 'size': 32, 'congruent': 7}}
{'OP': 'REPM', 'src': {'same': False, 'congruent': 1, 'type': 'addresses_normal_ht'}, 'dst': {'same': False, 'congruent': 4, 'type': 'addresses_A_ht'}}
{'OP': 'STOR', 'dst': {'same': False, 'type': 'addresses_D_ht', 'NT': False, 'AVXalign': False, 'size': 2, 'congruent': 2}}
{'OP': 'LOAD', 'src': {'same': False, 'type': 'addresses_normal_ht', 'NT': False, 'AVXalign': False, 'size': 8, 'congruent': 3}}
{'OP': 'STOR', 'dst': {'same': False, 'type': 'addresses_WT_ht', 'NT': False, 'AVXalign': False, 'size': 1, 'congruent': 5}}
{'OP': 'STOR', 'dst': {'same': False, 'type': 'addresses_WT_ht', 'NT': False, 'AVXalign': False, 'size': 16, 'congruent': 8}}
{'OP': 'REPM', 'src': {'same': False, 'congruent': 4, 'type': 'addresses_WT_ht'}, 'dst': {'same': True, 'congruent': 10, 'type': 'addresses_A_ht'}}
{'38': 21829}
38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 
38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38
*/
| 40.786207 | 2,999 | 0.658607 |
28926abf586cad24e30eee0885b98a47e3aa675c | 516 | asm | Assembly | programs/oeis/158/A158948.asm | karttu/loda | 9c3b0fc57b810302220c044a9d17db733c76a598 | [
"Apache-2.0"
] | null | null | null | programs/oeis/158/A158948.asm | karttu/loda | 9c3b0fc57b810302220c044a9d17db733c76a598 | [
"Apache-2.0"
] | null | null | null | programs/oeis/158/A158948.asm | karttu/loda | 9c3b0fc57b810302220c044a9d17db733c76a598 | [
"Apache-2.0"
] | null | null | null | ; A158948: Triangle read by rows, left border = natural numbers repeated (1, 1, 2, 2, 3, 3,...); all other columns = (1, 0, 1, 0, 1, 0,...).
; 1,1,1,2,0,1,2,1,0,1,3,0,1,0,1,3,1,0,1,0,1,4,0,1,0,1,0,1,4,1,0,1,0,1,0,1,5,0,1,0,1,0,1,0,1,5,1,0,1,0,1,0,1,0,1
mov $3,2
mov $5,$0
lpb $3,1
mov $0,$5
sub $3,1
add $0,$3
sub $0,1
cal $0,4202 ; Skip 1, take 1, skip 2, take 2, skip 3, take 3, etc.
div $0,2
mov $2,$3
mov $4,$0
lpb $2,1
mov $1,$4
sub $2,1
lpe
lpe
lpb $5,1
sub $1,$4
mov $5,0
lpe
| 21.5 | 140 | 0.521318 |