full_name stringlengths 7 104 | description stringlengths 4 725 ⌀ | topics stringlengths 3 468 ⌀ | readme stringlengths 13 565k ⌀ | label int64 0 1 |
|---|---|---|---|---|
heremaps/here-workspace-examples-java-scala | HERE Workspace Examples for Java and Scala Developers | examples here-workspace java scala | # HERE Workspace Examples for Java and Scala Developers
## Introduction
This repository holds a series of Java and Scala examples that demonstrate typical use cases for the HERE Workspace, a part of the HERE platform. HERE Workspace is an environment for enriching, transforming, and deploying location-centric data.
Go to [HERE platform](https://developer.here.com/products/open-location-platform) to learn more. If you are interested in knowing what the platform offers specifically for Java and Scala developers, visit [this page](https://developer.here.com/documentation/sdk-developer-guide/dev_guide/index.html).
## Prerequisites
To run the examples, you need a HERE Workspace account. If you do not have an account, navigate to [Pricing and Plans](https://developer.here.com/pricing/open-location-platform) to apply for a free trial.
You need to get access credentials and prepare your environment. For more information on how to prepare your environment, see our [guide for Java and Scala developers](https://developer.here.com/documentation/sdk-developer-guide/dev_guide/topics/how-to-use-sdk.html).
## Code Examples
### Processing Sensor Data
The following documents illustrate two use cases around inferring real-world situations from sensor data. [Batch processing](https://developer.here.com/documentation/java-scala-dev/dev_guide/topics/example-use-cases.html#use-case-map-of-recommended-speed-based-on-sensor-observations-and-other-data) is useful when it is important to aggregate sensor input over a longer time period (that is hours and longer). [Stream processing](https://developer.here.com/documentation/java-scala-dev/dev_guide/topics/example-use-cases.html#use-case-turning-sensor-data-into-warnings) is recommended for time-critical use cases, like informing about road hazards.
| Name | Description | Source | Labels / Topics |
| ------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------- | ------------------------------------------------------------------ |
| Infer Stop Sign Locations from Automotive Sensor Observations | The example takes archived sensor data, clusters and path-matches it in a distributed Spark batch environment, and creates a map layer with stop signs at the output. | [Scala](location/scala/spark/infer-stop-signs-from-sensors) | Location Library, Data Client Library, Spark, Batch, GeoJSON, SDII |
| Stream Path Matcher | The example covers a similar but time-critical use case. It performs map matching only: it takes sensor data from a stream, map-matches it in Flink, and puts the results on an output stream. | [Java](location/java/flink/stream-path-matcher) | Location Library, Data Client Library, Flink, Stream, SDII |
### Incremental Map Processing & Validation
For more information, see a use case illustration of [keeping a client map up to date with incremental processing](https://developer.here.com/documentation/java-scala-dev/dev_guide/topics/example-use-cases.html#use-case-incremental-map-processing).
| Name | Description | Source | Labels / Topics |
| ---------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------- |
| Geometry Lifter | An application that takes level 12 partitions of road topology and geometry and aggregates them to higher-level (i.e. bigger) partitions. | [Java](data-processing/java/geometry-lifter) / [Scala](data-processing/scala/geometry-lifter) | Data Processing Library, Spark, Batch, Protobuf, HERE Map Content |
| Pedestrian Topologies Extraction to GeoJSON | Topologies, accessible by pedestrians, are selected based on the segment attributes and then are transformed into GeoJSON file format and stored in a new catalog. | [Java](data-processing/java/pedestrian-topologies-extraction-geojson) / [Scala](data-processing/scala/pedestrian-topologies-extraction-geojson) | Data Processing Library, Spark, Batch, GeoJSON, HERE Map Content |
| Pedestrian Topologies Extraction to Protobuf | Topologies, accessible by pedestrians, are selected based on the segment attributes and then are transformed to a newly created proto schema format and stored in a new catalog layer that follows that schema. | [Java](data-processing/java/pedestrian-topologies-extraction-protobuf) / [Scala](data-processing/scala/pedestrian-topologies-extraction-protobuf) | Data Processing Library, Spark, Batch, Protobuf, HERE Map Content |
| Statistics creation across multiple processing runs with stateful processing | The application counts how often the node cardinality of the topology changes in each partition. | [Java](data-processing/java/stateful-nodecardinality-extraction) / [Scala](data-processing/scala/stateful-nodecardinality-extraction) | Data Processing Library, Spark, Batch, JSON, HERE Map Content |
| Here Map Content Diff-Tool | An application to compare the content of two different versions of an input catalog. | [Java](data-processing/java/heremapcontent-difftool) / [Scala](data-processing/scala/heremapcontent-difftool) | Data Processing Library, Spark, Batch, JSON, HERE Map Content |
| Here Map Content Validation | An application to validate road topology and geometry content against a set of acceptance criteria using [scalatest](https://www.scalatest.org/). | [Scala](data-processing/scala/heremapcontent-validation) | Data Processing Library, Spark, Batch, JSON, HERE Map Content |
### Archiving Stream Data
The HERE Workspace lets you retain stream data for longer periods, so that you can later query and process it for non-real-time use cases.
For more information, see [Data Archiving Library Developer Guide](https://developer.here.com/documentation/data-archiving-library/dev_guide/index.html).
| Name | Description | Source | Labels / Topics |
| ------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------ | --------------------------------------------------- | --------------------------------------------------------- |
| Archiving SDII stream data in Avro | The example shows how to use the Data Archiving Library to quickly develop an archiving solution that archives data in Avro format. | [Java](data-archive/java/avro-example) | Data Archiving Library, Flink, Stream, Avro, SDII |
| Archiving SDII stream data in Parquet | The example shows how to use the Data Archiving Library to quickly develop an archiving solution that archives data in Parquet format. | [Java](data-archive/java/parquet-example) | Data Archiving Library, Flink, Stream, Parquet, SDII |
| Archiving SENSORIS stream data in Protobuf | The example shows how to use the Data Archiving Library to quickly develop an archiving solution that archives SENSORIS data in Protobuf format. | [Java](data-archive/java/sensoris-protobuf-example) | Data Archiving Library, Flink, Stream, SENSORIS, Protobuf |
### Compacting Index Data
The HERE Workspace allows you to compact data files with the same index attribute values into one or more files based on the configuration.
Compaction reduces the index layer storage cost, improves query performance, and makes subsequent data processing more efficient.
For more information, see [Index Compaction Library Developer Guide](https://developer.here.com/documentation/index-compaction-library/dev_guide/index.html).
| Name | Description | Source | Labels / Topics |
| --------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------- | ----------------------------------------------------- |
| Compacting Parquet format indexed data | The example shows how to use the Index Compaction Library to quickly develop a compaction application that compacts Parquet format data. | [Java](index-compaction-batch/java/parquet-example) | Index Compaction Library, Spark, Batch, Parquet, SDII |
| Compacting Protobuf format indexed data | The example shows how to use the Index Compaction Library to quickly develop a compaction application that compacts Protobuf format data. | [Java](index-compaction-batch/java/protobuf-example) | Index Compaction Library, Spark, Batch, Protobuf, SDII |
### Small Examples Showing Usage of Location Library
The following examples demonstrate how to use the Location Library.
| Name | Description | Source | Labels / Topics |
| -------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------ |
| Point Matching Example | Takes probe points and matches each one against the closest geometry, without considering the path. | [Java](location/java/standalone#point-matching-example) / [Scala](location/scala/standalone#point-matching-example) | Location Library, GeoJSON, CSV |
| Traversing the Graph | Shows how to create a traversable graph from the HERE Optimized Map for Location Library catalog. | [Java](location/java/standalone#traversing-the-graph) / [Scala](location/scala/standalone#traversing-the-graph) | Location Library, GeoJSON |
| Most Probable Path | Navigates the graph along the most probable path, based on simple assumptions such as trying to stay on the same functional class. | [Java](location/java/standalone#most-probable-path) / [Scala](location/scala/standalone#most-probable-path) | Location Library, GeoJSON |
| Path Matching Example | Matches path probe points against the graph. | [Java](location/java/standalone#path-matching-example) / [Scala](location/scala/standalone#path-matching-example) | Location Library, GeoJSON, CSV |
| Path Matching with Restrictions | Matches path probe points against a graph that excludes segments that are not accessible by taxis. | [Scala](location/scala/standalone#path-matching-with-restrictions) | Location Library, GeoJSON, CSV |
| Turn Restrictions | Shows how to check if turns on a road network are restricted or not. | [Java](location/java/standalone#turn-restrictions) / [Scala](location/scala/standalone#turn-restrictions) | Location Library, GeoJSON |
| Generic Range Based Attributes | Shows how to load a generic attribute that is not available in the HERE Optimized Map for Location Library using a Vertex reference as input. | [Java](location/java/standalone#generic-range-based-attributes) / [Scala](location/scala/standalone#generic-range-based-attributes) | Location Library |
| Path Matching Sparse Probe Data | Shows how to match sparse path points against the graph by traversing it using the most probable path assumptions. | [Java](location/java/standalone#path-matching-sparse-probe-data) / [Scala](location/scala/standalone#path-matching-sparse-probe-data) | Location Library, GeoJSON, CSV |
| Converting References from HERE Optimized Map for Location Library to HERE Map Content | Converts Vertex references to Topology Segments and vice versa. | [Java](location/java/standalone#converting-references-from-here-optimized-map-for-location-library-to-here-map-content) / [Scala](location/scala/standalone#converting-references-from-here-optimized-map-for-location-library-to-here-map-content) | Location Library, HERE Map Content |
| Converting References from TPEG2 to its Binary Representation | Shows how to read an OpenLR location reference that has been written in the TPEG2 XML encoding and convert it to its binary representation. | [Java](location/java/standalone#converting-references-from-tpeg2-to-its-binary-representation) / [Scala](location/scala/standalone#converting-references-from-tpeg2-to-its-binary-representation) | Location Library, TPEG2, OpenLR |
| Extracting TPEG2 Document | Demonstrates how to load a TPEG2 document and extract its parts. | [Java](location/java/standalone#extracting-tpeg2-document) / [Scala](location/scala/standalone#extracting-tpeg2-document) | Location Library, TPEG2 |
| Creating and Resolving TMC Reference | Searches for a well-known vertex that is covered by TMC to define the input location. | [Java](location/java/standalone#creating-and-resolving-tmc-reference) / [Scala](location/scala/standalone#creating-and-resolving-tmc-reference) | Location Library, TMC |
| Resolving TMC References in RTTI Message | Demonstrates how TMC references in Real Time Traffic Incident (RTTI) messages can be converted to TPEG2 TMC references, and how the `location-referencing` module can be used to resolve those references. | [Java](location/java/standalone#resolving-tmc-references-in-rtti-message) / [Scala](location/scala/standalone#resolving-tmc-references-in-rtti-message) | Location Library, TPEG2 |
| Creating OpenLR Reference from Road Segments | Transforms a path given as segment references in HERE Map Content to OpenLR reference. | [Java](location/java/standalone#creating-openlr-reference-from-road-segments) / [Scala](location/scala/standalone#creating-openlr-reference-from-road-segments) | Location Library, HERE Map Content, OpenLR |
| Resolving OpenLR Reference from Road Segments | Shows how to take an OpenLR reference given in XML and resolve this reference to segments in HERE Map Content. | [Java](location/java/standalone#resolving-openlr-reference-from-road-segments) / [Scala](location/scala/standalone#resolving-openlr-reference-from-road-segments) | Location Library, HERE Map Content, OpenLR |
| Functional Class for a Vertex | Shows how you can get the functional class for a vertex. | [Java](location/java/standalone#functional-class-for-a-vertex) / [Scala](location/scala/standalone#functional-class-for-a-vertex) | Location Library, GeoJSON |
| ADAS Curvature Attribute | Shows how to fetch and use ADAS attributes in the HERE Optimized Map for Location Library using a Vertex or an Edge reference. | [Java](location/java/standalone#adas-curvature-attribute) / [Scala](location/scala/standalone#adas-curvature-attribute) | Location Library, GeoJSON |
## License
Copyright (C) 2017-2024 HERE Europe B.V.
Unless otherwise noted in `LICENSE` files for specific files or directories, the [LICENSE](LICENSE) in the root applies to all content in this repository.
| 0 |
raeperd/realworld-springboot-java | Spring boot java implementation of realworld example.app | github-action jacoco-plugin java11 jib-gradle lombok-gradle sonarcloud spring-boot spring-data-jpa |

[](https://github.com/raeperd/realworld-springboot-java/actions/workflows/build.yml)
[](https://opensource.org/licenses/MIT)
[RealWorld.io](https://github.com/gothinkster/realworld) backend project built with Spring Boot (Java), using `spring-security` and `spring-data-jpa`
# Inspired by
- [Woowa Brothers Tech Blog | Stop Making Todo Lists](https://woowabros.github.io/experience/2020/04/14/stop-making-todo-list.html)
- [Woowa Brothers Tech Blog | Configuring JaCoCo in a Gradle Project](https://woowabros.github.io/experience/2020/02/02/jacoco-config-on-gradle-project.html)
- [Woowa Brothers Tech Blog | We Use Git-flow](https://woowabros.github.io/experience/2017/10/30/baemin-mobile-git-branch-strategy.html)
- [Github | Realworld.io](https://github.com/gothinkster/realworld)
# Getting started
## Build from scratch
``` shell
$ ./gradlew build bootRun
```
## Using docker
``` shell
$ docker run --rm -p 8080:8080 ghcr.io/raeperd/realworld-spring-boot-java:master
```
- The Docker Hub registry is [here](https://hub.docker.com/repository/docker/raeperd/realworld-spring-boot-java)
- Container tags are simply the branch names of this repository, following the git-flow strategy
## How to test
After running the application, you can try one of the following:
### using shell script
``` shell
$ ./doc/run-api-tests.sh
```
### using postman
Import [`./doc/Conduit.postman_collection.json`](./doc/Conduit.postman_collection.json) in your postman application
Plain `gradle test` also covers almost every line of code.
More details can be found in [`./doc/README.md`](./doc/README.md) and [original source](https://github.com/gothinkster/realworld/tree/master/spec)
# Overview
## Design Principles
- Always `final` whenever possible
- Always package private class whenever possible
- **Always test every package, class, method, instruction in codes**
- Except for some boilerplate `equals` and `hashcode` method
- This is validated by [jacoco-gradle-plugin](https://docs.gradle.org/current/userguide/jacoco_plugin.html).
- Coverage verification in [`./test.gradle`](./test.gradle)
- Try to avoid including additional dependencies as much as possible
- Implements JWT generation / validation logic without 3rd party library [#3](https://github.com/raeperd/realworld-springboot-java/issues/3)
- Keep the code in the domain package as plain POJOs
- Except for special Spring annotations like `@Service`, `@Repository`, `@Transactional`
- Prohibit the use of Lombok in the domain package
- Try to follow all modern best practices for spring-boot project
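The JWT principle listed above (generating and validating tokens without a 3rd-party library, issue [#3](https://github.com/raeperd/realworld-springboot-java/issues/3)) can be illustrated with nothing but the JDK. The following is a minimal sketch of the general HS256 technique, not this project's actual implementation; the class and method names are hypothetical:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Minimal HS256 JWT signing using only the JDK (javax.crypto + Base64).
public class JwtSketch {

    static String sign(String headerJson, String payloadJson, byte[] secret) throws Exception {
        // JWTs use URL-safe base64 without padding for each segment
        Base64.Encoder enc = Base64.getUrlEncoder().withoutPadding();
        String signingInput = enc.encodeToString(headerJson.getBytes(StandardCharsets.UTF_8))
                + "." + enc.encodeToString(payloadJson.getBytes(StandardCharsets.UTF_8));
        // HS256 = HMAC-SHA256 over "header.payload"
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secret, "HmacSHA256"));
        byte[] signature = mac.doFinal(signingInput.getBytes(StandardCharsets.UTF_8));
        return signingInput + "." + enc.encodeToString(signature);
    }

    public static void main(String[] args) throws Exception {
        String token = sign("{\"alg\":\"HS256\",\"typ\":\"JWT\"}",
                "{\"sub\":\"user\"}",
                "secret-key".getBytes(StandardCharsets.UTF_8));
        // A JWT always has three dot-separated segments: header.payload.signature
        System.out.println(token.split("\\.").length);
    }
}
```

Validation follows the same path: recompute the HMAC over `header.payload` and compare it (constant-time) against the third segment.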
## Diagrams
- You can open full diagram file in [`realworld.drawio`](./realworld.drawio) using [draw.io](https://app.diagrams.net/)
### User

- Separate password encoding logic out of User
- A User must be created with a password encoder
### Article

- Article contains other elements as `@Embedded` classes
- Try to reduce the number of repositories
- Prefer `@JoinTable` to `@JoinColumn`
### JWT

- Try not to use a 3rd-party library
- Serialization and deserialization are separated with interfaces
- The domain package contains the interfaces; infrastructure code provides the implementations
- The application package wires them into the spring-security logic
## Performance

- Result of [`./doc/run-api-tests.sh`](./doc/run-api-tests.sh)
# What can be done more
- The User class does too many things right now; it could be improved
- Service classes could be divided into smaller services
- Test case ordering could be improved
# Contact
You can contact me by [email](mailto:raeperd117@gmail.com) or via an issue in this project
# License
[MIT License](./LICENSE)
# Referenced
- [JSON Web Token Introduction - jwt.io](https://jwt.io/introduction)
- [Symmetric vs Asymmetric JWTs. What is JWT? | by Swayam Raina | Noteworthy - The Journal Blog](https://blog.usejournal.com/symmetric-vs-asymmetric-jwts-bd5d1a9567f6)
- [presentations/auth.md at master · alex996/presentations · GitHub](https://github.com/alex996/presentations/blob/master/auth.md)
| 0 |
ThomasVitale/spring-security-examples | Examples with Spring Security (OAuth2 and OpenID Connect, Authentication and Authorization) | null | # Spring Security Examples
Examples with Spring Security (OAuth2 and OpenID Connect, Authentication and Authorization)
## OAuth2 and OpenID Connect
* [Login - Custom authorities from UserInfo](https://github.com/ThomasVitale/spring-security-examples/tree/main/oauth2/login-user-authorities)
* [Login - Custom authorities from UserInfo (Reactive)](https://github.com/ThomasVitale/spring-security-examples/tree/main/oauth2/login-user-authorities-reactive)
* [Resource Server - Custom authorities from JWT](https://github.com/ThomasVitale/spring-security-examples/tree/main/oauth2/resource-server-jwt-authorities)
* [Resource Server - Custom authorities from JWT (Reactive)](https://github.com/ThomasVitale/spring-security-examples/tree/main/oauth2/resource-server-jwt-authorities-reactive)
| 0 |
pedroSG94/vlc-example-streamplayer | Example code how to play a stream with VLC | stream streamplayer vlc | # vlc-example-streamplayer
[](https://jitpack.io/#pedroSG94/vlc-example-streamplayer)
Example code showing how to play a stream with VLC.
Use this endpoint for testing:
```
rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mov
```
Consider using the latest official compiled VLC version:
https://mvnrepository.com/artifact/org.videolan.android/libvlc-all
https://code.videolan.org/videolan/vlc-android#build-application
## Gradle
Compile my wrapper:
```gradle
allprojects {
repositories {
maven { url 'https://jitpack.io' }
}
}
dependencies {
compile 'com.github.pedroSG94.vlc-example-streamplayer:pedrovlc:2.5.14v3'
}
```
Compile only VLC (version 2.5.14):
```gradle
allprojects {
repositories {
maven { url 'https://jitpack.io' }
}
}
dependencies {
compile 'com.github.pedroSG94.vlc-example-streamplayer:libvlc:2.5.14v3'
}
```
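Note that `compile` is a legacy Gradle configuration that was removed in Gradle 7; on a current Gradle version the same dependency would be declared with `implementation` (a sketch assuming the same artifact coordinates):

```gradle
dependencies {
    implementation 'com.github.pedroSG94.vlc-example-streamplayer:pedrovlc:2.5.14v3'
}
```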
| 1 |
zhouyongtao/spring-cloud-microservice | spring-cloud-microservice-example | microservice | Containers and Microservice Architecture
=======================================================================
# docker&kubernetes
* https://www.cncf.io/projects/
* https://github.com/kubernetes/kubernetes
* https://github.com/kubernetes/examples
* https://kubernetes.io/docs/concepts/
* https://github.com/ramitsurana/awesome-kubernetes
* https://docs.docker.com/
# volume
* https://github.com/rook/rook
* https://github.com/ceph
* https://github.com/gluster
# monitor
* https://github.com/prometheus
* https://github.com/coreos/prometheus-operator
* https://github.com/kubernetes-incubator/metrics-server
* https://github.com/grafana/grafana
# maven repository
* https://www.jfrog.com/confluence/
* https://www.sonatype.com/nexus-repository-oss
* https://hub.docker.com/r/sonatype/nexus3/
# spring cloud config
* https://github.com/dyc87112/spring-cloud-config-admin
* https://github.com/alibaba/nacos
* https://github.com/ctripcorp/apollo
* https://nacos.io/en-us/blog/Nacos-is-Coming.html
# spring-cloud
* https://spring.io/projects/spring-cloud
* https://spring.io/projects/spring-cloud-gateway
* https://docs.spring.io/spring-boot/docs/current/reference/htmlsingle/
* http://cloud.spring.io/spring-cloud-static/Finchley.SR2/single/spring-cloud.html
* https://github.com/spring-projects/spring-boot/tree/master/spring-boot-project/spring-boot-starters
* https://github.com/spring-projects/spring-cloud/wiki/Spring-Cloud-Finchley-Release-Notes
* https://github.com/spring-cloud-samples/hystrix-dashboard
# env
```
# update packages (Debian/Ubuntu)
sudo apt-get update
sudo apt-get upgrade
# inspect the OS and kernel (RHEL/CentOS)
yum install -y redhat-lsb
uname -a
cat /proc/version
lsb_release -a
# enable the EPEL repository and refresh metadata
yum install epel-release
yum makecache
yum repolist
# upgrade and reconfigure GitLab
/opt/gitlab/embedded/bin/psql --version
sudo gitlab-ctl pg-upgrade
sudo yum install gitlab-ce
sudo gitlab-ctl reconfigure
sudo gitlab-ctl restart
sudo yum install -y gitlab-ce
sudo yum install -y gitlab-ee
netstat -lntup
# install Java 8 (OpenJDK on RHEL/CentOS, Oracle JDK via PPA on Ubuntu)
sudo yum install java-1.8.0-openjdk
sudo apt-get install software-properties-common htop
sudo add-apt-repository ppa:webupd8team/java
sudo apt-get update
sudo apt-get install oracle-java8-installer
java -version
```
# eureka
```
# start three Eureka server peers (optionally in the background with nohup)
#nohup java -jar microsrv-eureka-server-0.0.1-SNAPSHOT.jar --spring.profiles.active=peer1 &
java -jar microsrv-eureka-server-0.0.1-SNAPSHOT.jar --spring.profiles.active=peer1
java -jar microsrv-eureka-server-0.0.1-SNAPSHOT.jar --spring.profiles.active=peer2
java -jar microsrv-eureka-server-0.0.1-SNAPSHOT.jar --spring.profiles.active=peer3
# map the peer hostnames in /etc/hosts
vi /etc/hosts
10.255.131.162 microsrv-eureka-server-peer1
10.255.131.163 microsrv-eureka-server-peer2
10.255.131.164 microsrv-eureka-server-peer3
echo `hostname`
127.0.0.1 localhost iZuf60cj5pna5im3va46nkZ
# check the listening ports
netstat -tunlp
netstat -apn | grep 8000
``` | 0 |
taes-k/spring-example | spring-example | null | # Spring-Example
A repository for Spring example source code.
### - spring-boot-start-test
Spring Boot getting-started example
| 0 |
DiUS/pact-workshop-jvm | Example JVM project for the Pact workshop | null | ## ⚠️ Important note ⚠️
This repository has been archived, please use the latest workshop here: https://github.com/pact-foundation/pact-workshop-jvm-spring
## Introduction
This workshop is aimed at demonstrating core features and benefits of contract testing with Pact.
Whilst contract testing can be applied retrospectively to systems, we will follow the [consumer driven contracts](https://martinfowler.com/articles/consumerDrivenContracts.html) approach in this workshop - where a new consumer and provider are created in parallel to evolve a service over time, especially where there is some uncertainty with what is to be built.
This workshop should take from 1 to 2 hours, depending on how deep you want to go into each topic.
**Example project overview**
This workshop is setup with a number of steps that can be run through. Each step is in a branch, so to run through a
step of the workshop just check out the branch for that step (i.e. `git checkout step1`).
This project has 3 components, a consumer project and two service providers, one Dropwizard and one
Springboot service that the consumer will interact with.
**Workshop outline**:
- [step 1: **Simple Consumer calling Provider**](https://github.com/DiUS/pact-workshop-jvm#step-1---simple-consumer-calling-provider): Create our consumer before the Provider API even exists
- [step 2: **Client Tested but integration fails**](https://github.com/DiUS/pact-workshop-jvm#step-2---client-tested-but-integration-fails): Write a unit test for our consumer
- [step 3: **Pact to the rescue**](https://github.com/DiUS/pact-workshop-jvm#step-3---pact-to-the-rescue): Write a Pact test for our consumer
- [step 4: **Verify pact against provider**](https://github.com/DiUS/pact-workshop-jvm#step-4---verify-pact-against-provider): Verify the consumer pact with the Provider API (Gradle)
- [step 5: **Verify the provider with a test**](https://github.com/DiUS/pact-workshop-jvm#step-5---verify-the-provider-with-a-test): Verify the consumer pact with the Provider API (JUnit)
- [step 6: **Back to the client we go**](https://github.com/DiUS/pact-workshop-jvm#step-6---back-to-the-client-we-go): Fix the consumer's bad assumptions about the Provider
- [step 7: **Verify the providers again**](https://github.com/DiUS/pact-workshop-jvm#step-7---verify-the-providers-again): Update the provider build
- [step 8: **Test for the missing query parameter**](https://github.com/DiUS/pact-workshop-jvm#step-8---test-for-the-missing-query-parameter): Test unhappy path of missing query string
- [step 9: **Verify the provider with the missing/invalid date query parameter**](https://github.com/DiUS/pact-workshop-jvm#step-9---verify-the-provider-with-the-missinginvalid-date-query-parameter): Verify provider's ability to handle the missing query string
- [step 10: **Update the providers to handle the missing/invalid query parameters**](https://github.com/DiUS/pact-workshop-jvm#step-10---update-the-providers-to-handle-the-missinginvalid-query-parameters): Update the providers to handle the missing query string
- [step 11: **Provider states**](https://github.com/DiUS/pact-workshop-jvm#step-11---provider-states): Write a pact test for the `404` case
- [step 12: **provider states for the providers**](https://github.com/DiUS/pact-workshop-jvm#step-12---provider-states-for-the-providers): Update API to handle `404` case
- [step 13: **Using a Pact Broker**](https://github.com/DiUS/pact-workshop-jvm#step-13---using-a-pact-broker): Implement a broker workflow for integration with CI/CD
_NOTE: Each step is tied to, and must be run within, a git branch, allowing you to progress through each stage incrementally. For example, to move to step 2 run the following: `git checkout step2`_
## Learning objectives
If running this as a team workshop format, you may want to take a look through the [learning objectives](./LEARNING.md).
## Requirements
- [Java](https://java.com/en/download/) (version 1.8+)
- [Docker Compose](https://docs.docker.com/compose/install/)
## Step 1 - Simple Consumer calling Provider
Given we have a client that needs to make an HTTP GET request to a provider service and requires a response in JSON format.

The client is quite simple and looks like this
*consumer/src/main/java/au/com/dius/pactworkshop/consumer/Client.java:*
```java
public class Client {
public Object loadProviderJson() throws UnirestException {
return Unirest.get("http://localhost:8080/provider.json")
.queryString("validDate", LocalDateTime.now().toString())
.asJson().getBody();
}
}
```
and the dropwizard provider resource
*providers/dropwizard-provider/src/main/java/au/com/dius/pactworkshop/dropwizardprovider/RootResource.java:*
```java
@Path("/provider.json")
@Produces(MediaType.APPLICATION_JSON)
public class RootResource {
@GET
public Map<String, Object> providerJson(@QueryParam("validDate") Optional<String> validDate) {
LocalDateTime valid_time = LocalDateTime.parse(validDate.get());
Map<String, Object> result = new HashMap<>();
result.put("test", "NO");
result.put("validDate", LocalDateTime.now().toString());
result.put("count", 1000);
return result;
}
}
```
The springboot provider controller is similar
*providers/springboot-provider/src/main/java/au/com/dius/pactworkshop/springbootprovider/RootController.java:*
```java
@RestController
public class RootController {
@RequestMapping("/provider.json")
public Map<String, Serializable> providerJson(@RequestParam(required = false) String validDate) {
LocalDateTime validTime = LocalDateTime.parse(validDate);
Map<String, Serializable> map = new HashMap<>(3);
map.put("test", "NO");
map.put("validDate", LocalDateTime.now().toString());
map.put("count", 1000);
return map;
}
}
```
These providers expect a `validDate` query parameter in ISO-8601 local date-time format (as produced by `LocalDateTime.now().toString()`), and then return some simple JSON.
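Since both providers parse the parameter with `LocalDateTime.parse`, the consumer must send an ISO-8601 local date-time string rather than, say, an HTTP `Date` header format. A minimal sketch of the accepted format (the class name here is hypothetical):

```java
import java.time.LocalDateTime;

public class ValidDateDemo {
    public static void main(String[] args) {
        // LocalDateTime.parse accepts ISO-8601 local date-times,
        // which is exactly what LocalDateTime.now().toString() produces.
        String validDate = "2018-04-10T10:59:41.122";
        LocalDateTime parsed = LocalDateTime.parse(validDate);
        System.out.println(parsed.getYear());
    }
}
```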

Running the client with either provider works nicely. For example, start the dropwizard-provider in one terminal:
```console
$ ./gradlew :providers:dropwizard-provider:run
```
_NOTE_: this task won't complete; it will get to 75% and remain that way until you shut down the process: `<=========----> 75% EXECUTING [59s]`
(to start the Spring boot provider instead, you would run `./gradlew :providers:springboot-provider:bootRun`).
Once the provider has successfully initialized, open another terminal session and run the consumer:
```console
$ ./gradlew :consumer:run
> Task :consumer:run
{"test":"NO","validDate":"2018-04-10T10:59:41.122","count":1000}
BUILD SUCCESSFUL in 1s
2 actionable tasks: 2 executed
```
Don't forget to stop the dropwizard-provider that is running in the first terminal when you have finished this step.
## Step 2 - Client Tested but integration fails
Now let's get the client to use the data it gets back from the provider. Here is the updated client method that uses the returned data:
*consumer/src/main/java/au/com/dius/pactworkshop/consumer/Client.java:*
```java
public List<Object> fetchAndProcessData() throws UnirestException {
JsonNode data = loadProviderJson();
System.out.println("data=" + data);
JSONObject jsonObject = data.getObject();
int value = 100 / jsonObject.getInt("count");
ZonedDateTime date = ZonedDateTime.parse(jsonObject.getString("date"));
System.out.println("value=" + value);
System.out.println("date=" + date);
return Arrays.asList(value, date);
}
```

Let's now test our updated client. We're using [Wiremock](http://wiremock.org/) here to mock out the provider.
*consumer/src/test/java/au/com/dius/pactworkshop/consumer/ClientTest.java:*
```java
public class ClientTest {
@Rule
public WireMockRule wireMockRule = new WireMockRule(8080);
@Test
public void canProcessTheJsonPayloadFromTheProvider() throws UnirestException {
String date = "2013-08-16T15:31:20+10:00";
stubFor(get(urlPathEqualTo("/provider.json"))
.withQueryParam("validDate", matching(".+"))
.willReturn(aResponse()
.withStatus(200)
.withHeader("Content-Type", "application/json")
.withBody("{\"test\": \"NO\", \"date\": \"" + date + "\", \"count\": 100}")));
List<Object> data = new Client().fetchAndProcessData();
assertThat(data, hasSize(2));
assertThat(data.get(0), is(1));
assertThat(data.get(1), is(ZonedDateTime.parse(date)));
}
}
```

Let's run this spec and see it all pass:
```console
$ ./gradlew :consumer:check
BUILD SUCCESSFUL in 0s
3 actionable tasks: 3 up-to-date
```
However, there is a problem with this integration point. Running the actual client against any of the providers results in
a runtime exception!
```console
$ ./gradlew :consumer:run
> Task :consumer:run FAILED
data={"test":"NO","validDate":"2018-04-10T11:48:36.838","count":1000}
Exception in thread "main" org.json.JSONException: JSONObject["date"] not found.
at org.json.JSONObject.get(JSONObject.java:471)
at org.json.JSONObject.getString(JSONObject.java:717)
at au.com.dius.pactworkshop.consumer.Client.fetchAndProcessData(Client.java:26)
at au.com.dius.pactworkshop.consumer.Consumer.main(Consumer.java:7)
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':consumer:run'.
> Process 'command '/usr/lib/jvm/java-8-oracle/bin/java'' finished with non-zero exit value 1
* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.
* Get more help at https://help.gradle.org
BUILD FAILED in 1s
2 actionable tasks: 1 executed, 1 up-to-date
```
The provider returns a `validDate` field while the consumer is trying to use `date`, so the real integration blows up at runtime even though all the tests pass. Here is where Pact comes in.
## Step 3 - Pact to the rescue
Let us add Pact to the project and write a consumer pact test.
*consumer/src/test/java/au/com/dius/pactworkshop/consumer/ClientPactTest.java*
```java
public class ClientPactTest {
// This sets up a mock server that pretends to be our provider
@Rule
public PactProviderRule provider = new PactProviderRule("Our Provider", "localhost", 1234, this);
private LocalDateTime dateTime;
private String dateResult;
// This defines the expected interaction for our test
@Pact(provider = "Our Provider", consumer = "Our Little Consumer")
public RequestResponsePact pact(PactDslWithProvider builder) {
dateTime = LocalDateTime.now();
dateResult = "2013-08-16T15:31:20+10:00";
return builder
.given("data count > 0")
.uponReceiving("a request for json data")
.path("/provider.json")
.method("GET")
.query("validDate=" + dateTime.toString())
.willRespondWith()
.status(200)
.body(
new PactDslJsonBody()
.stringValue("test", "NO")
.stringValue("date", dateResult)
.numberValue("count", 100)
)
.toPact();
}
@Test
@PactVerification("Our Provider")
public void pactWithOurProvider() throws UnirestException {
// Set up our HTTP client class
Client client = new Client(provider.getUrl());
// Invoke our client
List<Object> result = client.fetchAndProcessData(dateTime);
assertThat(result, hasSize(2));
assertThat(result.get(0), is(1));
assertThat(result.get(1), is(ZonedDateTime.parse(dateResult)));
}
}
```

This test starts a mock server on a random port that pretends to be our provider. To get this to work we needed to update
our consumer to pass in the URL of the provider. We also updated the `fetchAndProcessData` method to pass in the
query parameter.
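The shape of that change can be sketched with a stdlib-only helper (the `ProviderUrlBuilder` name is hypothetical; the real client passes the resulting URL to Unirest):

```java
import java.time.LocalDateTime;

public class ProviderUrlBuilder {
    // Builds the provider request URL from a configurable base URL
    // (the Pact mock server's URL in tests) plus the validDate query
    // parameter, matching the query the pact test declares.
    public static String buildProviderUrl(String baseUrl, LocalDateTime dateTime) {
        return baseUrl + "/provider.json?validDate=" + dateTime.toString();
    }
}
```

In the test above, `provider.getUrl()` supplies the base URL, so the same client code talks to the mock server.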
Running this spec still passes, but it creates a pact file which we can use to validate our assumptions on the provider side.
```console
$ ./gradlew :consumer:check
Starting a Gradle Daemon, 1 incompatible and 3 stopped Daemons could not be reused, use --status for details
BUILD SUCCESSFUL in 8s
4 actionable tasks: 1 executed, 3 up-to-date
```
Generated pact file (*consumer/build/pacts/Our Little Consumer-Our Provider.json*):
```json
{
"provider": {
"name": "Our Provider"
},
"consumer": {
"name": "Our Little Consumer"
},
"interactions": [
{
"description": "a request for json data",
"request": {
"method": "GET",
"path": "/provider.json",
"query": {
"validDate": [
"2020-06-16T11:49:49.485"
]
}
},
"response": {
"status": 200,
"headers": {
"Content-Type": "application/json; charset=UTF-8"
},
"body": {
"date": "2013-08-16T15:31:20+10:00",
"test": "NO",
"count": 100
},
"matchingRules": {
"header": {
"Content-Type": {
"matchers": [
{
"match": "regex",
"regex": "application/json(;\\s?charset=[\\w\\-]+)?"
}
],
"combine": "AND"
}
}
}
},
"providerStates": [
{
"name": "data count > 0"
}
]
}
],
"metadata": {
"pactSpecification": {
"version": "3.0.0"
},
"pact-jvm": {
"version": "4.1.2"
}
}
}
```
## Step 4 - Verify pact against provider
There are two ways of validating a pact file against a provider. The first is using a build tool (like Gradle) to
execute the pact against the running service. The second is to write a pact verification test. We will be doing both
in this step.
First, we need to **publish** the pact file from the consumer project. For this workshop, we have a `publishWorkshopPact` task in the
main project to do this.
```console
$ ./gradlew publishWorkshopPact
BUILD SUCCESSFUL in 0s
2 actionable tasks: 2 up-to-date
```
The Pact file from the consumer project will now exist in the build directory of the two provider projects.

### Verifying the springboot provider
For the springboot provider, we are going to use Gradle to verify the pact file for us. We need to add the pact gradle
plugin and the spawn plugin to the project and configure them.
**NOTE: This will not work on Windows, as the Gradle spawn plugin will not work with Windows.**
*providers/springboot-provider/build.gradle:*
```groovy
plugins {
id "au.com.dius.pact" version "4.1.7"
id "com.wiredforcode.spawn" version "0.8.2"
}
```
```groovy
task startProvider(type: SpawnProcessTask, dependsOn: 'assemble') {
command "java -jar ${jar.archivePath}"
ready 'Started MainApplication'
}
task stopProvider(type: KillProcessTask) {
}
pact {
serviceProviders {
'Our_Provider' {
port = 8080
startProviderTask = startProvider
terminateProviderTask = stopProvider
hasPactWith('Our Little Consumer') {
pactFile = file("$buildDir/pacts/Our Little Consumer-Our Provider.json")
}
}
}
}
```
Now if we run our pact verification task against the published pact file, it should fail.
```console
$ ./gradlew :providers:springboot-provider:pactVerify
> Task :providers:springboot-provider:startProvider
. ____ _ __ _ _
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v2.0.0.RELEASE)
```
... omitting lots of logs ...
```console
2018-04-10 13:55:19.709 INFO 7912 --- [ main] a.c.d.p.s.MainApplication : Started MainApplication in 3.311 seconds (JVM running for 3.774)
java -jar /home/ronald/Development/Projects/Pact/pact-workshop-jvm/providers/springboot-provider/build/libs/springboot-provider.jar is ready.
> Task :providers:springboot-provider:pactVerify_Our_Provider FAILED
Verifying a pact between Our Little Consumer and Our_Provider
[Using File /home/ronald/Development/Projects/Pact/pact-workshop-jvm/providers/springboot-provider/build/pacts/Our Little Consumer-Our Provider.json]
Given data count > 0
WARNING: State Change ignored as there is no stateChange URL
a request for json data
returns a response which
has status code 200 (OK)
has a matching body (FAILED)
NOTE: Skipping publishing of verification results as it has been disabled (pact.verifier.publishResults is not 'true')
Failures:
1) Verifying a pact between Our Little Consumer and Our_Provider - a request for json data Given data count > 0
1.1) BodyMismatch: $ BodyMismatch: Expected date='2013-08-16T15:31:20+10:00' but was missing
{
- "date": "2013-08-16T15:31:20+10:00",
"test": "NO",
- "count": 100
+ "count": 1000,
+ "validDate": "2020-06-16T12:08:04.314696"
}
1.2) BodyMismatch: $.count BodyMismatch: Expected 100 (Integer) but received 1000 (Integer)
FAILURE: Build failed with an exception.
* What went wrong:
There were 2 non-pending pact failures for provider Our_Provider
* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.
* Get more help at https://help.gradle.org
BUILD FAILED in 6s
5 actionable tasks: 5 executed
```
The test has failed for two reasons. Firstly, the `count` field has a different value to the one the consumer expected.
Secondly, and more importantly, the consumer was expecting a `date` field while the provider generates a `validDate`
field. The date formats also differ.
## Step 5 - Verify the provider with a test
In this step we will verify the same pact file against the Dropwizard provider using a JUnit test. If you need to,
re-run the `publishWorkshopPact` task to get the pact file into the provider project.
We add the pact provider junit jar and the dropwizard testing jar to our project dependencies, and then we can create a
simple test to verify our provider.
```java
@RunWith(PactRunner.class)
@Provider("Our Provider")
@PactFolder("build/pacts")
public class PactVerificationTest {
@ClassRule
public static final DropwizardAppRule<ServiceConfig> RULE = new DropwizardAppRule<ServiceConfig>(MainApplication.class,
ResourceHelpers.resourceFilePath("main-app-config.yaml"));
@TestTarget
public final Target target = new HttpTarget(8080);
@State("data count > 0")
public void dataCountGreaterThanZero() {
}
}
```
This test will start the dropwizard app (using the class rule), and then execute the pact requests (defined by the
`@PactFolder` annotation) against the test target.
Running this test will fail for the same reasons as in step 4.
```console
$ ./gradlew :providers:dropwizard-provider:test
Starting a Gradle Daemon, 1 incompatible and 2 stopped Daemons could not be reused, use --status for details
> Task :providers:dropwizard-provider:test
au.com.dius.pactworkshop.dropwizardprovider.PactVerificationTest > Our Little Consumer - a request for json data FAILED
java.lang.AssertionError
1 test completed, 1 failed
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':providers:dropwizard-provider:test'.
> There were failing tests. See the report at: file:///home/ronald/Development/Projects/Pact/pact-workshop-jvm/providers/dropwizard-provider/build/reports/tests/test/index.html
* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.
* Get more help at https://help.gradle.org
BUILD FAILED in 12s
4 actionable tasks: 4 executed
```
The JUnit build report has the expected failures.
```
java.lang.AssertionError:
Failures:
1) a request for json data
1.1) BodyMismatch: $ BodyMismatch: Expected date='2013-08-16T15:31:20+10:00' but was missing
{
- "date": "2013-08-16T15:31:20+10:00",
"test": "NO",
- "count": 100
+ "count": 1000,
+ "validDate": "2020-06-16T12:29:52.836"
}
1.2) BodyMismatch: $.count BodyMismatch: Expected 100 (Integer) but received 1000 (Integer)
```
## Step 6 - Back to the client we go
Let's correct the consumer test to handle any integer for `count` and to use the correct field for the date. We add a
type matcher for `count`, change the date field to `validDate`, and add a date expression to make sure the `validDate`
field is a valid date. This matters because we are parsing it.
The consumer test is now updated to:
```java
.body(
new PactDslJsonBody()
.stringValue("test", "NO")
.datetime("validDate", "yyyy-MM-dd'T'HH:mm:ssXX", dateResult.toInstant())
.integerType("count", 100)
)
```
Running this test will fail until we fix the client. Here is the correct client function:
```java
public List<Object> fetchAndProcessData(LocalDateTime dateTime) throws UnirestException {
JsonNode data = loadProviderJson(dateTime);
System.out.println("data=" + data);
JSONObject jsonObject = data.getObject();
int value = 100 / jsonObject.getInt("count");
TemporalAccessor date = DateTimeFormatter.ofPattern("yyyy-MM-dd'T'HH:mm:ssXX")
.parse(jsonObject.getString("validDate"));
System.out.println("value=" + value);
System.out.println("date=" + date);
return Arrays.asList(value, OffsetDateTime.from(date));
}
```
Now the test passes. But we still have a problem with the date format, which we must fix in the provider. Running the
client now fails because of that.
```console
$ ./gradlew consumer:run
Starting a Gradle Daemon, 1 busy and 1 incompatible and 2 stopped Daemons could not be reused, use --status for details
> Task :consumer:run FAILED
data={"test":"NO","validDate":"2018-04-10T14:39:50.419","count":1000}
Exception in thread "main" java.time.format.DateTimeParseException: Text '2018-04-10T14:39:50.419' could not be parsed at index 19
at java.time.format.DateTimeFormatter.parseResolved0(DateTimeFormatter.java:1949)
at java.time.format.DateTimeFormatter.parse(DateTimeFormatter.java:1851)
at java.time.OffsetDateTime.parse(OffsetDateTime.java:402)
at au.com.dius.pactworkshop.consumer.Client.fetchAndProcessData(Client.java:34)
at au.com.dius.pactworkshop.consumer.Consumer.main(Consumer.java:9)
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':consumer:run'.
> Process 'command '/usr/lib/jvm/java-8-oracle/bin/java'' finished with non-zero exit value 1
* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.
* Get more help at https://help.gradle.org
BUILD FAILED in 5s
2 actionable tasks: 1 executed, 1 up-to-date
```
We need to **publish** the consumer pact file to the provider projects again. Then, running the provider verification
tests we get the expected failure about the date format.
```
Failures:
1) Verifying a pact between Our Little Consumer and Our_Provider - a request for json data Given data count > 0
1.1) BodyMismatch: $.validDate BodyMismatch: Expected "2020-06-16T13:01:21.675150" to match a datetime of 'yyyy-MM-dd'T'HH:mm:ssXX': Text '2020-06-16T13:01:21.675150' could not be parsed at index 19
```
## Step 7 - Verify the providers again
Let's fix the providers and then re-run the verification tests. Here is the corrected Springboot controller:
```java
@RestController
public class RootController {
@RequestMapping("/provider.json")
public Map<String, Serializable> providerJson(@RequestParam(required = false) String validDate) {
LocalDateTime validTime = LocalDateTime.parse(validDate);
Map<String, Serializable> map = new HashMap<>(3);
map.put("test", "NO");
map.put("validDate", OffsetDateTime.now().format(DateTimeFormatter.ofPattern("yyyy-MM-dd'T'HH:mm:ssXX")));
map.put("count", 1000);
return map;
}
}
```

Running the verification against the providers now pass. Yay!
## Step 8 - Test for the missing query parameter
In this step we are going to add tests for the cases where the query parameter is missing or invalid. We do this by
adding additional interactions and expectations to the consumer pact test. Our client code also needs to change
slightly: it must be able to pass invalid dates through, and when the date parameter is null, it should omit the
parameter from the request entirely.
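The client-side change can be sketched like this (stdlib-only; `providerPath` is a hypothetical helper, and the real client builds the request with Unirest):

```java
public class QueryParamSketch {
    // The date is now passed as a raw string so invalid values can go
    // through unchanged; a null date means the validDate parameter is
    // omitted from the request entirely.
    public static String providerPath(String validDate) {
        if (validDate == null) {
            return "/provider.json";
        }
        return "/provider.json?validDate=" + validDate;
    }
}
```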
Here are the two additional tests:
*consumer/src/test/java/au/com/dius/pactworkshop/consumer/ClientPactTest.java:*
```java
@Pact(provider = "Our Provider", consumer = "Our Little Consumer")
public RequestResponsePact pactForMissingDateParameter(PactDslWithProvider builder) {
dateTime = LocalDateTime.now();
dateResult = OffsetDateTime.now().truncatedTo(ChronoUnit.SECONDS);
return builder
.given("data count > 0")
.uponReceiving("a request with a missing date parameter")
.path("/provider.json")
.method("GET")
.willRespondWith()
.status(400)
.body(
new PactDslJsonBody().stringValue("error", "validDate is required")
)
.toPact();
}
@Test
@PactVerification(value = "Our Provider", fragment = "pactForMissingDateParameter")
public void handlesAMissingDateParameter() throws UnirestException {
// Set up our HTTP client class
Client client = new Client(provider.getUrl());
// Invoke our client
List<Object> result = client.fetchAndProcessData(null);
assertThat(result, hasSize(2));
assertThat(result.get(0), is(0));
assertThat(result.get(1), nullValue());
}
@Pact(provider = "Our Provider", consumer = "Our Little Consumer")
public RequestResponsePact pactForInvalidDateParameter(PactDslWithProvider builder) {
dateTime = LocalDateTime.now();
dateResult = OffsetDateTime.now().truncatedTo(ChronoUnit.SECONDS);
return builder
.given("data count > 0")
.uponReceiving("a request with an invalid date parameter")
.path("/provider.json")
.method("GET")
.query("validDate=This is not a date")
.willRespondWith()
.status(400)
.body(
new PactDslJsonBody().stringValue("error", "'This is not a date' is not a date")
)
.toPact();
}
@Test
@PactVerification(value = "Our Provider", fragment = "pactForInvalidDateParameter")
public void handlesAnInvalidDateParameter() throws UnirestException {
// Set up our HTTP client class
Client client = new Client(provider.getUrl());
// Invoke our client
List<Object> result = client.fetchAndProcessData("This is not a date");
assertThat(result, hasSize(2));
assertThat(result.get(0), is(0));
assertThat(result.get(1), nullValue());
}
```
After running our specs, the pact file will have 2 new interactions.
*consumer/build/pacts/Our Little Consumer-Our Provider.json:*
```json
[
{
"description": "a request with a missing date parameter",
"request": {
"method": "GET",
"path": "/provider.json"
},
"response": {
"status": 400,
"headers": {
"Content-Type": "application/json; charset=UTF-8"
},
"body": {
"error": "validDate is required"
},
"matchingRules": {
"header": {
"Content-Type": {
"matchers": [
{
"match": "regex",
"regex": "application/json(;\\s?charset=[\\w\\-]+)?"
}
],
"combine": "AND"
}
}
}
},
"providerStates": [
{
"name": "data count > 0"
}
]
},
{
"description": "a request with an invalid date parameter",
"request": {
"method": "GET",
"path": "/provider.json",
"query": {
"validDate": [
"This is not a date"
]
}
},
"response": {
"status": 400,
"headers": {
"Content-Type": "application/json; charset=UTF-8"
},
"body": {
"error": "'This is not a date' is not a date"
},
"matchingRules": {
"header": {
"Content-Type": {
"matchers": [
{
"match": "regex",
"regex": "application/json(;\\s?charset=[\\w\\-]+)?"
}
],
"combine": "AND"
}
}
}
},
"providerStates": [
{
"name": "data count > 0"
}
]
}
]
```
## Step 9 - Verify the provider with the missing/invalid date query parameter
Let us run this updated pact file against our providers (first run the `publishWorkshopPact` task). We get a 500 response
because the providers can't handle the missing or invalid date.
Here is the springboot test output:
```console
Verifying a pact between Our Little Consumer and Our_Provider
[Using File /home/ronald/Development/Projects/Pact/pact-workshop-jvm/providers/springboot-provider/build/pacts/Our Little Consumer-Our Provider.json]
Given data count > 0
WARNING: State Change ignored as there is no stateChange URL
a request for json data
returns a response which
has status code 200 (OK)
has a matching body (OK)
Given data count > 0
WARNING: State Change ignored as there is no stateChange URL
a request with a missing date parameter
returns a response which
has status code 400 (FAILED)
has a matching body (FAILED)
Given data count > 0
WARNING: State Change ignored as there is no stateChange URL
a request with an invalid date parameter
returns a response which
has status code 400 (FAILED)
has a matching body (FAILED)
NOTE: Skipping publishing of verification results as it has been disabled (pact.verifier.publishResults is not 'true')
Failures:
1) Verifying a pact between Our Little Consumer and Our_Provider - a request with a missing date parameter Given data count > 0
1.1) StatusMismatch: expected status of 400 but was 500
1.2) BodyMismatch: $.error BodyMismatch: Expected 'validDate is required' (String) but received 'Internal Server Error' (String)
1.3) StatusMismatch: expected status of 400 but was 500
1.4) BodyMismatch: $.error BodyMismatch: Expected ''This is not a date' is not a date' (String) but received 'Internal Server Error' (String)
```
Time to update the providers to handle these cases.
## Step 10 - Update the providers to handle the missing/invalid query parameters
Let's fix our providers so they generate the correct responses for the query parameters.
### Dropwizard provider
The Dropwizard root resource gets updated to check whether the parameter has been passed, and to handle the date parse
exception when it is invalid. Two new exceptions are thrown for these cases.
```java
@GET
public Map<String, Serializable> providerJson(@QueryParam("validDate") Optional<String> validDate) {
if (validDate.isPresent()) {
try {
LocalDateTime validTime = LocalDateTime.parse(validDate.get());
Map<String, Serializable> result = new HashMap<>(3);
result.put("test", "NO");
result.put("validDate", OffsetDateTime.now().format(DateTimeFormatter.ofPattern("yyyy-MM-dd'T'HH:mm:ssXX")));
result.put("count", 1000);
return result;
} catch (DateTimeParseException e) {
throw new InvalidQueryParameterException("'" + validDate.get() + "' is not a date", e);
}
} else {
throw new QueryParameterRequiredException("validDate is required");
}
}
```
Next step is to create exception mappers for the new exceptions, and register them with the Dropwizard environment.
```java
public class InvalidQueryParameterExceptionMapper implements ExceptionMapper<InvalidQueryParameterException> {
@Override
public Response toResponse(InvalidQueryParameterException exception) {
return Response.status(Response.Status.BAD_REQUEST)
.type(MediaType.APPLICATION_JSON_TYPE)
.entity("{\"error\": \"" + exception.getMessage() + "\"}")
.build();
}
}
```
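The exception classes themselves are not shown in the workshop snippets; they can be plain runtime exceptions along these lines (a minimal sketch):

```java
// Thrown when the validDate query parameter cannot be parsed as a date.
public class InvalidQueryParameterException extends RuntimeException {
    public InvalidQueryParameterException(String message, Throwable cause) {
        super(message, cause);
    }
}

// Thrown when a required query parameter is missing from the request.
class QueryParameterRequiredException extends RuntimeException {
    public QueryParameterRequiredException(String message) {
        super(message);
    }
}
```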
The main provider run method becomes:
```groovy
void run(ServiceConfig configuration, Environment environment) {
environment.jersey().register(new InvalidQueryParameterExceptionMapper())
environment.jersey().register(new QueryParameterRequiredExceptionMapper())
environment.jersey().register(new RootResource())
}
```
Now running the `PactVerificationTest` will pass.
### Springboot provider
The Springboot root controller gets updated in a similar way to the Dropwizard resource.
```java
@RequestMapping("/provider.json")
public Map<String, Serializable> providerJson(@RequestParam(required = false) String validDate) {
if (StringUtils.isNotEmpty(validDate)) {
try {
LocalDateTime validTime = LocalDateTime.parse(validDate);
Map<String, Serializable> map = new HashMap<>(3);
map.put("test", "NO");
map.put("validDate", OffsetDateTime.now().format(DateTimeFormatter.ofPattern("yyyy-MM-dd'T'HH:mm:ssXX")));
map.put("count", 1000);
return map;
} catch (DateTimeParseException e) {
throw new InvalidQueryParameterException("'" + validDate + "' is not a date", e);
}
} else {
throw new QueryParameterRequiredException("validDate is required");
}
}
```
Then, to get the exceptions mapped to the correct response, we need to create a controller advice.
```java
@ControllerAdvice(basePackageClasses = RootController.class)
public class RootControllerAdvice extends ResponseEntityExceptionHandler {
@ExceptionHandler({InvalidQueryParameterException.class, QueryParameterRequiredException.class})
@ResponseBody
public ResponseEntity<String> handleControllerException(HttpServletRequest request, Throwable ex) {
HttpHeaders headers = new HttpHeaders();
headers.setContentType(MediaType.APPLICATION_JSON);
return new ResponseEntity<>("{\"error\": \"" + ex.getMessage() + "\"}", headers, HttpStatus.BAD_REQUEST);
}
}
```
Now running the `pactVerify` is all successful.
## Step 11 - Provider states
We have one final thing to test for. If the provider ever returns a count of zero, we will get a division by
zero error in our client. This is an important bit of information to add to our contract. Let us start with a
consumer test for this.
```java
@Pact(provider = "Our Provider", consumer = "Our Little Consumer")
public RequestResponsePact pactForWhenThereIsNoData(PactDslWithProvider builder) {
dateTime = LocalDateTime.now();
return builder
.given("data count == 0")
.uponReceiving("a request for json data")
.path("/provider.json")
.method("GET")
.query("validDate=" + dateTime.toString())
.willRespondWith()
.status(404)
.toPact();
}
@Test
@PactVerification(value = "Our Provider", fragment = "pactForWhenThereIsNoData")
public void whenThereIsNoData() throws UnirestException {
// Set up our HTTP client class
Client client = new Client(provider.getUrl());
// Invoke our client
List<Object> result = client.fetchAndProcessData(dateTime.toString());
assertThat(result, hasSize(2));
assertThat(result.get(0), is(0));
assertThat(result.get(1), nullValue());
}
```
This adds a new interaction to the pact file:
```json
{
"description": "a request for json data",
"request": {
"method": "GET",
"path": "/provider.json",
"query": {
"validDate": [
"2020-06-16T13:56:47.303"
]
}
},
"response": {
"status": 404
},
"providerStates": [
{
"name": "data count == 0"
}
]
}
```
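On the client side, the no-data case has to avoid the division by zero and map the 404 response to the `(0, null)` result the test asserts. A stdlib-only sketch of that decision (the method name and signature are assumptions):

```java
import java.util.Arrays;
import java.util.List;

public class NoDataHandlingSketch {
    // A 404 (or a zero count) short-circuits to the fallback pair
    // before the 100 / count calculation can divide by zero.
    public static List<Object> process(int status, int count, String parsedDate) {
        if (status != 200 || count == 0) {
            return Arrays.asList(0, null);
        }
        return Arrays.asList(100 / count, parsedDate);
    }
}
```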
## Step 12 - Provider states for the providers
To be able to verify our providers, we need to be able to change the data that the provider returns. There are different
ways of doing this depending on how the provider is being verified.
### Dropwizard provider
The Dropwizard provider is verified by a test, so we can set up methods annotated with the provider states and modify the
data the resource returns. First, we need a data store that we can manipulate. For our case, we are just going to
use a singleton class, but in a real project you would probably use a database.
```java
public class DataStore {
public static final DataStore INSTANCE = new DataStore();
private int dataCount = 1000;
private DataStore() { }
public int getDataCount() {
return dataCount;
}
public void setDataCount(int dataCount) {
this.dataCount = dataCount;
}
}
```
Next, we update our root resource to use the value from the data store, and throw an exception if there is no data.
```java
@GET
public Map<String, Serializable> providerJson(@QueryParam("validDate") Optional<String> validDate) {
if (validDate.isPresent()) {
if (DataStore.INSTANCE.getDataCount() > 0) {
try {
LocalDateTime validTime = LocalDateTime.parse(validDate.get());
Map<String, Serializable> result = new HashMap<>(3);
result.put("test", "NO");
result.put("validDate", OffsetDateTime.now().format(DateTimeFormatter.ofPattern("yyyy-MM-dd'T'HH:mm:ssXX")));
result.put("count", DataStore.INSTANCE.getDataCount());
return result;
} catch (DateTimeParseException e) {
throw new InvalidQueryParameterException("'" + validDate.get() + "' is not a date", e);
}
} else {
throw new NoDataException();
}
} else {
throw new QueryParameterRequiredException("validDate is required");
}
}
```
We do the same exception mapping for the new exception as we did before.
```java
public class NoDataExceptionMapper implements ExceptionMapper<NoDataException> {
@Override
public Response toResponse(NoDataException exception) {
return Response.status(Response.Status.NOT_FOUND).build();
}
}
```
Now we can change the data store value in our test based on the provider state.
```java
@State("data count > 0")
public void dataCountGreaterThanZero() {
DataStore.INSTANCE.setDataCount(1000);
}
@State("data count == 0")
public void dataCountZero() {
DataStore.INSTANCE.setDataCount(0);
}
```
Running the test now passes.
### Springboot provider
Our Springboot provider is verified by the Pact Gradle verification task, which requires the provider to be
running in the background, so we cannot manipulate it directly. The Gradle task has a state change URL feature that can
help us here: a special URL that receives the state the provider needs to be in.
First, let's enable the state change URL handling in the Gradle build file.
```groovy
pact {
serviceProviders {
'Our_Provider' {
port = 8080
startProviderTask = startProvider
terminateProviderTask = stopProvider
stateChangeUrl = url('http://localhost:8080/pactStateChange')
hasPactWith('Our Little Consumer') {
pactFile = file("$buildDir/pacts/Our Little Consumer-Our Provider.json")
}
}
}
}
```
Now we create a new controller to handle this. As this controller is only for our test, we make sure it is only available
in the test profile. We also need to make sure the app runs in the test profile by adding a parameter to the start task.
```groovy
task startProvider(type: SpawnProcessTask, dependsOn: 'assemble') {
command "java -Dspring.profiles.active=test -jar ${jar.archivePath}"
ready 'Started MainApplication'
}
```
Here is the state change controller:
```java
@RestController
@Profile("test")
public class StateChangeController {
@RequestMapping(value = "/pactStateChange", method = RequestMethod.POST)
public void providerState(@RequestBody Map body) {
if (body.get("state").equals("data count > 0")) {
DataStore.INSTANCE.setDataCount(1000);
} else if (body.get("state").equals("data count == 0")) {
DataStore.INSTANCE.setDataCount(0);
}
}
}
```
This controller will change the value of the datastore. We then use the datastore in our normal controller.
```java
@RequestMapping("/provider.json")
public Map<String, Serializable> providerJson(@RequestParam(required = false) String validDate) {
if (StringUtils.isNotEmpty(validDate)) {
if (DataStore.INSTANCE.getDataCount() > 0) {
try {
LocalDateTime validTime = LocalDateTime.parse(validDate);
Map<String, Serializable> map = new HashMap<>(3);
map.put("test", "NO");
map.put("validDate", OffsetDateTime.now().format(DateTimeFormatter.ofPattern("yyyy-MM-dd'T'HH:mm:ssXX")));
map.put("count", DataStore.INSTANCE.getDataCount());
return map;
} catch (DateTimeParseException e) {
throw new InvalidQueryParameterException("'" + validDate + "' is not a date", e);
}
} else {
throw new NoDataException();
}
} else {
throw new QueryParameterRequiredException("validDate is required");
}
}
```
We also update the controller advice to return a 404 response when a `NoDataException` is raised.
```java
@ControllerAdvice(basePackageClasses = RootController.class)
public class RootControllerAdvice extends ResponseEntityExceptionHandler {
@ExceptionHandler({InvalidQueryParameterException.class, QueryParameterRequiredException.class})
@ResponseBody
public ResponseEntity<String> handleControllerException(HttpServletRequest request, Throwable ex) {
HttpHeaders headers = new HttpHeaders();
headers.setContentType(MediaType.APPLICATION_JSON);
return new ResponseEntity<>("{\"error\": \"" + ex.getMessage() + "\"}", headers, HttpStatus.BAD_REQUEST);
}
@ExceptionHandler(NoDataException.class)
@ResponseBody
ResponseEntity handleNoDataException(HttpServletRequest request, Throwable ex) {
return new ResponseEntity(HttpStatus.NOT_FOUND);
}
}
```
Running the Gradle pact verification now passes.
## Step 13 - Using a Pact Broker
We've been publishing our pacts from the consumer project by copying the files over to the provider projects, but we can
use a Pact Broker to do this instead.
### Running the Pact Broker locally or against a hosted broker (e.g. Pactflow)
If you'd like to play along locally we have a docker-compose example you can use. Start it by running:
```
docker-compose up
```
Afterwards, it should be running on port `9292`. Head to `http://localhost:9292` and you should see the OSS pact broker running.
The credentials are `pact_workshop` / `pact_workshop`.
The project properties file (`gradle.properties`) defaults to this broker with the relevant credentials. Update it as
required for your own hosted platform, such as your pactflow.io account.
### Consumer
First, in the consumer project we need to add the Gradle Pact plugin and tell it about our broker.
```groovy
plugins {
id "au.com.dius.pact" version "4.1.7"
}
... omitted ...
pact {
publish {
pactBrokerUrl = 'https://test.pact.dius.com.au'
pactBrokerUsername = project.pactBrokerUser
pactBrokerPassword = project.pactBrokerPassword
}
}
```
Now, we can run `./gradlew consumer:pactPublish` after running the consumer tests to have the generated pact file
published to the broker. Afterwards, you can navigate to the Pact Broker URL and see the published pact against
the consumer and provider names set up in our consumer test.

### Dropwizard provider
In the `PactVerificationTest` we can change the source we fetch pacts from by using a `@PactBroker` annotation instead
of the `@PactFolder` one. We also need to pass the username and password through to the test.
Updated gradle build file:
```groovy
test {
systemProperty 'pactBrokerUser', pactBrokerUser
systemProperty 'pactBrokerPassword', pactBrokerPassword
}
```
Updated test:
```java
@RunWith(PactRunner.class)
@Provider("Our Provider")
@PactBroker(host = "test.pact.dius.com.au", protocol = "https", port = "443",
authentication = @PactBrokerAuth(username = "${pactBrokerUser}", password = "${pactBrokerPassword}"))
public class PactVerificationTest {
@ClassRule
public static final DropwizardAppRule<ServiceConfig> RULE = new DropwizardAppRule<ServiceConfig>(MainApplication.class,
ResourceHelpers.resourceFilePath("main-app-config.yaml"));
@TestTarget
public final Target target = new HttpTarget(8080);
@State("data count > 0")
public void dataCountGreaterThanZero() {
DataStore.INSTANCE.setDataCount(1000);
}
@State("data count == 0")
public void dataCountZero() {
DataStore.INSTANCE.setDataCount(0);
}
}
```
### Spring Boot provider
The Spring Boot provider uses the Gradle plugin, so we can just configure its build to fetch the pacts from the
broker.
Updated build file:
```groovy
pact {
serviceProviders {
'Our Provider' {
port = 8080
startProviderTask = startProvider
terminateProviderTask = stopProvider
stateChangeUrl = url('http://localhost:8080/pactStateChange')
hasPactsFromPactBroker("https://test.pact.dius.com.au", authentication: ['Basic', pactBrokerUser, pactBrokerPassword])
}
}
}
```
Running either of the verification tests will now publish the result back to the broker. If you refresh the index page in the broker,
you will see the pacts marked as verified.

| 1 |
mstahv/jpa-invoicer | Jakarta EE, JPA, Vaadin example app | null | # Invoicing example application
Note: This project is currently being upgraded to Vaadin 24 & Jakarta EE 10. The old Vaadin 7 version is available in a [separate branch](https://github.com/mstahv/jpa-invoicer/tree/vaadin-7). The current version is still missing certain things; for example, authentication has not been upgraded/tested.
Simple app to collaboratively create invoices. This application is built as an example application for the Vaadin UI framework and various Java EE technologies. Still, it should be perfectly usable as-is for real work.
Features:
* Based on the Java EE stack, persistence with JPA to an RDBMS, UI built with Vaadin
* Multiple organisations that can send invoices, shareable with other users
* Customer registry, used fluently via invoice view
* PDF/ODT export for invoices, configurable ODT template
* User backups via XML export
* Google OAuth2 based login
This is a suitable basis for small to medium sized apps. For larger applications,
consider using MVP to structure your UI code. See e.g. [this example
application](https://github.com/peterl1084/cdiexample).
## Quickstart
<del>
Start the application with an embedded wildfly:
```
mvn clean package wildfly:run
```
</del>
The current version does not work with the latest WildFly/Hibernate. Try e.g.
Payara 6, which works well 👍
After startup, the application is available here: [http://localhost:8080/invoicer](http://localhost:8080/invoicer)
| 1 |
jcasbin/casbin-spring-boot-starter | Spring Boot 2.x & 3.x Starter for Casbin, see example at: https://github.com/jcasbin/casbin-spring-boot-example | abac acl auth authorization authz casbin java jcasbin rbac spring spring-boot spring-boot-2 spring-boot-3 springboot springbootstarter | # Casbin Spring Boot Starter
[](https://codecov.io/gh/jcasbin/casbin-spring-boot-starter)
[](https://github.com/jcasbin/casbin-spring-boot-starter/actions)
[](https://mvnrepository.com/artifact/org.casbin/casbin-spring-boot-starter/latest)
[](http://www.apache.org/licenses/LICENSE-2.0.txt)
[](https://spring.io/projects/spring-boot)
[](https://casbin.org)
[](https://casbin.org)
Casbin Spring Boot Starter is designed to help you easily integrate [jCasbin](https://github.com/casbin/jcasbin) into
your Spring Boot project.
## How to use
1. Add `casbin-spring-boot-starter` to your Spring Boot project.
Maven:
```xml
<dependency>
<groupId>org.casbin</groupId>
<artifactId>casbin-spring-boot-starter</artifactId>
<version>version</version>
</dependency>
```
Gradle:
```groovy
implementation 'org.casbin:casbin-spring-boot-starter:version'
```
2. Inject the Enforcer where you need to use it
```java
@Component
public class Test {
@Autowired
private Enforcer enforcer;
}
```
3. Add configuration
```yaml
casbin:
#Whether to enable Casbin, it is enabled by default.
enableCasbin: true
#Whether to use thread-synchronized Enforcer, default false
useSyncedEnforcer: false
#Whether to enable automatic policy saving, if the adapter supports this function, it is enabled by default.
autoSave: true
#Storage type [file, jdbc], currently supported jdbc database [mysql (mariadb), h2, oracle, postgresql, db2]
#Welcome to write and submit the jdbc adapter you are using, see: org.casbin.adapter.OracleAdapter
#The jdbc adapter will actively look for the data source information you configured in spring.datasource
#Default use jdbc, and use the built-in h2 database for memory storage
storeType: jdbc
#Customized policy table name when use jdbc, casbin_rule as default.
tableName: casbin_rule
#Data source initialization policy [create (automatically create data table, no longer initialized if created), never (always do not initialize)]
initializeSchema: create
#Local model configuration file address, the default reading location: classpath: casbin/model.conf
model: classpath:casbin/model.conf
#If the model configuration file is not found in the default location and casbin.model is not set correctly, the built-in default rbac model is used, which takes effect by default.
useDefaultModelIfModelNotSetting: true
#Local policy configuration file address, the default reading location: classpath: casbin/policy.csv
#If the configuration file is not found in the default location, an exception will be thrown.
#This configuration item takes effect only when casbin.storeType is set to file.
policy: classpath:casbin/policy.csv
#Whether to enable the CasbinWatcher mechanism, the default is not enabled.
#If the mechanism is enabled, casbin.storeType must be jdbc, otherwise the configuration is invalid.
enableWatcher: false
#CasbinWatcher notification mode, defaults to use Redis for notification synchronization, temporarily only supports Redis
#After opening Watcher, you need to manually add spring-boot-starter-data-redis dependency.
watcherType: redis
exception:
... See Schedule A for exception settings.
```
4. The simplest configuration
- Do not use other add-on configurations
```yaml
casbin:
#If your model configuration file is at this location, no further configuration is required
model: classpath:casbin/model.conf
```
- Turn on Watcher
```yaml
casbin:
#If your model configuration file is at this location, you do not need this setting
model: classpath:casbin/model.conf
#Enabling the Watcher defaults to RedisWatcher, which requires manually adding the spring-boot-starter-data-redis dependency.
enableWatcher: true
```
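For orientation, a typical `casbin/model.conf` for RBAC (this is the stock Casbin RBAC model; your model may differ) looks like:

```ini
[request_definition]
r = sub, obj, act

[policy_definition]
p = sub, obj, act

[role_definition]
g = _, _

[policy_effect]
e = some(where (p.eft == allow))

[matchers]
m = g(r.sub, p.sub) && r.obj == p.obj && r.act == p.act
```

and a matching `casbin/policy.csv` (subjects and objects here are illustrative):

```csv
p, admin, data1, read
p, admin, data1, write
g, alice, admin
```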
5. Use a custom, independent data source
- Just add the ```@CasbinDataSource``` annotation when injecting your custom data source
```java
@Configuration
public class CasbinDataSourceConfiguration {
@Bean
@CasbinDataSource
public DataSource casbinDataSource() {
return DataSourceBuilder.create().url("jdbc:h2:mem:casbin").build();
}
}
```
##### Schedule A
- ExceptionSettings(casbin.exception)
| name | description | default |
|--------------------|--------------------------------------------------|---------|
| removePolicyFailed | Throws an exception when the delete policy fails | false |
##### Note: If you do not configure another data source or set a file storage location for H2, data is stored in memory by default using H2.
#### Notice:
Since version 0.0.11, casbin-spring-boot-starter adds an id field to the database table structure by default.
Upgrading from a version before 0.0.11 to 0.0.11 or later requires the user to manually add the id field.
See https://github.com/jcasbin/casbin-spring-boot-starter/issues/21 for details
| 1 |
thecodinglive/JPub-JavaWebService | here is book example | null | # Java Web Services with Spring Boot (스프링부트로 배우는 자바 웹 서비스)
This is the book's example code.
<img src="https://i.imgur.com/yfYoywG.jpg" width="350" height="450"/>
## Download
[Download the full source code as a zip file](https://github.com/thecodinglive/JPub-JavaWebService/archive/master.zip)
Table of Contents
CHAPTER 1 The Changing Development Environment and Java · 1
CHAPTER 2 Servlets · 11
CHAPTER 3 The Spring Framework · 49
CHAPTER 4 Spring Boot Web Development · 81
CHAPTER 5 Building a REST API Server · 117
CHAPTER 6 Spring Boot and Data · 149
CHAPTER 7 Custom Spring Boot Starters · 221
CHAPTER 8 Exception Handling and Testing · 249
CHAPTER 9 Deployment · 281
CHAPTER 10 Monitoring · 299
CHAPTER 11 Caching · 311
CHAPTER 12 Member Management · 341 | 1 |
timothyrenner/kafka-streams-ex | A collection of examples and use-cases for Kafka Streams | null | # Kafka Streams Examples
This repository contains examples of use cases (ranging from trivial to somewhat complex) of Kafka Streams.
Each example is in its own directory.
The repository contains the following examples:
* [Exclamation](https://github.com/timothyrenner/kafka-streams-ex/tree/master/exclamation): Trivial example that reads from the console consumer and appends two exclamation points.
* [Exclamation Advanced](https://github.com/timothyrenner/kafka-streams-ex/tree/master/exclamation-advanced): Slightly more complicated version of Exclamation that "alerts" on highly exclamated messages.
* [Hopping Windows](https://github.com/timothyrenner/kafka-streams-ex/tree/master/hopping-window): Example demonstrating the behavior of hopping windows by counting the elements on a single key.
* [Tumbling Windows](https://github.com/timothyrenner/kafka-streams-ex/tree/master/tumbling-window): Example demonstrating the behavior of tumbling windows by counting the elements on a single key.
* [Processor](https://github.com/timothyrenner/kafka-streams-ex/tree/master/processor): Example demonstrating the processor API, state stores, and custom serializers.
* [Instrumented Processor](https://github.com/timothyrenner/kafka-streams-ex/tree/master/processor-instrumented): A stripped down version of the processor example that logs the values in the state store - designed to run in two nodes (or just two terminals) to show what happens under failover conditions.
* [Not Looking at Facebook](https://github.com/timothyrenner/kafka-streams-ex/tree/master/not-looking-at-facebook): Implementation of a streaming pipeline for notifying users when they aren't looking at Facebook.
* [KTable](https://github.com/timothyrenner/kafka-streams-ex/tree/master/ktable): Literally a KTable.
* [Windowed Delay](https://github.com/timothyrenner/kafka-streams-ex/tree/master/windowed-delay): Demonstration of event-time ordering.
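The first example's core transformation is small enough to show inline; conceptually, each record value is mapped like this (a plain-Java stand-in for the `mapValues` step — the real example wires this into a Kafka Streams topology, and the class name here is illustrative):

```java
public class Exclamation {
    // The heart of the exclamation example: append two exclamation
    // points to each incoming record value (done with mapValues in
    // the actual Kafka Streams topology).
    static String exclaim(String value) {
        return value + "!!";
    }

    public static void main(String[] args) {
        System.out.println(exclaim("hello")); // hello!!
    }
}
```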
| 0 |
graalvm/graalvm-demos | This repository contains example applications to illustrate the different capabilities of GraalVM | graalvm graalvm-demos native-image | # GraalVM Demos
This repository contains demo applications and benchmarks written in Java, JavaScript, Python, and other languages.
These applications illustrate the diverse capabilities of [GraalVM](http://graalvm.org).
The demos are grouped by framework, programming language, or technology.
Each directory contains demo sources; the instructions on how to run a particular demo are in its _README.md_ file.
To get started, clone or download this repository, enter the demo directory, and follow steps in the _README.md_ file.
```
git clone https://github.com/graalvm/graalvm-demos.git
cd graalvm-demos
```
### GraalVM JDK and Native Image
<table>
<thead>
<tr>
<th align="left">Name</th>
<th align="left">Description</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" width="30%"><a href="/tiny-java-containers/">tiny-java-containers</a><br><a href="https://github.com/graalvm/graalvm-demos/actions/workflows/tiny-java-containers.yml"><img alt="tiny-java-containers" src="https://github.com/graalvm/graalvm-demos/actions/workflows/tiny-java-containers.yml/badge.svg" /></a>
<td align="left" width="70%">Demonstrates how to build very small Docker container images with GraalVM Native Image and various lightweight base images. <br><strong>Technologies: </strong> Native Image, musl libc<br><strong>Reference: </strong><a href="https://www.graalvm.org/22.0/reference-manual/native-image/StaticImages/">Static and Mostly Static Images</a></td>
</tr>
<tr>
<td align="left" width="30%"><a href="/hello-graal/">hello-graal</a><br><a href="https://github.com/graalvm/graalvm-demos/actions/workflows/hello-graal.yml"><img alt="hello-graal" src="https://github.com/graalvm/graalvm-demos/actions/workflows/hello-graal.yml/badge.svg" /></a></td>
<td align="left" width="70%">Demonstrates how to build native executables from a class file and a JAR file from the command line <br><strong>Technologies: </strong> Native Image <br><strong>Reference: </strong><a href="https://www.graalvm.org/dev/reference-manual/native-image/#build-a-native-executable">Native Image Getting Started</a></td>
</tr>
<tr>
<td align="left" width="30%"><a href="/java-hello-world-maven/">java-hello-world-maven</a><br><a href="https://github.com/graalvm/graalvm-demos/actions/workflows/java-hello-world-maven.yml"><img alt="java-hello-world-maven" src="https://github.com/graalvm/graalvm-demos/actions/workflows/java-hello-world-maven.yml/badge.svg" /></a></td>
<td align="left" width="70%">Demonstrates how to generate a native executable using the Native Build Tools Maven plugin <br><strong>Technologies: </strong>Native Image, Native Build Tools Maven plugin<br><strong>Reference: </strong><a href="https://docs.oracle.com/en/graalvm/jdk/21/docs/getting-started/oci/code-editor/">Oracle GraalVM in OCI Code Editor</a></td>
</tr>
<tr>
<td align="left" width="30%"><a href="/native-hello-module/">native-hello-module</a><br><a href="https://github.com/graalvm/graalvm-demos/actions/workflows/native-hello-module.yml"><img alt="native-hello-module" src="https://github.com/graalvm/graalvm-demos/actions/workflows/native-hello-module.yml/badge.svg" /></a></td>
<td align="left" width="70%">Demonstrates how to build a modular Java application into a native executable<br><strong>Technologies: </strong>Native Image, Maven<br><strong>Reference: </strong><a href="https://www.graalvm.org/dev/reference-manual/native-image/guides/build-java-modules-into-native-executable/">Build Java Modules into a Native Executable</a></td>
</tr>
<tr>
<td align="left" width="30%"><a href="/native-list-dir/">native-list-dir</a><br><a href="https://github.com/graalvm/graalvm-demos/actions/workflows/native-list-dir.yml"><img alt="native-list-dir" src="https://github.com/graalvm/graalvm-demos/actions/workflows/native-list-dir.yml/badge.svg" /></a></td>
<td align="left" width="70%">Demonstrates how to compile a CLI application into a native executable and then apply Profile-Guided Optimizations (PGO) for more performance gains<br><strong>Technologies: </strong>Native Image, PGO
</tr>
<tr>
<td align="left" width="30%"><a href="/java-simple-stream-benchmark/">java-simple-stream-benchmark</a><br><a href="https://github.com/graalvm/graalvm-demos/actions/workflows/java-simple-stream-benchmark.yml"><img alt="java-simple-stream-benchmark" src="https://github.com/graalvm/graalvm-demos/actions/workflows/java-simple-stream-benchmark.yml/badge.svg" /></a></td>
<td align="left" width="70%">Demonstrates how the Graal compiler can achieve better performance for highly abstracted programs like those using Streams, Lambdas<br><strong>Technologies: </strong>Graal compiler, C2<br><strong>Reference: </strong><a href="https://luna.oracle.com/lab/d502417b-df66-45be-9fed-a3ac8e3f09b1/steps#task-2-run-demos-java-microbenchmark-harness-jmh">Simple Java Stream Benchmark</a></td>
</tr>
<tr>
<td align="left" width="30%"><a href="/streams/">streams</a><br><a href="https://github.com/graalvm/graalvm-demos/actions/workflows/streams.yml"><img alt="streams" src="https://github.com/graalvm/graalvm-demos/actions/workflows/streams.yml/badge.svg" /></a></td>
<td align="left" width="70%">Demonstrates how GraalVM efficiently optimizes a Java Streams API application and how to apply PGO<br><strong>Technologies: </strong>Native Image, Native Build Tools Maven Plugin <br><strong>Reference: </strong><a href="https://www.graalvm.org/latest/reference-manual/native-image/guides/optimize-native-executable-with-pgo/">Optimize a Native Executable with Profile-Guided Optimizations</a></td>
</tr>
<tr>
<td align="left" width="30%"><a href="/fortune-demo/">fortune-demo</a><br><a href="https://github.com/graalvm/graalvm-demos/actions/workflows/fortune-demo.yml"><img alt="fortune-demo" src="https://github.com/graalvm/graalvm-demos/actions/workflows/fortune-demo.yml/badge.svg" /></a></td>
<td align="left" width="70%">A fortune teller Unix program. Run it in JIT, build a native executable, or build a mostly-static native executable, using Gradle or Maven build tools.<br><strong>Technologies: </strong>Native Image, Native Build Tools<br><strong>Reference: </strong><a href="https://www.graalvm.org/latest/reference-manual/native-image/guides/use-graalvm-dashboard/">Use GraalVM Dashboard to Optimize the Size of a Native Executable</a></td>
</tr>
<tr>
<td align="left" width="30%"><a href="/multithreading-demo/">multithreading-demo</a><br><a href="https://github.com/graalvm/graalvm-demos/actions/workflows/streams.yml"><img alt="streams" src="https://github.com/graalvm/graalvm-demos/actions/workflows/streams.yml/badge.svg" /></a></td>
<td align="left" width="70%">Demonstrates how to optimize a Java application that does synchronous and asynchronous threads execution<br><strong>Technologies: </strong>Native Image, Native Build Tools Maven plugin, GraalVM Dashboard <br><strong>Reference: </strong><a href="https://medium.com/graalvm/making-sense-of-native-image-contents-741a688dab4d">Making sense of Native Image contents</a></td>
</tr>
<tr>
<td align="left" width="30%"><a href="/native-image-configure-examples/">native-image-configure-examples</a><br><a href="https://github.com/graalvm/graalvm-demos/actions/workflows/streams.yml"><img alt="streams" src="https://github.com/graalvm/graalvm-demos/actions/workflows/streams.yml/badge.svg" /></a></td>
<td align="left" width="70%">Demonstrates how you can influence the classes initialization at the image build time<br><strong>Technologies: </strong>Native Image, Maven<br><strong>Reference: </strong><a href="https://medium.com/graalvm/understanding-class-initialization-in-graalvm-native-image-generation-d765b7e4d6ed">Understanding Class Initialization in GraalVM Native Image Generation</a></td>
</tr>
<tr>
<td align="left" width="30%"><a href="/native-netty-plot/">native-netty-plot</a><br><a href="https://github.com/graalvm/graalvm-demos/actions/workflows/native-netty-plot.yml"><img alt="native-netty-plot" src="https://github.com/graalvm/graalvm-demos/actions/workflows/native-netty-plot.yml/badge.svg" /></a></td>
<td align="left" width="70%">A web server application, using the Netty framework, to demonstrate the use of isolates with Native Image<br><strong>Technologies: </strong>Native Image, Maven, Netty<br><strong>Reference: </strong><a href="https://medium.com/graalvm/instant-netty-startup-using-graalvm-native-image-generation-ed6f14ff7692">Instant Netty Startup using GraalVM Native Image Generation</a></td>
</tr>
<tr>
<td align="left" width="30%"><a href="/javagdbnative/">javagdbnative</a><br><a href="https://github.com/graalvm/graalvm-demos/actions/workflows/javagdbnative.yml"><img alt="javagdbnative" src="https://github.com/graalvm/graalvm-demos/actions/workflows/javagdbnative.yml/badge.svg" /></a></td>
<td align="left" width="70%">Demonstrates how to debug a Java application, built into a native executable in VS Code<br><strong>Technologies: </strong>Native Image, Maven, GraalVM Tools for Java<br><strong>Reference: </strong><a href="https://medium.com/graalvm/native-image-debugging-in-vs-code-2d5dda1989c1">Native Image Debugging in VS Code</a></td>
</tr>
<tr>
<td align="left" width="30%"><a href="/native-image-logging-examples/">native-image-logging-examples</a><br><a href="https://github.com/graalvm/graalvm-demos/actions/workflows/native-image-logging-examples.yml"><img alt="native-image-logging-examples" src="https://github.com/graalvm/graalvm-demos/actions/workflows/native-image-logging-examples.yml/badge.svg" /></a></td>
<td align="left" width="70%">Demonstrates how to initialize Loggers with Native Image at the executable build or run time<br><strong>Technologies: </strong> Native Image<br><strong>Reference: </strong><a href="https://www.graalvm.org/latest/reference-manual/native-image/guides/add-logging-to-native-executable/">Add Logging to a Native Executable</a></td>
</tr>
<tr>
<td align="left" width="30%"><a href="/native-jfr-demo/">native-jfr-demo</a><br><a href="https://github.com/graalvm/graalvm-demos/actions/workflows/native-jfr-demo.yml"><img alt="native-jfr-demo" src="https://github.com/graalvm/graalvm-demos/actions/workflows/native-jfr-demo.yml/badge.svg"/></a></td>
<td align="left" width="70%">Demonstrates how to create a custom JDK Flight Recorder (JFR) event and use that in a native executable<br><strong>Technologies: </strong> Native Image, JFR, VisualVM <br><strong>Reference: </strong><a href="https://www.graalvm.org/latest/reference-manual/native-image/guides/build-and-run-native-executable-with-jfr/">Build and Run Native Executables with JFR</a></td>
</tr>
<tr>
<td align="left" width="30%"><a href="/native-shared-library/">native-shared-library</a><br><a href="https://github.com/graalvm/graalvm-demos/actions/workflows/native-shared-library.yml"><img alt="native-shared-library" src="https://github.com/graalvm/graalvm-demos/actions/workflows/native-shared-library.yml/badge.svg"/></a></td>
<td align="left" width="70%">Demonstrates how to create a Java class library, use Native Image to create a native shared library, and then create a small C application that uses that shared library<br><strong>Technologies: </strong> Native Image, LLVM toolchain <br><strong>Reference: </strong><a href="https://www.graalvm.org/latest/reference-manual/native-image/guides/build-native-shared-library/">Build a Native Shared Library</a></td>
</tr>
<tr>
<td align="left" width="30%"><a href="/native-image-reflection-example/">native-image-reflection-example</a><br><a href="https://github.com/graalvm/graalvm-demos/actions/workflows/native-image-reflection-example.yml"><img alt="native-image-reflection-example" src="https://github.com/graalvm/graalvm-demos/actions/workflows/native-image-reflection-example.yml/badge.svg"/></a></td>
<td align="left" width="70%">Demonstrates how to provide metadata for Native Image in the form of JSON configuration files using a tracing agent<br><strong>Technologies: </strong> Native Image</td>
</tr>
<tr>
<td align="left" width="30%"><a href="/native-static-images/">native-static-images</a><br><a href="https://github.com/graalvm/graalvm-demos/actions/workflows/native-static-images.yml"><img alt="native-static-images" src="https://github.com/graalvm/graalvm-demos/actions/workflows/native-static-images.yml/badge.svg"/></a></td>
<td align="left" width="70%">Demonstrates how to build a fully static and a mostly-static native executable.<br><strong>Technologies: </strong> Native Image <br><strong>Reference: </strong><a href="https://www.graalvm.org/latest/reference-manual/native-image/guides/build-static-executables/">Build a Statically Linked or Mostly-Statically Linked Native Executable</a></td>
</tr>
<tr>
<td align="left" width="30%"><a href="/native-heapdump-examples/">native-heapdump-examples</a><br><a href="https://github.com/graalvm/graalvm-demos/actions/workflows/native-heapdump-examples.yml"><img alt="native-heapdump-examples" src="https://github.com/graalvm/graalvm-demos/actions/workflows/native-heapdump-examples.yml/badge.svg"/></a></td>
<td align="left" width="70%">Demonstrates different ways to generate a heap dump from a running native executable.<br><strong>Technologies: </strong> Native Image, VisualVM <br><strong>Reference: </strong><a href="https://www.graalvm.org/latest/reference-manual/native-image/guides/create-heap-dump/">Create a Heap Dump from a Native Executable</a></td>
</tr>
<tr>
<td align="left" width="30%"><a href="/native-image-jmx-demo/">native-image-jmx-demo</a><br><a href="https://github.com/graalvm/graalvm-demos/actions/workflows/native-image-jmx-demo.yml"><img alt="native-image-jmx-demo" src="https://github.com/graalvm/graalvm-demos/actions/workflows/native-image-jmx-demo.yml/badge.svg"/></a></td>
<td align="left" width="70%">This demo covers the steps required to build, run, and interact with a native executable using JMX.<br><strong>Technologies: </strong> Native Image, JMX, VisualVM <br><strong>Reference: </strong><a href="https://www.graalvm.org/dev/reference-manual/native-image/guides/build-and-run-native-executable-with-remote-jmx/">Build and Run Native Executables with Remote JMX</a></td>
</tr>
</tbody>
</table>
### Native Image on Cloud Platforms
<table>
<thead>
<tr>
<th align="left">Name</th>
<th align="left">Description</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" width="30%"><a href="/native-aws-fargate/">native-aws-fargate</a><br><a href="https://github.com/graalvm/graalvm-demos/actions/workflows/native-aws-fargate.yml"><img alt="native-aws-fargate" src="https://github.com/graalvm/graalvm-demos/actions/workflows/native-aws-fargate.yml/badge.svg"/></a></td>
<td align="left" width="70%">This demo covers the steps required to create a container image of a native executable application and deploy the image on AWS Fargate.<br><strong>Technologies: </strong> Native Image, Apache Maven, Docker, AWS Fargate <br>
</tr>
<tr>
<td align="left" width="30%"><a href="/native-aws-lambda/">native-aws-lambda</a><br><a href="https://github.com/graalvm/graalvm-demos/actions/workflows/native-aws-lambda.yml"><img alt="native-aws-lambda" src="https://github.com/graalvm/graalvm-demos/actions/workflows/native-aws-lambda.yml/badge.svg"/></a></td>
<td align="left" width="70%">This demo covers the steps required to deploy a native executable application on AWS Lambda.<br><strong>Technologies: </strong> Native Image, Apache Maven, Docker, AWS Lambda <br>
</tr>
<tr>
<td align="left" width="30%"><a href="/native-google-cloud-run/">native-google-cloud-run</a><br><a href="https://github.com/graalvm/graalvm-demos/actions/workflows/native-google-cloud-run.yml"><img alt="native-google-cloud-run" src="https://github.com/graalvm/graalvm-demos/actions/workflows/native-google-cloud-run.yml/badge.svg"/></a></td>
<td align="left" width="70%">This demo covers the steps required to create a container image of a native executable application and deploy the image on Google Cloud Run.<br><strong>Technologies: </strong> Native Image, Apache Maven, Docker, Google Cloud CLI, Google Cloud Run <br>
</tr>
<tr>
<td align="left" width="30%"><a href="/native-oci-container-instances/">native-oci-container-instances</a><br><a href="https://github.com/graalvm/graalvm-demos/actions/workflows/native-oci-container-instances.yml"><img alt="native-oci-container-instances" src="https://github.com/graalvm/graalvm-demos/actions/workflows/native-oci-container-instances.yml/badge.svg" /></a></td>
<td align="left" width="70%">This demo covers the steps required to create a container image of a native executable application and deploy the image on OCI Container Instances.<br><strong>Technologies: </strong> Native Image, Apache Maven, Docker, OCI Container Instances<br></td>
</tr>
</tbody>
</table>
### Java on Truffle (Espresso)
<table>
<thead>
<tr>
<th align="left">Name</th>
<th align="left">Description</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" width="30%"><a href="/espresso-jshell/">espresso-jshell</a><br><a href="https://github.com/graalvm/graalvm-demos/actions/workflows/espresso-jshell.yml"><img alt="espresso-jshell" src="https://github.com/graalvm/graalvm-demos/actions/workflows/espresso-jshell.yml/badge.svg" /></a></td>
<td align="left" width="70%">Demonstrates how to build a native executable of JShell, that executes the dynamically generated bytecodes on Espresso<br><strong>Technologies: </strong>Java on Truffle, Native Image, JShell<br><strong>Reference: </strong><a href="https://www.graalvm.org/dev/reference-manual/java-on-truffle/demos/#mixing-aot-and-jit-for-java">Mixing AOT and JIT for Java</a>, <a href="https://medium.com/graalvm/java-on-truffle-going-fully-metacircular-215531e3f840">Java on Truffle — Going Fully Metacircular</a></td>
</tr>
</tbody>
</table>
### Micronaut
<table>
<thead>
<tr>
<th align="left">Name</th>
<th align="left">Description</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" width="30%"><a href="/micronaut-hello-rest-maven/">micronaut-hello-rest-maven</a><br><a href="https://github.com/graalvm/graalvm-demos/actions/workflows/micronaut-hello-rest-maven.yml"><img alt="micronaut-hello-rest-maven" src="https://github.com/graalvm/graalvm-demos/actions/workflows/micronaut-hello-rest-maven.yml/badge.svg" /></a></td>
<td align="left" width="70%">Demonstrates how to package a Micronaut REST application into a native executable with Native Build Tools Maven plugin<br><strong>Technologies: </strong>Native Image, Micronaut, Native Build Tools Maven plugin<br><strong>Reference: </strong><a href="https://github.com/oracle-devrel/oci-code-editor-samples/tree/main/java-samples/graalvmee-java-micronaut-hello-rest">Try in OCI Code Editor</a></td>
</tr>
</tbody>
</table>
### Spring Boot
<table>
<thead>
<tr>
<th align="left">Name</th>
<th align="left">Description</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" width="30%"><a href="/spring-native-image/">spring-native-image</a><br><a href="https://github.com/graalvm/graalvm-demos/actions/workflows/spring-native-image.yml"><img alt="spring-native-image" src="https://github.com/graalvm/graalvm-demos/actions/workflows/spring-native-image.yml/badge.svg" /></a></td>
<td align="left" width="70%">Demonstrates how to compile a Spring Boot application into a native executable using the Native Build Tools Maven plugin and a Maven profile <br> <strong>Technologies: </strong>Spring Boot, Native Image, Native Build Tools Maven plugin <br><strong>Reference: </strong><a href="https://luna.oracle.com/lab/fdfd090d-e52c-4481-a8de-dccecdca7d68/steps">GraalVM Native Image, Spring and Containerisation</a>, <a href="https://docs.oracle.com/en/graalvm/jdk/21/docs/getting-started/oci/cloud-shell/">Oracle GraalVM in OCI Cloud Shell</a></td>
</tr>
<tr>
<td align="left" width="30%"><a href="/spring-r/">spring-r</a><br><a href="https://github.com/graalvm/graalvm-demos/actions/workflows/spring-r.yml"><img alt="spring-r" src="https://github.com/graalvm/graalvm-demos/actions/workflows/spring-r.yml/badge.svg" /></a></td>
<td align="left" width="70%">Demonstrates GraalVM's polyglot feature by loading an R script into a Java host application
<br><strong>Technologies: </strong> Spring, FastR <br><strong>Reference: </strong><a href="https://medium.com/graalvm/enhance-your-java-spring-application-with-r-data-science-b669a8c28bea">Enhance your Java Spring application with R data science</a></td>
</tr>
</tbody>
</table>
### Helidon
<table>
<thead>
<tr>
<th align="left">Name</th>
<th align="left">Description</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" width="30%"><a href="/js-java-async-helidon/">js-java-async-helidon</a><br><a href="https://github.com/graalvm/graalvm-demos/actions/workflows/js-java-async-helidon.yml"><img alt="js-java-async-helidon" src="https://github.com/graalvm/graalvm-demos/actions/workflows/js-java-async-helidon.yml/badge.svg" /></a></td>
<td align="left" width="70%">An HTTP web service that demonstrates how multiple JavaScript contexts can be executed in parallel to handle asynchronous operations with Helidon in Java <br><strong>Technologies: </strong>Native Image, Helidon, Native Build Tools Maven plugin <br><strong>Reference: </strong><a href="https://medium.com/graalvm/asynchronous-polyglot-programming-in-graalvm-javascript-and-java-2c62eb02acf0">Asynchronous Polyglot Programming in GraalVM Using Helidon and JavaScript</a></td>
</tr>
</tbody>
</table>
### Scala
<table>
<thead>
<tr>
<th align="left">Name</th>
<th align="left">Description</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" width="30%"><a href="/scalac-native/">scalac-native</a></td>
<td align="left" width="70%">Demonstrates how to build a native executable of the Scala compiler. The resulting binary has no dependencies on the JDK. <br><strong>Technologies: </strong>Scala 2.12.x, Native Image <br><strong>Reference: </strong><a href="https://medium.com/graalvm/compiling-scala-faster-with-graalvm-86c5c0857fa3">Compiling Scala Faster with GraalVM</a></td>
</tr>
</tbody>
</table>
### Kotlin
<table>
<thead>
<tr>
<th align="left">Name</th>
<th align="left">Description</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" width="30%"><a href="/java-kotlin-aot/">java-kotlin-aot</a><br><a href="https://github.com/graalvm/graalvm-demos/actions/workflows/java-kotlin-aot.yml"><img alt="java-kotlin-aot" src="https://github.com/graalvm/graalvm-demos/actions/workflows/java-kotlin-aot.yml/badge.svg" /></a></td>
<td align="left" width="70%">Demonstrates how to interoperate between Java and Kotlin and build a native executable <br><strong>Technologies: </strong>Native Image, Kotlin, Maven</td>
</tr>
</tbody>
</table>
### Python
<table>
<thead>
<tr>
<th align="left">Name</th>
<th align="left">Description</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" width="30%"><a href="/graalpy-notebook-example/">graalpy-notebook-example</a><br><a href="https://github.com/graalvm/graalvm-demos/actions/workflows/graalpy-notebook-example.yml"><img alt="graalpy-notebook-example" src="https://github.com/graalvm/graalvm-demos/actions/workflows/graalpy-notebook-example.yml/badge.svg" /></a></td>
<td align="left" width="70%">Demonstrates how to embed Python in a Java application. It creates a Python `venv`, and installs required Python packages through a Maven configuration. <br><strong>Technologies: </strong>GraalPy</td>
</tr>
<tr>
<td align="left" width="30%"><a href="/graalpy-embedding-demo/">graalpy-embedding-demo</a><br><a href="https://github.com/graalvm/graalvm-demos/actions/workflows/graalpy-embedding-demo.yml"><img alt="graalpy-embedding-demo" src="https://github.com/graalvm/graalvm-demos/actions/workflows/graalpy-embedding-demo.yml/badge.svg" /></a></td>
<td align="left" width="70%">Demonstrates how to embed GraalPy in a Java application using Maven. <br><strong>Technologies: </strong>GraalPy</td>
</tr>
</tbody>
</table>
### Polyglot
<table>
<thead>
<tr>
<th align="left">Name</th>
<th align="left">Description</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" width="30%"><a href="/polyglot-chat-app/">polyglot-chat-app</a><br><a href="https://github.com/graalvm/graalvm-demos/actions/workflows/polyglot-chat-app.yml"><img alt="polyglot-chat-app" src="https://github.com/graalvm/graalvm-demos/actions/workflows/polyglot-chat-app.yml/badge.svg" /></a></td>
<td align="left" width="70%">Demonstrates how to build a polyglot chat application by embedding Python and R into the Java host language <br><strong>Technologies: </strong>Java, GraalPy, FastR, Micronaut</td>
</tr>
<tr>
<td align="left" width="30%"><a href="/polyglot-debug/">polyglot-debug</a><br><a href="https://github.com/graalvm/graalvm-demos/actions/workflows/polyglot-debug.yml"><img alt="polyglot-debug" src="https://github.com/graalvm/graalvm-demos/actions/workflows/polyglot-debug.yml/badge.svg" /></a></td>
<td align="left" width="70%">Demonstrates how to debug a polyglot Java and JavaScript application using GraalVM Tools for Java in VS Code <br><strong>Technologies: </strong>Java, JavaScript, Maven, GraalVM Extension Pack</td>
</tr>
<tr>
<td align="left" width="30%"><a href="/polyglot-javascript-java-r/">polyglot-javascript-java-r</a><br><a href="https://github.com/graalvm/graalvm-demos/actions/workflows/polyglot-javascript-java-r.yml"><img alt="polyglot-javascript-java-r" src="https://github.com/graalvm/graalvm-demos/actions/workflows/polyglot-javascript-java-r.yml/badge.svg" /></a></td>
<td align="left" width="70%">Demonstrates the polyglot capabilities of GraalVM and how to run a JavaScript-Java-R application <br><strong>Technologies: </strong>JavaScript, Node.js, Java, R <br><strong>Reference: </strong><a href="https://medium.com/graalvm/graalvm-ten-things-12d9111f307d#656f">Top 10 Things To Do With GraalVM</a></td>
</tr>
<tr>
<td align="left" width="30%"><a href="/functionGraphDemo/">functionGraphDemo</a><br><a href="https://github.com/graalvm/graalvm-demos/actions/workflows/functionGraphDemo.yml"><img alt="functionGraphDemo" src="https://github.com/graalvm/graalvm-demos/actions/workflows/functionGraphDemo.yml/badge.svg" /></a></td>
<td align="left" width="70%">Demonstrates how to run a polyglot JavaScript-Java-R application on the GraalVM Node.js runtime <br><strong>Technologies: </strong>JavaScript, Node.js, Java, R</td>
</tr>
</tbody>
</table>
## Compatibility
The demos are normal applications and benchmarks written in Java, JavaScript, Python, and other languages, so they are compatible with any virtual machine capable of running those languages.
These demos are [tested against the latest GraalVM release using GitHub Actions](https://github.com/graalvm/graalvm-demos/actions/workflows/main.yml). If you come across an issue, please submit it [here](https://github.com/graalvm/graalvm-demos/issues).
## License
Unless specified otherwise, all code in this repository is licensed under the [Universal Permissive License (UPL)](http://opensource.org/licenses/UPL).
Note that the submodule `fastR-examples`, which is a reference to the [graalvm/examples](https://github.com/graalvm/examples) repository, has a separate license.
## Learn More
* [GraalVM website](https://www.graalvm.org)
* [Graal project on GitHub](https://github.com/oracle/graal/tree/master/compiler)
* [GraalVM blog](https://medium.com/graalvm)
| 1 |
PacktPublishing/Spring-5.0-By-Example | Spring 5.0 By Example, published by Packt | null |
# Spring 5.0 By Example
This is the code repository for [Spring 5.0 By Example](https://www.packtpub.com/application-development/spring-50-example?utm_source=github&utm_medium=repository&utm_campaign=9781788624398), published by [Packt](https://www.packtpub.com/?utm_source=github). It contains all the supporting project files necessary to work through the book from start to finish.
## About the Book
Spring makes application development extremely simple and also improves developer productivity by reducing initial configuration time and providing tools to increase efficiency.
In the first part of the book, we will learn how to construct a CMS Portal using Spring's support for building REST APIs. We will also integrate these APIs with AngularJS. We will later develop this application in a reactive fashion by using Project Reactor, Spring WebFlux and Spring Data.
In the second part, we will build an amazing messaging application, which will consume the Twitter API and perform some filtering and transformations. We will then play around with Server Sent Events and explore Spring’s support for Kotlin which makes application development quick and efficient.
In the last part, we will build a real microservice application by using the most important techniques and patterns such as service discovery, circuit breakers, security, data streams, monitoring, and a lot more from this architectural style.
By the end of the book, you will be comfortable with the concepts of Spring Boot and Spring Cloud.
## Instructions and Navigation
All of the code is organized into folders. Each folder starts with a number followed by the application name. For example, Chapter02.
Chapter01 does not contain code files.
All the remaining chapters contain code files present in their respective folders.
The code will look like the following:
```
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-amqp</artifactId>
</dependency>
```
Readers are expected to have a basic knowledge of Java. Familiarity with distributed systems is an added advantage.
To execute the code files in this book, you need the following software/dependencies:
* IntelliJ IDEA Community Edition
* Docker CE
* pgAdmin
* Docker Compose
The book assists you with installation processes and related setup as you go.
## Related Products
* [Mastering Spring 5.0](https://www.packtpub.com/application-development/mastering-spring-50?utm_source=github&utm_medium=repository&utm_campaign=9781787123175)
* [Spring 5.0 Microservices - Second Edition](https://www.packtpub.com/application-development/spring-50-microservices-second-edition?utm_source=github&utm_medium=repository&utm_campaign=9781787127685)
* [Spring 5.0 Cookbook](https://www.packtpub.com/application-development/spring-50-cookbook?utm_source=github&utm_medium=repository&utm_campaign=9781787128316)
### Download a free PDF
<i>If you have already purchased a print or Kindle version of this book, you can get a DRM-free PDF version at no cost.<br>Simply click on the link to claim your free PDF.</i>
<p align="center"> <a href="https://packt.link/free-ebook/9781788624398">https://packt.link/free-ebook/9781788624398 </a> </p> | 0 |
odnoklassniki/jvm-serviceability-examples | Sample code for the presentation on JVM Serviceability Tools | null | null | 1 |
mianshenglee/spring-batch-example | example for spring batch | null |
## Project Description
Since I needed to batch-process data, I learned and developed with `Spring Batch`. This project (`spring-batch-example`) aims to provide examples of batch processing based on `Spring Batch`. Each example targets a specific problem, so that `Spring Batch` users can learn more easily by driving learning through practice. Everyone is welcome to exchange ideas, `fork` the project, and add more examples.
## Example List
The current list of examples is as follows:
- [spring-batch-helloworld](https://github.com/mianshenglee/spring-batch-example/tree/master/spring-batch-helloworld)
- [spring-batch-file2db](https://github.com/mianshenglee/spring-batch-example/tree/master/spring-batch-file2db)
- [spring-batch-db2db](https://github.com/mianshenglee/spring-batch-example/tree/master/spring-batch-db2db)
- [spring-batch-beetlsql](https://github.com/mianshenglee/spring-batch-example/tree/master/spring-batch-beetlsql)
- [spring-batch-increment](https://github.com/mianshenglee/spring-batch-example/tree/master/spring-batch-increment)
- [spring-batch-xxl-executor](https://github.com/mianshenglee/spring-batch-example/tree/master/spring-batch-xxl-executor) and [xxl-job](https://github.com/mianshenglee/spring-batch-example/tree/master/xxl-job)
- [spring-batch-mysql2mongo](https://github.com/mianshenglee/spring-batch-example/tree/master/spring-batch-mysql2mongo)
- [spring-batch-param](https://github.com/mianshenglee/spring-batch-example/tree/master/spring-batch-param)
The examples are described below:
### `spring-batch-helloworld`
What it does: a very simple example that reads a string array, converts it to upper case, and prints it to the console. Small as it is, it covers all the essentials; through this example you can gain a basic understanding of `Spring Batch`.
- Companion article: [A quick look at the components - Spring Batch (2): Hello World][2]
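The hello-world example's read-process-write cycle can be sketched in plain Java. This shows only the underlying chunk-processing pattern, not the real Spring Batch `ItemReader`/`ItemProcessor`/`ItemWriter` API; all names below are illustrative:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.function.Function;

public class ChunkPatternDemo {

    // Stand-in for Spring Batch's read -> process -> write loop
    static <I, O> List<O> runStep(Iterator<I> reader, Function<I, O> processor) {
        List<O> written = new ArrayList<>();
        while (reader.hasNext()) {
            written.add(processor.apply(reader.next()));
        }
        return written;
    }

    public static void main(String[] args) {
        // Read strings, convert to upper case, write to the console
        List<String> out = runStep(List.of("hello", "world").iterator(), String::toUpperCase);
        out.forEach(System.out::println); // prints HELLO, then WORLD
    }
}
```

In the real example, Spring Batch drives this loop for you and adds chunked transactions, restartability, and job metadata on top.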
### `spring-batch-file2db`
What it does: reads data from a text file, converts it into `User` entities, and writes it to a database for storage. Through this example you can get to know the default `Spring Batch` components for reading files and writing to a database.
- Companion article: [Quick start with the components - Spring Batch (3): reading file data into a database][3]
### `spring-batch-db2db`
What it does: reads data from a database, converts it into `User` entities, and writes it to another database for storage. Through this example you can learn about multi-data-source configuration and the default `Spring Batch` components for reading from and writing to a database.
- Companion article: [Showdown with the database - Spring Batch (4): database to database][4]
### `spring-batch-beetlsql`
What it does: the same as `spring-batch-db2db`, except that the database reader and writer components are replaced with `BeetlSql`, which is simpler and more flexible.
- Companion article: [Convenient data read/write - Spring Batch (5): reading and writing data with BeetlSql][5]
### `spring-batch-increment`
What it does: incremental data synchronization, combining `Spring Batch` and `BeetlSql` to implement timestamp-based incremental sync.
- Companion article: [Incremental sync - Spring Batch (6): dynamic parameter binding and incremental synchronization][6]
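The timestamp-based idea can be illustrated in plain Java. The `Row` type and its `updatedAt` epoch-millis field are hypothetical stand-ins for a synced table; the actual example binds the last sync time as a dynamic SQL parameter instead of filtering in memory:

```java
import java.util.List;
import java.util.stream.Collectors;

public class IncrementalSyncDemo {

    // A minimal stand-in for a synced table row (illustrative only)
    static class Row {
        final long id;
        final long updatedAt; // epoch millis of the last modification

        Row(long id, long updatedAt) {
            this.id = id;
            this.updatedAt = updatedAt;
        }
    }

    // Keep only rows modified after the last successful sync
    static List<Row> changedSince(List<Row> rows, long lastSyncMillis) {
        return rows.stream()
                .filter(r -> r.updatedAt > lastSyncMillis)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Row> rows = List.of(new Row(1, 1_000L), new Row(2, 5_000L));
        // Only the row modified after the last sync (at t=2000) is selected
        System.out.println(changedSince(rows, 2_000L).size()); // prints 1
    }
}
```

After each run, the job records the new high-water-mark timestamp so the next run only picks up later changes.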
### `spring-batch-xxl-executor` and `xxl-job`
What it does: builds on incremental sync to combine enterprise-grade data synchronization with a scheduling framework, using `xxl-job` for task scheduling and for inspecting the sync results.
- Companion article: [Scheduling and monitoring - Spring Batch (7): batch processing with xxl-job][7]
### `spring-batch-mysql2mongo`
What it does: uses the Mongo components to implement mysql --> mongodb data synchronization.
- Companion article: [Mongo sync - Spring Batch (8): using the Mongo reader and writer components][8]
### `spring-batch-param`
What it does: methods for passing data and parameters within Spring Batch.
- Companion article: [Data sharing - Spring Batch (9): execution context handling][9]
## Using the Examples
The examples are Java projects based on Spring Boot and use Maven for dependency management, so you can simply import them as Maven projects into a development tool such as `eclipse` or `idea`. A few points to note:
1. After importing the project, use Maven to manage dependencies and download the required jars, especially the Spring Batch ones.
2. The examples go together with the companion articles; read the article first, then run the example.
3. Some examples require database scripts to run, so before running them, create the database and tables and add test data using the provided SQL scripts.
## Article List
The example code in this project corresponds to my [`Spring Batch` article series](https://mianshenglee.github.io/). Each example can run independently; readers can read the articles and learn alongside the code examples.
- [A powerful tool for batch data processing - Spring Batch (1): introduction and use cases][1]
- [A quick look at the components - Spring Batch (2): Hello World][2]
- [Quick start with the components - Spring Batch (3): reading file data into a database][3]
- [Showdown with the database - Spring Batch (4): database to database][4]
- [Convenient data read/write - Spring Batch (5): reading and writing data with BeetlSql][5]
- [Incremental sync - Spring Batch (6): dynamic parameter binding and incremental synchronization][6]
- [Scheduling and monitoring - Spring Batch (7): batch processing with xxl-job][7]
- [Mongo sync - Spring Batch (8): using the Mongo reader and writer components][8]
- [Data sharing - Spring Batch (9): execution context handling][9]
## Contact Me
You can reach me in any of the following ways:
- [Open an issue in the project](https://github.com/mianshenglee/spring-batch-example/issues): `https://github.com/mianshenglee/spring-batch-example/issues`
- WeChat official account:
- [My blog](https://mianshenglee.github.io/): `https://mianshenglee.github.io/`
[1]: https://mianshenglee.github.io/2019/06/04/springbatch(1).html
[2]: https://mianshenglee.github.io/2019/06/07/spring-batch(2).html
[3]: https://mianshenglee.github.io/2019/06/08/spring-batch(3).html
[4]: https://mianshenglee.github.io/2019/06/09/spring-batch(4).html
[5]: https://mianshenglee.github.io/2019/06/10/spring-batch(5).html
[6]: https://mianshenglee.github.io/2019/06/11/spring-batch(6).htm
[7]: https://mianshenglee.github.io/2019/06/12/spring-batch(7).html
[8]: https://mianshenglee.github.io/2019/08/09/spring-batch(8).html
[9]: https://mianshenglee.github.io/2020/11/30/spring-batch(9).html | 1 |
readlearncode/Java-EE-8-Sampler | Code examples demonstrating the new capabilities of Java EE 8 | javaee8 | # Java-EE-8-Sampler
Code examples demonstrating the new capabilities of Java EE 8
| 0 |
healenium/healenium-example-maven | Test automation examples on Java with Maven. | null | # healenium-example-maven
Java + Maven + JUnit 5 project with a Healenium usage example
### To setup Healenium see the tutorial: https://www.youtube.com/watch?v=Ed5HyfwZhq4
## How to start
### 1. Start the Healenium backend from the infra folder
```cd infra```
```docker-compose up -d```
To download this file into your project use this command:
```$ curl https://raw.githubusercontent.com/healenium/healenium-example-maven/master/infra/docker-compose.yml -o docker-compose.yml```
Create a /db/sql folder at the same level in your project. Add the init.sql file into the ./db/sql folder in your project via this command:
```$ curl https://raw.githubusercontent.com/healenium/healenium-client/master/example/init.sql -o init.sql```
Verify that the images ```healenium/hlm-backend:3.4.1```, ```postgres:11-alpine```, and ```healenium/hlm-selector-imitator:1.2``` are up and running
### 2. Project structure
```
|__infra
|__db/sql
|__init.sql
|__docker-compose.yml
|__src/main/java/
|__src/test/java/
|__pom.xml
```
### 3. Run tests in the terminal with Maven
In the ```BaseTest.java``` class, select the necessary driver (**LOCAL**, **PROXY**, or **REMOTE**) and the browser to run: chrome, firefox, or edge.
```driver = new DriverContext(DriverType.LOCAL).getDriver(BrowserType.CHROME);```
**LOCAL** - used for local runs. It is set by default in the BaseTest.java class. For this driver, use the docker-compose file from the test example.
**PROXY** - used if you're running tests through healenium-proxy. For this driver, configure the docker-compose containers as in the example at this link:
https://github.com/healenium/healenium-example-dotnet/blob/master/infra/docker-compose.yml
**REMOTE** - used if you're running tests on a remote machine. Do not forget to provide the necessary host. This test example uses a remote machine with Selenoid.
In the ```BaseTest.java``` class, select the necessary framework: **SELENIUM** or **SELENIDE**.
```pages = new FrameworkContext(FrameworkType.SELENIDE, driver).setFramework();```
If you want to execute all tests, please use the command: ```mvn clean test```
### 4. After test execution you should see the generated report link in the command-line logs

The report contains only healed locators, with old and new values and a button indicating whether the healing was successful, for further algorithm corrections

### 5. Screenshots
You can also take screenshots in your tests, as implemented here in `BaseTest.screenshot`:
```
public byte[] screenshot() {
return ((TakesScreenshot) driver.getDelegate()).getScreenshotAs(OutputType.BYTES);
}
```
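The returned byte array can then be persisted, for example attached to a report or written to disk. A minimal sketch follows; the file name and the placeholder bytes are arbitrary, and in a real test the bytes would come from `getScreenshotAs(OutputType.BYTES)`:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class ScreenshotSaver {

    // Write the raw PNG bytes returned by getScreenshotAs(OutputType.BYTES) to disk
    static Path save(byte[] pngBytes, Path target) throws IOException {
        return Files.write(target, pngBytes);
    }

    public static void main(String[] args) throws IOException {
        byte[] placeholder = {1, 2, 3}; // stand-in for real screenshot bytes
        Path saved = save(placeholder, Path.of("screenshot.png"));
        System.out.println(Files.size(saved)); // prints 3
    }
}
```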
### 6. @DisableHealing annotation
If you don't want to use Healenium in some methods, just use the @DisableHealing annotation.
> A usage example can be found in `MainPageWithFindBy.checkLocatorTestButtonDontHealing`.

### 7. Plugin Healenium for Intellij IDE
To update broken locators, you can use the "Healenium" plugin for IntelliJ IDE (https://plugins.jetbrains.com/plugin/14178-healenium).
With this plugin you can update your locators:
* on class level

* or on variable level


| 0 |
ttulka/ddd-example-ecommerce | Domain-driven design example in Java with Spring framework | architecture ddd design domain-driven-design event-driven example hexagonal-architecture high-cohesion java low-coupling modular-monolith ood oop rich-domain-model screaming-architecture service-oriented-architecture services soa spring spring-boot | # DDD Example Project in Java: eCommerce
The purpose of this project is to provide a sample implementation of an e-commerce product following **Domain-Driven Design (DDD)** and **Service-Oriented Architecture (SOA)** principles.
Programming language is Java with heavy use of Spring framework.
```sh
# build
./mvnw clean install
# run
./mvnw spring-boot:run
# open in browser http://localhost:8080
```
## Table of Contents
- [Domains](#domains)
+ [Core Domain](#core-domain)
+ [Supporting Subdomains](#supporting-subdomains)
+ [Event Workflow](#event-workflow)
+ [Services Dependencies](#services-dependencies)
- [Architectural Overview](#architectural-overview)
+ [Screaming Architecture](#screaming-architecture)
+ [Packaging](#packaging)
+ [Assembling](#assembling)
+ [Anatomy of a Service](#anatomy-of-a-service)
- [Conclusion](#conclusion)
+ [Where to Next](#where-to-next)
## Domains
Several [Business Capabilities][vcha] have been identified:
[vcha]: http://bill-poole.blogspot.com/2008/07/value-chain-analysis.html
### Core Domain
- **Sales**
- put a product for sale
- categorize a product
- update a product
- change a product price
- validate an order
- place an order
### Supporting Subdomains
- **Warehouse**
- stack goods
- fetch goods for shipping
- **Billing**
- collect a payment
- **Shipping**
- dispatch a delivery
Later, we can think about more supporting domains (not implemented in this project):
- **Marketing**
- discount a product
- promote a product
- **User Reviews**
- add a product review
- **Customer Care**
- resolve a complain
- answer a question
- provide help
- loyalty program
The e-commerce system is a web application using a **Portal** component implementing the [Backends For Frontends (BFF)][bff] pattern.
The idea of [Microfrontends][microf] is implemented in an [alternative branch](https://github.com/ttulka/ddd-example-ecommerce/tree/microfrontend).
[bff]: https://samnewman.io/patterns/architectural/bff/
[microf]: https://martinfowler.com/articles/micro-frontends.html
### Event Workflow
The communication among domains is implemented via events:

When the customer places an order the following process starts up (the happy path):
1. Shipping prepares a new delivery.
1. Sales creates a new order and publishes the `OrderPlaced` event.
1. Shipping accepts the delivery.
1. Billing collects payment for the order and publishes the `PaymentCollected` event.
1. Warehouse fetches goods from the stock and publishes the `GoodsFetched` event.
1. Shipping dispatches the delivery and publishes the `DeliveryDispatched` event.
1. Warehouse updates the stock.
Only the basic "happy path" workflow is implemented, leaving plenty of room for improvement; for example, when Shipping does not receive both events within a time period, the delivery process should be cancelled.
### Services Dependencies
Services cooperate together to work out the Business Capabilities: sale and deliver goods.
The actual dependencies come only from Listeners which fulfill the role of the Anti-Corruption Layer and depend only on Domain Events.

Events contain no Domain Objects.
For communication across Services an Event Publisher abstraction is used, located in the package `..ecommerce.common.events`. The interface is an Output Port (in the Hexagonal Architecture) and as a cross-cutting concern is its implementation injected by the Application.
## Architectural Overview
While no popular architecture ([Onion][onion], [Clean][clean], [Hexagonal][hexagonal], [Trinity][trinity]) was strictly implemented, the used architectural style follows principles and good practices found over all of them.
- Low coupling, high cohesion
- Implementation hiding
- Rich domain model
- Separation of concerns
- The Dependency Rule
The below proposed architecture tries to solve one problem often common for these architectural styles: [exposing internals of objects](https://blog.ttulka.com/object-oriented-design-vs-persistence) and breaking their encapsulation. The proposed architecture employs full object encapsulation and rejects anti-patterns like Anemic Domain Model or JavaBean. An Object is a solid unit of behavior. A Service is an Object on higher level of architectural abstraction.
[onion]: http://jeffreypalermo.com/blog/the-onion-architecture-part-1
[clean]: https://blog.cleancoder.com/uncle-bob/2012/08/13/the-clean-architecture.html
[hexagonal]: https://alistair.cockburn.us/hexagonal-architecture/
[trinity]: https://github.com/oregor-projects/trinity-demo-java
### Screaming Architecture
The architecture "screams" its intentions just by looking at the code structure:
```
..ecommerce
billing
payment
sales
category
order
product
shipping
delivery
warehouse
```
Going deeper the technical concepts are visible too:
```
..ecommerce
billing
payment
jdbc
listeners
rest
```
### Packaging
As shown in the previous section, the code is structured by the domain together with packages for technical concerns (`jdbc`, `rest`, `web`, etc.).
Such a packaging style is the first step for a further modularization.
The semantic of a package is following: `company.product.domain.service.[entity|impl]`, where `entity` and `impl` are optional. Full example: `com.ttulka.ecommerce.billing.payment.jdbc`.
### Assembling
While a physically monolithic deployment is okay for most cases, a logically monolithic design, where everything is coupled with everything, is evil.
To show that the Monolith architectural pattern is not equal to the Big Ball Of Mud, a modular monolithic architecture was chosen as the starting point.
The services can be further cut into separate modules (eg. Maven artifacts) by feature:
```
com.ttulka.ecommerce:ecommerce-application
com.ttulka.ecommerce.sales:catalog-service
com.ttulka.ecommerce.sales:cart-service
com.ttulka.ecommerce.sales:order-service
com.ttulka.ecommerce.billing:payment-service
com.ttulka.ecommerce.shipping:delivery-service
com.ttulka.ecommerce.warehouse:warehouse-service
```
Or by [component](https://blog.ttulka.com/package-by-component-with-clean-modules-in-java):
```
com.ttulka.ecommerce.billing:payment-domain
com.ttulka.ecommerce.billing:payment-jdbc
com.ttulka.ecommerce.billing:payment-rest
com.ttulka.ecommerce.billing:payment-events
com.ttulka.ecommerce.billing:payment-listeners
```
In detail:
```
com.ttulka.ecommerce.billing:payment-domain
..billing
payment
Payment
PaymentId
CollectPayment
FindPayments
com.ttulka.ecommerce.billing:payment-jdbc
..billing.payment.jdbc
PaymentJdbc
CollectPaymentJdbc
FindPaymentsJdbc
com.ttulka.ecommerce.billing:payment-rest
..billing.payment.rest
PaymentController
com.ttulka.ecommerce.billing:payment-events
..billing.payment
PaymentCollected
com.ttulka.ecommerce.billing:payment-listeners
..billing.payment.listeners
OrderPlacedListener
```
Which can be brought together with a Spring Boot Starter, containing only Configuration classes and dependencies on other modules:
```
com.ttulka.ecommerce.billing:payment-spring-boot-starter
..billing.payment
jdbc
PaymentJdbcConfig
listeners
PaymentListenersConfig
META-INF
spring.factories
```
Note: Events are actually part of the domain, that's why they are in the package `..ecommerce.billing.payment` and not in `..ecommerce.billing.payment.events`. They are in a separate module to break the build cyclic dependencies: a dependent module (Listener) needs to know only Events and not the entire Domain.
See this approach in an alternative branch: [modulith](https://github.com/ttulka/ddd-example-ecommerce/tree/modulith).
### Anatomy of a Service
**[Service](http://udidahan.com/2010/11/15/the-known-unknowns-of-soa/)** is the technical authority for a specific business capability.
- There is a one-to-one mapping between a Bounded Context and a Subdomain (ideal case).
- A Bounded Context defines the boundaries of the biggest services possible.
- A Bounded Context can be decomposed into multiple service boundaries.
- For example, Sales domain contains Catalog, Cart and Order services.
- Service boundaries are based on service responsibilities and behavior.
- A service is defined by its logical boundaries, not a physical deployment unit.
**Application** is a deployment unit. A monolithic Application can have more Services.
- Bootstrap (application container etc.).
- Cross-cutting concerns (security, transactions, messaging, logging, etc.).

**Configuration** assemblies the Service as a single component.
- Has dependencies to all inner layers.
- Can be implemented by Spring's context `@Configuration` or simply by object composition and Dependency Injection.
- Implements the Dependency Inversion Principle.
**Gateways** create the published API of the Service.
- Driving Adapters in the Hexagonal Architecture.
- REST, SOAP, or web Controllers,
- Event Listeners,
- CLI.
**Use-Cases** are entry points to the service capabilities and together with **Entities** form the _Domain API_.
- Ports in the Hexagonal Architecture.
- No implementation details.
- None or minimal dependencies.
_Domain Implementation_ fulfills the Business Capabilities with particular technologies.
- Driven Adapters in the Hexagonal Architecture.
- Tools and libraries,
- persistence,
- external interfaces access.
Source code dependencies always point inwards and, except for Configuration, are strict: a layer may couple only to the one directly below it (for example, Gateways mustn't call Entities directly).

#### Example of a Service Anatomy
As a concrete example consider the Business Capability to find payments in Billing service:
- Application is implemented via Spring Boot Application.
- `PaymentJdbcConfig` configures the JDBC implementations for the Domain.
- Gateway is implemented as a REST Controller.
- Use-Case interface `FindPayments` is implemented with `PaymentsJdbc` in Use-Cases Implementation.
- Entity `Payment` is implemented with `PaymentJdbc` in Entities Implementation.

There is no arrow from Configuration to Gateways because `PaymentController` is annotated with Spring's `@Component`, which makes it available to the component scanning the application is based on. This is only one possible approach; another option would be to put the Controller as a Bean into the Configuration.
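The port/adapter layering around `FindPayments` can be sketched in plain Java. An in-memory adapter stands in for the JDBC implementation here, and payments are simplified to strings; the real domain types are richer:

```java
import java.util.List;

// Use-case interface: part of the Domain API (a port)
interface FindPayments {
    List<String> all();
}

// Driven adapter: an in-memory stand-in for the JDBC implementation
class PaymentsInMemory implements FindPayments {

    private final List<String> payments;

    PaymentsInMemory(List<String> payments) {
        this.payments = List.copyOf(payments);
    }

    @Override
    public List<String> all() {
        return payments;
    }
}

public class PaymentAnatomyDemo {
    public static void main(String[] args) {
        // A gateway such as a REST controller would depend only on the port
        FindPayments findPayments = new PaymentsInMemory(List.of("payment-1"));
        System.out.println(findPayments.all()); // prints [payment-1]
    }
}
```

Configuration wires the adapter to the port, so swapping `PaymentsInMemory` for a JDBC-backed class changes no gateway code.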
## Conclusion
The goal of this project is to demonstrate basic principles of Domain-Driven Design in a simple but non-trivial example.
For the sake of simplicity, a very well-known domain (e-commerce) was chosen. As every domain differs in its business context, several assumptions had to be made.
Although all fundamental use-cases were implemented, there is still room for improvement. Cross-cutting concerns like authentication, authorization, and monitoring are not implemented.
### Where to Next
Check out the alternative branches and repos to see additional concepts and technologies in action:
- [Modulith](https://github.com/ttulka/ddd-example-ecommerce/tree/modulith): A separate Maven module per service.
- [Microfrontends](https://github.com/ttulka/ddd-example-ecommerce/tree/microfrontend): Service Web Components as part of the service codebase.
- [Microservices](https://github.com/ttulka/ddd-example-ecommerce-microservices): Deployments with Docker and Kubernetes.
- [Kotlin](https://github.com/ttulka/ddd-example-ecommerce-kotlin): The same project again, this time in Kotlin.
| 1 |
Perfecto-Quantum/Quantum-Starter-Kit | Get started with Quantum! Clone or download this repository to start, contains examples of tests and step definitions. | null | <img src="https://github.com/Perfecto-Quantum/Quantum-Starter-Kit/blob/master/DOC/image/perfecto.jpg" height="75" width="300"/>

# Quantum Starter Kit
This Quantum starter kit is designed to get you up and running with the Quantum framework (sponsored by [Perfecto](https://www.perfecto.io) and powered by [QAF](https://github.com/qmetry/qaf)) within a few simple steps, and to enable you to start writing your tests using simple [Cucumber](https://cucumber.io/).
Begin by installing the dependencies below, then continue with the Getting Started procedure.
### Dependencies
There are several prerequisite dependencies you should install on your machine prior to starting to work with Quantum:
* [Java 8](https://www.oracle.com/in/java/technologies/javase/javase8-archive-downloads.html)
* An IDE to write your tests on - [Eclipse IDE for Java Developers](https://www.eclipse.org/downloads/packages/) or [IntelliJ](https://www.jetbrains.com/idea/download/#)
* [Maven](https://maven.apache.org/) (Optional - Needed only for command line executions as IDEs have Maven in-built.)
* Download the necessary app files from [here](https://github.com/PerfectoMobileSA/PerfectoJavaSample/tree/master/libs), upload them to your Perfecto Media Repository, and set that repository path in the driver.capabilities.app capability in your TestNG XML file.
Eclipse users should also install:
1. Eclipse has a built-in Maven plugin
- Optional - [Maven Plugin](http://marketplace.eclipse.org/content/m2e-connector-maven-dependency-plugin)
2. [TestNG Plugin](http://testng.org/doc/download.html)
3. QAF BDD Plugin - Or go to install new software option in eclipse, and download from this url https://qmetry.github.io/qaf/editor/bdd/eclipse/
In case of network constraints, you can follow the instructions in [QAF BDD Offline](https://developers.perfectomobile.com/display/PD/Quantum+framework+introduction#expand-InstallanofflineversionoftheQAFBDDplugininEclipse)
IntelliJ IDEA users should also install:
1. [Cucumber Plugin (Community version only)](https://plugins.jetbrains.com/plugin/7212)
   - If, after installing the above plugin, you still cannot navigate to the step definition code, install this plugin as well - [Cucumber for Groovy Plugin](https://plugins.jetbrains.com/plugin/7213-cucumber-for-groovy)
The TestNG plugin is built into IntelliJ IDEA from version 7 onwards.
#### Optional Installations
* For source control management, you can install [git](https://git-scm.com/downloads).
## Downloading the Quantum Project
[Download](https://github.com/Perfecto-Quantum/Quantum-Starter-Kit/archive/master.zip) the Quantum-Started-Kit repository.
After downloading and unzipping the project to your computer, open it from your IDE by choosing the folder containing the pom.xml file (_Quantum-Starter-Kit-master_, you might consider renaming it).
The project directory structure is documented at the end of this page.
**********************
# Getting Started
This procedure leads you through various aspects of the Quantum framework:
* [Running one of the samples](README.md#running-sample-as-is) in the Quantum project as is.
* [Creating your first test](README.md#creating-your-first-test) using the Quantum-Starter-Kit
* [Parallel execution](README.md#parallel-execution) of all Quantum samples.
* [Diversifying test execution](README.md#diversifying-test-execution) by manipulating test suites.
* [Viewing test execution results](README.md#viewing-test-execution-results-in-perfecto-reporting)
* [Advanced Quantum features](README.md#advanced-quantum-features)
## Running sample as is
Run a single Quantum sample from the samples provided in the Starter Kit.
The samples are located under the _src/main/resources/scenarios_ folder.
1. Configure your cloud and credentials in the _application.properties_ file (under the top _resources/_ folder).
2. Run your test via the right-click menu while hovering on the TestNG.xml file in the project pane (on the left).
The sample opens the device browser at Google, searches for Perfecto Mobile, enters the site, and searches for Perfecto Object Spy.
## Creating your first test
1. Download the Quantum-Starter-Kit as a zip to your computer, and rename it.
2. Open the project from its _pom.xml_ file, to open it as a Maven project with all the required dependencies.
3. Update your CQ Lab name under remote.server, and your Perfecto security token, in the _application.properties_ file.
4. Add a _.feature_ file under the _scenarios/_ folder, and proceed to create your test using the [test writing guidelines](README.md#test-writing-guidelines).
5. Add a _.loc_ file under the _common/_ folder, and proceed to create the Object Repository using the [Object Repository creation guidelines](README.md#object-repository-creation-guidelines).
6. Remove the object definitions from your test until all lines are syntax-highlighted.
7. [Configure the testng file](README.md#testng-guidelines), and run your test from it.
### Test writing guidelines
* Begin with @featuretagname, Feature: name of feature, @scenariotagname (can be the same as the feature's tag).
* Write your scenario using [Given/When/Then/And](https://github.com/cucumber/cucumber/wiki/Given-When-Then) BDD statements. Use the commands in the pull-down list for accurate steps syntax, and easy step insertion.
* Write your first scenario for the app's initial starting point, and later create scenarios for other cases; name them differently to enable easy identification in execution report, and name their tags differently if you want to run them separately.
* Name your app's objects as _functionality.purpose_, for example _button.route_, _edit.start_, etc.
* If you have a Perfecto plugin - use Perfecto's [Object Spy](https://community.perfectomobile.com/series/18628-object-spy) to obtain smart object locators for your app's objects; if you do not - use other tools, such as Firebug or Chrome's Developer Tools, for that purpose. Put each object locator at the end of the line using that object - it will be used later for creating the Object Repository.<br>When using Object Spy, remember to set your object type to _DOM_ or _Native_ depending on your app's type being Web or Native, respectively.
* If you want to run your app's steps using the Object Spy, check the _Execute on Add_ checkbox.
* Add steps for taking screenshots to allow close examination of test results later on.
* Add steps for waiting a few seconds upon app's page loading.
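Putting these guidelines together, a minimal feature file might look like the following sketch. The tag names, step texts, and object names here are illustrative placeholders only; use the editor's pull-down list for the exact step syntax your Quantum version supports:

```gherkin
@routefeature
Feature: Plan a route

  @routescenario
  Scenario: Plan a route from the start screen
    # Object names follow the functionality.purpose convention
    When I tap on "button.route"
    And I set "Central Station" to "edit.start"
    # Screenshot and wait steps help examine results later
    Then I take a screenshot
    And I wait for "3" seconds
```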
### Object Repository creation guidelines
1. Copy-Paste your test to the _.loc_ file.
2. Remove lines unrelated to objects.
3. From each object related line, create a line formatted as <br>`objectname = locatortype=objectlocator`<br>For example <br>`edit.start = xpath=//*[@label="Start location"]`
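For instance, reusing the object names from the test writing guidelines above (_button.route_, _edit.start_), a small _.loc_ file could look like this; the XPath locators themselves are hypothetical and would come from Object Spy or your browser's developer tools:

```
button.route = xpath=//*[@label="Route"]
edit.start = xpath=//*[@label="Start location"]
```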
### Testng guidelines
1. Under the _config/_ folder, open the _testng_appium.xml_ or _testng_web.xml_ file, depending on your app type.
2. Copy the first test suite, and verify it's the only one with a **true** _enabled_ property, to prevent the other test suites from running in parallel.
3. Copy your feature/scenario tag to the _name_ property in the _include_ clause. Use a space-separated tags' list to include more scenarios and features.
4. Add a parameter specifying the type of device, or naming a specific one, to be used for your test execution, for example, <br>`<parameter name="driver.capabilities.model" value="iPhone.*"></parameter>`
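Combined, steps 2-4 produce a suite entry along these lines. The suite name, tag, and device model values are placeholders, and everything not shown should stay exactly as copied from the existing suite in the file:

```xml
<test name="My First Test" enabled="true">
    <!-- Step 4: device selection capability -->
    <parameter name="driver.capabilities.model" value="iPhone.*"></parameter>
    <!-- Step 3: your scenario/feature tag in the include clause -->
    <groups>
        <run>
            <include name="@myscenariotag"/>
        </run>
    </groups>
    <!-- keep the remaining suite content as copied from the original suite -->
</test>
```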
## Parallel execution
To run all samples in parallel, you need to configure the _TestNG.xml_ file, which is located under the _src/test/resources/config/_ folder.
1. For each of the test suites (enclosed within <test>...</test>), set the _enabled_ property value to **_true_**.
2. Run your test as before.
This results in running 2 additional samples, both searching for terms in the Perfecto Community; one uses hard-coded search terms, and the other retrieves them from an external input file.
## Diversifying test execution
You can set each of the test suites to run on a different type of device, and to include different scenarios. For that, you need to manipulate the contents of the various test suites in the _TestNG.xml_ file.
Modify **only** the test suites not related to the Google sample we started with.
1. Replace the current tag in the community samples, so that in the _CommunityExample.feature_ sample all tags are **@sampletag**, and in the _CommunityDataDrivenExample.feature_ sample - **@sampletagdd**. <br>You may of course use other values, or leave the tags as is, but use these tag values for demonstration's sake.
2. In the _TestNG.xml_ file, set the tag parameter value in one suite to **@sampletag**, and in the other - to **@sampletagdd**.<br>That means, that the first test suite runs the CommunityExample sample, and the second - the CommunityDataDrivenExample sample.
3. To vary the devices used for each of the test suites, replace the capability parameter ("driver.capabilities.someCapability") in both suites with<br>`<parameter name="driver.capabilities.platformName" value="Android"/>`.<br>Set the value to "iOS" in the second test suite.<br>By that, you specify that the CommunityExample sample will run on an Android device (randomly allocated), and the CommunityDataDrivenExample sample - on an iOS device.<br>**Note:** Generally, you can use any of the numerous device selection capabilities.
4. Run your test in the same manner as before.<br>You can follow your test execution on Perfecto Dashboard and see the three samples running on the specified device types.
## Viewing test execution results in Perfecto Reporting
All the previous executions were recorded, and may be viewed in the Perfecto execution center, Reporting.
Let's proceed to naming your tests, so you can easily detect them in Perfecto Reporting and drill down to examine them in more detail.
1. In each of the feature files (the samples), set the Feature line at the top to<br>`Feature: community search sample`
2. Run your test as before.
3. To view the test execution report within Perfecto Reporting:
* Enter your CQ Lab at https://\<your CQ Lab\>.perfectomobile.com.
* Select the Reporting tab, and click the link to Perfecto Reporting (on the right).
* Login using your CQ Lab credentials.<br><br>
All tests from the last execution are listed in the Reporting execution center. The feature name you set in the sample earlier appears as the test name on the left.
4. To drill down into any of the specific test executions, click the test to view its Single Test Report for more execution details.
## Advanced Quantum features
Quantum has additional features to allow better customization to your specific application:
* Understanding driver names and capabilities - [Link](https://developers.perfectomobile.com/display/PD/Quantum+driver+names+and+capabilities)
* Understand configuration manager - [Managing Configuration Manager](https://developers.perfectomobile.com/display/PD/ConfigurationManager+%7C+Pass+elements+across+steps+and+test+cases)
* Create your own [Object Repository](https://github.com/Perfecto-Quantum/Quantum-Starter-Kit/wiki/Object%20Repository) file to match your application objects.
* Create a [customized steps](https://github.com/Perfecto-Quantum/Quantum-Starter-Kit/wiki/Creating%20customized%20steps) file to ease performing actions common in your application operation.
* Write tests using either [BDD](https://github.com/Perfecto-Quantum/Quantum-Starter-Kit/wiki/BDD-Implementation) or [Java](https://github.com/Perfecto-Quantum/Quantum-Starter-Kit/wiki/Java-Implementation).
* Configure the [TestNG.xml](https://github.com/Perfecto-Quantum/Quantum-Starter-Kit/wiki/Quantum%20TestNG%20File) to filter the tests to execute and the devices used in the test.
* Configuration of the [application properties](https://github.com/Perfecto-Quantum/Quantum-Starter-Kit/wiki/The%20application.properties%20file) and the [TestNG.xml file](https://github.com/Perfecto-Quantum/Quantum-Starter-Kit/wiki/Quantum%20TestNG%20File), as well as creating object definitions in the [Object Repository](https://github.com/Perfecto-Quantum/Quantum-Starter-Kit/wiki/Object%20Repository) and [creating customized steps](https://github.com/Perfecto-Quantum/Quantum-Starter-Kit/wiki/Creating%20customized%20steps), require knowledge of Java, TestNG, and XPath.
## Quantum Course
Automation architects and developers should go through this Quantum course, as it dives deep into the technical details of Quantum's features and discusses samples of its advanced features.
[Course Link](https://developers.perfectomobile.com/display/PSC/Quantum)
**********************
# Project Directory Structure
```
.
│ pom.xml # Maven pom file for build and dependencies
│ README.md # The current readme file
│
├───resources # Default resources dir
│ application.properties # set credentials and other project properties
│
└───src
└───main
├───java # All code for project inside java directory
│ └───com
│ └───quantum # com.quantum namespace
│ ├───java # Package namespace for pure java tests
│ │ └───pages # Package for Java test Page Object Models
│ │ MainscreenTestPage.java # Example POM
│ │
│ └───steps # Package namespace for Gherkin/Cucumber step definitions
│ ExpenseTrackerSteps.java # Step definitions for appium feature file
│ GoogleStepDefs.java # Step definitions for webSearch feature file
│
└───resources # All project specific files here
│ assertMessages.properties # Property definitions used in qaf library AssertionService class
│ log4j.properties # Controls all logging to console and log files
│
├───android # Additional Android properties. Specified in testng_appium file.
│ env.properties # Android specific additional environment variables
│ expensetracker.loc # Android specific object locators for appium test objects
│
├───common # Common resources dir. Set with env.resources in application.properties
│ search.loc # Common object locators used in webSearch feature file
│ testdata.xml # Data used in xml scenario in webSearch feature
│
├───config # TestNG xml test file directory
│ testng_appium.xml # TestNG file that runs appium feature file with @appium tag
│ testng_web.xml # TestNG file that runs webSearch feature file with @Web tag
│
├───data # Data used in data driven tests stored here
│ testData.csv # csv data file used in csv webSearch scenario
│ testData.json # example of json data file
│ testData.xls # example of Excel data file
│
├───ios # Additional iOS properties. Specified in testng_appium file.
│ env.properties # iOS specific additional environment properties
│ expensetracker.loc # iOS specific object locators for appium test objects
│
└───scenarios # Cucumber/Gherkin feature files directory
appium.feature # Appium test feature file called by testng_appium xml file
webSearch.feature # Web Google Search feature file driven by testng_web xml file
```
| 0 |
tvd12/master-design-patterns | design patterns example | design-pattern desing-patterns java-design-pattern java-examples oop-design | null | 1 |
tipsy/spark-basic-structure | Example of one possible way of structuring a Spark application | null | # spark-basic-structure
This is an example of one possible way of structuring a Spark application
The application has filters, controllers, views, authentication, localization, error handling, and more.
It contains the source code for the tutorial found at https://sparktutorials.github.io/2016/06/10/spark-basic-structure.html
## Critique welcome
If you find anything you disagree with, please feel free to create an issue.
## Screenshot

| 1 |
malmstein/MaterialAnimations | Material Animations examples | null | # MaterialAnimations
Material Animations examples
| 0 |
codurance/task-list | This is an example of code obsessed with primitives. | null | # Task List [](https://travis-ci.org/codurance/task-list)
This is an example of code obsessed with primitives.
A *primitive* is any concept technical in nature, and not relevant to your business domain. This includes integers, characters, strings, and collections (lists, sets, maps, etc.), but also things like threads, readers, writers, parsers, exceptions, and anything else purely focused on technical concerns. By contrast, the business concepts in this project, "task", "project", etc. should be considered part of your *domain model*. The domain model is the language of the business in which you operate, and using it in your code base helps you avoid speaking different languages, helping you to avoid misunderstandings. In our experience, misunderstandings are the biggest cause of bugs.
## Exercise
Try implementing the following features, refactoring primitives away as you go. Try not to implement any new behaviour until the code you're about to change has been completely refactored to remove primitives, i.e. **_Only refactor the code you're about to change, then make your change. Don't refactor unrelated code._**
One set of criteria to identify when primitives have been removed is to only allow primitives in constructor parameter lists, and as local variables and private fields. They shouldn't be passed into methods or returned from methods. The only exception is true infrastructure code—code that communicates with the terminal, the network, the database, etc. Infrastructure requires serialisation to primitives, but should be treated as a special case. You could even consider your infrastructure as a separate domain, technical in nature, in which primitives *are* the domain.
You should try to wrap tests around the behaviour you're refactoring. At the beginning, these will mostly be high-level system tests, but you should find yourself writing more unit tests as you proceed.
### Features
1. Deadlines
1. Give each task an optional deadline with the `deadline <ID> <date>` command.
2. Show all tasks due today with the `today` command.
2. Customisable IDs
1. Allow the user to specify an identifier that's not a number.
2. Disallow spaces and special characters from the ID.
3. Deletion
1. Allow users to delete tasks with the `delete <ID>` command.
4. Views
1. View tasks by date with the `view by date` command.
2. View tasks by deadline with the `view by deadline` command.
3. Don't remove the functionality that allows users to view tasks by project, but change the command to `view by project`.
### Considerations and Approaches
Think about *behaviour attraction*. Quite often, you can reduce the amount of behaviour that relies upon primitives from the outside world (as opposed to internal primitives stored as private fields or locals) simply by moving the behaviour to a *value object* which holds the primitives. If you don't have a value object, create one. These value objects are known as *behaviour attractors* because once they're created, they make it far more obvious where behaviour should live.
A related principle is to consider the type of object you've created. Is it a true value object (or *record*), which simply consists of `getFoo` methods that return their internal primitives (to be used only with infrastructure, of course), or is it an object with behaviour? If it's the latter, you should avoid exposing any internal state at all. The former should not contain any behaviour. Treating something as both a record and an object generally leads to disaster.
Your approach will depend on whether you learn toward a functional or an object-oriented style for modelling your domain. Both encourage encapsulation, but *information hiding* techniques are generally only used in object-oriented code. They also differ in the approach used to extract behaviour; functional programming often works with closed sets of behaviour through *tagged unions*, whereas in object-oriented code, we use *polymorphism* to achieve the same ends in an open, extensible manner.
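As a concrete sketch (not part of the kata's starter code), a value object for feature 1's deadlines could attract the "due today" behaviour that would otherwise operate on a raw date passed around as a primitive. The class and method names here are hypothetical:

```java
import java.time.LocalDate;

// Hypothetical behaviour attractor for the "deadline" feature: the raw
// LocalDate stays a private field, and behaviour that needs it lives here.
final class Deadline {
    private final LocalDate date;

    private Deadline(LocalDate date) {
        this.date = date;
    }

    static Deadline of(LocalDate date) {
        return new Deadline(date);
    }

    // A query (tell/ask separation): answers a question without exposing state.
    boolean isDueToday() {
        return date.equals(LocalDate.now());
    }

    public static void main(String[] args) {
        // A deadline created for today's date is due today.
        System.out.println(Deadline.of(LocalDate.now()).isDueToday()); // prints "true"
    }
}
```

With such an object in place, the `today` command becomes a query against each task's `Deadline`, rather than date comparisons scattered through the command loop.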
Separate your commands and queries. Tell an object to do something, or ask it about something, but don't do both.
Lastly, consider SOLID principles when refactoring:
* Aim to break large chunks of behaviour into small ones, each with a single responsibility.
* Think about the dimensions in which it should be easy to extend the application.
* Don't surprise your callers. Conform to the interface.
* Segregate behaviour based upon the needs.
* Depend upon abstractions.
| 1 |
robinhuy/react-native-typescript-examples | Learn React Native by examples. | react-native typescript | # React Native Typescript examples
Learn React Native (version 0.70 with TypeScript) by easy-to-difficult examples.
_For more basic examples, see [React Native Expo examples](https://github.com/robinhuy/react-native-expo-examples)_
## Run project in development
- Setting up the development environment: https://reactnative.dev/docs/environment-setup.
- Install dependencies: `yarn` (or `npm install`). On iOS run: `npx pod-install`.
- Run on Android: `yarn android` (or `npm run android`).
- Run on iOS: `yarn ios` (or `npm run ios`).
## Change example
Modify code in `App.tsx`, each example is an application.
## Preview
### 1. Quiz Game
Learn how to use: **Type Script static type checking**, **React Hook useEffect + Timer**
<img src="https://user-images.githubusercontent.com/12640832/101762123-9842e080-3b0f-11eb-951a-82fae0c2481b.gif" width="250" alt="Quiz Game" />
### 2. Booking Car
Learn how to use: **Native Base + React Native Vector Icons**, **React Native Maps + React Native Maps Directions**, **Google Map API**, **Keyboard + Keyboard Event**
<img src="https://user-images.githubusercontent.com/12640832/101765164-85320f80-3b13-11eb-8066-a5d4436ebd90.gif" width="250" alt="Booking Car" />
Note: To run this example, you must get & config Google Map API KEY for [Android](https://developers.google.com/maps/documentation/android-sdk/get-api-key) or [iOS](https://developers.google.com/maps/documentation/ios-sdk/get-api-key)
### 3. Gmail clone
Learn how to use: **API Sauce**, **MobX + MobX React Lite**, **React Context**, **React Navigation Authentication flows + useFocusEffect**, **React Native Web View**
<img src="https://user-images.githubusercontent.com/12640832/102325797-2d355600-3fb6-11eb-9975-dd8849782b48.gif" width="250" alt="Gmail clone" />
Note: To run this example, you must start the server ([https://github.com/robinhuy/fake-api-nodejs](https://github.com/robinhuy/fake-api-nodejs)) in folder `server`:
```
cd server
yarn
yarn start
```
| 0 |
natanfudge/fabric-example-mod-kotlin | Example Kotlin Fabric Mod | null | # Fabric Example Mod - Kotlin

## Setup
0. Create a new mod repository by pressing the "Use this template" button and clone the created repository.
1. Import build.gradle file with IntelliJ IDEA
2. Edit build.gradle and mod.json to suit your needs.
* The "mixins" object can be removed from mod.json if you do not need to use mixins.
* Please replace all occurrences of "modid" with your own mod ID - sometimes, a different string may also suffice.
3. Run!
## License
This template is available under the CC0 license. Feel free to learn from it and incorporate it in your own projects.
| 1 |
rolandkrueger/vaadin-by-example | Learn Vaadin by working example projects. | vaadin | Welcome to _vaadin-by-example_
==============================
__Learn Vaadin by working example projects.__
- - - - - - - - - - - - - - - - - - - - - - - - - - -
The goal of this project is to provide learners of the [Vaadin](http://www.vaadin.com/ "Vaadin") toolkit with working example projects for various problems which one might encounter while coding with Vaadin. The idea behind this is to complement articles and tutorials on Vaadin topics with code examples that will work out of the box.
Oftentimes, tutorials only provide code snippets to illustrate how the things they describe will work. These snippets only focus on the described problem. Beginners often have difficulties transferring these excerpts into a working piece of code.
This project aims at fixing that. It will provide working examples for tutorials that can be found either in the example project itself or at some given location on the web. To make understanding the concepts easier, the examples contain as much code as necessary and as little code as possible.
Licensing
---------
Since the main intention of this project is to offer people example code that actually works, this code should also be usable as a template for copy & pasting parts of the examples into own projects. Therefore, the licensing for the examples' source code should be as unrestrictive as possible. People shall be able to use parts of the code as they see fit without having to bother with the requisites of more or less restrictive software licenses. The examples should therefore be licensed under unrestrictive licenses, such as the MIT license or similar.
Contribute!
-----------
Contributions to this project are welcome! This is not planned as a one-man-show, so go ahead and fork this project. There are some things to take into consideration, though, when contributing example projects.
* __Licensing__
You should use a license that is as unrestrictive as possible. See the section above about licensing for an explanation of the reason for that. Each example should contain the license text in a text file named 'LICENSE'.
* __Example Size__
Example projects should be kept as concise as possible so as to not distract learners too much from the core intention of the example code.
* __Working Out Of The Box__
Examples should be runnable without requiring a complex setup. Ideally, they should be accompanied with portable build mechanisms, such as a Maven pom.xml making the example quickly accessible with a simple _mvn jetty:run_.
* __No JAR Dependencies__
Examples should not contain the necessary dependencies as binary JAR files. These should be obtainable by users through other channels. Again, using Apache Maven or Ivy builds will facilitate that.
* __Accompanying Tutorials__
An example project should not stand all by itself. Every example should be accompanied by one or more tutorials that illustrate the background of the code. These tutorials could be contained directly in the example. Another option would be to provide a hyperlink to the location of the tutorial on the web in some read-me file. Besides that link, such a read-me file should contain an abstract of the respective tutorial.
* __Project Naming__
Names for example projects should be chosen such that the example's intention becomes clear already from the name. That said, refrain from generic project names such as 'HelloWorld' or 'VaadinExample'.
Disclaimer
----------
The name Vaadin and related trademarks/trade names are the property of Vaadin Ltd. and may not be used other than stated in [Vaadin's Terms of Service](https://vaadin.com/terms-of-service). Copyright to the Vaadin Framework is owned by Vaadin Ltd. | 1 |
gxercavins/dataflow-samples | Examples using Google Cloud Dataflow - Apache Beam | null | # Dataflow-samples
This repository contains some Google Cloud Dataflow / Apache Beam samples.
## Quickstart
Each folder contains specific instructions for the corresponding example.
Use the below button to clone this repository into Cloud Shell and start right away:
[](https://console.cloud.google.com/cloudshell/open?git_repo=https://github.com/gxercavins/dataflow-samples&page=editor&tutorial=README.md)
## Examples
Currently, these are the examples available:
* **Adaptive triggers (Java)**: modify the behavior of triggers at the start and end of the same window so that you can have some degree of control on the output rate.
* **Assign sessions (Java)**: assign timestamped events into a given list of all possible sessions.
* **Batch Schema auto-detect (Java)**: how to load multiple JSON files with disparate schemas into BigQuery.
* **BigQuery dead letters (Python)**: how to handle rows that could not be correctly streamed into BigQuery.
* **BigQuery Storage API (Java)**: how to read directly from a BigQuery table using the new Storage API.
* **Data-driven Triggers (Java)**: how to use the State API to simulate data-driven triggers.
* **Dynamic destinations (Java)**: write dynamically to different BigQuery tables according to the schema of the processed record.
* **Empty windows (Java)**: how to log/emit information even when the input source has no data for that window.
* **Filename match (Python)**: read from multiple files and prepend to each record the name of the matching file (optionally enrich with BigQuery).
* **Lag function (Python)**: how to compare an event with the equivalent one from the previous window.
* **Logging GroupByKey (Java)**: some ideas to log information about grouped elements using Stackdriver and BigQuery.
* **Normalize values (Python)**: normalize all PCollection values after calculating the maximum and minimum per each key.
* **Quick, Draw! dataset (Python)**: download raw data from a public dataset, convert to images and save them in `png` format.
* **RegEx pattern (Java)**: tag every path pattern and be able to associate each matched file with it.
* **Session windows (Python)**: example to demonstrate how to group events per user and session.
* **Timestamps in path (Java)**: process hourly files where timestamp needs to be inferred from folder structure.
* **Top10 distinct combiner (Python)**: we'll modify `TopCombineFn` to have unique keys when accumulating fired panes.
* **When are Pub/Sub messages ACKed? (Java)**: example to see what happens with `PubsubIO` in Dataflow.
* **With Timestamps (Java)**: assign processing time as element timestamp and shift to the past if needed.
In addition, the `UTILS` folder contains simple Dataflow snippets: adding labels, stopping jobs programmatically, process files selectively according to their format, understanding wall time, ensuring custom options are globally available, retrieving job ID or SDK version, writing BigQuery results in CSV format, enrich a PCollection with data from a BigQuery table, processing files using Pub/Sub notifications for GCS, etc.
The `BEAM-PATTERNS` folder contains common usage patterns that have been contributed to the Beam documentation.
The `TEMPLATES` folder groups examples that make for some convenient template use cases.
The `PLAYGROUND` folder recaps other more experimental examples that can be interesting to share such as trying to zip a PCollection, throttling a step or BeamSQL tests.
## License
These examples are provided under the Apache License 2.0.
## Issues
Report any issue to the GitHub issue tracker.
| 0 |
hantsy/spring-graphql-sample | Spring GraphQL examples using Netflix DGS, GraphQL Java and Spring GraphQL | graphql graphql-java netflix-dgs spring spring-boot spring-graphql | # spring-graphql-sample
Spring GraphQL examples using the following frameworks and libraries:
* [Netflix DGS(Domain Graph Service) framework](https://netflix.github.io/dgs/)
* [Spring GraphQL](https://github.com/spring-projects/spring-graphql)
* [GraphQL Java Kickstart](https://www.graphql-java-kickstart.com/)
* [GraphQL Java](https://www.graphql-java.com/)
* [GraphQL SPQR(GraphQL Schema Publisher & Query Resolver, pronounced like speaker)](https://github.com/leangen/graphql-spqr)
* [ExpediaGroup GraphQL Kotlin](https://opensource.expediagroup.com/graphql-kotlin/docs)
Other GraphQL Java integration examples with Java frameworks.
* [GraphQL with Quarkus](https://github.com/hantsy/quarkus-sandbox)
* [GraphQL with Vertx](https://github.com/hantsy/vertx-sandbox)
## Guide
TBD
## Example Codes
| Example name | Description |
| ---- | ---- |
|[dgs](./dgs) | Simple Netflix DGS example|
|[dgs-webflux](./dgs-webflux)| Simple Netflix DGS example with Spring WebFlux|
|[dgs-subscription-ws](./dgs-subscription-ws) | Simple Netflix DGS Subscription example using WebSocket protocol|
|[dgs-subscription-ui](./dgs-subscription-ui) | Angular Client app for dgs-subscription-ws|
|[dgs-subscription-sse](./dgs-subscription-sse) | Simple Netflix DGS Subscription example using Http/SSE protocol|
|[dgs-codegen](./dgs-codegen) | Netflix DGS example with Spring Jdbc and Gradle codegen plugin|
|[dgs-fileupload](./dgs-fileupload) | Netflix DGS file upload example|
|[dgs-client](./dgs-client) | Netflix DGS Typesafe Client example|
|[dgs-kotlin-co](./dgs-kotlin-co) | **A complete Netflix DGS example** with WebFlux, Kotlin Coroutines, Spring Data R2dbc and Spring Security|
|[dgs-kotlin](./dgs-kotlin) | **A complete Netflix DGS example** with WebMvc/Kotlin, Spring Data Jdbc, Spring Security and Spring Session/Spring Data Redis|
|[graphql-kotlin](./graphql-kotlin) | ExpediaGroup Graphql Kotlin Spring Boot example|
|[spring-graphql](./spring-graphql) | Spring GraphQL example|
|[spring-graphql-webmvc](./spring-graphql-webmvc) | Spring GraphQL with WebMvc Controller annotation example|
|[spring-graphql-querydsl](./spring-graphql-querydsl)| Spring GraphQL/JPA/QueryDSl Data Fetchers example|
|[spring-graphql-webflux](./spring-graphql-webflux) | Spring GraphQL/WebFlux example with WebSocket transport protocol |
|[spring-graphql-rsocket-kotlin-co](./spring-graphql-rsocket-kotlin-co) | Spring GraphQL/WebFlux/Kotlin Coroutines example with RSocket transport protocol |
### Legacy Codes
Some example code has been moved to the legacy folder because the upstream project is discontinued or under inactive development.
| Example name | Description |
| ---- | ---- |
|[graphql-java](./legacy/graphql-java) | GraphQL Java vanilla Spring Boot example, upstream project is discontinued, replaced by Spring GraphQL|
|[graphql-spqr](./legacy/graphql-spqr)| GraphQL SPQR Spring example, inactive|
|[graphql-java-kickstart](./graphql-java-kickstart) | GraphQL Java Kickstart Spring Boot example|
|[graphql-java-kickstart-webclient](./graphql-java-kickstart-webclient) | GraphQL Java Kickstart Spring WebClient example|
|[graphql-java-kickstart-annotations](./graphql-java-kickstart-annotations) | GraphQL Java Kickstart Spring Boot example(Code first)|
## Prerequisites
Make sure you have installed the following software.
* Java 21
* Apache Maven 3.8.x / Gradle 7.x
* Docker
Some sample codes are written in Kotlin. If you are new to Kotlin, start to learn it from the [the Kotlin homepage](https://kotlinlang.org/).
## Build
Clone the source code from GitHub.
```bash
git clone https://github.com/hantsy/spring-graphql-sample/
```
Open a terminal, and switch to the root folder of the project, and run the following command to build the whole project.
```bash
docker-compose up postgres   # start up a Postgres instance; it is required
cd examplename               # change to the example folder
mvn clean install            # build the project
# or
./gradlew build
```
Run the application.
```bash
mvn spring-boot:run
# or
./gradlew bootRun
# or, from the command line after building:
java -jar target/xxx.jar
```
## Contribution
Any suggestions are welcome, filing an issue or submitting a PR is also highly recommended.
## References
* [Getting started with GraphQL Java and Spring Boot](https://www.graphql-java.com/tutorials/getting-started-with-spring-boot/)
* [Getting Started with GraphQL and Spring Boot](https://www.baeldung.com/spring-graphql)
* [Open Sourcing the Netflix Domain Graph Service Framework: GraphQL for Spring Boot](https://netflixtechblog.com/open-sourcing-the-netflix-domain-graph-service-framework-graphql-for-spring-boot-92b9dcecda18)
* [Netflix Open Sources Their Domain Graph Service Framework: GraphQL for Spring Boot ](https://www.infoq.com/news/2021/02/netflix-graphql-spring-boot/)
* [Netflix Embraces GraphQL Microservices for Rapid Application Development ](https://www.infoq.com/news/2021/03/netflix-graphql-microservices/)
* [GraphQL Reference Guide: Building Flexible and Understandable APIs ](https://www.infoq.com/articles/GraphQL-ultimate-guide/)
| 0 |
jboss-developer/jboss-picketlink-quickstarts | The quickstarts demonstrate PicketLink and a few additional technologies. They provide small, specific, working examples that can be used as a reference for your own project. | null | # PicketLink Quickstarts
## Introduction
These quickstarts run on JBoss Enterprise Application Platform 6 and WildFly.
We recommend using the ZIP distribution file for both JBoss Enterprise Application Platform 6 and WildFly.
You can also run PicketLink in Apache TomEE or GlassFish. In this case, you may need some additional configuration to get them
up and running. For the PicketLink JEE Security examples, you must ship the JBoss Logging JARs in your deployments.
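For TomEE or GlassFish deployments, the JBoss Logging JARs mentioned above can be bundled by adding a dependency such as the following to the quickstart's `pom.xml`. The version shown is illustrative; match it to the PicketLink release you are using.

```xml
<!-- Bundle JBoss Logging inside the deployment (WEB-INF/lib) instead of
     relying on the server to provide it, as JBoss EAP/WildFly would -->
<dependency>
    <groupId>org.jboss.logging</groupId>
    <artifactId>jboss-logging</artifactId>
    <version>3.1.4.GA</version>
</dependency>
```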
## System Requirements
To run these quickstarts with the provided build scripts, you need the following:
1. Java 1.6 or Java 1.7, depending on whether you're using JBoss EAP or WildFly to run the quickstarts. You can choose from the following:
* OpenJDK
* Oracle Java SE
* Oracle JRockit
2. Maven 3.0.0 or newer, to build and deploy the examples
* If you have not yet installed Maven, see the [Maven Getting Started Guide](http://maven.apache.org/guides/getting-started/index.html) for details.
* If you have installed Maven, you can check the version by typing the following in a command line:
mvn --version
3. The JBoss Enterprise Application Platform 6 distribution ZIP or the WildFly distribution ZIP.
   * For information on how to install and run those servers, refer to their documentation.
## Check Out the Source
1. To clone this Git repository, use the following command:
git clone git@github.com:jboss-developer/jboss-picketlink-quickstarts.git
2. If you want the quickstarts for a particular version (e.g. 2.5.2.Final), execute the following commands:
cd jboss-picketlink-quickstarts
git checkout v2.5.2.Final
The command above checks out the tag corresponding to the version you want to use. For each release of PicketLink we also release and tag
a version of the quickstarts. Each tag uses a specific PicketLink version, so make sure you're using the tag for the version you're looking for.
We recommend always using the latest version of the quickstarts, so you can check the latest changes and updates to PicketLink.
## Run the Quickstarts
The root folder of each individual quickstart contains a README file with specific details on how to build and run the example. In most cases you do the following:
* [Start the JBoss server](#start-the-jboss-server)
* [Build and deploy the quickstarts](#build-and-deploy-the-quickstarts)
## About the PicketLink Federation Quickstarts
The *PicketLink Federation Quickstarts* provide many examples of how to use *PicketLink Federation SAML Support* to enable SSO for your applications.
Before running them, you need to understand how they relate to each other. Basically, each Identity Provider is meant to be used by a group of Service Providers.
To see the whole Single Sign-On functionality demonstrated, you need to deploy them together.
| SAML Configuration | Identity Provider | Service Provider(s) |
| ------------- |:-------------------------------------------------:| --------------------------------------------------------------------------------------------------------:|
| Basic | picketlink-federation-saml-idp-basic | picketlink-federation-saml-sp-post-basic, picketlink-federation-saml-sp-redirect-basic |
| Encryption | picketlink-federation-saml-idp-with-encryption | picketlink-federation-saml-sp-with-encryption |
| Metadata | picketlink-federation-saml-idp-with-metadata | picketlink-federation-saml-sp-with-metadata |
| Signatures | picketlink-federation-saml-idp-with-signature | picketlink-federation-saml-sp-post-with-signature, picketlink-federation-saml-sp-redirect-with-signature |
| HTTP CLIENT_CERT and FORM Authentication | picketlink-federation-saml-idp-ssl | picketlink-federation-saml-sp-post-basic, picketlink-federation-saml-sp-redirect-basic |
| IDP Servlet Filter | picketlink-federation-saml-idp-servlet-filter | picketlink-federation-saml-sp-post-with-signature, picketlink-federation-saml-sp-redirect-with-signature |
The table above describes which Identity Provider and Service Providers are required to test a specific configuration. It is important to respect these dependencies to get the
functionality working properly.
### Using SAML Tracer Firefox Add-On to Debug the SAML SSO Flow
If you want to understand even better how IdPs and SPs communicate with each other, you may want to add the [SAML Tracer Add-On](https://addons.mozilla.org/en-US/firefox/addon/saml-tracer/) to your Mozilla Firefox.
This is a nice way to debug and view SAML messages, so you can take a look at how the IdP and SP exchange messages when establishing an SSO session.
### Start the JBoss Server
Before you deploy a quickstart, in most cases you need a running JBoss Enterprise Application Platform 6 or WildFly server. A few of the Arquillian tests do not require a running server. This will be noted in the README for that quickstart.
The JBoss server can be started a few different ways.
* [Start the JBoss Server With the _web_ profile](#start-the-jboss-server-with-the-web-profile): This is the default configuration. It defines minimal subsystems and services.
* [Start the JBoss Server with the _full_ profile](#start-the-jboss-server-with-the-full-profile): This profile configures many of the commonly used subsystems and services.
* [Start the JBoss Server with a custom configuration](#start-the-jboss-server-with-custom-configuration-options): Custom configuration parameters can be specified on the command line when starting the server.
The README for each quickstart will specify which configuration is required to run the example.
#### Start the JBoss Server with the Web Profile
To start JBoss Enterprise Application Platform 6 or WildFly with the Web Profile:
1. Open a command line and navigate to the root of the JBoss server directory.
2. The following shows the command line to start the JBoss server with the web profile:
For Linux: JBOSS_HOME/bin/standalone.sh
For Windows: JBOSS_HOME\bin\standalone.bat
#### Start the JBoss Server with the Full Profile
To start JBoss Enterprise Application Platform 6 or WildFly with the Full Profile:
1. Open a command line and navigate to the root of the JBoss server directory.
2. The following shows the command line to start the JBoss server with the full profile:
For Linux: JBOSS_HOME/bin/standalone.sh -c standalone-full.xml
For Windows: JBOSS_HOME\bin\standalone.bat -c standalone-full.xml
#### Start the JBoss Server with Custom Configuration Options
To start JBoss Enterprise Application Platform 6 or WildFly with custom configuration options:
1. Open a command line and navigate to the root of the JBoss server directory.
2. The following shows the command line to start the JBoss server. Replace the CUSTOM_OPTIONS with the custom optional parameters specified in the quickstart.
For Linux: JBOSS_HOME/bin/standalone.sh CUSTOM_OPTIONS
For Windows: JBOSS_HOME\bin\standalone.bat CUSTOM_OPTIONS
### Build and Deploy the Quickstarts
See the README file in each individual quickstart folder for specific details and information on how to run and access the example.
#### Build the Quickstart Archive
In some cases, you may want to build the application to test for compile errors or view the contents of the archive.
1. Open a command line and navigate to the root directory of the quickstart you want to build.
2. Use this command if you only want to build the archive, but not deploy it:
For EAP 6: mvn clean package
For WildFly: mvn -Pwildfly clean package
#### Build and Deploy the Quickstart Archive
1. Make sure you [start the JBoss server](#start-the-jboss-server) as described in the README.
2. Open a command line and navigate to the root directory of the quickstart you want to run.
3. Use this command to build and deploy the archive:
For EAP 6: mvn clean package jboss-as:deploy
For WildFly: mvn -Pwildfly clean package wildfly:deploy
#### Undeploy an Archive
The command to undeploy the quickstart is simply:
For EAP 6: mvn jboss-as:undeploy
For WildFly: mvn -Pwildfly wildfly:undeploy
PicketLink Documentation
------------
The documentation is available from the following [link](http://docs.jboss.org/picketlink/2/latest/).
| 0 |
patniemeyer/learningjava | Example Code for Learning Java, O'Reilly & Associates, 4th Edition | null | learningjava
============
Example Code for Learning Java, O'Reilly & Associates, 4th Edition
| 1 |
find-sec-bugs/juliet-test-suite | :microscope: A collection of test cases in the Java language. It contains examples for 112 different CWEs. | application code sample vulnerable | # Juliet Test Suite
A collection of test cases in the Java language. It contains examples for 112 different CWEs.
The test suite is taken from the [NIST website](https://samate.nist.gov/SRD/testsuite.php).
This repository adds alternative build integrations: Gradle and Maven.
## Gradle
```
gradle build
```
## Maven
```
mvn compile
```
| 0 |
v5tech/dubbo-example | dubbo example | dubbo dubbo-example | # dubbo-example
A Dubbo distributed service configuration example.
Upgraded to dubbox 2.8.4.
Build and install dubbox 2.8.4:
```
https://github.com/dangdangdotcom/dubbox/archive/dubbox-2.8.4.zip
Change curator_version in the root pom to <curator_version>2.6.0</curator_version>
mvn install -Dmaven.test.skip=true
```
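The pom edit described above is a one-line property change; in the root `pom.xml` of the dubbox sources, the property block would look like this (surrounding properties omitted):

```xml
<properties>
    <!-- set curator_version to 2.6.0 as required by this example;
         the dependencies below use ZooKeeper 3.4.6 -->
    <curator_version>2.6.0</curator_version>
</properties>
```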
### 1. Project structure
dubbo-service: shared service interfaces
dubbo-provider: implementation of the shared interfaces (dubbo provider), the service provider
dubbo-consumer: (dubbo consumer) the Dubbo service consumer
### 2. Details
* dubbo-service contains the shared service interfaces. It only declares the interfaces exposed to the outside, and is referenced by both the dubbo provider and the dubbo consumer.
* dubbo-provider implements the shared interfaces; it is the service provider that serves the dubbo consumer.
* Sample code:
```java
import net.aimeizi.dubbo.entity.User;
import net.aimeizi.dubbo.service.UserService;
import com.alibaba.dubbo.config.annotation.Service;
// this is com.alibaba.dubbo.config.annotation.Service, not Spring's @Service
@Service
public class UserServiceImpl implements UserService {
@Override
public User save(User user) {
user.setUserId(++UserIdGenerator.id);
return user;
}
}
```
* Core dubbo provider configuration
```xml
<!-- Provider application info, used to compute dependency relationships -->
<dubbo:application name="dubbo-provider" />
<!-- Use the ZooKeeper registry to expose the service address -->
<dubbo:registry address="zookeeper://127.0.0.1:2181" />
<!-- Expose the service on port 20880 with the dubbo protocol -->
<dubbo:protocol name="dubbo" port="20880" />
<!-- Packages to scan for annotations; separate multiple packages with commas. If no package is given, all classes in the current ApplicationContext are scanned -->
<dubbo:annotation package="net.aimeizi.dubbo.service"/>
```
Note: the `<dubbo:annotation package="net.aimeizi.dubbo.service"/>` configuration scans that package for the @Service (com.alibaba.dubbo.config.annotation.Service) annotation. Service registration here uses the Dubbo @Service annotation.
* Maven dependencies
```xml
<dependency>
<groupId>net.aimeizi</groupId>
<artifactId>dubbo-service</artifactId>
<version>1.0-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-context</artifactId>
<version>${spring.version}</version>
</dependency>
<dependency>
<groupId>com.alibaba</groupId>
<artifactId>dubbo</artifactId>
<version>2.8.4</version>
<exclusions>
<exclusion>
<artifactId>spring</artifactId>
<groupId>org.springframework</groupId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>org.apache.zookeeper</groupId>
<artifactId>zookeeper</artifactId>
<version>3.4.6</version>
<exclusions>
<exclusion>
<groupId>com.sun.jmx</groupId>
<artifactId>jmxri</artifactId>
</exclusion>
<exclusion>
<groupId>com.sun.jdmk</groupId>
<artifactId>jmxtools</artifactId>
</exclusion>
<exclusion>
<groupId>javax.jms</groupId>
<artifactId>jms</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>com.github.sgroschupf</groupId>
<artifactId>zkclient</artifactId>
<version>0.1</version>
</dependency>
```
* dubbo-consumer is the consumer. It depends only on the shared service interfaces and does not need a direct dependency on the dubbo provider.
* Sample code:
```java
import com.alibaba.dubbo.config.annotation.Reference;
import net.aimeizi.dubbo.service.DemoService;
import net.aimeizi.dubbo.service.UserService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Controller;
import org.springframework.ui.ModelMap;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import javax.annotation.Resource;
/**
*
* Dubbo consumer
*
* The @Reference annotation requires the following configuration in the dubbo consumer:
*
* <dubbo:annotation/>
* <context:component-scan base-package="net.aimeizi.dubbo.controller">
* <context:include-filter type="annotation" expression="com.alibaba.dubbo.config.annotation.Reference"/>
* </context:component-scan>
*
* To use the @Autowired or @Resource annotations, the beans must be declared explicitly.
*
* When using @Autowired or @Resource, declare the bean with dubbo:reference:
* <dubbo:reference interface="net.aimeizi.dubbo.service.UserService" id="userService"/>
* <dubbo:reference interface="net.aimeizi.dubbo.service.DemoService" id="demoService"/>
*
* All of the configuration above must be explicitly included in the Spring MVC DispatcherServlet configuration of the dubbo consumer (e.g. /WEB-INF/applicationContext-dubbo-consumer.xml); otherwise the services in the Controller throw a NullPointerException:
* <servlet>
* <servlet-name>mvc-dispatcher</servlet-name>
* <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
* <init-param>
* <param-name>contextConfigLocation</param-name>
* <param-value>/WEB-INF/applicationContext*.xml,/WEB-INF/mvc-dispatcher-servlet.xml</param-value>
* </init-param>
* <load-on-startup>1</load-on-startup>
* </servlet>
*
*/
@Controller
public class HelloController {
@Reference
//@Autowired
//@Resource
private DemoService demoService;
@Reference
//@Autowired
//@Resource
private UserService userService;
@RequestMapping(value = "/test", method = RequestMethod.GET)
public String printWelcome(ModelMap model) {
model.addAttribute("message", "Hello world!");
return "hello";
}
}
```
Note:
① The @Reference annotation requires the following configuration in the dubbo consumer configuration file:
```xml
<dubbo:annotation/>
<context:component-scan base-package="net.aimeizi.dubbo.controller">
<context:include-filter type="annotation" expression="com.alibaba.dubbo.config.annotation.Reference"/>
</context:component-scan>
```
② To use the @Autowired or @Resource annotations, the beans must be declared explicitly:
```xml
<!-- When using the @Resource annotation, declare the bean with dubbo:reference -->
<dubbo:reference interface="net.aimeizi.dubbo.service.UserService" id="userService"/>
<dubbo:reference interface="net.aimeizi.dubbo.service.DemoService" id="demoService"/>
```
③ All of the above must be explicitly included in the Spring MVC DispatcherServlet configuration of the dubbo consumer (e.g. /WEB-INF/applicationContext-dubbo-consumer.xml); otherwise the services in the Controller throw a NullPointerException:
```xml
<servlet>
<servlet-name>mvc-dispatcher</servlet-name>
<servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
<init-param>
<param-name>contextConfigLocation</param-name>
<param-value>/WEB-INF/applicationContext*.xml,/WEB-INF/mvc-dispatcher-servlet.xml</param-value>
</init-param>
<load-on-startup>1</load-on-startup>
</servlet>
```
### 3. Demos
Full demo

Error demo: services in the Controller throw a NullPointerException

Using the @Autowired or @Resource annotations

Dubbo admin console demo

# Contact
* QQ:*184675420*
* Email:*sxyx2008#gmail.com* (replace # with @)
* HomePage:*[aimeizi.net](http://aimeizi.net)*
* Weibo:*[http://weibo.com/qq184675420](http://weibo.com/qq184675420)*(荧星诉语)
* Twitter:*[https://twitter.com/sxyx2008](https://twitter.com/sxyx2008)*
# License
MIT
Copyright (c) 2015 雪山飞鹄 | 1 |
ElanYoung/spring-boot-learning-examples | 🤖 Spring Boot 2.x 实践案例(实用) | actuator druid easy-excel jasypt jwt minio quartz spring-boot spring-security websocket | <h1 align="center"><a href="https://github.com/ElanYoung" target="_blank">🤖 Spring Boot 2.x 实践案例</a></h1>
<p align="center">
<a href="https://travis-ci.com/ElanYoung/spring-boot-learning-examples"><img alt="Travis-CI" src="https://travis-ci.com/xkcoding/spring-boot-demo.svg?branch=master"/></a>
<a href="https://www.codacy.com/app/ElanYoung/spring-boot-learning-examples?utm_source=github.com&utm_medium=referral&utm_content=xkcoding/spring-boot-demo&utm_campaign=Badge_Grade"><img alt="Codacy" src="https://api.codacy.com/project/badge/Grade/1f2e3d437b174bfc943dae1600332ec1"/></a>
<a href="https://doc.starimmortal.com"><img alt="author" src="https://img.shields.io/badge/author-ElanYoung-blue.svg"/></a>
<a href="https://www.oracle.com/technetwork/java/javase/downloads/index.html"><img alt="JDK" src="https://img.shields.io/badge/JDK-1.8.0_312-orange.svg"/></a>
<a href="https://docs.spring.io/spring-boot/docs/2.7.11/reference/html/"><img alt="Spring Boot" src="https://img.shields.io/badge/Spring Boot-2.7.11-brightgreen.svg"/></a>
<a href="https://github.com/ElanYoung/spring-boot-learning-examples/blob/master/LICENSE"><img alt="LICENSE" src="https://img.shields.io/github/license/ElanYoung/spring-boot-learning-examples.svg"/></a>
</p>
<p align="center">
<a href="https://github.com/ElanYoung/spring-boot-learning-examples/stargazers"><img alt="star" src="https://img.shields.io/github/stars/ElanYoung/spring-boot-learning-examples.svg?label=Stars&style=social"/></a>
<a href="https://github.com/ElanYoung/spring-boot-learning-examples/network/members"><img alt="star" src="https://img.shields.io/github/forks/ElanYoung/spring-boot-learning-examples.svg?label=Fork&style=social"/></a>
<a href="https://github.com/ElanYoung/spring-boot-learning-examples/watchers"><img alt="star" src="https://img.shields.io/github/watchers/ElanYoung/spring-boot-learning-examples.svg?label=Watch&style=social"/></a>
</p>
<p align="center">
<span>English | <a href="./README.zh-CN.md">简体中文</a></span>
</p>
## Introduction
`spring-boot-learning-examples` is developed on `Spring Boot 2.7.x`. It integrates the technology
stack and middleware commonly used in development, and is a project for in-depth learning and hands-on
practice with `Spring Boot`.
> If you have an example to contribute or a need that is not covered, you are very welcome to submit
> an [issue](https://github.com/ElanYoung/spring-boot-learning-examples/issues/new).
## Environment
- **JDK 1.8 +**
- **Maven 3.5 +**
- **Mysql 5.7 +**
- **IntelliJ IDEA 2018.2 +** (*Note: Please use IDEA and make sure plugin `lombok` installed.*)
## Getting Started
### Get Project
```bash
git clone https://github.com/ElanYoung/spring-boot-learning-examples.git
```
### Import Project
> Open `spring-boot-learning-examples` project in `IntelliJ IDEA`.
### Run Project
> Find the `Application` class in each module, right-click `Run 'Application'` to run each practice case.
## Learning Examples
| Module | Description | Code | Article |
|------------------------------|--------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------|
| spring-boot-banner | Customizing the Spring Boot banner | [spring-boot-banner](https://github.com/ElanYoung/spring-boot-learning-examples/tree/master/spring-boot-banner) | [《Spring Boot 自定义 Banner》](https://blog.csdn.net/qq991658923/article/details/121302050) |
| spring-boot-actuator | Integrating the Actuator monitoring tool | [spring-boot-actuator](https://github.com/ElanYoung/spring-boot-learning-examples/tree/master/spring-boot-actuator) | [《Spring Boot 集成 Actuator 监控工具》](https://blog.csdn.net/qq991658923/article/details/127112107) |
| spring-boot-druid | Integrating the Druid connection pool | [spring-boot-druid](https://github.com/ElanYoung/spring-boot-learning-examples/tree/master/spring-boot-druid) | [《Spring Boot 集成 Druid 连接池》](https://blog.csdn.net/qq991658923/article/details/127112527) |
| spring-boot-jasypt | Integrating Jasypt to encrypt sensitive information | [spring-boot-jasypt](https://github.com/ElanYoung/spring-boot-learning-examples/tree/master/spring-boot-jasypt) | [《Spring Boot 集成 jasypt 实现敏感信息加密》](https://blog.csdn.net/qq991658923/article/details/127112431) |
| spring-boot-websocket-native | Integrating WebSocket (native annotations) | [spring-boot-websocket-native](https://github.com/ElanYoung/spring-boot-learning-examples/tree/master/spring-boot-websocket-native) | [《Spring Boot 集成 WebSocket(原生注解与Spring封装)》](https://blog.csdn.net/qq991658923/article/details/127022522) |
| spring-boot-websocket-spring | Integrating WebSocket (Spring wrapper) | [spring-boot-websocket-spring](https://github.com/ElanYoung/spring-boot-learning-examples/tree/master/spring-boot-websocket-spring) | [《Spring Boot 集成 WebSocket(原生注解与Spring封装)》](https://blog.csdn.net/qq991658923/article/details/127022522) |
| spring-boot-jwt | Integrating JWT | [spring-boot-jwt](https://github.com/ElanYoung/spring-boot-learning-examples/tree/master/spring-boot-jwt) | [《Spring Boot 集成 JWT》](https://blog.csdn.net/qq991658923/article/details/127027528) |
| spring-boot-minio | Integrating MinIO (distributed file storage) | [spring-boot-minio](https://github.com/ElanYoung/spring-boot-learning-examples/tree/master/spring-boot-minio) | [《Spring Boot 集成 MinIO》](https://blog.csdn.net/qq991658923/article/details/124623495) |
| spring-boot-quartz | Integrating Quartz (scheduled tasks) | [spring-boot-quartz](https://github.com/ElanYoung/spring-boot-learning-examples/tree/master/spring-boot-quartz) | [《Spring Boot 集成 Quartz》](https://blog.csdn.net/qq991658923/article/details/127078993) |
| spring-boot-easy-excel | Integrating EasyExcel | [spring-boot-easy-excel](https://github.com/ElanYoung/spring-boot-learning-examples/tree/master/spring-boot-easy-excel) | [《Spring Boot 集成 EasyExcel》](https://blog.csdn.net/qq991658923/article/details/128153012) |
| spring-boot-h2 | Integrating H2 (lightweight database) | [spring-boot-h2](https://github.com/ElanYoung/spring-boot-learning-examples/tree/master/spring-boot-h2) | |
| spring-boot-spring-security | Integrating Spring Security 5.7.x (security framework) | [spring-boot-spring-security](https://github.com/ElanYoung/spring-boot-learning-examples/tree/master/spring-boot-spring-security) | [《Spring Boot 优雅集成 Spring Security 5.7.x(安全框架)》](https://juejin.cn/post/7244089396567982136) |
## Stargazers over time
[](https://starchart.cc/ElanYoung/spring-boot-learning-examples)
## License
[MIT](http://opensource.org/licenses/MIT)
Copyright (c) 2022 ElanYoung
| 0 |
yida-lxw/solr-book | Solr book Example Code | null | # solr-book
Solr book Example Code
| 1 |
appiumbook/appiumbook | This repository contains all the sample examples discussed on Appium Book. | null | null | 1 |
Mikuu/Pact-JVM-Example | Example Consumer & Provider projects for Pact JVM | consumer-driven-contract-testing contract-testing pact pact-jvm | null | 1 |
eip-work/kuboard-example | eip-service-dashboard-example | null | # kuboard-example
Key features of Kuboard:
* Designed around operations scenarios
* Layered display of microservices
* Monitoring tied to the microservice context
For detailed documentation, see the Kuboard website:
[https://kuboard.cn](https://kuboard.cn)
After deployment, kuboard-example looks like this:
<p>
<a href="http://demo.eip.work/#/login?isReadOnly=true&token=eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJvYXJkLXZpZXdlci10b2tlbi1mdGw0diIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJvYXJkLXZpZXdlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE1YWFiMmQxLTQxMjYtNDU5Yi1hZmNhLTkyYzMwZDk0NTQzNSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTprdWJvYXJkLXZpZXdlciJ9.eYqN3FLIT6xs0-lm8AidZtaiuHeX70QTn9FhJglhEyh5dlyMU5lo8UtR-h1OY8sTSeYdYKJAS83-9SUObKQhp6XNmRgOYAfZblKUy4mvbGVQ3dn_qnzxYxt6zdGCwIY7E34eNNd9IjMF7G_Y4eJLWE7NvkSB1O8zbdn8En9rQXv_xJ9-ugCyr4CYB1lDGuZl3CIXgQ1FWcQdUBrxTT95tzcNTB0l6OUOGhRxOfw-RyIOST83GV5U0iVzxnD4sjgSaJefvCU-BmwXgpxAwRVhFyHEziXXa0CuZfBfJbmnQW308B4wocr4QDm6Nvmli1P3B6Yo9-HNF__d2hCwZEr7eg">在线演示</a>
</p>

| 0 |
paulcwarren/spring-content-examples | Examples projects showing how to use Spring Content | null | [](https://travis-ci.org/paulcwarren/spring-content-examples)
# Spring Content Examples
Example projects showing how to use each Spring Content module.
While each `boot-starter` example does not specify a meta-data store, the standard examples always specify HSQL.
## Spring-Eg-Content-FS
- This example stores content on the local file system, under your temp directory (OS dependent) followed by `/spring-content-CURRENTTIME-INSTANCENUM/`.
- No environment variables are needed to run this example. If you want to change the location where files are stored, add the following bean to your configuration class and change the return value:
```java
@Bean
public File fileSystemRoot() throws IOException {
    // replace with the directory where content should be stored
    return new File("/path/to/my/directory");
}
```
## Spring-Eg-Content-Jpa
- This example stores content in a JPA-compatible database, in this case HSQL.
- No environment variables are needed to run this example.
- To change the underlying content database, the `dataSource()` method must be changed to return a `DataSource` for your database. See `ClaimTestConfig.java` for the HSQL example.
## Spring-Eg-Content-Mongo
- This example stores content in a MongoDB.
- If no environment variables are specified when running this example, we assume you have a MongoDB running locally without username/password.
- You can change the MongoDB host, port, username, and password with the following ENV variables:
- `spring_eg_content_mongo_host` -> MongoDB host
  - `spring_eg_content_mongo_port` -> MongoDB port
- `spring_eg_content_mongo_username` -> MongoDB username
- `spring_eg_content_mongo_password` -> MongoDB password
## Spring-Eg-Content-S3
- This example stores content in a S3 Bucket
- The following ENV variables need to be set when running this example:
- `AWS_BUCKET` -> AWS S3 bucket that has been previously setup for storing content
- `AWS_REGION` -> AWS Region in which the bucket above is provisioned.
> Available regions:
- us-gov-west-1
- us-east-1
- us-west-1
- us-west-2
- eu-west-1
- eu-central-1
- ap-south-1
- ap-southeast-1
- ap-southeast-2
- ap-northeast-1
- ap-northeast-2
- sa-east-1
- cn-north-1
- `AWS_ACCESS_KEY_ID` -> AWS Key ID that has access to the bucket above
- `AWS_SECRET_KEY` -> AWS Secret Key that corresponds the ID above
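On Linux or macOS, these can be exported in the shell before running the example. The values below are placeholders and must be replaced with your own bucket, region, and credentials:

```bash
# Placeholder values -- substitute your own AWS details before running
export AWS_BUCKET="my-content-bucket"
export AWS_REGION="us-east-1"
export AWS_ACCESS_KEY_ID="replace-with-your-access-key-id"
export AWS_SECRET_KEY="replace-with-your-secret-key"
echo "Using bucket $AWS_BUCKET in region $AWS_REGION"
```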
## Spring-Eg-Content-Solr
- This example:
  - by default is used in conjunction with the JPA interface and HSQL; this can be changed for your own scenario.
  - indexes your content when it is stored, allowing it to be queried later.
- The following ENV variables need to be set:
- Solr Server with Basic Auth (SSL inclusive):
    - `EXAMPLES_AUTH_URL` -> HTTPS/HTTP URL of the Solr server, including the port.
- `EXAMPLES_USERNAME` -> Username with access to create new entries and run queries.
- `EXAMPLES_PASSWORD` -> Password for username above.
- Insecure Solr Server:
    - `EXAMPLES_SOLR_URL` -> HTTP URL of the Solr server, including the port.
| 0 |
pact-foundation/pact-workshop-jvm-spring | Example Spring Boot project for the Pact workshop | hacktoberfest | # Example Spring Boot project for the Pact workshop
This workshop should take about 2 hours, depending on how deep you want to go into each topic.
This workshop is set up as a number of steps that you can run through. Each step lives in a branch, so to run through a
step of the workshop, just check out the branch for that step (i.e. `git checkout step1`).
## Requirements
* JDK 8+
* Docker for step 11
## Workshop outline:
* [step 1: **Simple Consumer calling Provider**](https://github.com/pact-foundation/pact-workshop-jvm-spring/tree/step1#step-1---simple-consumer-calling-provider)
* [step 2: **Client Tested but integration fails**](https://github.com/pact-foundation/pact-workshop-jvm-spring/tree/step2#step-2---client-tested-but-integration-fails)
* [step 3: **Pact to the rescue**](https://github.com/pact-foundation/pact-workshop-jvm-spring/tree/step3#step-3---pact-to-the-rescue)
* [step 4: **Verify the provider**](https://github.com/pact-foundation/pact-workshop-jvm-spring/tree/step4#step-4---verify-the-provider)
* [step 5: **Back to the client we go**](https://github.com/pact-foundation/pact-workshop-jvm-spring/tree/step5#step-5---back-to-the-client-we-go)
* [step 6: **Consumer updates contract for missing products**](https://github.com/pact-foundation/pact-workshop-jvm-spring/tree/step6#step-6---consumer-updates-contract-for-missing-products)
* [step 7: **Adding the missing states**](https://github.com/pact-foundation/pact-workshop-jvm-spring/tree/step7#step-7---adding-the-missing-states)
* [step 8: **Authorization**](https://github.com/pact-foundation/pact-workshop-jvm-spring/tree/step8#step-8---authorization)
* [step 9: **Implement authorisation on the provider**](https://github.com/pact-foundation/pact-workshop-jvm-spring/tree/step9#step-9---implement-authorisation-on-the-provider)
* [step 10: **Request Filters on the Provider**](https://github.com/pact-foundation/pact-workshop-jvm-spring/tree/step10#step-10---request-filters-on-the-provider)
* [step 11: **Using a Pact Broker**](https://github.com/pact-foundation/pact-workshop-jvm-spring/tree/step11#step-11---using-a-pact-broker)
_NOTE: Each step is tied to, and must be run within, a git branch, allowing you to progress through each stage incrementally. For example, to move to step 2 run the following: git checkout step2_
## Scenario
There are two components in scope for our workshop.
1. Product Catalog application (Consumer). It provides a console interface to query the Product service for product information.
1. Product Service (Provider). Provides useful things about products, such as listing all products and getting the details of an individual product.
## Step 1 - Simple Consumer calling Provider
We need to first create an HTTP client to make the calls to our provider service:

The Consumer has implemented the product service client which has the following:
- `GET /products` - Retrieve all products
- `GET /products/{id}` - Retrieve a single product by ID
The diagram below highlights the interaction for retrieving a product with ID 10:

You can see the client interface we created in `consumer/src/main/au/com/dius/pactworkshop/consumer/ProductService.java`:
```java
@Service
public class ProductService {
private final RestTemplate restTemplate;
@Autowired
public ProductService(RestTemplate restTemplate) {
this.restTemplate = restTemplate;
}
public List<Product> getAllProducts() {
return restTemplate.exchange("/products",
HttpMethod.GET,
null,
new ParameterizedTypeReference<List<Product>>(){}).getBody();
}
public Product getProduct(String id) {
return restTemplate.getForEntity("/products/{id}", Product.class, id).getBody();
}
}
```
We can run the client with `./gradlew consumer:bootRun` - it should fail with the error below, because the Provider is not running.
```console
Caused by: org.springframework.web.client.ResourceAccessException: I/O error on GET request for "http://localhost:8085/products": Connection refused: connect; nested exception is java.net.ConnectException: Connection refused: connect
```
Move on to [step 2](https://github.com/pact-foundation/pact-workshop-jvm-spring/tree/step2#step-2---client-tested-but-integration-fails)
## Step 2 - Client Tested but integration fails
Now let's create a basic test for our API client. We're going to check 2 things:
1. That our client code hits the expected endpoint
1. That the response is marshalled into an object that is usable, with the correct ID
You can see the client interface test we created in `consumer/src/test/java/au/com/dius/pactworkshop/consumer/ProductServiceTest.java`:
```java
class ProductServiceTest {
private WireMockServer wireMockServer;
private ProductService productService;
@BeforeEach
void setUp() {
wireMockServer = new WireMockServer(options().dynamicPort());
wireMockServer.start();
RestTemplate restTemplate = new RestTemplateBuilder()
.rootUri(wireMockServer.baseUrl())
.build();
productService = new ProductService(restTemplate);
}
@AfterEach
void tearDown() {
wireMockServer.stop();
}
@Test
void getAllProducts() {
wireMockServer.stubFor(get(urlPathEqualTo("/products"))
.willReturn(aResponse()
.withStatus(200)
.withHeader("Content-Type", "application/json")
.withBody("[" +
"{\"id\":\"9\",\"type\":\"CREDIT_CARD\",\"name\":\"GEM Visa\",\"version\":\"v2\"},"+
"{\"id\":\"10\",\"type\":\"CREDIT_CARD\",\"name\":\"28 Degrees\",\"version\":\"v1\"}"+
"]")));
List<Product> expected = Arrays.asList(new Product("9", "CREDIT_CARD", "GEM Visa", "v2"),
new Product("10", "CREDIT_CARD", "28 Degrees", "v1"));
List<Product> products = productService.getAllProducts();
assertEquals(expected, products);
}
@Test
void getProductById() {
wireMockServer.stubFor(get(urlPathEqualTo("/products/50"))
.willReturn(aResponse()
.withStatus(200)
.withHeader("Content-Type", "application/json")
.withBody("{\"id\":\"50\",\"type\":\"CREDIT_CARD\",\"name\":\"28 Degrees\",\"version\":\"v1\"}")));
Product expected = new Product("50", "CREDIT_CARD", "28 Degrees", "v1");
Product product = productService.getProduct("50");
assertEquals(expected, product);
}
}
```

Let's run this test and see it all pass:
```console
> ./gradlew consumer:test
BUILD SUCCESSFUL in 2s
```
Meanwhile, our provider team has started building out their API in parallel. Let's run our website against our provider (you'll need two terminals to do this):
```console
# Terminal 1
❯ ./gradlew provider:bootRun
...
...
Tomcat started on port(s): 8085 (http) with context path ''
Started ProviderApplication in 1.67 seconds (JVM running for 2.039)
```
```console
# Terminal 2
> ./gradlew consumer:bootRun --console plain
...
...
Started ConsumerApplication in 1.106 seconds (JVM running for 1.62)
Products
--------
1) Gem Visa
2) MyFlexiPay
3) 28 Degrees
Select item to view details:
```
You should now see 3 different products. Choosing an index number should display detailed product information.
Let's see what happens!

Doh! We are getting a 404 every time we try to view detailed product information. On closer inspection, the provider only knows about `/product/{id}` and `/products`.
We need to have a conversation about what the endpoint should be, but first...
Move on to [step 3](https://github.com/pact-foundation/pact-workshop-jvm-spring/tree/step3#step-3---pact-to-the-rescue)
## Step 3 - Pact to the rescue
Unit tests are written and executed in isolation from any other services. When we write tests for code that talks to other services, they are built on the trust that the contracts are upheld. There is no way to validate that the consumer and provider can communicate correctly.
> An integration contract test is a test at the boundary of an external service verifying that it meets the contract expected by a consuming service — [Martin Fowler](https://martinfowler.com/bliki/IntegrationContractTest.html)
Adding contract tests via Pact would have highlighted that the `/products/{id}` endpoint was incorrect.
Let us add Pact to the project and write a consumer pact test for the `GET /products/{id}` endpoint.
*Provider states* is an important concept of Pact that we need to introduce. A state describes the data the provider should have in place for a specific interaction. For the moment, we will be testing the following states:
- `product with ID 10 exists`
- `products exist`
The consumer can define the state of an interaction using the `given` property.
Note how similar it looks to our unit test:
In `consumer/src/test/java/au/com/dius/pactworkshop/consumer/ProductConsumerPactTest.java`:
```java
@ExtendWith(PactConsumerTestExt.class)
public class ProductConsumerPactTest {
@Pact(consumer = "FrontendApplication", provider = "ProductService")
RequestResponsePact getAllProducts(PactDslWithProvider builder) {
return builder.given("products exist")
.uponReceiving("get all products")
.method("GET")
.path("/products")
.willRespondWith()
.status(200)
.headers(headers())
.body(newJsonArrayMinLike(2, array ->
array.object(object -> {
object.stringType("id", "09");
object.stringType("type", "CREDIT_CARD");
object.stringType("name", "Gem Visa");
})
).build())
.toPact();
}
@Pact(consumer = "FrontendApplication", provider = "ProductService")
RequestResponsePact getOneProduct(PactDslWithProvider builder) {
return builder.given("product with ID 10 exists")
.uponReceiving("get product with ID 10")
.method("GET")
.path("/products/10")
.willRespondWith()
.status(200)
.headers(headers())
.body(newJsonBody(object -> {
object.stringType("id", "10");
object.stringType("type", "CREDIT_CARD");
object.stringType("name", "28 Degrees");
}).build())
.toPact();
}
@Test
@PactTestFor(pactMethod = "getAllProducts")
void getAllProducts_whenProductsExist(MockServer mockServer) {
Product product = new Product();
product.setId("09");
product.setType("CREDIT_CARD");
product.setName("Gem Visa");
List<Product> expected = Arrays.asList(product, product);
RestTemplate restTemplate = new RestTemplateBuilder()
.rootUri(mockServer.getUrl())
.build();
List<Product> products = new ProductService(restTemplate).getAllProducts();
assertEquals(expected, products);
}
@Test
@PactTestFor(pactMethod = "getOneProduct")
void getProductById_whenProductWithId10Exists(MockServer mockServer) {
Product expected = new Product();
expected.setId("10");
expected.setType("CREDIT_CARD");
expected.setName("28 Degrees");
RestTemplate restTemplate = new RestTemplateBuilder()
.rootUri(mockServer.getUrl())
.build();
Product product = new ProductService(restTemplate).getProduct("10");
assertEquals(expected, product);
}
private Map<String, String> headers() {
Map<String, String> headers = new HashMap<>();
headers.put("Content-Type", "application/json; charset=utf-8");
return headers;
}
}
```

This test starts a mock server on a random port that acts as our provider service. To make this work, we point the `RestTemplate` at the mock server's base URL that Pact provides, instead of the real provider.
To run only the Pact tests:
```console
> ./gradlew consumer:test --tests '*PactTest'
```
Running this test still passes, but it creates a pact file which we can use to validate our assumptions on the provider side, and have a conversation around.
```console
❯ ./gradlew consumer:test --tests '*PactTest'
BUILD SUCCESSFUL in 6s
```
A pact file should have been generated in *consumer/build/pacts/FrontendApplication-ProductService.json*
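For orientation, the generated pact file is a JSON document that records each interaction, its provider state, and the matching rules derived from the DSL. An abridged, hand-written sketch of roughly what it contains (field values here are illustrative, not copied from a real run):

```json
{
  "consumer": { "name": "FrontendApplication" },
  "provider": { "name": "ProductService" },
  "interactions": [
    {
      "description": "get product with ID 10",
      "providerStates": [ { "name": "product with ID 10 exists" } ],
      "request": { "method": "GET", "path": "/products/10" },
      "response": {
        "status": 200,
        "headers": { "Content-Type": "application/json; charset=utf-8" },
        "body": { "id": "10", "type": "CREDIT_CARD", "name": "28 Degrees" },
        "matchingRules": {
          "body": { "$.id": { "matchers": [ { "match": "type" } ] } }
        }
      }
    }
  ],
  "metadata": { "pactSpecification": { "version": "3.0.0" } }
}
```

Note that `stringType` in the DSL produces a type-based matching rule rather than an exact-value expectation, so the provider is free to return different concrete values.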
*NOTE*: even if the API client had been graciously provided for us by our provider team, it doesn't mean that we shouldn't write contract tests: the version of the client we have may not always be in sync with the deployed API, and we will write tests on the output appropriate to our specific needs.
Move on to [step 4](https://github.com/pact-foundation/pact-workshop-jvm-spring/tree/step4#step-4---verify-the-provider)
## Step 4 - Verify the provider
We will need to copy the Pact contract file that was produced from the consumer test into the Provider module. This will help us verify that the provider can meet the requirements as set out in the contract.
Copy the contract located in `consumer/build/pacts/FrontendApplication-ProductService.json` to `provider/src/test/resources/pacts/FrontendApplication-ProductService.json`, or run the Gradle task
```console
> ./gradlew consumer:copyPacts
BUILD SUCCESSFUL in 1s
```
Now let's make a start on writing Pact tests to validate the consumer contract:
In `provider/src/test/java/au/com/dius/pactworkshop/provider/ProductPactProviderTest.java`:
```java
@Provider("ProductService")
@PactFolder("pacts")
@ExtendWith(SpringExtension.class)
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
public class ProductPactProviderTest {
@LocalServerPort
int port;
@BeforeEach
void setUp(PactVerificationContext context) {
context.setTarget(new HttpTestTarget("localhost", port));
}
@TestTemplate
@ExtendWith(PactVerificationInvocationContextProvider.class)
void verifyPact(PactVerificationContext context) {
context.verifyInteraction();
}
@State("products exist")
void toProductsExistState() {
}
@State("product with ID 10 exists")
void toProductWithIdTenExistsState() {
}
}
```
To run only the verification tests:
```console
> ./gradlew provider:test --tests '*Pact*Test'
```
We now need to validate that the pact generated by the consumer holds, by executing it against the running service provider. This verification should fail:
```console
❯ ./gradlew provider:test --tests '*Pact*Test'
...
...
au.com.dius.pactworkshop.provider.ProductPactProviderTest > FrontendApplication - get product with ID 10 FAILED
java.lang.AssertionError at ProductPactProviderTest.java:33
2020-10-09 06:21:52.555 INFO 6404 --- [extShutdownHook] o.s.s.concurrent.Thread
2 tests completed, 1 failed
> Task :provider:test FAILED
```

The test has failed, as the expected path `/products/{id}` is returning 404. We incorrectly believed our provider followed a RESTful design, but the authors were too lazy to implement a better routing solution 🤷.
The correct endpoint which the consumer should call is `/product/{id}`.
Move on to [step 5](https://github.com/pact-foundation/pact-workshop-jvm-spring/tree/step5#step-5---back-to-the-client-we-go)
## Step 5 - Back to the client we go
We now need to update the consumer client and tests to hit the correct product path.
First, we need to update the GET route for the client:
In `consumer/src/main/au/com/dius/pactworkshop/consumer/ProductService.java`:
```java
...
public Product getProduct(String id) {
return restTemplate.getForEntity("/product/{id}", Product.class, id).getBody();
}
```
Then we need to update the Pact test `ID 10 exists` to use the correct endpoint in `path`.
In `consumer/src/test/java/au/com/dius/pactworkshop/consumer/ProductConsumerPactTest.java`:
```java
@Pact(consumer = "FrontendApplication", provider = "ProductService")
RequestResponsePact getOneProduct(PactDslWithProvider builder) {
return builder.given("product with ID 10 exists")
.uponReceiving("get product with ID 10")
.method("GET")
.path("/product/10")
.willRespondWith()
.status(200)
.headers(headers())
.body(newJsonBody(object -> {
object.stringType("id", "10");
object.stringType("type", "CREDIT_CARD");
object.stringType("name", "28 Degrees");
}).build())
.toPact();
}
...
```

Let's run and generate an updated pact file on the client:
```console
❯ ./gradlew consumer:test --tests *PactTest
BUILD SUCCESSFUL in 7s
```
Now we run the provider tests again with the updated contract.
Copy the updated contract located in `consumer/build/pacts/FrontendApplication-ProductService.json` to `provider/src/test/resources/pacts/FrontendApplication-ProductService.json` by running the command:
```console
> ./gradlew consumer:copyPacts
BUILD SUCCESSFUL in 1s
```
Run the command:
```console
❯ ./gradlew provider:test --tests '*Pact*Test'
...
...
BUILD SUCCESSFUL in 10s
```
Yay - green ✅!
Move on to [step 6](https://github.com/pact-foundation/pact-workshop-jvm-spring/tree/step6#step-6---consumer-updates-contract-for-missing-products)
## Step 6 - Consumer updates contract for missing products
We're now going to add 2 more scenarios to the contract:
- What happens when we make a call for a product that doesn't exist? We assume we'll get a `404`.
- What happens when we make a call for getting all products but none exist at the moment? We assume a `200` with an empty array.
Let's write a test for these scenarios, and then generate an updated pact file.
In `consumer/src/test/java/au/com/dius/pactworkshop/consumer/ProductConsumerPactTest.java`:
```java
@Pact(consumer = "FrontendApplication", provider = "ProductService")
RequestResponsePact noProductsExist(PactDslWithProvider builder) {
return builder.given("no products exist")
.uponReceiving("get all products")
.method("GET")
.path("/products")
.willRespondWith()
.status(200)
.headers(headers())
.body("[]")
.toPact();
}
@Pact(consumer = "FrontendApplication", provider = "ProductService")
RequestResponsePact productDoesNotExist(PactDslWithProvider builder) {
return builder.given("product with ID 11 does not exist")
.uponReceiving("get product with ID 11")
.method("GET")
.path("/product/11")
.willRespondWith()
.status(404)
.toPact();
}
@Test
@PactTestFor(pactMethod = "noProductsExist")
void getAllProducts_whenNoProductsExist(MockServer mockServer) {
RestTemplate restTemplate = new RestTemplateBuilder()
.rootUri(mockServer.getUrl())
.build();
List<Product> products = new ProductService(restTemplate).getAllProducts();
assertEquals(Collections.emptyList(), products);
}
@Test
@PactTestFor(pactMethod = "productDoesNotExist")
void getProductById_whenProductWithId11DoesNotExist(MockServer mockServer) {
RestTemplate restTemplate = new RestTemplateBuilder()
.rootUri(mockServer.getUrl())
.build();
HttpClientErrorException e = assertThrows(HttpClientErrorException.class,
() -> new ProductService(restTemplate).getProduct("11"));
assertEquals(404, e.getStatusCode().value());
}
```
Notice that our new tests look almost identical to our previous tests, and only differ on the expectations of the _response_ - the HTTP request expectations are exactly the same.
```console
❯ ./gradlew consumer:test --tests '*PactTest'
BUILD SUCCESSFUL in 1s
```
What does our provider have to say about these new tests? Again, copy the updated pact file into the provider's pact directory:
```console
> ./gradlew consumer:copyPacts
BUILD SUCCESSFUL in 1s
```
and run the command:
```console
❯ ./gradlew provider:test --tests '*Pact*Test'
...
...
au.com.dius.pactworkshop.provider.ProductPactProviderTest > FrontendApplication - get all products FAILED
java.lang.AssertionError at ProductPactProviderTest.java:33
au.com.dius.pactworkshop.provider.ProductPactProviderTest > FrontendApplication - get product with ID 11 FAILED
java.lang.AssertionError at ProductPactProviderTest.java:33
2020-10-09 08:27:31.030 INFO 18048 --- [extShutdownHook] o.s.s.concurrent.Threa
4 tests completed, 2 failed
> Task :provider:test FAILED
FAILURE: Build failed with an exception.
```
We expected this failure, because the product we are requesting does in fact exist! What we want to test is what happens when the provider is in a different *state*. This is what is referred to as "provider states", and it is how Pact gets around test ordering and related issues.
We could resolve this by updating our consumer test to use a known non-existent product, but it's worth understanding how Provider states work more generally.
Move on to [step 7](https://github.com/pact-foundation/pact-workshop-jvm-spring/tree/step7#step-7---adding-the-missing-states)
## Step 7 - Adding the missing states
Our code already deals with missing products and sends a `404` response, however our test data fixture always has products with IDs 10 and 11 in the database.
In this step, we will add state handlers to our provider Pact verifications, which will update the state of our data store depending on which states the consumers require.
States are invoked prior to the actual test function being invoked. You can see the full [lifecycle here](https://github.com/pact-foundation/pact-go#lifecycle-of-a-provider-verification).
We're going to add handlers for all our states:
- products exist
- no products exist
- product with ID 10 exists
- product with ID 11 does not exist
Let's open up our provider Pact verifications in `provider/src/test/java/au/com/dius/pactworkshop/provider/ProductPactProviderTest.java`:
```java
@State("products exist")
void toProductsExistState() {
when(productRepository.fetchAll()).thenReturn(
Arrays.asList(new Product("09", "CREDIT_CARD", "Gem Visa", "v1"),
new Product("10", "CREDIT_CARD", "28 Degrees", "v1")));
}
@State({
"no products exist",
"product with ID 11 does not exist"
})
void toNoProductsExistState() {
when(productRepository.fetchAll()).thenReturn(Collections.emptyList());
}
@State("product with ID 10 exists")
void toProductWithIdTenExistsState() {
when(productRepository.getById("10")).thenReturn(Optional.of(new Product("10", "CREDIT_CARD", "28 Degrees", "v1")));
}
```
Let's see how we go now:
```console
❯ ./gradlew provider:test --tests *Pact*Test
BUILD SUCCESSFUL in 11s
```
_NOTE_: The states are not necessarily a 1-to-1 mapping with the consumer contract tests. You can reuse states amongst different tests. In this scenario we could have used `no products exist` for both tests, which would have been equally valid.
Move on to [step 8](https://github.com/pact-foundation/pact-workshop-jvm-spring/tree/step8#step-8---authorization)
## Step 8 - Authorization
It turns out that not everyone should be able to use the API. After a discussion with the team, it was decided that a time-bound bearer token would suffice. The token must be in `yyyy-MM-ddTHH:mm` format and within 1 hour of the current time.
In the case a valid bearer token is not provided, we expect a `401`. Let's update the consumer to pass the bearer token, and capture this new `401` scenario.
In `consumer/src/main/au/com/dius/pactworkshop/consumer/ProductService.java`:
```java
@Service
public class ProductService {
private final RestTemplate restTemplate;
@Autowired
public ProductService(RestTemplate restTemplate) {
this.restTemplate = restTemplate;
}
public List<Product> getAllProducts() {
return restTemplate.exchange("/products",
HttpMethod.GET,
getRequestEntity(),
new ParameterizedTypeReference<List<Product>>(){}).getBody();
}
public Product getProduct(String id) {
return restTemplate.exchange("/product/{id}",
HttpMethod.GET,
getRequestEntity(),
Product.class, id).getBody();
}
private HttpEntity<String> getRequestEntity() {
HttpHeaders headers = new HttpHeaders();
headers.add(HttpHeaders.AUTHORIZATION, generateAuthToken());
return new HttpEntity<>(headers);
}
private String generateAuthToken() {
return "Bearer " + new SimpleDateFormat("yyyy-MM-dd'T'HH:mm").format(new Date());
}
}
```
In `consumer/src/test/java/au/com/dius/pactworkshop/consumer/ProductConsumerPactTest.java`:
```java
@ExtendWith(PactConsumerTestExt.class)
public class ProductConsumerPactTest {
@Pact(consumer = "FrontendApplication", provider = "ProductService")
RequestResponsePact getAllProducts(PactDslWithProvider builder) {
return builder.given("products exist")
.uponReceiving("get all products")
.method("GET")
.path("/products")
.matchHeader("Authorization", "Bearer (19|20)\\d\\d-(0[1-9]|1[012])-(0[1-9]|[12][0-9]|3[01])T([01][1-9]|2[0123]):[0-5][0-9]")
.willRespondWith()
.status(200)
.headers(headers())
.body(newJsonArrayMinLike(2, array ->
array.object(object -> {
object.stringType("id", "09");
object.stringType("type", "CREDIT_CARD");
object.stringType("name", "Gem Visa");
})
).build())
.toPact();
}
@Pact(consumer = "FrontendApplication", provider = "ProductService")
RequestResponsePact noProductsExist(PactDslWithProvider builder) {
return builder.given("no products exist")
.uponReceiving("get all products")
.method("GET")
.path("/products")
.matchHeader("Authorization", "Bearer (19|20)\\d\\d-(0[1-9]|1[012])-(0[1-9]|[12][0-9]|3[01])T([01][1-9]|2[0123]):[0-5][0-9]")
.willRespondWith()
.status(200)
.headers(headers())
.body("[]")
.toPact();
}
@Pact(consumer = "FrontendApplication", provider = "ProductService")
RequestResponsePact allProductsNoAuthToken(PactDslWithProvider builder) {
return builder.given("products exist")
.uponReceiving("get all products with no auth token")
.method("GET")
.path("/products")
.willRespondWith()
.status(401)
.toPact();
}
@Pact(consumer = "FrontendApplication", provider = "ProductService")
RequestResponsePact getOneProduct(PactDslWithProvider builder) {
return builder.given("product with ID 10 exists")
.uponReceiving("get product with ID 10")
.method("GET")
.path("/product/10")
.matchHeader("Authorization", "Bearer (19|20)\\d\\d-(0[1-9]|1[012])-(0[1-9]|[12][0-9]|3[01])T([01][1-9]|2[0123]):[0-5][0-9]")
.willRespondWith()
.status(200)
.headers(headers())
.body(newJsonBody(object -> {
object.stringType("id", "10");
object.stringType("type", "CREDIT_CARD");
object.stringType("name", "28 Degrees");
}).build())
.toPact();
}
@Pact(consumer = "FrontendApplication", provider = "ProductService")
RequestResponsePact productDoesNotExist(PactDslWithProvider builder) {
return builder.given("product with ID 11 does not exist")
.uponReceiving("get product with ID 11")
.method("GET")
.path("/product/11")
.matchHeader("Authorization", "Bearer (19|20)\\d\\d-(0[1-9]|1[012])-(0[1-9]|[12][0-9]|3[01])T([01][1-9]|2[0123]):[0-5][0-9]")
.willRespondWith()
.status(404)
.toPact();
}
@Pact(consumer = "FrontendApplication", provider = "ProductService")
RequestResponsePact singleProductnoAuthToken(PactDslWithProvider builder) {
return builder.given("product with ID 10 exists")
.uponReceiving("get product by ID 10 with no auth token")
.method("GET")
.path("/product/10")
.willRespondWith()
.status(401)
.toPact();
}
@Test
@PactTestFor(pactMethod = "getAllProducts")
void getAllProducts_whenProductsExist(MockServer mockServer) {
Product product = new Product();
product.setId("09");
product.setType("CREDIT_CARD");
product.setName("Gem Visa");
List<Product> expected = Arrays.asList(product, product);
RestTemplate restTemplate = new RestTemplateBuilder()
.rootUri(mockServer.getUrl())
.build();
List<Product> products = new ProductService(restTemplate).getAllProducts();
assertEquals(expected, products);
}
@Test
@PactTestFor(pactMethod = "noProductsExist")
void getAllProducts_whenNoProductsExist(MockServer mockServer) {
RestTemplate restTemplate = new RestTemplateBuilder()
.rootUri(mockServer.getUrl())
.build();
List<Product> products = new ProductService(restTemplate).getAllProducts();
assertEquals(Collections.emptyList(), products);
}
@Test
@PactTestFor(pactMethod = "allProductsNoAuthToken")
void getAllProducts_whenNoAuth(MockServer mockServer) {
RestTemplate restTemplate = new RestTemplateBuilder()
.rootUri(mockServer.getUrl())
.build();
HttpClientErrorException e = assertThrows(HttpClientErrorException.class,
() -> new ProductService(restTemplate).getAllProducts());
assertEquals(401, e.getStatusCode().value());
}
@Test
@PactTestFor(pactMethod = "getOneProduct")
void getProductById_whenProductWithId10Exists(MockServer mockServer) {
Product expected = new Product();
expected.setId("10");
expected.setType("CREDIT_CARD");
expected.setName("28 Degrees");
RestTemplate restTemplate = new RestTemplateBuilder()
.rootUri(mockServer.getUrl())
.build();
Product product = new ProductService(restTemplate).getProduct("10");
assertEquals(expected, product);
}
@Test
@PactTestFor(pactMethod = "productDoesNotExist")
void getProductById_whenProductWithId11DoesNotExist(MockServer mockServer) {
RestTemplate restTemplate = new RestTemplateBuilder()
.rootUri(mockServer.getUrl())
.build();
HttpClientErrorException e = assertThrows(HttpClientErrorException.class,
() -> new ProductService(restTemplate).getProduct("11"));
assertEquals(404, e.getStatusCode().value());
}
@Test
@PactTestFor(pactMethod = "singleProductnoAuthToken")
void getProductById_whenNoAuth(MockServer mockServer) {
RestTemplate restTemplate = new RestTemplateBuilder()
.rootUri(mockServer.getUrl())
.build();
HttpClientErrorException e = assertThrows(HttpClientErrorException.class,
() -> new ProductService(restTemplate).getProduct("10"));
assertEquals(401, e.getStatusCode().value());
}
private Map<String, String> headers() {
Map<String, String> headers = new HashMap<>();
headers.put("Content-Type", "application/json; charset=utf-8");
return headers;
}
}
```
Generate a new Pact file:
```console
❯ ./gradlew consumer:test --tests *PactTest
BUILD SUCCESSFUL in 9s
```
We should now have two new interactions in our pact file.
Let's test the provider. Copy the updated pact file into the provider's pact directory and run the command:
```console
> ./gradlew consumer:copyPacts
BUILD SUCCESSFUL in 1s
❯ ./gradlew provider:test --tests *Pact*Test
...
...
au.com.dius.pactworkshop.provider.ProductPactProviderTest > FrontendApplication - get product by ID 10 with no auth token FAILED
java.lang.AssertionError at ProductPactProviderTest.java:43
au.com.dius.pactworkshop.provider.ProductPactProviderTest > FrontendApplication - get all products with no auth token FAILED
java.lang.AssertionError at ProductPactProviderTest.java:43
2020-10-09 15:07:37.909 INFO 17464 --- [extShutdownHook] o.s.s.concurrent.Threa
6 tests completed, 2 failed
> Task :provider:test FAILED
FAILURE: Build failed with an exception.
```
For the most recently added interactions, where we expect a `401` response when no authorization header is sent, we are getting a `200`...
Move on to [step 9](https://github.com/pact-foundation/pact-workshop-jvm-spring/tree/step9#step-9---implement-authorisation-on-the-provider)
## Step 9 - Implement authorisation on the provider
We will add a filter that checks the `Authorization` header and rejects the request with a `401` if the token is missing or older than 1 hour.
In `provider/src/main/java/au/com/dius/pactworkshop/provider/AuthFilter.java`
```java
@Component
public class AuthFilter implements Filter {
@Override
public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain) throws IOException, ServletException {
String authHeader = ((HttpServletRequest) request).getHeader("Authorization");
if (authHeader == null) {
((HttpServletResponse) response).sendError(401, "Unauthorized");
return;
}
authHeader = authHeader.replaceAll("Bearer ", "");
if (!isValidAuthTimestamp(authHeader)) {
((HttpServletResponse) response).sendError(401, "Unauthorized");
return;
}
chain.doFilter(request, response);
}
private boolean isValidAuthTimestamp(String timestamp) {
SimpleDateFormat formatter = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm");
try {
Date headerDate = formatter.parse(timestamp);
long diff = (System.currentTimeMillis() - headerDate.getTime()) / 1000;
return diff >= 0 && diff <= 3600;
} catch (ParseException e) {
e.printStackTrace();
}
return false;
}
}
```
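As an aside, the same one-hour window check can be expressed with `java.time`, which avoids `SimpleDateFormat`'s thread-safety pitfalls. A small self-contained sketch (the class and method names here are ours, not part of the workshop code):

```java
import java.time.Duration;
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeParseException;

public class AuthTimestamp {
    private static final DateTimeFormatter FORMAT =
            DateTimeFormatter.ofPattern("yyyy-MM-dd'T'HH:mm");

    // Returns true if the timestamp parses and lies within the past hour of `now`.
    public static boolean isValid(String timestamp, LocalDateTime now) {
        try {
            LocalDateTime headerTime = LocalDateTime.parse(timestamp, FORMAT);
            Duration age = Duration.between(headerTime, now);
            // Reject tokens from the future and tokens older than one hour
            return !age.isNegative() && age.compareTo(Duration.ofHours(1)) <= 0;
        } catch (DateTimeParseException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        LocalDateTime now = LocalDateTime.of(2020, 10, 9, 15, 0);
        System.out.println(isValid("2020-10-09T14:30", now)); // within the hour
        System.out.println(isValid("2020-10-09T13:00", now)); // too old
        System.out.println(isValid("not-a-timestamp", now));  // unparseable
    }
}
```

Because `DateTimeFormatter` is immutable, the formatter can safely be a shared `static final` field, whereas a shared `SimpleDateFormat` would need per-thread instances.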
This means that a client must present an HTTP `Authorization` header that looks as follows:
```
Authorization: Bearer 2006-01-02T15:04
```
Let's test this out:
```console
❯ ./gradlew provider:test --tests *Pact*Test
au.com.dius.pactworkshop.provider.ProductPactProviderTest > FrontendApplication - get all products FAILED
java.lang.AssertionError at ProductPactProviderTest.java:43
au.com.dius.pactworkshop.provider.ProductPactProviderTest > FrontendApplication - get product with ID 10 FAILED
java.lang.AssertionError at ProductPactProviderTest.java:43
au.com.dius.pactworkshop.provider.ProductPactProviderTest > FrontendApplication - get product with ID 11 FAILED
java.lang.AssertionError at ProductPactProviderTest.java:43
au.com.dius.pactworkshop.provider.ProductPactProviderTest > FrontendApplication - get all products FAILED
java.lang.AssertionError at ProductPactProviderTest.java:43
2020-10-12 10:28:12.744 INFO 17984 --- [extShutdownHook] o.s.s.concurrent.Threa
6 tests completed, 4 failed
> Task :provider:test FAILED
FAILURE: Build failed with an exception.
```
Oh, dear. _More_ tests are failing. Can you understand why?
Move on to [step 10](https://github.com/pact-foundation/pact-workshop-jvm-spring/tree/step10#step-10---request-filters-on-the-provider)
## Step 10 - Request Filters on the Provider
Because our pact file has static data in it, our bearer token is now out of date, so when Pact verification passes it to the Provider we get a `401`. There are multiple ways to resolve this - mocking or stubbing out the authentication component is a common one. In our use case, we are going to use a process referred to as _Request Filtering_, using a `RequestFilter`.
_NOTE_: This is an advanced concept and should be used carefully, as it has the potential to invalidate a contract by bypassing its constraints. See https://docs.pact.io/implementation_guides/jvm/provider/junit5/#modifying-the-requests-before-they-are-sent for more details on this.
The approach we are going to take to inject the header is as follows:
1. If we receive any Authorization header, we override the incoming request with a valid (in time) Authorization header, and continue with whatever call was being made
1. If we don't receive an Authorization header, we do nothing
_NOTE_: We are not considering the `403` scenario in this example.
In `provider/src/test/java/au/com/dius/pactworkshop/provider/ProductPactProviderTest.java`:
```java
@TestTemplate
@ExtendWith(PactVerificationInvocationContextProvider.class)
void verifyPact(PactVerificationContext context, HttpRequest request) {
replaceAuthHeader(request);
context.verifyInteraction();
}
private void replaceAuthHeader(HttpRequest request) {
if (request.containsHeader("Authorization")) {
String header = "Bearer " + new SimpleDateFormat("yyyy-MM-dd'T'HH:mm").format(new Date());
request.removeHeaders("Authorization");
request.addHeader("Authorization", header);
}
}
```
We can now run the Provider tests
```console
❯ ./gradlew provider:test --tests *Pact*Test
BUILD SUCCESSFUL in 1s
```
Move on to [step 11](https://github.com/pact-foundation/pact-workshop-jvm-spring/tree/step11#step-11---using-a-pact-broker)
## Step 11 - Using a Pact Broker

We've been publishing our pacts from the consumer project by essentially sharing the file system with the provider. But this is not very manageable when you have multiple teams contributing to the code base, and pushing to CI. We can use a [Pact Broker](https://pactflow.io) to do this instead.
Using a broker simplifies the management of pacts and adds a number of useful features, including some safety enhancements for continuous delivery which we'll see shortly.
In this workshop we will be using the open source Pact broker.
### Running the Pact Broker with docker-compose
In the root directory, run:
```console
docker-compose up
```
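The workshop repository ships its own `docker-compose.yml`. For reference, a minimal Broker setup along these lines typically pairs the `pactfoundation/pact-broker` image with a Postgres database; the sketch below is an illustrative assumption, not a copy of the repository's file (the environment variable names come from the pact-broker Docker image documentation):

```yaml
version: "3"
services:
  postgres:
    image: postgres
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: password
      POSTGRES_DB: postgres

  pact-broker:
    image: pactfoundation/pact-broker
    ports:
      - "8000:9292"   # the Broker listens on 9292 inside the container
    depends_on:
      - postgres
    environment:
      PACT_BROKER_DATABASE_URL: "postgres://postgres:password@postgres/postgres"
      PACT_BROKER_BASIC_AUTH_USERNAME: pact_workshop
      PACT_BROKER_BASIC_AUTH_PASSWORD: pact_workshop
```

The `8000:9292` port mapping is what makes the Broker reachable on `http://localhost:8000`, matching the URLs used in the rest of this step.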
### Publish contracts from consumer
First, in the consumer project we need to tell Pact about our broker.
In `consumer/build.gradle`:
```groovy
...
...
def getGitHash = { ->
def stdout = new ByteArrayOutputStream()
exec {
commandLine 'git', 'rev-parse', '--short', 'HEAD'
standardOutput = stdout
}
return stdout.toString().trim()
}
def getGitBranch = { ->
def stdout = new ByteArrayOutputStream()
exec {
commandLine 'git', 'rev-parse', '--abbrev-ref', 'HEAD'
standardOutput = stdout
}
return stdout.toString().trim()
}
static def getOrDefault(env, defaultVal) {
def val = System.getenv(env)
if (val == null || val.isEmpty()) {
val = defaultVal
}
return val
}
pact {
publish {
pactDirectory = 'consumer/build/pacts'
pactBrokerUrl = 'http://localhost:8000/'
pactBrokerUsername = getOrDefault('PACT_BROKER_USERNAME', 'pact_workshop')
pactBrokerPassword = getOrDefault('PACT_BROKER_PASSWORD', 'pact_workshop')
consumerBranch = getGitBranch()
consumerVersion = getGitHash()
}
}
```
Now run
```console
❯ ./gradlew consumer:test --tests '*PactTest*' pactPublish
> Task :consumer:pactPublish
Publishing 'FrontendApplication-ProductService.json'
OK
BUILD SUCCESSFUL in 11s
```
*NOTE*: For real projects, you should only publish pacts from CI builds
Have a browse around the broker on http://localhost:8000 (with username/password: `pact_workshop`/`pact_workshop`) and see your newly published contract!
### Verify contracts on Provider
All we need to do for the provider is update where it finds its pacts: instead of the local filesystem, it will fetch them from the broker.
In `provider/src/test/java/au/com/dius/pactworkshop/provider/ProductPactProviderTest.java`:
```java
//replace
@PactFolder("pacts")
// with
@PactBroker(
host = "localhost",
port = "8000",
authentication = @PactBrokerAuth(username = "pact_workshop", password = "pact_workshop")
)
```
In `provider/build.gradle`:
```groovy
...
...
def getGitHash = { ->
def stdout = new ByteArrayOutputStream()
exec {
commandLine 'git', 'rev-parse', '--short', 'HEAD'
standardOutput = stdout
}
return stdout.toString().trim()
}
def getGitBranch = { ->
def stdout = new ByteArrayOutputStream()
exec {
commandLine 'git', 'rev-parse', '--abbrev-ref', 'HEAD'
standardOutput = stdout
}
return stdout.toString().trim()
}
test {
useJUnitPlatform()
systemProperty 'pact.provider.branch', getGitBranch()
if (System.getProperty('pactPublishResults') == 'true') {
systemProperty 'pact.provider.version', getGitHash()
systemProperty 'pact.verifier.publishResults', 'true'
}
}
```
Let's run the provider verification one last time after this change:
```console
❯ ./gradlew -DpactPublishResults=true provider:test --tests *Pact*Test
BUILD SUCCESSFUL in 16s
```
*NOTE*: For real projects, you should only publish verification results from CI builds.
As part of this process, the verification results (the overall outcome and detailed information about any failures at the interaction level) are also published to the Broker.
This is one of the Broker's more powerful features. Referred to as [Verifications](https://docs.pact.io/pact_broker/advanced_topics/provider_verification_results), it allows providers to report the status of a verification back to the broker. You get a quick view of the status of each consumer and provider on a nice dashboard. But it enables much more than that!
### Can I deploy?
With the `pact-broker` [can-i-deploy tool](https://docs.pact.io/pact_broker/advanced_topics/provider_verification_results), the Broker can determine whether a consumer or provider is safe to release to the specified environment.
You can run the `pact-broker can-i-deploy` checks as follows:
```console
❯ docker run --rm --network host \
-e PACT_BROKER_BASE_URL=http://localhost:8000 \
-e PACT_BROKER_USERNAME=pact_workshop \
-e PACT_BROKER_PASSWORD=pact_workshop \
pactfoundation/pact-cli:latest \
broker can-i-deploy \
--pacticipant FrontendApplication \
--latest
Computer says yes \o/
CONSUMER | C.VERSION | PROVIDER | P.VERSION | SUCCESS?
--------------------|-----------|----------------|-----------|---------
FrontendApplication | 2955ca5 | ProductService | 2955ca5 | true
All required verification results are published and successful
----------------------------
❯ docker run --rm --network host \
-e PACT_BROKER_BASE_URL=http://localhost:8000 \
-e PACT_BROKER_USERNAME=pact_workshop \
-e PACT_BROKER_PASSWORD=pact_workshop \
pactfoundation/pact-cli:latest \
broker can-i-deploy \
--pacticipant ProductService \
--latest
Computer says yes \o/
CONSUMER | C.VERSION | PROVIDER | P.VERSION | SUCCESS?
--------------------|-----------|----------------|-----------|---------
FrontendApplication | 2955ca5 | ProductService | 2955ca5 | true
All required verification results are published and successful
```
That's it - you're now a Pact pro. Go build 🔨
| 1 |
mzgreen/HideOnScrollExample | This is an example on how to show/hide views when scrolling a list. | null | HideOnScrollExample
=============
This example shows how to show/hide views (e.g. a Toolbar or FAB) when a list is scrolled up/down.
There is a blog post explaining the code:
[part 1 - outdated](http://mzgreen.github.io/2015/02/15/How-to-hideshow-Toolbar-when-list-is-scroling%28part1%29/)
[part 2 - outdated](http://mzgreen.github.io/2015/02/28/How-to-hideshow-Toolbar-when-list-is-scrolling%28part2%29/)
[part 3](https://mzgreen.github.io/2015/06/23/How-to-hideshow-Toolbar-when-list-is-scrolling%28part3%29/)
| 1 |
rakeshcusat/Code4Reference | This repo has code which can be referred as example code. These code have been written for learning purpose. | null | null | 1 |
fhopf/akka-crawler-example | Some example code of using Akka from Java | null | # Simple Producer Consumer example for Akka in Java
This repository contains five variants of a simple web crawler:
* A sequential example
* An example where the logic is split across 3 Actors
* An example where the retrieval of the pages is handled by multiple Actors in parallel
* An example where retrieval fails and the application hangs
* A supervised example where failing messages are resent
To start the sequential execution run `gradle runSequential`.
To start the simple actor execution run `gradle runActors`.
To start the parallel page fetching run `gradle runParallelActors`.
To start the failing or supervised examples run `gradle runFA` or `gradle runSA`.
The code is only meant as an example of how to implement the producer-consumer pattern in Akka. More information:
* http://blog.florian-hopf.de/2012/08/getting-rid-of-synchronized-using-akka.html
* http://blog.florian-hopf.de/2013/10/cope-with-failure-actor-supervision-in.html
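The producer-consumer split can be pictured with a minimal classic-Akka actor in Java. This is a hypothetical sketch, not code from this repo; `PageFetcher` and the message format are illustrative only.

```java
import akka.actor.AbstractActor;

// Consumer side of the crawler: receives a URL as a message,
// "fetches" it, and reports the result back to the sender.
public class PageFetcher extends AbstractActor {
    @Override
    public Receive createReceive() {
        return receiveBuilder()
            .match(String.class, url -> {
                // a real implementation would download and parse the page here
                getSender().tell("fetched:" + url, getSelf());
            })
            .build();
    }
}
```

Because each actor processes one message at a time, no `synchronized` blocks are needed, which is the point the first blog post above makes.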
*Please don't use the crawler as it is on sites where you didn't contact the site owner*
| 1 |
DNBbank/getting-started | Code examples to get your first working request to DNB APIs | null | # Getting started examples for DNB Open Banking
In this repo you will find code samples, sorted by language, to get you started with DNB's
Open Banking APIs. The aim of this repo is to provide code that works as soon as
the developer's credentials are set up in each file. You will find a readme file in
each subfolder with complementary information about the language-specific code.
## tldr;
1. Register an application at [developer.dnb.no][].
2. Configure credentials in `.env` file for running code examples. If you want
to run postman please follow the [postman readme][].
3. Run one of the examples and check out the code to see what is happening. Make sure that you have attached all APIs at [developer.dnb.no][] for the examples to run correctly.
### Getting API key
In order to call the APIs, you will need an API key. It
can be obtained from [developer.dnb.no][]. Register and log in to create an example
app and you will get the credentials you need. Each of the examples has documentation
on how to configure it with the credentials.
_Never make your API key publicly available!_
### Configuring the credentials
If you put the credentials in a `.env` file in the root directory it will work
for all the examples except postman. It is also possible to put the `.env` file
in the directory of the example you want to run. See [.env.example][] for a template
for the `.env` file.
### Running the examples
Each example contains a readme with documentation on how to run it. There is also
a script that is helpful for running the examples:
```shell
./run <example> # e.g. ./run nodejs
```
[developer.dnb.no]: https://developer.dnb.no
[postman readme]: ./postman/README.md
All the examples are licensed under MIT license.
| 0 |
giltene/GilExamples | null | null | null | 0 |
trishagee/mongodb-getting-started | Some examples of how to use the MongoDB Java driver, via unit tests. | null | Getting started with MongoDB and Java
=======================
Step by step examples (via unit tests) of how to do simple operations with the 2.12 version of the MongoDB Java driver. Use this code to play along with the Getting Started guides:
[Getting Started with MongoDB and Java: Part I](http://blog.mongodb.org/post/94065240033/getting-started-with-mongodb-and-java-part-i)
[Getting Started with MongoDB and Java: Part II](http://blog.mongodb.org/post/94724924068/getting-started-with-mongodb-and-java-part-ii)
| 0 |
nats-io/java-nats-examples | Repo for java-nats-examples | null | 
# NATS - Java Examples
[Java](http://java.com) examples for the [NATS messaging system](https://nats.io).
[](https://www.apache.org/licenses/LICENSE-2.0)
[](https://github.com/nats-io/java-nats-examples/actions/workflows/build.yml)
## Introduction
These are Java examples of using the NATS Java client, the [nats.java project](https://github.com/nats-io/nats.java).
In addition to this repository, there are also more examples in the [examples directory](https://github.com/nats-io/nats.java/tree/main/src/examples) of the NATS Java client project.
There is also [NATS by Example](https://natsbyexample.com) - An evolving collection of runnable, cross-client reference examples for NATS.
The [nats-by-example](nats-by-example/README.md) directory contains the same examples.
### Starters
The starter projects are provided to give you a jump start on adding the NATS Java client to your project.
* starter-maven [pom.xml](starter-maven/pom.xml) for maven users
* starter-gradle-groovy [build.gradle](starter-gradle-groovy/build.gradle) for gradle users who like the Groovy DSL
* starter-gradle-kotlin [build.gradle.kts](starter-gradle-kotlin/build.gradle.kts) for gradle users who like the Kotlin DSL
As a side note for Kotlin users, there is a small example in the [kotlin-nats-examples](https://github.com/nats-io/kotlin-nats-examples) project.
### Hello World
The [Hello World](hello-world/README.md) examples do things like creating streams, publishing and subscribing.
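As a taste of what the Hello World project covers, a minimal core-NATS publish/subscribe in Java looks roughly like this. This is a sketch assuming a local server on the default port; see the project's README for the real examples.

```java
import io.nats.client.Connection;
import io.nats.client.Dispatcher;
import io.nats.client.Nats;
import java.nio.charset.StandardCharsets;
import java.time.Duration;

public class HelloNats {
    public static void main(String[] args) throws Exception {
        try (Connection nc = Nats.connect("nats://localhost:4222")) {
            // asynchronous subscriber: the dispatcher invokes the callback per message
            Dispatcher d = nc.createDispatcher(msg ->
                System.out.println(new String(msg.getData(), StandardCharsets.UTF_8)));
            d.subscribe("greet");

            nc.publish("greet", "hello".getBytes(StandardCharsets.UTF_8));
            nc.flush(Duration.ofSeconds(1)); // ensure the publish reaches the server
        }
    }
}
```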
### Core Request Reply
The [Core Request Reply](core-request-reply-patterns/README.md)
example demonstrates that, using core NATS, there are multiple ways to do request-reply.
### JS Multi Tool
The [JS Multi Tool](js-multi-tool/README.md) is a tool to assist exercising and benchmarking of the NATS Java client in your own environment.
### Example Dev Cluster Configs
The files in the [Example Dev Cluster Configs](example-dev-cluster-configs/README.md) directory are templates for building a simple dev cluster.
### OCSP
The [OCSP](ocsp/README.md) project has instructions and examples for building an OCSP-aware SSL Context.
### SSL Context Factory
The [SSL Context Factory](ssl-context-factory/README.md) example demonstrates how to implement the SSLContextFactory,
an alternative way to provide an SSL Context to the Connection Options.
### Auth Callout
The [Auth Callout](auth-callout/README.md) example demonstrates a basic Auth Callout handler
### Recreate Consumer
The [Recreate Consumer](recreate-consumer/README.md) example demonstrates creating a durable consumer that will start where another one left off.
### Robust Push Subscription
The [Robust Push Subscription](robust-push-subscription/README.md) is an application with more robust error handler including recreating the consumer if heartbeat alarm occurs.
### Encoding
The [Encoding](encoding/README.md) project has examples that encoded/decode message payload.
### Chain Of Command
The [Chain Of Command](chain-of-command/README.md) example shows subscribing with wildcard subjects to form a chain of command.
Both "publish style" and "request style" workflow are demonstrated.
The "publish style" does not know if messages were received.
The "request style" knows if the request was received, so it could handle the case when it is not.
### Object Store / File Transfer
The [Manual File Transfer](file-transfer-manual/README.md) project was a proof of concept done
as part of the design of Object Store.
The [File Transfer Object Store](file-transfer-object-store/README.md) project demonstrates
transferring a file using the completed Object Store API.
### Error and Heartbeat Experiments
The [Error and Heartbeat Experiments](error-and-heartbeat-experiments/README.md) project
contains experiments that demonstrate how heartbeats and error listening work.
### Js Over Core
The [Js Over Core](js-over-core/README.md) uses core nats to publish (with Publish Acks!) and subscribe.
### Server Pool
The [Server Pool](server-pool/README.md) is an example how the developer can provide the connection/reconnection info themselves / dynamically.
### Multi Subject Worker
The [Multi Subject Worker](multi-subject-worker/README.md) is an example that processes multiple subjects.
### Original Functional Examples
The [Original Functional Examples](functional-examples/README.md) were the original examples
for the JNATS client; they demonstrate one feature at a time.
## License
Unless otherwise noted, the NATS source files are distributed
under the Apache Version 2.0 license found in the LICENSE file.
| 0 |
bezkoder/spring-boot-security-postgresql | Spring Boot, Spring Security, PostgreSQL: JWT Authentication & Authorization example | authentication authorization jwt-authentication postgresql spring-boot spring-data-jpa spring-security | # Spring Boot, Spring Security, PostgreSQL: JWT Authentication & Authorization example
## User Registration, User Login and Authorization process.
The diagram shows the flow of how we implement the User Registration, User Login and Authorization process.

## Spring Boot Server Architecture with Spring Security
You can have an overview of our Spring Boot Server with the diagram below:

## Configure Spring Datasource, JPA, App properties
Open `src/main/resources/application.properties`
```
spring.datasource.url= jdbc:postgresql://localhost:5432/testdb
spring.datasource.username= postgres
spring.datasource.password= 123
spring.jpa.properties.hibernate.jdbc.lob.non_contextual_creation= true
spring.jpa.properties.hibernate.dialect= org.hibernate.dialect.PostgreSQLDialect
# Hibernate ddl auto (create, create-drop, validate, update)
spring.jpa.hibernate.ddl-auto= update
# App Properties
bezkoder.app.jwtSecret= ======================BezKoder=Spring===========================
bezkoder.app.jwtExpirationMs= 86400000
```
## Run Spring Boot application
```
mvn spring-boot:run
```
## Run following SQL insert statements
```
INSERT INTO roles(name) VALUES('ROLE_USER');
INSERT INTO roles(name) VALUES('ROLE_MODERATOR');
INSERT INTO roles(name) VALUES('ROLE_ADMIN');
```
For more detail, please visit:
> [Spring Boot, Spring Security, PostgreSQL: JWT Authentication & Authorization example](https://bezkoder.com/spring-boot-security-postgresql-jwt-authentication/)
> [For MySQL](https://bezkoder.com/spring-boot-jwt-authentication/)
> [For MongoDB](https://bezkoder.com/spring-boot-jwt-auth-mongodb/)
## Refresh Token

For instruction: [Spring Boot Refresh Token with JWT example](https://bezkoder.com/spring-boot-refresh-token-jwt/)
## More Practice:
> [Spring Boot File upload example with Multipart File](https://bezkoder.com/spring-boot-file-upload/)
> [Exception handling: @RestControllerAdvice example in Spring Boot](https://bezkoder.com/spring-boot-restcontrolleradvice/)
> [Spring Boot Repository Unit Test with @DataJpaTest](https://bezkoder.com/spring-boot-unit-test-jpa-repo-datajpatest/)
> [Spring Boot Rest Controller Unit Test with @WebMvcTest](https://www.bezkoder.com/spring-boot-webmvctest/)
> [Spring Boot Pagination & Sorting example](https://www.bezkoder.com/spring-boot-pagination-sorting-example/)
> Validation: [Spring Boot Validate Request Body](https://www.bezkoder.com/spring-boot-validate-request-body/)
> Documentation: [Spring Boot and Swagger 3 example](https://www.bezkoder.com/spring-boot-swagger-3/)
> Caching: [Spring Boot Redis Cache example](https://www.bezkoder.com/spring-boot-redis-cache-example/)
Associations:
> [Spring Boot One To Many example with Spring JPA, Hibernate](https://www.bezkoder.com/jpa-one-to-many/)
> [Spring Boot Many To Many example with Spring JPA, Hibernate](https://www.bezkoder.com/jpa-many-to-many/)
> [JPA One To One example with Spring Boot](https://www.bezkoder.com/jpa-one-to-one/)
## Fullstack Authentication
> [Spring Boot + Vue.js JWT Authentication](https://bezkoder.com/spring-boot-vue-js-authentication-jwt-spring-security/)
> [Spring Boot + Angular 8 JWT Authentication](https://bezkoder.com/angular-spring-boot-jwt-auth/)
> [Spring Boot + Angular 10 JWT Authentication](https://bezkoder.com/angular-10-spring-boot-jwt-auth/)
> [Spring Boot + Angular 11 JWT Authentication](https://bezkoder.com/angular-11-spring-boot-jwt-auth/)
> [Spring Boot + Angular 12 JWT Authentication](https://www.bezkoder.com/angular-12-spring-boot-jwt-auth/)
> [Spring Boot + Angular 13 JWT Authentication](https://www.bezkoder.com/angular-13-spring-boot-jwt-auth/)
> [Spring Boot + Angular 14 JWT Authentication](https://www.bezkoder.com/angular-14-spring-boot-jwt-auth/)
> [Spring Boot + Angular 15 JWT Authentication](https://www.bezkoder.com/angular-15-spring-boot-jwt-auth/)
> [Spring Boot + Angular 16 JWT Authentication](https://www.bezkoder.com/angular-16-spring-boot-jwt-auth/)
> [Spring Boot + Angular 17 JWT Authentication](https://www.bezkoder.com/angular-17-spring-boot-jwt-auth/)
> [Spring Boot + React JWT Authentication](https://bezkoder.com/spring-boot-react-jwt-auth/)
## Fullstack CRUD App
> [Vue.js + Spring Boot + PostgreSQL example](https://www.bezkoder.com/spring-boot-vue-js-postgresql/)
> [Angular 8 + Spring Boot + PostgreSQL example](https://bezkoder.com/angular-spring-boot-postgresql/)
> [Angular 10 + Spring Boot + PostgreSQL example](https://bezkoder.com/angular-10-spring-boot-postgresql/)
> [Angular 11 + Spring Boot + PostgreSQL example](https://bezkoder.com/angular-11-spring-boot-postgresql/)
> [Angular 12 + Spring Boot + PostgreSQL example](https://www.bezkoder.com/angular-12-spring-boot-postgresql/)
> [Angular 13 + Spring Boot + PostgreSQL example](https://www.bezkoder.com/spring-boot-angular-13-postgresql/)
> [Angular 14 + Spring Boot + PostgreSQL example](https://www.bezkoder.com/spring-boot-angular-14-postgresql/)
> [Angular 15 + Spring Boot + PostgreSQL example](https://www.bezkoder.com/spring-boot-angular-15-postgresql/)
> [Angular 16 + Spring Boot + PostgreSQL example](https://www.bezkoder.com/spring-boot-angular-16-postgresql/)
> [Angular 17 + Spring Boot + PostgreSQL example](https://www.bezkoder.com/spring-boot-angular-17-postgresql/)
> [React + Spring Boot + PostgreSQL example](https://bezkoder.com/spring-boot-react-postgresql/)
Run both Back-end & Front-end in one place:
> [Integrate Angular with Spring Boot Rest API](https://bezkoder.com/integrate-angular-spring-boot/)
> [Integrate React.js with Spring Boot Rest API](https://bezkoder.com/integrate-reactjs-spring-boot/)
> [Integrate Vue.js with Spring Boot Rest API](https://bezkoder.com/integrate-vue-spring-boot/)
| 1 |
fpj/zookeeper-book-example | This is a code example that complements the material in the ZooKeeper O'Reilly book. | null | # Apache ZooKeeper - O'Reilly Book Example
This is example code we have developed for the Apache ZooKeeper book.
This code constitutes complementary material to the book and it has been
written to illustrate how to implement an application with ZooKeeper. It
hasn't been heavily tested or debugged, and it is missing features, so don't
take it as production-ready code. In fact, if you're able to fix bugs and
extend this implementation, it probably means that you have learned how
to program with ZooKeeper!
## Components
This example implements a simple master-worker system. There is a primary
master that assigns tasks, and it supports backup masters to replace the primary
in case it crashes. Workers execute the tasks assigned to them. A task
consists of reading the content of the task znode, nothing more. A real app
will most likely do something more complex than that. Finally, clients
submit tasks and wait for a status znode.
Here is a summary of the code flow for each of the components:
### Master:
1. Before taking leadership
    1. Try to create the master znode
    2. If it goes through, then take leadership
    3. Upon connection loss, needs to check if znode is there and who owns it
    4. Upon determining that someone else owns it, watch the master znode
2. After taking leadership
    1. Get workers
        1. Set a watch on the list of workers
        2. Check for dead workers and reassign tasks
        3. For each dead worker
            1. Get assigned tasks
            2. Get task data
            3. Move task to the list of unassigned tasks
            4. Delete assignment
    2. Recover tasks (tasks assigned to dead workers)
    3. Get unassigned tasks and assign them
    4. For each unassigned task
        1. Get task data
        2. Choose worker
        3. Assign to worker
        4. Delete task from the list of unassigned
### Worker:
1. Creates /assign/worker-xxx znode
2. Creates /workers/worker-xxx znode
3. Watches /assign/worker-xxx znode
4. Get tasks upon assignment
5. For each task, get task data
6. Execute task data
7. Create status
8. Delete assignment
### Client
1. Create task
2. Watch for status znode
3. Upon receiving a notification for the status znode, get status data
4. Delete status znode
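The client flow above maps onto the ZooKeeper API roughly as follows. This is a hedged sketch, not the book's actual client code: the znode paths follow the layout described above, and error handling is omitted.

```java
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class TaskClient {
    public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("localhost:2181", 15000, event -> {});

        // 1. Create the task as a sequential znode under /tasks
        String taskPath = zk.create("/tasks/task-", "cmd".getBytes(),
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT_SEQUENTIAL);

        // 2. Watch for the status znode the worker will eventually create
        String statusPath = "/status/" + taskPath.substring("/tasks/".length());
        zk.exists(statusPath, event -> {
            try {
                // 3. Upon notification, get the status data
                byte[] status = zk.getData(statusPath, false, null);
                System.out.println("task done: " + new String(status));
                // 4. Delete the status znode
                zk.delete(statusPath, -1);
            } catch (Exception e) {
                e.printStackTrace();
            }
        });
    }
}
```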
## Compile and run it
We used maven for this little project. Install maven if you don't have it
and run "mvn install" to generate the jar file and to run tests. We have a
few tests that check for basic functionality. For the C master, you'll need
to compile the ZooKeeper C client to use the client library.
To run it, follow these steps:
### Step 1: Start ZooKeeper
Run `bin/zkServer.sh start` from a copy of the distribution package.
### Step 2: Start the master
```
java -cp .:/usr/local/zookeeper-3.4.8/zookeeper-3.4.8.jar:/usr/local/slf4j-1.7.2/slf4j-api-1.7.2.jar:/usr/local/slf4j-1.7.2/slf4j-ext-1.7.2.jar:/usr/local/slf4j-1.7.2/slf4j-log4j12-1.7.2.jar:/usr/local/apache-log4j-1.2.17/log4j-1.2.17.jar:/path/to/book/repo/target/ZooKeeper-Book-0.0.1-SNAPSHOT.jar org.apache.zookeeper.book.Master localhost:2181
```
### Step 3: Start a couple of workers
```
java -cp .:/usr/local/zookeeper-3.4.8/zookeeper-3.4.8.jar:/usr/local/slf4j-1.7.2/slf4j-api-1.7.2.jar:/usr/local/slf4j-1.7.2/slf4j-ext-1.7.2.jar:/usr/local/slf4j-1.7.2/slf4j-log4j12-1.7.2.jar:/usr/local/apache-log4j-1.2.17/log4j-1.2.17.jar:/path/to/book/repo/target/ZooKeeper-Book-0.0.1-SNAPSHOT.jar org.apache.zookeeper.book.Worker localhost:2181
```
### Step 4: Run a client
```
java -cp .:/usr/local/zookeeper-3.4.8/zookeeper-3.4.8.jar:/usr/local/slf4j-1.7.2/slf4j-api-1.7.2.jar:/usr/local/slf4j-1.7.2/slf4j-ext-1.7.2.jar:/usr/local/slf4j-1.7.2/slf4j-log4j12-1.7.2.jar:/usr/local/apache-log4j-1.2.17/log4j-1.2.17.jar:/path/to/book/repo/target/ZooKeeper-Book-0.0.1-SNAPSHOT.jar org.apache.zookeeper.book.Client localhost:2181
```
For the C master, we do the following:
### Compile
```
gcc -I/usr/local/zookeeper-3.4.8/src/c/include -I/usr/local/zookeeper-3.4.8/src/c/generated -DTHREADED -L/usr/local/lib -l zookeeper_mt master.c
```
### Run it
```
./a.out 127.0.0.1:2181
```
Have fun!
| 1 |
gurkanucar/spring-security-examples | spring security 6 | null | # spring-security-examples
## 1) In Memory Auth
### With this example, you will understand the Spring Security mechanism. No database is needed; we just add some users and authorities (roles). In this example we use Basic Authentication (username, password).
```java
@Bean
public UserDetailsService users() {
UserDetails user =
User.builder()
.username("user")
.password("pass")
.passwordEncoder(passwordEncoder::encode)
// .password(passwordEncoder.encode("pass"))
.roles("USER")
.build();
...
return new InMemoryUserDetailsManager(user, admin, mod);
}
```
- to allow endpoints we can use:
```java
@Bean
public WebSecurityCustomizer webSecurityCustomizer() {
return (web) ->
web.ignoring()
.requestMatchers(
new AntPathRequestMatcher("/auth/**"),
new AntPathRequestMatcher("/public/**"));
}
```
- also we can configure http security options:
```java
@Bean
public SecurityFilterChain filterChain(HttpSecurity httpSecurity) throws Exception {
httpSecurity
.headers(x -> x.frameOptions(HeadersConfigurer.FrameOptionsConfig::disable))
.csrf(AbstractHttpConfigurer::disable)
.cors(Customizer.withDefaults())
// authenticate any request except web.ignoring()
// also you can allow some endpoints here:
// x.requestMatchers(new AntPathRequestMatcher("/auth/**")).permitAll()
.authorizeHttpRequests(x -> x.anyRequest().authenticated())
.httpBasic(Customizer.withDefaults());
return httpSecurity.build();
}
```
## 2 - 3 ) Basic Authentication with Hardcoded Enum Roles
### With this example, you will be able to add hardcoded roles to a user. In many projects we don't need dynamic roles stored in a separate database table.
#### NOTE: After the first example, if we want to retrieve users from the database, we have to implement our own `UserDetailsService` and use Spring Security's `UserDetails` class instead of our User class:
```java
@Service
@RequiredArgsConstructor
public class UserDetailsServiceImpl implements UserDetailsService {
private final UserService userService;
@Override
public UserDetails loadUserByUsername(String username) throws UsernameNotFoundException {
var user = userService.getByUsername(username);
if (user.isEmpty()) {
throw new EntityNotFoundException();
}
return new CustomUserDetails(user.get());
}
}
public class CustomUserDetails implements UserDetails {
private final User user;
public CustomUserDetails(User user) {
this.user = user;
}
@Override
public Collection<? extends GrantedAuthority> getAuthorities() {
return this.user.getRoles();
}
@Override
public String getPassword() {
return this.user.getPassword();
}
@Override
public String getUsername() {
return this.user.getUsername();
}
@Override
public boolean isAccountNonExpired() {
return true;
}
@Override
public boolean isAccountNonLocked() {
return true;
}
@Override
public boolean isCredentialsNonExpired() {
return true;
}
@Override
public boolean isEnabled() {
return this.user.isEnabled();
}
}
```
## 4 ) Basic Authentication with Dynamic Roles
### With this example, you will be able to add dynamic roles to a user and store these roles in the database.
```java
.....
public class User extends BaseEntity {
.....
@ManyToMany(fetch = FetchType.EAGER, cascade = CascadeType.MERGE)
@JoinTable(
name = "user_role",
joinColumns = @JoinColumn(name = "user_id", referencedColumnName = "id"),
inverseJoinColumns = @JoinColumn(name = "role_id", referencedColumnName = "id"))
private Set<Role> roles;
}
....
public class Role extends BaseEntity implements GrantedAuthority {
@Column(name = "role_name")
private String name;
@ManyToMany(
fetch = FetchType.LAZY,
cascade = {CascadeType.PERSIST, CascadeType.MERGE},
mappedBy = "roles")
@JsonIgnore
private Set<User> users = new HashSet<>();
@Override
public String getAuthority() {
return getName();
}
}
```
## 5 ) Grant or Block Access to Endpoints Dynamically by Role
### With this example, you will be able to allow or block access to endpoints by role. We use AuthorizationManager (AccessDecisionVoter is deprecated) to decide.
```java
@Component
@RequiredArgsConstructor
public class RoleBasedVoter implements AuthorizationManager<RequestAuthorizationContext> {
private final RoleRepository roleRepository;
@Override
public AuthorizationDecision check(
Supplier<Authentication> authentication, RequestAuthorizationContext object) {
if (authentication.get().getPrincipal() instanceof UserDetails) {
UserDetails userDetails = (UserDetails) authentication.get().getPrincipal();
String requestUrl = object.getRequest().getRequestURI();
List<Role> roles = roleRepository.findByUsers_Username(userDetails.getUsername());
for (Role role : roles) {
if (role.getRestrictedEndpoints().contains(requestUrl)) {
return new AuthorizationDecision(false);
}
}
}
return new AuthorizationDecision(true);
}
}
@Configuration
@EnableWebSecurity
@EnableMethodSecurity
@RequiredArgsConstructor
public class SecurityConfig {
private final UserDetailsServiceImpl userDetailsService;
private final RoleBasedVoter roleBasedVoter;
@Bean
public SecurityFilterChain filterChain(HttpSecurity httpSecurity) throws Exception {
httpSecurity
....
.userDetailsService(userDetailsService)
.authorizeHttpRequests(x -> x.anyRequest().access(roleBasedVoter))
....
```
## 6 7 8 9 ) JWT examples
- JWT is a token-based authentication mechanism. Once you get a token, you can use it until it expires. We also use a refresh token to obtain a new access token after the current one expires.
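The mechanics behind those examples can be shown with plain JDK code: an HS256-signed JWT is just `base64url(header).base64url(payload)` plus an HMAC-SHA256 signature over those two parts. This is a self-contained sketch for illustration only; real projects should use a library such as jjwt instead.

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class JwtSketch {
    // JWTs use base64url without padding for all three segments
    static String b64(byte[] bytes) {
        return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
    }

    public static String sign(String payloadJson, String secret) throws Exception {
        String header = b64("{\"alg\":\"HS256\",\"typ\":\"JWT\"}".getBytes(StandardCharsets.UTF_8));
        String payload = b64(payloadJson.getBytes(StandardCharsets.UTF_8));
        // signature = HMAC-SHA256(secret, header + "." + payload)
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secret.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
        String signature = b64(mac.doFinal((header + "." + payload).getBytes(StandardCharsets.UTF_8)));
        return header + "." + payload + "." + signature;
    }

    public static void main(String[] args) throws Exception {
        String token = sign("{\"sub\":\"user\",\"exp\":1700000000}", "my-secret");
        System.out.println(token); // three dot-separated base64url parts
    }
}
```

The server only needs the shared secret to recompute and compare the signature, which is why leaking `jwtSecret` compromises every issued token.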
-------------------------------------------------------------------
### Demo video:
[https://www.youtube.com/watch?v=BoioooM1vL8](https://www.youtube.com/watch?v=BoioooM1vL8)
-------------------------------------------------------------------
### Special thanks to `Ivan Franchin` for the OAuth2 example below:
[https://github.com/ivangfr/springboot-react-social-login](https://github.com/ivangfr/springboot-react-social-login) | 0 |
wimdeblauwe/blog-example-code | Example code for my blog entries | null | null | 1 |
bezkoder/spring-boot-refresh-token-jwt | Spring Boot Refresh Token using JWT example - Expire and Renew JWT Token | jwt jwt-auth jwt-authentication jwt-authorization spring-boot spring-boot-2 spring-boot-security spring-data spring-security | # Spring Boot Refresh Token with JWT example
Build a JWT Refresh Token flow in a Java Spring Boot application. You will learn how to expire the JWT and renew the Access Token with a Refresh Token.
The instruction can be found at:
[Spring Boot Refresh Token with JWT example](https://bezkoder.com/spring-boot-refresh-token-jwt/)
## User Registration, User Login and Authorization process.
The diagram shows the flow of how we implement the User Registration, User Login and Authorization process.

And this is for Refresh Token:

## Spring Boot Server Architecture with Spring Security
You can have an overview of our Spring Boot Server with the diagram below:

## Configure Spring Datasource, JPA, App properties
Open `src/main/resources/application.properties`
```properties
spring.datasource.url= jdbc:mysql://localhost:3306/testdb?useSSL=false
spring.datasource.username= root
spring.datasource.password= 123456
spring.jpa.properties.hibernate.dialect= org.hibernate.dialect.MySQL5InnoDBDialect
spring.jpa.hibernate.ddl-auto= update
# App Properties
bezkoder.app.jwtSecret= bezKoderSecretKey
bezkoder.app.jwtExpirationMs= 3600000
bezkoder.app.jwtRefreshExpirationMs= 86400000
```
## Run Spring Boot application
```
mvn spring-boot:run
```
## Run following SQL insert statements
```
INSERT INTO roles(name) VALUES('ROLE_USER');
INSERT INTO roles(name) VALUES('ROLE_MODERATOR');
INSERT INTO roles(name) VALUES('ROLE_ADMIN');
```
Related Posts:
> [Spring Boot JWT Refresh Token using HttpOnly Cookies](https://www.bezkoder.com/spring-security-refresh-token/)
> [Spring Boot, Spring Security, MySQL: JWT Authentication & Authorization example](https://bezkoder.com/spring-boot-jwt-authentication/)
> [For PostgreSQL](https://bezkoder.com/spring-boot-security-postgresql-jwt-authentication/)
> [For MongoDB](https://bezkoder.com/spring-boot-jwt-auth-mongodb/)
## More Practice:
> [Spring Boot File upload example with Multipart File](https://bezkoder.com/spring-boot-file-upload/)
> [Exception handling: @RestControllerAdvice example in Spring Boot](https://bezkoder.com/spring-boot-restcontrolleradvice/)
> [Spring Boot Repository Unit Test with @DataJpaTest](https://bezkoder.com/spring-boot-unit-test-jpa-repo-datajpatest/)
> [Spring Boot Pagination & Sorting example](https://www.bezkoder.com/spring-boot-pagination-sorting-example/)
Associations:
> [Spring Boot One To Many example with Spring JPA, Hibernate](https://www.bezkoder.com/jpa-one-to-many/)
> [Spring Boot Many To Many example with Spring JPA, Hibernate](https://www.bezkoder.com/jpa-many-to-many/)
> [JPA One To One example with Spring Boot](https://www.bezkoder.com/jpa-one-to-one/)
Deployment:
> [Deploy Spring Boot App on AWS – Elastic Beanstalk](https://www.bezkoder.com/deploy-spring-boot-aws-eb/)
> [Docker Compose Spring Boot and MySQL example](https://www.bezkoder.com/docker-compose-spring-boot-mysql/)
## Fullstack Authentication
> [Spring Boot + Vue.js JWT Authentication](https://bezkoder.com/spring-boot-vue-js-authentication-jwt-spring-security/)
> [Spring Boot + Angular 8 JWT Authentication](https://bezkoder.com/angular-spring-boot-jwt-auth/)
> [Spring Boot + Angular 10 JWT Authentication](https://bezkoder.com/angular-10-spring-boot-jwt-auth/)
> [Spring Boot + Angular 11 JWT Authentication](https://bezkoder.com/angular-11-spring-boot-jwt-auth/)
> [Spring Boot + Angular 12 JWT Authentication](https://www.bezkoder.com/angular-12-spring-boot-jwt-auth/)
> [Spring Boot + Angular 13 JWT Authentication](https://www.bezkoder.com/angular-13-spring-boot-jwt-auth/)
> [Spring Boot + Angular 14 JWT Authentication](https://www.bezkoder.com/angular-14-spring-boot-jwt-auth/)
> [Spring Boot + Angular 15 JWT Authentication](https://www.bezkoder.com/angular-15-spring-boot-jwt-auth/)
> [Spring Boot + Angular 16 JWT Authentication](https://www.bezkoder.com/angular-16-spring-boot-jwt-auth/)
> [Spring Boot + Angular 17 JWT Authentication](https://www.bezkoder.com/angular-17-spring-boot-jwt-auth/)
> [Spring Boot + React JWT Authentication](https://bezkoder.com/spring-boot-react-jwt-auth/)
## Fullstack CRUD App
> [Vue.js + Spring Boot + H2 Embedded database example](https://www.bezkoder.com/spring-boot-vue-js-crud-example/)
> [Vue.js + Spring Boot + MySQL example](https://www.bezkoder.com/spring-boot-vue-js-mysql/)
> [Vue.js + Spring Boot + PostgreSQL example](https://www.bezkoder.com/spring-boot-vue-js-postgresql/)
> [Angular 8 + Spring Boot + Embedded database example](https://www.bezkoder.com/angular-spring-boot-crud/)
> [Angular 8 + Spring Boot + MySQL example](https://bezkoder.com/angular-spring-boot-crud/)
> [Angular 8 + Spring Boot + PostgreSQL example](https://bezkoder.com/angular-spring-boot-postgresql/)
> [Angular 10 + Spring Boot + MySQL example](https://bezkoder.com/angular-10-spring-boot-crud/)
> [Angular 10 + Spring Boot + PostgreSQL example](https://bezkoder.com/angular-10-spring-boot-postgresql/)
> [Angular 11 + Spring Boot + MySQL example](https://bezkoder.com/angular-11-spring-boot-crud/)
> [Angular 11 + Spring Boot + PostgreSQL example](https://bezkoder.com/angular-11-spring-boot-postgresql/)
> [Angular 12 + Spring Boot + Embedded database example](https://www.bezkoder.com/angular-12-spring-boot-crud/)
> [Angular 12 + Spring Boot + MySQL example](https://www.bezkoder.com/angular-12-spring-boot-mysql/)
> [Angular 12 + Spring Boot + PostgreSQL example](https://www.bezkoder.com/angular-12-spring-boot-postgresql/)
> [Angular 13 + Spring Boot + H2 Embedded Database example](https://www.bezkoder.com/spring-boot-angular-13-crud/)
> [Angular 13 + Spring Boot + MySQL example](https://www.bezkoder.com/spring-boot-angular-13-mysql/)
> [Angular 13 + Spring Boot + PostgreSQL example](https://www.bezkoder.com/spring-boot-angular-13-postgresql/)
> [Angular 14 + Spring Boot + H2 Embedded Database example](https://www.bezkoder.com/spring-boot-angular-14-crud/)
> [Angular 14 + Spring Boot + MySQL example](https://www.bezkoder.com/spring-boot-angular-14-mysql/)
> [Angular 14 + Spring Boot + PostgreSQL example](https://www.bezkoder.com/spring-boot-angular-14-postgresql/)
> [Angular 15 + Spring Boot + MySQL example](https://www.bezkoder.com/spring-boot-angular-15-mysql/)
> [Angular 15 + Spring Boot + PostgreSQL example](https://www.bezkoder.com/spring-boot-angular-15-postgresql/)
> [Angular 15 + Spring Boot + MongoDB example](https://www.bezkoder.com/spring-boot-angular-15-mongodb/)
> [Angular 16 + Spring Boot + H2 Embedded Database example](https://www.bezkoder.com/spring-boot-angular-16-crud/)
> [Angular 16 + Spring Boot + MySQL example](https://www.bezkoder.com/spring-boot-angular-16-mysql/)
> [Angular 16 + Spring Boot + PostgreSQL example](https://www.bezkoder.com/spring-boot-angular-16-postgresql/)
> [Angular 16 + Spring Boot + MongoDB example](https://www.bezkoder.com/spring-boot-angular-16-mongodb/)
> [Angular 17 + Spring Boot + H2 Embedded Database example](https://www.bezkoder.com/spring-boot-angular-17-crud/)
> [Angular 17 + Spring Boot + MySQL example](https://www.bezkoder.com/spring-boot-angular-17-mysql/)
> [Angular 17 + Spring Boot + PostgreSQL example](https://www.bezkoder.com/spring-boot-angular-17-postgresql/)
> [Angular 17 + Spring Boot + MongoDB example](https://www.bezkoder.com/spring-boot-angular-17-mongodb/)
> [React + Spring Boot + MySQL example](https://bezkoder.com/react-spring-boot-crud/)
> [React + Spring Boot + PostgreSQL example](https://bezkoder.com/spring-boot-react-postgresql/)
> [React + Spring Boot + MongoDB example](https://bezkoder.com/react-spring-boot-mongodb/)
Run both Back-end & Front-end in one place:
> [Integrate Angular with Spring Boot Rest API](https://bezkoder.com/integrate-angular-spring-boot/)
> [Integrate React.js with Spring Boot Rest API](https://bezkoder.com/integrate-reactjs-spring-boot/)
> [Integrate Vue.js with Spring Boot Rest API](https://bezkoder.com/integrate-vue-spring-boot/)
## More Practice:
> [Spring Boot File upload example with Multipart File](https://bezkoder.com/spring-boot-file-upload/)
> [Exception handling: @RestControllerAdvice example in Spring Boot](https://bezkoder.com/spring-boot-restcontrolleradvice/)
> [Spring Boot Repository Unit Test with @DataJpaTest](https://bezkoder.com/spring-boot-unit-test-jpa-repo-datajpatest/)
> [Spring Boot Pagination & Sorting example](https://www.bezkoder.com/spring-boot-pagination-sorting-example/)
Associations:
> [JPA/Hibernate One To Many example](https://www.bezkoder.com/jpa-one-to-many/)
> [JPA/Hibernate Many To Many example](https://www.bezkoder.com/jpa-many-to-many/)
> [JPA/Hibernate One To One example](https://www.bezkoder.com/jpa-one-to-one/)
Deployment:
> [Deploy Spring Boot App on AWS – Elastic Beanstalk](https://www.bezkoder.com/deploy-spring-boot-aws-eb/)
> [Docker Compose Spring Boot and MySQL example](https://www.bezkoder.com/docker-compose-spring-boot-mysql/)
| 1 |
saturnism/docker-kubernetes-by-example-java | An end-to-end Spring Boot example w container and Kubernetes | docker google-cloud-platform java kubernetes microservices spring-boot | Spring Boot with Kubernetes
---------------------------
Ray has used this repository for many of the demos in his conference talks around the world. You can find Ray's videos on how to run the demos in his [YouTube playlist](https://www.youtube.com/playlist?list=PL4uYfigiauVYH4OwOyq8FGbPQOn-JueEf).
In particular, check out the one from [Jfokus](https://www.youtube.com/watch?v=R2l-tL_1els&index=6&list=PL4uYfigiauVYH4OwOyq8FGbPQOn-JueEf).
To set everything up, see [Kubernetes Code Lab](http://bit.ly/k8s-lab)
To learn about running this in Istio, see [Istio Code Lab](http://bit.ly/istio-lab)
| 1 |
cuiyungao/JavaCodeExamples | null | null | # JavaCodeExamples
This repository includes examples from the Java course. You are welcome to modify them and add your own examples.
Chapter 2: Java language basics [src](https://github.com/cuiyungao/JavaCodeExamples/tree/master/src)
Chapter 3: Classes and objects [src3](https://github.com/cuiyungao/JavaCodeExamples/tree/master/src3)
Chapter 4: Interfaces and inheritance [src4](https://github.com/cuiyungao/JavaCodeExamples/tree/master/src4)
Chapter 5: Introduction to design patterns (Singleton, three Factory patterns) [src5](https://github.com/cuiyungao/JavaCodeExamples/tree/master/src5)
Chapter 6: Software testing and code quality assurance [src6](https://github.com/cuiyungao/JavaCodeExamples/tree/master/src6)
Chapter 7: Collections, the Strategy pattern, and the Iterator pattern [src7](https://github.com/cuiyungao/JavaCodeExamples/tree/master/src7)
Chapter 8: The Data Access Object pattern and input/output streams [src8](https://github.com/cuiyungao/JavaCodeExamples/tree/master/src8)
Chapter 9: MVC and Swing graphical user interfaces [src9](https://github.com/cuiyungao/JavaCodeExamples/tree/master/src9)
Chapter 10: Threads and the Producer-Consumer pattern [src10](https://github.com/cuiyungao/JavaCodeExamples/tree/master/src10)
Chapter 11: Generics, reflection, and the Template pattern [src11](https://github.com/cuiyungao/JavaCodeExamples/tree/master/src11)
Chapter 12: Network programming and the Observer pattern [src12](https://github.com/cuiyungao/JavaCodeExamples/tree/master/src12)
Good luck!
| 0 |
chanjarster/spring-mvc-error-handling-example | Spring MVC error handling example | null | # N Ways to Handle Exceptions in Spring Boot & Spring MVC
References:
* Spring Boot 1.5.4.RELEASE [Documentation][spring-boot-doc]
* Spring framework 4.3.9.RELEASE [Documentation][spring-mvc-doc]
* [Exception Handling in Spring MVC][blog-exception-handling-in-spring-mvc]
## Default Behavior
According to the official Spring Boot documentation:
> For machine clients it will produce a JSON response with details of the error, the HTTP status and the exception message. For browser clients there is a ‘whitelabel’ error view that renders the same data in HTML format
In other words, when an exception occurs:
* If the request comes from a browser, a `Whitelabel Error Page` is returned
* If the request comes from a machine client, the same information is returned as `json`
You can visit the following URLs in a browser, one by one:
1. http://localhost:8080/return-model-and-view
1. http://localhost:8080/return-view-name
1. http://localhost:8080/return-view
1. http://localhost:8080/return-text-plain
1. http://localhost:8080/return-json-1
1. http://localhost:8080/return-json-2
You will find that both [FooController][def-foo] and [FooRestController][def-foo-rest] return a `Whitelabel Error Page`, i.e. HTML.
But if you access the same URLs with `curl -i -s -X GET`, they all return `json` like the following:
```json
{
"timestamp": 1498886969426,
"status": 500,
"error": "Internal Server Error",
"exception": "me.chanjar.exception.SomeException",
"message": "...",
"trace": "...",
"path": "..."
}
```
There is one exception: `http://localhost:8080/return-text-plain` returns no result at all; the reason is explained later.
The code for this section is in [me.chanjar.boot.def][pkg-me.chanjar.boot.def]; run it with [DefaultExample][boot-DefaultExample].
Note: we must add `server.error.include-stacktrace=always` to `application.properties` in order to get the stack trace.
### Overall Spring MVC Request-Handling Flow
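For reference, here is how that entry looks in `application.properties` (a config fragment for the Spring Boot 1.5.x line this article targets):

```properties
# Include the stack trace in the error attributes for every response
server.error.include-stacktrace=always
```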

### Why Browser Requests All Get the `Whitelabel Error Page`

### Why curl Gets No Result for the text/plain Resource
If you have configured the following in [logback-spring.xml][logback-spring.xml]:
```xml
<logger name="org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod" level="TRACE"/>
```
then you will find an exception like this in the log file:
```
... TRACE 13387 --- [nio-8080-exec-2] .w.s.m.m.a.ServletInvocableHandlerMethod : Invoking 'org.springframework.boot.autoconfigure.web.BasicErrorController.error' with arguments [org.apache.catalina.core.ApplicationHttpRequest@1408b81]
... TRACE 13387 --- [nio-8080-exec-2] .w.s.m.m.a.ServletInvocableHandlerMethod : Method [org.springframework.boot.autoconfigure.web.BasicErrorController.error] returned [<500 Internal Server Error,{timestamp=Thu Nov 09 13:20:15 CST 2017, status=500, error=Internal Server Error, exception=me.chanjar.exception.SomeException, message=No message available, trace=..., path=/return-text-plain, {}>]
... TRACE 13387 --- [nio-8080-exec-2] .w.s.m.m.a.ServletInvocableHandlerMethod : Error handling return value [type=org.springframework.http.ResponseEntity] [value=<500 Internal Server Error,{timestamp=Thu Nov 09 13:20:15 CST 2017, status=500, error=Internal Server Error, exception=me.chanjar.exception.SomeException, message=No message available, trace=..., path=/return-text-plain, {}>]
HandlerMethod details:
Controller [org.springframework.boot.autoconfigure.web.BasicErrorController]
Method [public org.springframework.http.ResponseEntity<java.util.Map<java.lang.String, java.lang.Object>> org.springframework.boot.autoconfigure.web.BasicErrorController.error(javax.servlet.http.HttpServletRequest)]
org.springframework.web.HttpMediaTypeNotAcceptableException: Could not find acceptable representation
...
```
To understand where this exception comes from, let's briefly walk through Spring MVC's handling process:

How do we solve this? I will explain in *Customizing the ErrorController*.
## Customizing the Error Page
As seen above, Spring Boot's error page for browser-initiated requests is the `Whitelabel Error Page`; this section explains how to customize it.
Note: customizing the error page does not affect the output for machine clients.
### Method 1
According to the official Spring Boot documentation, to customize this page you only need:
> to customize it just add a `View` that resolves to ‘error’
This is not very clear, but a look at the code of `ErrorMvcAutoConfiguration.WhitelabelErrorViewConfiguration` shows that you just need to register a `Bean` of type `View` named `error`.
In this example, [CustomDefaultErrorViewConfiguration][boot-CustomDefaultErrorViewConfiguration] redirects the `error` page to [templates/custom-error-page/error.html][boot-custom-error-page-error-html].
The code for this section is in [me.chanjar.boot.customdefaulterrorview][pkg-me.chanjar.boot.customdefaulterrorview]; run it with [CustomDefaultErrorViewExample][boot-CustomDefaultErrorViewExample].
### Method 2
Method 2 is much simpler than Method 1, though it is not mentioned in the official Spring documentation: simply provide the page file that the `error` `View` resolves to.
For example, since this project uses the Thymeleaf template engine, placing a custom `error.html` under `/templates` on the classpath is enough to customize the error page.
No code is provided for this section; feel free to try it yourself.
## Customizing Error Attributes
As seen above, whether you get the error page or the error JSON, the only available attributes are: timestamp, status, error, exception, message, trace, and path.
If you want to customize these attributes, you can do as the official Spring Boot documentation says:
> simply add a bean of type `ErrorAttributes` to use the existing mechanism but replace the contents
`ErrorMvcAutoConfiguration.errorAttributes` provides a [DefaultErrorAttributes][spring-DefaultErrorAttributes-javadoc]; following its example, we can provide our own [CustomErrorAttributes][boot-CustomErrorAttributes] to override it.
Accessing the relevant URLs with `curl -i -s -X GET` shows that, besides the modified attributes, the returned JSON also contains the added attribute:
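The failure can be pictured with a tiny plain-Java sketch of the produces-vs-`Accept` check (a simplification of what Spring does in `ProducesRequestCondition`; the real matching also handles wildcards per type, quality factors, and more — this class and its method are hypothetical illustrations, not Spring API):

```java
import java.util.Arrays;
import java.util.List;

/** Simplified sketch of why 'Accept: text/plain' cannot be served by
 *  BasicErrorController.error, which only produces HTML or JSON. */
public class AcceptCheck {

    /** Return true if any producible media type satisfies the Accept header. */
    public static boolean isAcceptable(List<String> producible, String accept) {
        if (accept.equals("*/*")) {
            return true; // a wildcard Accept matches anything
        }
        return producible.contains(accept);
    }

    public static void main(String[] args) {
        List<String> producible = Arrays.asList("text/html", "application/json");
        // text/plain matches neither representation -> HttpMediaTypeNotAcceptableException
        System.out.println(isAcceptable(producible, "text/plain"));       // false
        System.out.println(isAcceptable(producible, "application/json")); // true
    }
}
```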
```json
{
"exception": "customized exception",
"add-attribute": "add-attribute",
"path": "customized path",
"trace": "customized trace",
"error": "customized error",
"message": "customized message",
"timestamp": 1498892609326,
"status": 100
}
```
The code for this section is in [me.chanjar.boot.customerrorattributes][pkg-me.chanjar.boot.customerrorattributes]; run it with [CustomErrorAttributesExample][boot-CustomErrorAttributesExample].
## Customizing the ErrorController
As mentioned earlier, `curl -i -s -X GET http://localhost:8080/return-text-plain` gets no error information. Solving this involves two key points:
1. Specify the `Accept` header in the request to avoid matching the [BasicErrorController.error][BasicErrorController_error] method. For example: `curl -i -s -X GET -H 'Accept: text/plain' http://localhost:8080/return-text-plain`
1. Provide a custom ``ErrorController`` with a method mapped to `path=/error produces=text/plain`.
There is actually another way: provide an `HttpMessageConverter` that converts Object to String; this article does not expand on it.
The following shows how to provide a custom ``ErrorController``. According to the official Spring Boot documentation:
> To do that just extend ``BasicErrorController`` and add a public method with a ``@RequestMapping`` that has a ``produces`` attribute, and create a bean of your new type.
So we provide a [CustomErrorController][boot-CustomErrorController] and register it as a Bean via [CustomErrorControllerConfiguration][boot-CustomErrorControllerConfiguration].
The code for this section is in [me.chanjar.boot.customerrorcontroller][pkg-me.chanjar.boot.customerrorcontroller]; run it with [CustomErrorControllerExample][boot-CustomErrorControllerExample].
## Customizing Results for Specific Exceptions with ControllerAdvice
Following the example in the official Spring Boot documentation, you can use [@ControllerAdvice][spring-ControllerAdvice] and [@ExceptionHandler][spring-ExceptionHandler] to return specific results for specific exceptions.
Here we define a new exception, AnotherException, and in [BarControllerAdvice][boot-BarControllerAdvice] define different [@ExceptionHandler][spring-ExceptionHandler]s for SomeException and AnotherException:
* SomeException always resolves to `controlleradvice/some-ex-error.html`
* AnotherException always returns a `ResponseEntity`
In [BarController][boot-BarController], all `*-a` endpoints throw ``SomeException`` and all `*-b` endpoints throw ``AnotherException``. Below are the results of accessing them with a browser and with curl:
| url | Browser | curl -i -s -X GET |
| -------------------------------------- |------------------------------------------| -------------------|
| http://localhost:8080/bar/html-a | some-ex-error.html | some-ex-error.html |
| http://localhost:8080/bar/html-b | error(json) | error(json) |
| http://localhost:8080/bar/json-a | some-ex-error.html | some-ex-error.html |
| http://localhost:8080/bar/json-b | error(json) | error(json) |
| http://localhost:8080/bar/text-plain-a | some-ex-error.html | some-ex-error.html |
| http://localhost:8080/bar/text-plain-b | Could not find acceptable representation(White Error Page) | Could not find acceptable representation(no output) |
Note the ``Could not find acceptable representation`` error in the table above; its cause was explained earlier.
However, the flow is slightly different. In the earlier example the flow was:
1. Access the URL
1. An exception is thrown
1. Forward to /error
1. The ResponseEntity returned by BasicErrorController.error cannot be converted to a String
In this section's example the flow is:
1. Access the URL
1. An exception is thrown
1. It is handled by an `@ExceptionHandler`
1. The ResponseEntity returned by AnotherException's `@ExceptionHandler` cannot be converted to a String, so it counts as not successfully handled
1. Forward to /error
1. The ResponseEntity returned by BasicErrorController.error cannot be converted to a String
So if you use [@ExceptionHandler][spring-ExceptionHandler], you have to produce different output yourself depending on the ``Accept`` request header; the way to do that is to define a ``void @ExceptionHandler`` — see the [@ExceptionHandler javadoc][spring-ExceptionHandler-javadoc] for details.
## Customizing Error Pages per Status Code
The official Spring Boot documentation provides a simple way to map different status codes to different error pages; see [here][spring-boot-status-code-error-page].
We can place pages for different status codes under `classpath: public/error` or `classpath: templates/error`, for example `400.html`, `5xx.html`, `400.ftl`, `5xx.ftl`.
Opening the following URLs in a browser yields different results:
| url | Result |
|----------------------------------------|--------------------------------------------|
| http://localhost:8080/loo/error-403 | static resource: public/error/403.html |
| http://localhost:8080/loo/error-406 | thymeleaf view: templates/error/406.html |
| http://localhost:8080/loo/error-600 | Whitelabel error page |
| http://localhost:8080/loo/error-601 | thymeleaf view: templates/error/6xx.html |
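The page-lookup order behind the table above can be mimicked in plain Java (a simplified sketch of what `DefaultErrorViewResolver` does; the real resolver also distinguishes template views from static resources — this class is a hypothetical illustration, not Spring code):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Optional;
import java.util.Set;

/** Simplified sketch of status-code -> error-view resolution:
 *  try the exact code first (e.g. "error/406"), then the series mask ("error/4xx"). */
public class ErrorViewLookup {

    private final Set<String> availableViews;

    public ErrorViewLookup(Set<String> availableViews) {
        this.availableViews = availableViews;
    }

    public Optional<String> resolve(int statusCode) {
        String exact = "error/" + statusCode;
        if (availableViews.contains(exact)) {
            return Optional.of(exact);
        }
        String series = "error/" + (statusCode / 100) + "xx";
        if (availableViews.contains(series)) {
            return Optional.of(series);
        }
        return Optional.empty(); // no match -> Whitelabel error page
    }

    public static void main(String[] args) {
        // Views mirroring the examples above: 403.html, 406.html, 6xx.html
        ErrorViewLookup lookup = new ErrorViewLookup(
                new HashSet<>(Arrays.asList("error/403", "error/406", "error/6xx")));
        System.out.println(lookup.resolve(403).orElse("whitelabel")); // error/403
        System.out.println(lookup.resolve(601).orElse("whitelabel")); // error/6xx
        // /loo/error-600 ends up as 500 at the container level, and no 5xx page exists:
        System.out.println(lookup.resolve(500).orElse("whitelabel")); // whitelabel
    }
}
```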
Note that `/loo/error-600` returns the Whitelabel error page, while `/loo/error-403` and `/loo/error-406` return the error pages we expect. Why? Let's look at the code first.
In `loo/error-403` we throw ``Exception403``:
```java
@ResponseStatus(HttpStatus.FORBIDDEN)
public class Exception403 extends RuntimeException
```
In `loo/error-406` we throw ``Exception406``:
```java
@ResponseStatus(NOT_ACCEPTABLE)
public class Exception406 extends RuntimeException
```
Notice that both exceptions carry the [@ResponseStatus][spring-ResponseStatus-javadoc] annotation, which declares the status code the exception maps to.
But [SomeException][SomeException], thrown in `loo/error-600`, lacks this annotation and instead tries to achieve the same with ``response.setStatus(600)``, which fails. Why?
```java
@RequestMapping("/error-600")
public String error600(HttpServletRequest request, HttpServletResponse response) throws SomeException {
request.setAttribute(WebUtils.ERROR_STATUS_CODE_ATTRIBUTE, 600);
response.setStatus(600);
throw new SomeException();
}
```
To understand why, you need to know Spring MVC's exception-handling mechanism, briefly explained below:
Spring MVC handles exceptions in [DispatcherServlet.processHandlerException][DispatcherServlet_L1216], which uses [HandlerExceptionResolver][spring-HandlerExceptionResolver]s to decide what `ModelAndView` an exception should produce.
The known [HandlerExceptionResolver][spring-HandlerExceptionResolver]s are:
1. [DefaultErrorAttributes][spring-DefaultErrorAttributes-javadoc], which only records the exception in the request attributes under the name `org.springframework.boot.autoconfigure.web.DefaultErrorAttributes.ERROR`
1. [ExceptionHandlerExceptionResolver][spring-ExceptionHandlerExceptionResolver-javadoc], which resolves based on [@ExceptionHandler][spring-ExceptionHandler]
1. [ResponseStatusExceptionResolver][spring-ResponseStatusExceptionResolver-javadoc], which resolves based on [@ResponseStatus][spring-ResponseStatus-javadoc]
1. [DefaultHandlerExceptionResolver][spring-DefaultHandlerExceptionResolver-javadoc], which handles standard Spring MVC exceptions
``Exception403`` and ``Exception406`` are both handled by [ResponseStatusExceptionResolver][spring-ResponseStatusExceptionResolver-javadoc], while ``SomeException`` is handled by no resolver at all, so ``DispatcherServlet`` rethrows it up to the container (see [DispatcherServlet#L1243][DispatcherServlet_L1243]). Taking Tomcat as an example, it sets the status code to 500 in [StandardHostValve#L317][StandardHostValve_L317] and [StandardHostValve#L345][StandardHostValve_L345] and then forwards to `/error`. As a result, [BasicErrorController][BasicErrorController] sees status code 500, looks for a 500 error page, finds none, and falls back to the Whitelabel error page.
In fact, from the perspective of the request attributes, the relevant attributes differ as follows depending on whether the exception was thrown up to the container or handled by a HandlerExceptionResolver before reaching [BasicErrorController][BasicErrorController]:
| Attribute name | When throw up to Tomcat | Handled by HandlerExceptionResolver |
|---------------------------------|-------------------------|--------------------------------------|
| `DefaultErrorAttributes.ERROR` | Has value | Has Value |
| `DispatcherServlet.EXCEPTION` | No value | Has Value |
| `javax.servlet.error.exception` | Has value | No Value |
PS. `DefaultErrorAttributes.ERROR` = `org.springframework.boot.autoconfigure.web.DefaultErrorAttributes.ERROR`
PS. `DispatcherServlet.EXCEPTION` = `org.springframework.web.servlet.DispatcherServlet.EXCEPTION`
There are two solutions:
1. Add ``@ResponseStatus`` to ``SomeException``, though this has two limitations:
   1. The exception may not be yours to modify, e.g. it lives in a third-party jar
   1. ``@ResponseStatus`` takes an [HttpStatus][HttpStatus-javadoc] as its parameter, and that enum defines only a limited set of status codes
1. Use [@ExceptionHandler][spring-ExceptionHandler], but note that you must decide the view and the status code yourself.
   This approach is cumbersome: you must carefully handle the `Accept` request header to return the appropriate result, and because it bypasses the framework's mechanism, it easily leads to inconsistent responses.
An example of the second solution is `loo/error-601`; the corresponding code:
```java
@RequestMapping("/error-601")
public String error601(HttpServletRequest request, HttpServletResponse response) throws AnotherException {
throw new AnotherException();
}
@ExceptionHandler(AnotherException.class)
String handleAnotherException(HttpServletRequest request, HttpServletResponse response, Model model)
throws IOException {
    // The status code must be set here, otherwise the response will be 200
response.setStatus(601);
model.addAllAttributes(errorAttributes.getErrorAttributes(new ServletRequestAttributes(request), true));
return "error/6xx";
}
```
Addendum: starting with Spring Framework 5.0, `ResponseStatusException` is available. It lets you define the status code and reason right where you throw the exception, which nicely solves the third-party jar problem. Usage looks like this:
```java
@PutMapping("/actor/{id}/{name}")
public String updateActorName(
@PathVariable("id") int id,
@PathVariable("name") String name) {
try {
return actorService.updateActor(id, name);
} catch (ActorNotFoundException ex) {
throw new ResponseStatusException(
HttpStatus.BAD_REQUEST, "Provide correct Actor Id", ex);
}
}
```
Summary:
1. Exceptions not resolved by any [HandlerExceptionResolver][spring-HandlerExceptionResolver] are handed to the container. The known implementations are (in order):
   1. [DefaultErrorAttributes][spring-DefaultErrorAttributes-javadoc], which only records the exception in the request attributes under the name `org.springframework.boot.autoconfigure.web.DefaultErrorAttributes.ERROR`
   1. [ExceptionHandlerExceptionResolver][spring-ExceptionHandlerExceptionResolver-javadoc], which resolves based on [@ExceptionHandler][spring-ExceptionHandler]
   1. [ResponseStatusExceptionResolver][spring-ResponseStatusExceptionResolver-javadoc], which resolves based on [@ResponseStatus][spring-ResponseStatus-javadoc]
   1. [DefaultHandlerExceptionResolver][spring-DefaultHandlerExceptionResolver-javadoc], which handles standard Spring MVC exceptions
1. [@ResponseStatus][spring-ResponseStatus-javadoc] declares the status code an exception maps to; for other exceptions the status code is decided by the container, and Tomcat treats them all as 500 ([StandardHostValve#L317][StandardHostValve_L317], [StandardHostValve#L345][StandardHostValve_L345])
1. Exceptions handled by [@ExceptionHandler][spring-ExceptionHandler] do not go through [BasicErrorController][BasicErrorController]; you must decide how to render the page and set the status code yourself (otherwise it is 200)
1. [BasicErrorController][BasicErrorController] tries to find an error page by status code, falling back to the Whitelabel error page if none is found
The code for this section is in [me.chanjar.boot.customstatuserrorpage][pkg-me.chanjar.boot.customstatuserrorpage]; run it with [CustomStatusErrorPageExample][boot-CustomStatusErrorPageExample].
## Customizing Error Pages with ErrorViewResolver
As mentioned earlier, [BasicErrorController][BasicErrorController] maps status codes to error pages; this work is actually done by [DefaultErrorViewResolver][DefaultErrorViewResolver-javadoc].
We can also provide our own [ErrorViewResolver][ErrorViewResolver-javadoc] to customize the error page for specific exceptions.
```java
@Component
public class SomeExceptionErrorViewResolver implements ErrorViewResolver {
@Override
public ModelAndView resolveErrorView(HttpServletRequest request, HttpStatus status, Map<String, Object> model) {
return new ModelAndView("custom-error-view-resolver/some-ex-error", model);
}
}
```
Note, however, that [ErrorViewResolver][ErrorViewResolver-javadoc] cannot set the status code; the status code is decided by [@ResponseStatus][spring-ResponseStatus-javadoc] or by the container (always 500 in Tomcat).
The code for this section is in [me.chanjar.boot.customerrorviewresolver][pkg-me.chanjar.boot.customerrorviewresolver]; run it with [CustomErrorViewResolverExample][boot-CustomErrorViewResolverExample].
## @ExceptionHandler and @ControllerAdvice
The previous examples already used [@ControllerAdvice][spring-ControllerAdvice] and [@ExceptionHandler][spring-ExceptionHandler]; here are just a few additional notes:
1. When ``@ExceptionHandler`` is used together with ``@ControllerAdvice``, it applies to every controller targeted by the ``@ControllerAdvice``
2. When ``@ExceptionHandler`` is declared inside a controller, it applies only to that controller
## Best Practice
With so many approaches covered, what is the best practice for handling exceptions in Spring MVC? Before answering, let me describe what good exception handling should look like:
1. The returned error information adapts to every `Accept` type, e.g. `Accept:text/html` returns an HTML page and `Accept:application/json` returns JSON.
1. A unified, customizable error-information schema, e.g. containing only `timestamp`, `error`, `message`, and similar fields.
1. Parts of the information can be customized, e.g. the contents of `error` and `message`.
To achieve these goals we can:
1. For goal 1: customize the `ErrorController` by extending `BasicErrorController` to support more `Accept` types.
1. For goal 2: customize `ErrorAttributes`
1. For goal 3:
   1. Use `@ResponseStatus` or `ResponseStatusException` (since 5.0)
   2. When the former does not apply, customize `ErrorAttributes` with code that returns specific information for specific exceptions. A configuration-driven approach is recommended, e.g. a config file stating that XXXException's message is YYYY.
Spring MVC does not print exceptions thrown from controllers to the console; the fix is to provide a `HandlerExceptionResolver`, for example:
```java
@Order(Ordered.HIGHEST_PRECEDENCE)
public class ErrorLogger implements HandlerExceptionResolver {
private static final Logger LOGGER = LoggerFactory.getLogger(ErrorLogger.class);
@Override
public ModelAndView resolveException(HttpServletRequest request, HttpServletResponse response, Object handler,
Exception ex) {
LOGGER.error("Exception happened at [{}]: {}", request.getRequestURI(), ExceptionUtils.getStackTrace(ex));
return null;
}
}
```
## Appendix I
The table below lists which features belong to Spring Boot and which to Spring MVC:
| Feature | Spring Boot | Spring MVC |
|----------------------------|----------------------|--------------------|
| BasicErrorController | Yes | |
| ErrorAttributes | Yes | |
| ErrorViewResolver | Yes | |
| @ControllerAdvice | | Yes |
| @ExceptionHandler | | Yes |
| @ResponseStatus | | Yes |
| HandlerExceptionResolver | | Yes |
[spring-boot-doc]: http://docs.spring.io/spring-boot/docs/1.5.4.RELEASE/reference/htmlsingle/#boot-features-error-handling
[spring-mvc-doc]: http://docs.spring.io/spring/docs/4.3.9.RELEASE/spring-framework-reference/htmlsingle/#mvc-exceptionhandlers
[blog-exception-handling-in-spring-mvc]: https://spring.io/blog/2013/11/01/exception-handling-in-spring-mvc
[RequestMapping]: https://github.com/spring-projects/spring-framework/blob/v4.3.9.RELEASE/spring-web/src/main/java/org/springframework/web/bind/annotation/RequestMapping.java
[RequestMappingHandlerMapping]: https://github.com/spring-projects/spring-framework/blob/v4.3.9.RELEASE/spring-webmvc/src/main/java/org/springframework/web/servlet/mvc/method/annotation/RequestMappingHandlerMapping.java
[AbstractHandlerMethodMapping_L341]: https://github.com/spring-projects/spring-framework/blob/v4.3.9.RELEASE/spring-webmvc/src/main/java/org/springframework/web/servlet/handler/AbstractHandlerMethodMapping.java#L341
[AbstractHandlerMethodMapping_L352]: https://github.com/spring-projects/spring-framework/blob/v4.3.9.RELEASE/spring-webmvc/src/main/java/org/springframework/web/servlet/handler/AbstractHandlerMethodMapping.java#L352
[BasicErrorController]: https://github.com/spring-projects/spring-boot/blob/v1.5.4.RELEASE/spring-boot-autoconfigure/src/main/java/org/springframework/boot/autoconfigure/web/BasicErrorController.java
[BasicErrorController_errorHtml]: https://github.com/spring-projects/spring-boot/blob/v1.5.4.RELEASE/spring-boot-autoconfigure/src/main/java/org/springframework/boot/autoconfigure/web/BasicErrorController.java#L86
[BasicErrorController_error]: https://github.com/spring-projects/spring-boot/blob/v1.5.4.RELEASE/spring-boot-autoconfigure/src/main/java/org/springframework/boot/autoconfigure/web/BasicErrorController.java#L98
[RequestMappingInfo_L266]: https://github.com/spring-projects/spring-framework/blob/v4.3.9.RELEASE/spring-webmvc/src/main/java/org/springframework/web/servlet/mvc/method/RequestMappingInfo.java#L266
[ProducesRequestCondition_L235]: https://github.com/spring-projects/spring-framework/blob/v4.3.9.RELEASE/spring-webmvc/src/main/java/org/springframework/web/servlet/mvc/condition/ProducesRequestCondition.java#L235
[HttpEntityMethodProcessor_L159]: https://github.com/spring-projects/spring-framework/blob/v4.3.9.RELEASE/spring-webmvc/src/main/java/org/springframework/web/servlet/mvc/method/annotation/HttpEntityMethodProcessor.java#L159
[HttpEntityMethodProcessor]: https://github.com/spring-projects/spring-framework/blob/v4.3.9.RELEASE/spring-webmvc/src/main/java/org/springframework/web/servlet/mvc/method/annotation/HttpEntityMethodProcessor.java
[AbstractMessageConverterMethodProcessor]: https://github.com/spring-projects/spring-framework/blob/v4.3.9.RELEASE/spring-webmvc/src/main/java/org/springframework/web/servlet/mvc/method/annotation/AbstractMessageConverterMethodProcessor.java
[AbstractMessageConverterMethodProcessor_L259]: https://github.com/spring-projects/spring-framework/blob/v4.3.9.RELEASE/spring-webmvc/src/main/java/org/springframework/web/servlet/mvc/method/annotation/AbstractMessageConverterMethodProcessor.java#L259
[AbstractMessageConverterMethodProcessor_L163]: https://github.com/spring-projects/spring-framework/blob/v4.3.9.RELEASE/spring-webmvc/src/main/java/org/springframework/web/servlet/mvc/method/annotation/AbstractMessageConverterMethodProcessor.java#L163
[AbstractMessageConverterMethodProcessor_L187]: https://github.com/spring-projects/spring-framework/blob/v4.3.9.RELEASE/spring-webmvc/src/main/java/org/springframework/web/servlet/mvc/method/annotation/AbstractMessageConverterMethodProcessor.java#L187
[HttpMessageConverter]: https://github.com/spring-projects/spring-framework/blob/v4.3.9.RELEASE/spring-web/src/main/java/org/springframework/http/converter/HttpMessageConverter.java
[RequestMapping_produces]: https://github.com/spring-projects/spring-framework/blob/master/spring-web/src/main/java/org/springframework/web/bind/annotation/RequestMapping.java#L396
[SomeException]: src/main/java/me/chanjar/exception/SomeException.java
[def-foo]: src/main/java/me/chanjar/controllers/FooController.java
[def-foo-rest]: src/main/java/me/chanjar/controllers/FooRestController.java
[pkg-me.chanjar.boot.def]: src/main/java/me/chanjar/boot/def
[boot-CustomDefaultErrorViewConfiguration]: src/main/java/me/chanjar/boot/customdefaulterrorview/CustomDefaultErrorViewConfiguration.java
[boot-DefaultExample]: src/main/java/me/chanjar/boot/def/DefaultExample.java
[pkg-me.chanjar.boot.customdefaulterrorview]: src/main/java/me/chanjar/boot/customdefaulterrorview
[boot-custom-error-page-error-html]: src/main/resources/templates/custom-error-page/error.html
[boot-CustomDefaultErrorViewExample]: src/main/java/me/chanjar/boot/customdefaulterrorview/CustomDefaultErrorViewExample.java
[pkg-me.chanjar.boot.customerrorattributes]: src/main/java/me/chanjar/boot/customerrorattributes
[boot-CustomErrorAttributes]: src/main/java/me/chanjar/boot/customerrorattributes/CustomErrorAttributes.java
[boot-CustomErrorAttributesExample]: src/main/java/me/chanjar/boot/customerrorattributes/CustomErrorAttributesExample.java
[pkg-me.chanjar.boot.customerrorcontroller]: src/main/java/me/chanjar/boot/customerrorcontroller
[boot-CustomErrorController]: src/main/java/me/chanjar/boot/customerrorcontroller/CustomErrorController.java
[boot-CustomErrorControllerConfiguration]: src/main/java/me/chanjar/boot/customerrorcontroller/CustomErrorControllerConfiguration.java
[boot-CustomErrorControllerExample]: src/main/java/me/chanjar/boot/customerrorcontroller/CustomErrorControllerExample.java
[pkg-me.chanjar.boot.controlleradvice]: src/main/java/me/chanjar/boot/controlleradvice/
[boot-BarController]: src/main/java/me/chanjar/boot/controlleradvice/BarController.java
[boot-BarControllerAdvice]: src/main/java/me/chanjar/boot/controlleradvice/BarControllerAdvice.java
[logback-spring.xml]: src/main/resources/logback-spring.xml
[pkg-me.chanjar.boot.customstatuserrorpage]: src/main/java/me/chanjar/boot/customstatuserrorpage
[boot-CustomStatusErrorPageExample]: src/main/java/me/chanjar/boot/customstatuserrorpage/CustomStatusErrorPageExample.java
[pkg-me.chanjar.boot.customerrorviewresolver]: src/main/java/me/chanjar/boot/customerrorviewresolver
[boot-CustomErrorViewResolverExample]: src/main/java/me/chanjar/boot/customerrorviewresolver/CustomErrorViewResolverExample.java
[spring-ExceptionHandler]: http://docs.spring.io/spring/docs/4.3.9.RELEASE/spring-framework-reference/htmlsingle/#mvc-ann-exceptionhandler
[spring-ControllerAdvice]: http://docs.spring.io/spring/docs/4.3.9.RELEASE/spring-framework-reference/htmlsingle/#mvc-ann-controller-advice
[spring-ExceptionHandler-javadoc]: https://docs.spring.io/spring/docs/4.3.9.RELEASE/javadoc-api/org/springframework/web/bind/annotation/ExceptionHandler.html
[spring-boot-status-code-error-page]: http://docs.spring.io/spring-boot/docs/1.5.4.RELEASE/reference/htmlsingle/#boot-features-error-handling-custom-error-pages
[spring-ResponseStatus-javadoc]: https://docs.spring.io/spring/docs/4.3.9.RELEASE/javadoc-api/org/springframework/web/bind/annotation/ResponseStatus.html
[spring-HandlerExceptionResolver]: http://docs.spring.io/spring/docs/4.3.9.RELEASE/spring-framework-reference/htmlsingle/#mvc-exceptionhandlers-resolver
[spring-DefaultErrorAttributes-javadoc]: http://docs.spring.io/spring-boot/docs/1.5.4.RELEASE/api/org/springframework/boot/autoconfigure/web/DefaultErrorAttributes.html
[spring-ExceptionHandlerExceptionResolver-javadoc]: https://docs.spring.io/spring/docs/4.3.9.RELEASE/javadoc-api/org/springframework/web/servlet/mvc/method/annotation/ExceptionHandlerExceptionResolver.html
[spring-ResponseStatusExceptionResolver-javadoc]: https://docs.spring.io/spring/docs/4.3.9.RELEASE/javadoc-api/org/springframework/web/servlet/mvc/annotation/ResponseStatusExceptionResolver.html
[spring-DefaultHandlerExceptionResolver-javadoc]: https://docs.spring.io/spring/docs/4.3.9.RELEASE/javadoc-api/org/springframework/web/servlet/mvc/support/DefaultHandlerExceptionResolver.html
[StandardHostValve_L317]: https://github.com/apache/tomcat/blob/TONCAT_9_0_0_M23/java/org/apache/catalina/core/StandardHostValve.java#L317
[StandardHostValve_L345]: https://github.com/apache/tomcat/blob/TONCAT_9_0_0_M23/java/org/apache/catalina/core/StandardHostValve.java#L345
[DispatcherServlet_L1216]: https://github.com/spring-projects/spring-framework/blob/v4.3.9.RELEASE/spring-webmvc/src/main/java/org/springframework/web/servlet/DispatcherServlet.java#L1216
[DispatcherServlet_L1243]: https://github.com/spring-projects/spring-framework/blob/v4.3.9.RELEASE/spring-webmvc/src/main/java/org/springframework/web/servlet/DispatcherServlet.java#L1243
[HttpStatus-javadoc]: https://docs.spring.io/spring/docs/4.3.9.RELEASE/javadoc-api/org/springframework/http/HttpStatus.html
[DefaultErrorViewResolver-javadoc]: http://docs.spring.io/spring-boot/docs/1.5.4.RELEASE/api/org/springframework/boot/autoconfigure/web/DefaultErrorViewResolver.html
[ErrorViewResolver-javadoc]: http://docs.spring.io/spring-boot/docs/1.5.4.RELEASE/api/org/springframework/boot/autoconfigure/web/ErrorViewResolver.html
| 1 |
philipsorst/angular-rest-springsecurity | An example AngularJS Application that uses a Spring Security protected Jersey REST backend based on Hibernate/JPA | null | angular-rest-springsecurity
===========================
[](https://travis-ci.org/philipsorst/angular-rest-springsecurity)
[](https://www.paypal.com/cgi-bin/webscr?cmd=_donations&business=W9NAXW8YAZ4D6&item_name=Angular+REST+SpringSecurity+Example+Donation&currency_code=EUR)
An example AngularJS Application that uses a Spring Security protected Jersey REST backend based on Hibernate/JPA.
About
-----
The project's aim is to demonstrate a Java implementation of a simple REST interface used by an AngularJS application. The following topics are covered:
* A relational database that holds blog posts and users.
* A REST service that exposes the data in the database.
* Authentication and authorization against the REST service.
* A simple AngularJS application that allows users to view or edit news entries depending on their role.
* A responsive design.
This project is only meant as a demonstration; it is therefore neither well documented nor well tested. Use it to learn about the technologies involved, but do not use it in production applications.
Any feedback is welcome, and I will incorporate useful pull requests.
Technologies
------------
* [AngularJS](http://angularjs.org/)
* [Bootstrap](http://getbootstrap.com/)
* [Jersey](https://jersey.java.net/)
* [Spring Security](http://projects.spring.io/spring-security/)
* [Hibernate](http://hibernate.org/)
Running
-------
Make sure Java >= 8 and [Maven](http://maven.apache.org/) >= 3.0 are installed on your system. Go into the project directory and type `mvn jetty:run`, then point your browser to `http://localhost:8080`.
License
-------
[The Apache Software License, Version 2.0](http://www.apache.org/licenses/LICENSE-2.0.txt) | 1 |
FirelyTeam/fhirstarters | A collection of example projects to get you up to speed with HL7 FHIR | null | # fhirstarters
This repository contains samples to get you started at playing with FHIR!
* [Browser/Postman Tutorial](./postman/): First introduction, requires no coding
* [Java/HAPI](./java/): Sample client and server projects in Java
* [.NET](./dotnet/): Sample project in .NET/C#
* [iOS](./ios/): Sample client project in iOS/Swift 4
* [Beginner's Track](./BeginnersTrack.md): Exercises used for the hands-on track during the FHIR Developer Days | 1 |
ellucianEthos/java-examples | Set of sample code for invoking Ethos APIs and publishing/consuming change-notifications. | null | # java-examples
This is a set of sample code for performing the following actions against the Ethos Integration services:
- getting an access token
- invoking the proxy API
- consuming change-notifications
- publishing change-notifications
This folder can be imported into Eclipse as a Java project.
| 1 |
janlindblad/bookzone | Bookzone example | null | BookZone Example Project
========================
This project is an open source network programmability tutorial
accompanying the book "Network Programmability with YANG:
The Structure of Network Automation with YANG, NETCONF, RESTCONF,
and gNMI". We hope you will find this project useful whether you
are reading the book or not. For the deeper insights, we would
certainly recommend reading the book, however.
The book can be ordered online from a variety of sources. The ISBN
is 978-0135180396 (or 0135180392 in the older ISBN format). Here are
links to the [YANG book Amazon page] and the [BookZone project page].
[YANG book Amazon page]: https://www.amazon.com/Network-Programmability-YANG-Modeling-driven-Management/dp/0135180392
[BookZone project page]: https://github.com/janlindblad/bookzone
Software you will need
----------------------
The project is using a variety of tools. If you would like to run
everything, you will need to download and install the following
YANG, NETCONF, RESTCONF and gNXI software development kits (SDKs):
+ [ConfD Basic]
+ [gNXI]
+ [NSO]
[ConfD Basic]: https://www.tail-f.com/confd-basic/
[gNXI]: https://github.com/google/gnxi
[NSO]: https://developer.cisco.com/docs/nso/#!getting-nso/getting-nso
Once you have installed ConfD or NSO, you need to set a collection of
environment variables in order for the system to find the commands,
sources and files necessary. The easiest way to do that is to source
the resource (rc) file that comes with each installation. This also
allows easily switching from one installed version to another or back
again within seconds.
If a bash user installed ConfD basic in \~/confd-basic/6.7/ the
following command would set up the environment correctly:
> source \~/confd-basic/6.7/confdrc
Similarly, for an NSO user with NSO installed in \~/nso/4.7/ the
following command would set up the environment correctly:
> source \~/nso/4.7/ncsrc
Depending on your interests, you may not need all of these SDKs. The
example descriptions in "The YANG Journey" below list the SDKs you
will need for each one. Apart from these SDKs, you will also need the
following tools. Many of them are often already installed on a
developer's machine, but you may want to make sure.
### make
Essential build tool, included in most development environments. How
to install depends on your system, but you could try one of these:
> sudo apt-get install build-essential
> yum install make
### curl
URL fetching tool. Here is the [curl] home page.
[curl]: https://github.com/curl/curl
You could also try one of these commands:
> sudo apt-get install curl
> yum install curl
### netconf-console
Basic NETCONF client. Here is the [netconf-console] home page.
[netconf-console]: https://pypi.org/project/netconf-console/
With some luck, you could also install it using
> pip install netconf-console
### Paramiko
Python SSH implementation. Here is the [Paramiko] installation page.
[Paramiko]: http://www.paramiko.org/installing.html
With some luck, you could also install it using
> pip install paramiko
### Pyang
Basic (extensible) YANG compiler. Here is the [Pyang] home page.
[Pyang]: https://pypi.org/project/pyang/
With some luck, you could also install it using
> pip install pyang
The YANG Journey
-----------------
There are seven stages of YANG models in this project. If you are new
to YANG, you can start from the beginning and work your way through
to more complex modules. Or you can jump in at any particular step
and start playing around and making your own changes and experiments.
### 1-intro/
This is the first, small and simple step towards a YANG module and
running server. The module is tiny, just 30 lines, but complete
enough to compile and allow starting a server and get to configure a
few things on it. The system doesn't actually do anything based on
what is configured, so you can feel safe experimenting. This section
requires [ConfD Basic].
### 2-config/
The module from 1-intro is expanded with a couple of additional
lists, and uses some more specific types, such as enumerations,
identities, and a typedef. This section requires [ConfD Basic].
### 3-action-notif/
This module adds actions and notifications to the system, and uses
the leafref type. The project also provides some backend code for
implementing the actions and notifications. This section requires
[ConfD Basic].
### 4-oper/
This version of the module adds operational data (read only status)
to the mix, and introduces a grouping. The module has now grown to
just over 250 lines. This section requires [ConfD Basic].
### 5-precision/
In this version of the module, the existing leafs are refined with
more exact types, ranges, patterns and constraints like properly
linked leafrefs, must and when statements. At this point, the module
is about 350 lines. This section requires [ConfD Basic].
### 6-augment/
Here the exact same bookzone-example.yang module is used as in
5-precision/, but an additional module audiozone-example.yang
augments the former with additional elements. This allows a
different organization to evolve and tailor an externally sourced
YANG module to their needs. This section requires [ConfD Basic].
### 7-service/
This section uses a completely different YANG module, working on the
service level (not on the device level). The aim is to understand
service orchestration, and how that maps to device configuration.
This section requires [NSO].
The NETCONF, RESTCONF, and gNMI journey
---------------------------------------
These examples also allow testing out the details of NETCONF,
RESTCONF and gNMI. In order to have a server to play with, these
examples piggy-back on the YANG examples above.
### NETCONF
To play with NETCONF towards the ConfD server, go into 6-augment/
above and type
> make nc
to see a menu of specific operations you could run. This section
requires [ConfD Basic].
### RESTCONF
Similarly, for RESTCONF, type
> make rc
Unfortunately the RESTCONF functionality available in ConfD is not
currently included in the ConfD Basic package. In order to play with
RESTCONF, a ConfD Premium evaluation would be required.
### gNMI/gRPC
In order to play with gNMI, it is necessary to go into the 2-config/
directory. This is because the gNXI implementation currently does not
support YANG 1.1, which the higher YANG modules leverage. Once in
that directory, type
> make gnmi
This will show a menu of commands you can run. This section requires
the [gNXI] SDK.
Contributions
-------------
You are most welcome to contribute to this project with suggestions,
bug reports and pull requests. Keep in mind that the examples have to
stay very close to the contents of the book, however.
Jan Lindblad
| 1 |
dehora/nakadi-java | 🌀 Client library for the Nakadi Event Broker (examples: http://bit.ly/njc-examples, site: https://dehora.github.io/nakadi-java/) | gson java nakadi rxjava |
**Status**
- Build: [](https://circleci.com/gh/dehora/nakadi-java)
- Source Release: [0.19.0](https://github.com/zalando-incubator/nakadi-java/releases/tag/0.19.0)
- Contact: [maintainers](https://github.com/zalando-incubator/nakadi-java/blob/master/MAINTAINERS)
# nakadi-java
<!-- START doctoc generated TOC please keep comment here to allow auto update -->
<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->
**Table of Contents** *generated with [DocToc](https://github.com/thlorenz/doctoc)*
- [About](#about)
- [Background](#background)
- [Requirements and Getting Started](#requirements-and-getting-started)
- [Status](#status)
- [Usage](#usage)
- [Available Resources](#available-resources)
- [Creating a client](#creating-a-client)
- [Authorization](#authorization)
- [OAuth Scopes](#oauth-scopes)
- [HTTPS Security](#https-security)
- [Metric Collector](#metric-collector)
- [JSON](#json)
- [Using TypeLiterals](#using-typeliterals)
- [Resource Classes](#resource-classes)
- [Retries](#retries)
- [Event Types](#event-types)
- [Producing Events](#producing-events)
- [Publishing Compression](#publishing-compression)
- [Compacting Events](#compacting-events)
- [Subscriptions](#subscriptions)
- [Consuming Events](#consuming-events)
- [Named Event Type Streaming](#named-event-type-streaming)
- [Subscription Streaming](#subscription-streaming)
- [Streaming and Compression](#streaming-and-compression)
- [Backpressure and Buffering](#backpressure-and-buffering)
- [Healthchecks](#healthchecks)
- [Registry](#registry)
- [Metrics](#metrics)
- [Installation](#installation)
- [Maven](#maven)
- [Gradle](#gradle)
- [SBT](#sbt)
- [Idioms](#idioms)
- [Fluent](#fluent)
- [Iterable pagination](#iterable-pagination)
- [HTTP Requests](#http-requests)
- [Exceptions](#exceptions)
- [Build and Development](#build-and-development)
- [Internals](#internals)
- [Contributing](#contributing)
- [License](#license)
<!-- END doctoc generated TOC please keep comment here to allow auto update -->
----
## About
Nakadi-java is a client driver for the [Nakadi Event Broker](https://github.com/zalando/nakadi). It was created for the following reasons:
- Completeness. Provide a full reference implementation of the Nakadi API for producers and consumers.
- Minimise dependencies. The client doesn't force a dependency on frameworks or libraries. The sole dependency is on the SLF4J API.
- Robust HTTP handling. Request/response behaviour and consumer stream handling are given the same importance as functionality.
- Operational visibility. Error handling, stream retries, logging and instrumentation are given the same importance as functionality.
- Be easy to use. The client should be straightforward to use as is, or as an engine for higher level abstractions.
### Background
A number of JVM clients already exist and are in use - nakadi-java is not meant
to compete with or replace them. In certain respects they solve different
goals. The existing JVM clients, looked at as a whole, provide partial
implementations with larger dependencies, but which are idiomatic to certain
frameworks, whereas the aim of nakadi-java is to provide a full client with a
reduced dependency footprint to allow portability.
Nakadi-java is designed for application development. If you're just looking
for a quick way to browse and examine streams, take a look at the excellent
[Peek library](https://github.com/zalando-incubator/peek).
## Requirements and Getting Started
See the [installation section](#installation) on how to add the client library
to your project as a jar dependency. The client uses Java 1.8 or later.
## Status
The client is pre 1.0.0, with the aim of getting to 1.0.0 quickly.
The client API is relatively stable and unlikely to see massive sweeping
changes, though some changes should be expected. The entire Nakadi API is
implemented.
The client's had some basic testing to verify it can handle things like
consumer stream connection/network failures and retries. It should not be
deemed robust yet, but it is a goal to produce a well-behaved production
level client especially for producing and consuming events for 1.0.0.
See also:
- The [open issues](https://github.com/zalando-incubator/nakadi-java/issues) section has a
list of bugs and things to get done.
- The [help-wanted](https://github.com/zalando-incubator/nakadi-java/issues?q=is%3Aissue+is%3Aopen+label%3Aenhancement) has a list of things that would be pretty cool to have.
As a client that aims to provide a full implementation, it will post 1.0.0
continue to track the development of the Nakadi Event Broker's API.
## Usage
This section summarizes what you can do with the client. The [nakadi-java-examples](https://github.com/zalando-incubator/nakadi-java-examples) project provides runnable examples for most of what you see here.
### Available Resources
The API resources this client supports are:
- [Event Types](#event-types)
- [Events](#producing-events)
- [Subscriptions](#subscriptions)
- [Streams](#consuming-events)
- [Registry](#registry)
- [Healthchecks](#healthchecks)
- [Metrics](#metrics)
### Creating a client
A new client can be created via a builder:
```java
NakadiClient client = NakadiClient.newBuilder()
.baseURI("http://localhost:9080")
.build();
```
You can create multiple clients if you wish. Every client must have a base URI
set and can optionally have other values set (notably for token providers and
metrics collection).
Here's a fuller configuration:
```java
NakadiClient client = NakadiClient.newBuilder()
.baseURI("http://localhost:9080")
.metricCollector(myMetricsCollector)
.tokenProvider(myResourceTokenProvider)
.readTimeout(60, TimeUnit.SECONDS)
.connectTimeout(30, TimeUnit.SECONDS)
.build();
```
#### Authorization
By default the client does not send an authorization header with each request.
This is useful for working with a development Nakadi server, which will try
and resolve bearer tokens if they are sent but will accept requests with no
bearer token present.
You can define a token provider by implementing the `TokenProvider`
interface, which will supply the client with a string that will be
sent to the server as the value of an Authorization header. The
`TokenProvider` is called on each request and thus can be implemented as
a dynamic provider to handle token refreshes and recycling.
```java
NakadiClient client = NakadiClient.newBuilder()
.baseURI("http://localhost:9080")
.tokenProvider(new MyTokenProvider())
.build();
```
There's a `ZignTokenProvider` that can connect to the zign process and run in
the background in the
[nakadi-java-zign](https://github.com/zalando-incubator/nakadi-java/tree/master/nakadi-java-zign)
sub-project.
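As a rough illustration of the dynamic-provider pattern described above, the sketch below caches a token and refreshes it shortly before expiry. The `SimpleTokenProvider` interface and `RefreshingTokenProvider` class here are simplified stand-ins for illustration, not the library's actual `TokenProvider` signature:

```java
import java.time.Instant;
import java.util.function.Supplier;

/** Simplified stand-in for the client's TokenProvider contract (illustrative only). */
interface SimpleTokenProvider {
  /** Returns the value to send as the Authorization header, e.g. "Bearer abc123". */
  String authHeaderValue();
}

/** Caches a token and refreshes it shortly before it expires. */
class RefreshingTokenProvider implements SimpleTokenProvider {
  private final Supplier<String> tokenSource; // e.g. wraps an OAuth client
  private final long ttlSeconds;
  private String cached;
  private Instant expiry = Instant.EPOCH;

  RefreshingTokenProvider(Supplier<String> tokenSource, long ttlSeconds) {
    this.tokenSource = tokenSource;
    this.ttlSeconds = ttlSeconds;
  }

  @Override public synchronized String authHeaderValue() {
    // The client consults the provider on each request, so refresh lazily when stale.
    if (!Instant.now().isBefore(expiry.minusSeconds(30))) {
      cached = tokenSource.get();
      expiry = Instant.now().plusSeconds(ttlSeconds);
    }
    return "Bearer " + cached;
  }
}
```

Because the client calls the provider per request, a refreshed token is picked up automatically without rebuilding the client.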
#### OAuth Scopes
Some resources support use of OAuth scopes (where the API documents them; scope documentation is incomplete as of
2016-11-15):
- `StreamProcessor`: can be set via `StreamProcessor.Builder.scope()` before calling `start()`.
- `EventTypeResource`: can be set via `EventTypeResource.scope()` before making an API call.
- `EventResource`: can be set via `EventResource.scope` before making an API call.
- `SubscriptionResource`: can be set via `SubscriptionResource.scope` before making an API call.
On each request the client will resolve the scope to a token by asking the `TokenProvider` to
supply a token via `authHeaderValue`. If a custom scope has been applied on the request it will
be used, otherwise the default scope documented by the API will be used.
The scope set on resource instances is stateful, not one-shot, and will be re-used across requests.
To change the scope, call `scope()` again with a new scope value, or if you wish to clear the
custom scope and revert to defaults, call `scope()` with `null`. However the `StreamProcessor`
scope is fixed once streaming begins after `start()` is called and can't be changed.
#### HTTPS Security
The client checks certificates. If your target server is using a self-signed
certificate and for some reason you can't install that cert into the system
trust store using something like keytool, you can supply the cert via
the builder's `certificatePath` method:
```java
NakadiClient client = NakadiClient.newBuilder()
.baseURI("http://localhost:9080")
.certificatePath("file:///var/certs")
.build();
```
This will cause the client to install any certificates it finds. There are three
loading options:
- A path beginning with `"file:///"` will load from the supplied directory any
files with `*.crt` and `*.pem` extensions
- A path beginning with `"classpath:"` and ending with `*.crt` or `*.pem` will
load that resource item from the classpath.
- A path beginning with `"classpath:"` will load from the supplied classpath
directory any files with `*.crt` and `*.pem` extensions.
The classpath option targeting a directory is for local development and not meant
for production/deployed situations. If you must use the classpath for deployed apps,
use the cert resource option as that will allow the classpath resolver to work more
generally.
If no `certificatePath` is supplied, the system defaults are used. This is the
strongly recommended option for deployments.
#### Metric Collector
The client emits well known metrics as meters and timers (see `MetricCollector`
for the available metrics).
By default the client ignores metrics, but you can supply your own collector.
For example, this sets the client to use `MetricsCollectorDropwizard`, from
the support library that integrates with
[Dropwizard Metrics](http://metrics.dropwizard.io/3.1.0/):
```java
MetricRegistry metricRegistry = new MetricRegistry();
MetricsCollectorDropwizard metrics =
new MetricsCollectorDropwizard("mynamespace", metricRegistry);
NakadiClient client = NakadiClient.newBuilder()
.baseURI("http://localhost:9080")
.metricCollector(metrics)
.build();
```
To provide your own collector implement the `MetricCollector` interface. Each
emitted metric is based on an enum. Implementations can look at the enum and
record as they wish. They can also work with them generally and ask any enum
for its path, which will be a dotted string.
Please note that calls to the collector are currently blocking. This may be
changed to asynchronous for 1.0.0, but in the meantime if your collector is
making network calls or hitting disk, you might want to hand them off
as Callables or send them to a queue.
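One way to do that hand-off is sketched below, with a single worker thread absorbing the recording work so the calling thread never blocks. The `ClientMeter` enum and `AsyncCollector` class are simplified stand-ins for illustration; the real `MetricCollector` interface defines its own methods and enums:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.LongAdder;

/** Stand-in for the client's metric enums; each knows its dotted path. */
enum ClientMeter {
  SENT("client.sent"), RETRY("client.retry");
  private final String path;
  ClientMeter(String path) { this.path = path; }
  String path() { return path; }
}

/** Records meter marks off the calling thread so a slow sink cannot block the client. */
class AsyncCollector {
  private final ExecutorService worker = Executors.newSingleThreadExecutor();
  private final Map<String, LongAdder> counts = new ConcurrentHashMap<>();

  void mark(ClientMeter meter) {
    // Hand the (potentially slow) recording off; the caller returns immediately.
    worker.execute(() ->
        counts.computeIfAbsent(meter.path(), k -> new LongAdder()).increment());
  }

  /** Drains pending marks, then reads the count (demo helper; one-shot). */
  long count(ClientMeter meter) {
    worker.shutdown();
    try {
      worker.awaitTermination(5, TimeUnit.SECONDS);
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
    }
    LongAdder adder = counts.get(meter.path());
    return adder == null ? 0 : adder.sum();
  }
}
```

A production version would also bound the work queue and drop metrics under pressure rather than let the backlog grow without limit.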
#### JSON
Some calls return `Response` objects that contain raw json. You can serialize
these using the `JsonSupport` helper, available from the client. `JsonSupport`
accepts classes, and for generic bindings you can supply it with a `TypeLiteral`.
#### Using TypeLiterals
When using a `TypeLiteral`, please note the following:
- TypeLiterals must be an actual subclass. This means declaring the TypeLiteral with a
pair of braces `new TypeLiteral<Map<String, Object>>() {};` and not just
`new TypeLiteral<Map<String, Object>>();`. The latter won't work and can cause hard to debug
errors.
- TypeLiterals for the three category classes can't be declared with a String. For example
`DataChangeEvent<String>` will cause marshalling errors, because the underlying JSON
processing treats `String` as a JSON String type and not escaped JSON. The parser then
fails when it sees structured JSON instead of a JSON String. Typically you want to declare
something like `DataChangeEvent<Map<String, Object>>` to destructure the data properly. The
client might add a stringified option for 1.0.0.
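The reason the braces matter is that an anonymous subclass records its generic superclass in class metadata, which survives type erasure. The `MiniTypeLiteral` below is a self-contained illustration of that mechanism, not the client's actual class:

```java
import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;

/** Minimal illustration of generic type capture; not the client's actual class. */
class MiniTypeLiteral<T> {
  /** Returns the captured type argument, or null if it was erased. */
  Type captured() {
    Type superclass = getClass().getGenericSuperclass();
    if (superclass instanceof ParameterizedType) {
      // An anonymous subclass records T in its class metadata, surviving erasure.
      return ((ParameterizedType) superclass).getActualTypeArguments()[0];
    }
    // Direct instantiation: getClass() is MiniTypeLiteral itself, T is gone.
    return null;
  }
}
```

With the braces, `captured()` recovers the full `Map<String, Object>` type; without them, the type parameter is erased and nothing is recoverable, which is why the missing `{}` leads to hard-to-debug failures.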
#### Resource Classes
Once you have a client, you can access server resources via the `resources()`
method. Here's an example that gets an events resource:
```java
EventResource resource = client.resources().events();
```
All calls you make to the server will be done via these resource classes to
make network calls distinct from local requests.
#### Retries
A number of the non streaming resource classes support a backoff policy:
- `EventTypeResource`
- `SubscriptionResource`
- `EventResource`
- `RegistryResource`
- `MetricsResource`
- `HealthCheckResource`
They each take a `RetryPolicy` via a `retryPolicy()` method; there is an inbuilt `ExponentialRetry`
that can be used to define a maximum number of requests or maximum total time elapsed. Note that
the retry policy object is stateful and must not be reused across requests. You can disable retries
(the default behavior) by setting `retryPolicy` to null, or start a new retry sequence by supplying
a fresh `RetryPolicy` instance.
**Please be careful with EventResource**: the ordering and general delivery behaviour for event
delivery is **undefined** under retries. That is, a delivery retry may result in out of order
batches being sent to the server. Also retrying a partially delivered (207) batch may result
in one or more events being delivered multiple times.
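For intuition, an exponential policy like the inbuilt `ExponentialRetry` typically doubles the delay per attempt up to a cap, which is also why a policy object is stateful: it tracks the attempt count. The sketch below shows only that schedule and is an assumption about the shape of the policy, not the library's actual implementation (which may add jitter or differ in detail):

```java
/** Exponential backoff schedule: the delay doubles per attempt, capped at a maximum. */
class Backoff {
  static long delayMillis(long initialMillis, long maxMillis, int attempt) {
    // initial * 2^attempt, with the shift clamped to avoid overflow for large attempts
    long delay = initialMillis << Math.min(attempt, 30);
    return Math.min(delay, maxMillis);
  }
}
```

With a 1s initial interval and a 30s cap, attempts are spaced 1s, 2s, 4s, ... until the cap is reached; a fresh instance restarts the sequence from the initial interval.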
### Event Types
You can create, edit and delete event types as well as list them:
```java
// grab an event type resource
EventTypeResource eventTypes = client.resources().eventTypes();
// create a new event type, using an escaped string for the schema
EventType requisitions = new EventType()
.category(EventType.Category.data)
.name("priority-requisitions")
.owningApplication("weyland")
.partitionStrategy(EventType.PARTITION_HASH)
.enrichmentStrategy(EventType.ENRICHMENT_METADATA)
.partitionKeyFields("id")
.cleanupPolicy("delete")
.schema(new EventTypeSchema().schema(
"{ \"properties\": { \"id\": { \"type\": \"string\" } } }"));
Response response = eventTypes.create(requisitions);
// read the partitions for an event type
PartitionCollection partitions = eventTypes.partitions("priority-requisitions");
partitions.iterable().forEach(System.out::println);
// read a particular partition
Partition partition = eventTypes.partition("priority-requisitions", "0");
System.out.println(partition);
// list event types
EventTypeCollection list = client.resources().eventTypes().list();
list.iterable().forEach(System.out::println);
// find by name
EventType byName = eventTypes.findByName("priority-requisitions");
// update
Response update = eventTypes.update(byName);
// remove
Response delete = eventTypes.delete("priority-requisitions");
```
### Producing Events
You can send one or more events to the server:
```java
EventResource resource = client.resources().events();
// nb: EventMetadata.newPreparedEventMetadata sets defaults for eid, occurred at and flow id fields
EventMetadata em = EventMetadata.newPreparedEventMetadata();
// you can send flowids as strings and tracing spans as Map<String, String>
EventMetadata em1 = new EventMetadata()
.eid(UUID.randomUUID().toString())
.occurredAt(OffsetDateTime.now())
.spanCtx(tracingSpan)
.flowId("decafbad");
// create our domain event inside a typesafe DataChangeEvent
PriorityRequisition pr = new PriorityRequisition("22");
DataChangeEvent<PriorityRequisition> dce = new DataChangeEvent<PriorityRequisition>()
.metadata(em)
.op(DataChangeEvent.Op.C)
.dataType("priority-requisitions")
.data(pr);
Response response = resource.send("priority-requisitions", dce);
// send a batch of two events
DataChangeEvent<PriorityRequisition> dce1 = new DataChangeEvent<PriorityRequisition>()
.metadata(EventMetadata.newPreparedEventMetadata())
.op(DataChangeEvent.Op.C)
.dataType("priority-requisitions")
.data(new PriorityRequisition("23"));
DataChangeEvent<PriorityRequisition> dce2 = new DataChangeEvent<PriorityRequisition>()
.metadata(EventMetadata.newPreparedEventMetadata())
.op(DataChangeEvent.Op.C)
.dataType("priority-requisitions")
.data(new PriorityRequisition("24"));
List<DataChangeEvent<PriorityRequisition>> list = new ArrayList<>();
list.add(dce1);
list.add(dce2);
Response batch = resource.send("priority-requisitions", list);
```
#### Publishing Compression
Event posting can be compressed by configuring the client
with `.enablePublishingCompression()`:
```java
NakadiClient client = NakadiClient.newBuilder()
.baseURI("http://localhost:9080")
.enablePublishingCompression()
.build();
```
### Compacting Events
Events can be sent with compaction information by setting their metadata.
This is required when the `cleanup_policy` of event type is set to `compact`.
```java
// create metadata with compaction information for an event
EventMetadata compacted = EventMetadata.newPreparedEventMetadata()
.partitionCompactionKey("329ed3d2-8366-11e8-adc0-fa7ae01bbebc");
PriorityRequisition pr = new PriorityRequisition("23");
DataChangeEvent<PriorityRequisition> dce = new DataChangeEvent<PriorityRequisition>()
.metadata(compacted)
.op(DataChangeEvent.Op.C)
.dataType("priority-requisitions")
.data(pr);
Response response = resource.send("priority-requisitions", dce);
```
### Subscriptions
You can create, edit and delete subscriptions as well as list them:
```java
// grab a subscription resource
SubscriptionResource resource = client.resources().subscriptions();
// create a new subscription
Subscription subscription = new Subscription()
.consumerGroup("mccaffrey-cg")
.eventType("priority-requisitions")
.owningApplication("shaper");
Response response = resource.create(subscription);
// create a subscription from a given offset
Cursor c0 = new Cursor("0", "000000000000002009", "priority-requisitions");
Cursor c1 = new Cursor("1", "000000000000002008", "priority-requisitions");
Subscription offsetSubscription = new Subscription()
.consumerGroup("roja-cg")
.eventType("priority-requisitions")
.owningApplication("anarch")
.readFrom("cursors")
.initialCursors(Lists.newArrayList(c0, c1));
// find a subscription
Subscription found = resource.find("a2ab0b7c-ee58-48e5-b96a-d13bce73d857");
// get the cursors and iterate them
SubscriptionCursorCollection cursors = resource.cursors(found.id());
cursors.iterable().forEach(System.out::println);
// get the stats and iterate them
SubscriptionEventTypeStatsCollection stats = resource.stats(found.id());
stats.iterable().forEach(System.out::println);
// list subscriptions
SubscriptionCollection list = resource.list();
list.iterable().forEach(System.out::println);
// list for an owner
list = resource.list(new QueryParams().param("owning_application", "shaper"));
list.iterable().forEach(System.out::println);
// delete a subscription
Response delete = resource.delete(found.id());
```
### Consuming Events
You can consume events via stream. Both the named event type and newer
subscription stream APIs are available via the `StreamProcessor` class.
A `StreamProcessor` accepts a `StreamObserverProvider` which is a factory for
creating the `StreamObserver` class the events will be sent to. The
`StreamObserver` accepts one or more `StreamBatchRecord` objects where each
item in the batch has been marshalled to an instance of `T` as defined by
it and the `StreamObserverProvider`.
A `StreamObserver` implements a number of callback methods that are invoked
by the underlying stream processor:
- `onStart()`: Called before stream connection begins and before a retry is attempted.
- `onStop()`: Called after the stream is completed and when a retry is needed.
- `onCompleted()`: Called when the client is finished sending batches.
- `onError(Throwable t)`: Called when there's been an error.
- `onNext(StreamBatchRecord<T> record)`: Called for each batch of events. Also contains the current offset observer and the batch cursor.
- `requestBackPressure()`: request a maximum number of emitted items from the stream.
- `requestBuffer()`: Ask to have batches buffered before emitting them from the stream.
The interface is influenced by [RxJava](https://github.com/ReactiveX/RxJava)
and the general style of `onX` callback APIs. You can see an example in the
source called `LoggingStreamObserverProvider` which maps the events in a
batch to plain strings.
The API also supports a `StreamOffsetObserver` - the offset observer is given
to the `StreamObserver` object with each `onNext` call. Typically the offset
observer is used to provide checkpointing of a consumer's partition in the
stream.
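Putting the callbacks together, the sketch below shows the shape of an observer that counts events and returns backpressure hints. The `Batch` class and `CountingObserver` are simplified stand-ins for illustration; the library's actual `StreamObserver` generics, record types, and the return types of `requestBackPressure`/`requestBuffer` may differ:

```java
import java.util.List;
import java.util.Optional;
import java.util.concurrent.atomic.AtomicLong;

/** Stand-in for the batch record the processor hands to onNext (illustrative only). */
class Batch<T> {
  final List<T> events;
  Batch(List<T> events) { this.events = events; }
}

/** Counts consumed events and asks the processor for modest backpressure. */
class CountingObserver {
  final AtomicLong seen = new AtomicLong();

  void onStart() { /* connection (or retry) about to begin */ }
  void onStop() { /* stream ended, or a retry is pending */ }
  void onCompleted() { /* no more batches will be sent */ }
  void onError(Throwable t) { /* log and decide whether to resubscribe */ }

  void onNext(Batch<String> record) {
    seen.addAndGet(record.events.size());
    // a real observer would also checkpoint via the supplied offset observer here
  }

  Optional<Long> requestBackPressure() { return Optional.of(10L); } // at most 10 emitted items
  Optional<Long> requestBuffer() { return Optional.empty(); }       // no pre-emit buffering
}
```

Because `requestBackPressure` is consulted around each `onNext`, an observer like this could also return a value computed from its own lag rather than a constant.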
#### Named Event Type Streaming
To consume a named event type stream, configure a `StreamProcessor` and run it:
```java
// configure a stream for an event type from a given cursor;
// all api settings are available
StreamConfiguration sc = new StreamConfiguration()
.eventTypeName("priority-requisitions")
.cursors(new Cursor("0", "450"));
// set up a processor with an event observer provider
StreamProcessor processor = client.resources().streamBuilder()
.streamConfiguration(sc)
.streamObserverFactory(new LoggingStreamObserverProvider())
.build();
// consume in the background until the app exits or stop() is called
processor.start();
// configure a stream with a bounded number of events retries, keepalives, plus custom timeouts
StreamConfiguration sc1 = new StreamConfiguration()
.eventTypeName("priority-requisitions")
.cursors(new Cursor("0", "450"))
.batchLimit(15)
.batchFlushTimeout(2, TimeUnit.SECONDS)
.maxRetryAttempts(256)
.maxRetryDelay(30, TimeUnit.SECONDS)
.streamLimit(1024)
.connectTimeout(8, TimeUnit.SECONDS)
.readTimeout(3, TimeUnit.MINUTES)
.streamKeepAliveLimit(2048)
.streamTimeout(1, TimeUnit.DAYS);
// create a processor with an observer and an offset observer
StreamProcessor boundedProcessor = client.resources().streamBuilder()
.streamConfiguration(sc1)
.streamObserverFactory(new LoggingStreamObserverProvider())
.streamOffsetObserver(new LoggingStreamOffsetObserver())
.build();
/*
start in the background, stopping when the criteria are reached,
the app exits, or stop() is called
*/
boundedProcessor.start();
```
If no offset observer is given, the default observer used is
`LoggingStreamOffsetObserver` which simply logs when it is invoked.
#### Subscription Streaming
Subscription stream consumers allow consumers to store offsets with the server
and work much like named event type streams:
```java
// configure a stream from a subscription id;
// all api settings are available
StreamConfiguration sc = new StreamConfiguration()
.subscriptionId("27302800-bc68-4026-a9ff-8d89372f8473")
.maxUncommittedEvents(20L);
// create a processor with an observer
StreamProcessor processor = client.resources().streamBuilder(sc)
.streamObserverFactory(new LoggingStreamObserverProvider())
.build();
// consume in the background until the app exits or stop() is called
processor.start();
```
There are some notable differences:
- The `StreamConfiguration` is configured with a `subscriptionId` instead of an `eventTypeName`.
- The inbuilt offset observer for a subscription stream will call Nakadi's checkpointing API to update the offset. You can replace this with your own implementation if you wish.
- A subscription stream also allows setting the `maxUncommittedEvents` as defined by the Nakadi API.
#### Streaming and Compression
The default behaviour for all streaming consumers is to request a gzipped stream. This can
be changed to a plain stream by setting the `Accept-Encoding` header to `identity` on
`StreamConfiguration` as follows:
```java
StreamConfiguration sc = new StreamConfiguration()
// ask the server for unencoded data
.requestHeader("Accept-Encoding", "identity")
...;
```
#### Backpressure and Buffering
A `StreamObserver` can signal for backpressure via the `requestBackPressure`
method. This is applied with each `onNext` call to the `StreamObserver` and
so can be used to adjust backpressure dynamically. The client's underlying
stream processor will make a best effort attempt to honor backpressure.
If you want events buffered into contiguous batches, you can set a buffer
size using `requestBuffer`. This is independent of the underlying HTTP
stream - the stream will be consumed off the wire based on the API request
settings - the batches are buffered in memory by the underlying processor.
This is applied during setup and is fixed for the processor's lifecycle.
Users that don't care about backpressure controls can subclass the
`StreamObserverBackPressure` class.
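The buffering idea can be sketched in isolation. The `buffer` helper below is hypothetical: it shows only the contiguous-batch grouping the processor performs in memory, not the client's actual stream machinery:

```java
import java.util.ArrayList;
import java.util.List;

// Conceptual sketch of requestBuffer-style batching (not the client's actual
// processor): incoming events are grouped into contiguous batches of a fixed
// size before being handed to the observer's onNext.
public class Main {

    static <T> List<List<T>> buffer(List<T> events, int bufferSize) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < events.size(); i += bufferSize) {
            // each batch is a contiguous slice of the incoming event stream
            batches.add(events.subList(i, Math.min(i + bufferSize, events.size())));
        }
        return batches;
    }

    public static void main(String[] args) {
        // 5 events buffered into batches of 2 -> [[e1, e2], [e3, e4], [e5]]
        System.out.println(buffer(List.of("e1", "e2", "e3", "e4", "e5"), 2));
    }
}
```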
### Healthchecks
You can make healthcheck requests to the server:
```java
HealthCheckResource health = client.resources().health();
// check returning a response object, regardless of status
Response healthcheck = health.healthcheck();
// ask to throw if the check failed (non 2xx code)
Response throwable = health.healthcheckThrowing();
// check with an exponential backoff retry
RetryPolicy retry = ExponentialRetry.newBuilder()
.initialInterval(1000, TimeUnit.MILLISECONDS)
.maxAttempts(5)
.maxInterval(3000, TimeUnit.MILLISECONDS)
.build();
health.retryPolicy(retry).healthcheckThrowing();
```
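To make the backoff behaviour concrete, here is a standalone sketch of how exponential retry delays grow and cap out. The `delayMillis` helper is hypothetical and mirrors the policy configured above (1s initial interval, 3s max, 5 attempts), not the client's actual `ExponentialRetry` internals:

```java
// Illustrative sketch of exponential backoff delays: the delay doubles with
// each attempt and is capped at the configured maximum interval.
public class Main {

    // Delay before the given retry attempt (0-based), in milliseconds.
    static long delayMillis(long initialMillis, long maxMillis, int attempt) {
        long delay = initialMillis << attempt; // initial * 2^attempt
        return Math.min(delay, maxMillis);
    }

    public static void main(String[] args) {
        // Mirrors the policy above: prints 1000, 2000, 3000, 3000, 3000
        for (int attempt = 0; attempt < 5; attempt++) {
            System.out.println("attempt " + attempt + " -> "
                + delayMillis(1000, 3000, attempt) + "ms");
        }
    }
}
```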
### Registry
You can view the service registry:
```java
RegistryResource resource = client.resources().registry();
// get and iterate available enrichments
EnrichmentStrategyCollection enrichments = resource.listEnrichmentStrategies();
enrichments.iterable().forEach(System.out::println);
// get and iterate available validations
ValidationStrategyCollection validations = resource.listValidationStrategies();
validations.iterable().forEach(System.out::println);
```
### Metrics
You can view service metrics:
```java
MetricsResource metricsResource = client.resources().metrics();
// print service metrics
Metrics metrics = metricsResource.get();
Map<String, Object> items = metrics.items();
System.out.println(items);
```
Note that the structure of metrics is not defined by the server, hence it's
returned as a map within the `Metrics` object.
## Installation
### Maven
Add sonatype to the repositories element in `pom.xml` or `settings.xml` to access snapshots:
```xml
<repositories>
<repository>
<id>sonatype-nexus-snapshots</id>
<name>sonatype-nexus-snapshots</name>
<url>https://oss.sonatype.org/content/repositories/snapshots</url>
<snapshots>
<enabled>true</enabled>
</snapshots>
</repository>
</repositories>
```
and add the project declaration to `pom.xml`:
```xml
<dependency>
<groupId>net.dehora.nakadi</groupId>
<artifactId>nakadi-java-client</artifactId>
<version>0.19.0</version>
</dependency>
```
### Gradle
Add sonatype to the `repositories` block for snapshots:
```groovy
repositories {
maven {
url = 'https://oss.sonatype.org/content/repositories/snapshots/'
}
}
```
```kotlin
repositories {
maven {
url = uri("https://oss.sonatype.org/content/repositories/snapshots")
}
}
```
and add the project to the `dependencies` block in `build.gradle`:
```groovy
dependencies {
implementation 'net.dehora.nakadi:nakadi-java-client:0.19.0'
}
```
```kotlin
dependencies {
implementation("net.dehora.nakadi:nakadi-java-client:0.19.0")
}
```
### SBT
Add sonatype to `resolvers` in `build.sbt` to access snapshots:
```scala
resolvers += Opts.resolver.sonatypeSnapshots
```
and add the project to `libraryDependencies` in `build.sbt`:
```scala
libraryDependencies += "net.dehora.nakadi" % "nakadi-java-client" % "0.19.0"
```
## Idioms
### Fluent
The client prefers a fluent style; setters return `this` to allow chaining.
Complex constructors use a builder pattern where needed. The JavaBeans
get/set prefixing idiom is not used by the API, as is increasingly typical
with modern Java code.
### Iterable pagination
Any API call that returns a collection, including ones that could be paginated,
exposes an Iterable contract, allowing `forEach` or `iterator` access:
```java
EventTypeCollection list = client.resources().eventTypes().list();
list.iterable().forEach(System.out::println);
Iterator<EventType> iterator = list.iterable().iterator();
while (iterator.hasNext()) {
EventType next = iterator.next();
System.out.println(next);
}
```
Pagination, if it happens, is done automatically by the collection's backing
iterable, which follows the `next` relation sent back by the server.
You can, if you wish, work with pages and hypertext links directly via the
methods on `ResourceCollection`, which each collection implements.
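The lazy page-following can be sketched with a standalone example. The in-memory `Map` stands in for the server, and `paginated` is a hypothetical helper, not the client's real implementation:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.Map;

// Sketch of iterable pagination: a single Iterator lazily "follows" the next
// page when the current one is exhausted. The Map stands in for the server;
// real code would fetch the `next` link over HTTP.
public class Main {

    static Iterable<String> paginated(Map<Integer, List<String>> pages) {
        return () -> new Iterator<String>() {
            int page = 0;                          // current page index
            Iterator<String> current = pages.get(0).iterator();

            @Override public boolean hasNext() {
                // when the current page runs out, follow the "next" relation
                while (!current.hasNext() && pages.containsKey(page + 1)) {
                    page++;
                    current = pages.get(page).iterator();
                }
                return current.hasNext();
            }

            @Override public String next() { return current.next(); }
        };
    }

    public static void main(String[] args) {
        Map<Integer, List<String>> pages = Map.of(
            0, List.of("et-1", "et-2"),
            1, List.of("et-3"));
        List<String> all = new ArrayList<>();
        paginated(pages).forEach(all::add); // crosses the page boundary transparently
        System.out.println(all); // [et-1, et-2, et-3]
    }
}
```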
### HTTP Requests
Calls that result in HTTP requests are performed using resource classes. The
results can be accessed as HTTP level responses or mapped to API objects.
You don't have to deal with HTTP responses from the API directly. If there
is a failure then a `NakadiException` or a subclass will be thrown. The
exception will have `Problem` information that can be examined.
### Exceptions
Client exceptions are runtime exceptions by default. They extend from
`NakadiException` which allows you to catch all errors under one type. The
`NakadiException` embeds a `Problem` object which can be examined. Nakadi's
API uses Problem JSON ([RFC7807](https://tools.ietf.org/html/rfc7807)) to
describe errors. Local errors also contain Problem descriptions.
The client will also throw an `IllegalArgumentException` in a number of places
where null fields are not accepted or sensible as values, such as required
parameters for builder classes. However the client performs no real data
validation for API requests, leaving that to the server. Requests that the
server rejects as invalid (422s) will cause an `InvalidException` to be thrown
instead.
In a handful of circumstances the API exposes a checked exception where
it's necessary for the user to handle the error; for example, some exceptions
from `StreamOffsetObserver` are checked.
## Build and Development
The project is built with [Gradle](http://gradle.org/) and uses the
[Netflix Nebula](https://nebula-plugins.github.io/) plugins. The `./gradlew`
wrapper script will bootstrap the right Gradle version if it's not already
installed.
The main client jar file is built using the shadow plugin.
The main tasks are:
- `./gradlew build` : run a build and test
- `./gradlew clean` : clean down the build
- `./gradlew clean shadow` : builds the client jar
## Internals
The wiki page [Internals](https://github.com/dehora/nakadi-java/wiki/Internals)
has details on how the client works under the hood.
## Contributing
Please see the [issue tracker](https://github.com/zalando-incubator/nakadi-java/issues)
for things to work on. The [help-wanted](https://github.com/zalando-incubator/nakadi-java/issues?q=is%3Aissue+is%3Aopen+label%3Aenhancement) label has a list of things that would be
pretty cool to have.
Before making a contribution, please let us know by posting a comment to the
relevant issue. If you would like to propose a new feature, create a new issue
first explaining the feature you’d like to contribute or bug you want to fix.
The codebase follows [Square's code style](https://github.com/square/java-code-styles)
for Java and Android projects.
----
## License
MIT License
Copyright (c) 2016 Zalando SE, https://tech.zalando.com
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
| 0 |
nielsutrecht/jwt-angular-spring | JSON Web Token example that integrates a Spring backend with an AngularJS frontend. | blogs java spring spring-boot | # JSON Web Token / AngularJS / Spring Boot example
[Blog post on this subject](http://niels.nu/blog/2015/json-web-tokens.html)
This is an example project where a Spring REST API is secured using JSON Web Tokens. Since there are relatively few examples available for Java and there are some pitfalls (such as most sources pointing to a Java lib that's not straightforward to use) I decided to extract my proof of concept into a stand-alone example and publish it for all to see.
## JSON Web Tokens
JSON Web Tokens have a few benefits over just sending a 'regular' token over the line. The more common approach to securing a REST API (outside of normal HTTP Basic Auth) is to send a random string as a token on succesful login from the server to the client. The client then sends this token on every request, and the server does an internal lookup on that token (in for example a REDIS cache or a simple Hashtable) to retrieve the corresponding user data.
With JSON Web Tokens the latter part isn't needed: the token itself contains a representation of the 'claims' of client: this can be just a username, but can also be extended to include any data you wish. This token is transmitted from the client on every request. The contents of the token are encrypted and a hash is added to prevent tampering: this way the content is secure: the server is the one signing and encrypting the token and is also the only one who had the key needed to decrypt the token.
In this example this key is fixed ("secretkey") but in a real life situations the secret key would simply be an array of bytes randomly generated on application startup. This has the added benefit that any tokens get automatically invalidated when you restart the service. If this behaviour is undesired you can persist the keys in for example REDIS.
## Server side: Spring Boot
I like using Spring (Boot) to create RESTful services. On the server side, the JWT signing is done in the user/login REST call in UserController. It contains a tiny 'database' of 2 users, one of which has the 'admin' rights. The verification is done in a Filter (JwtFilter): it filters every request that matches "/api/*". If a correct token isn't found an exception is thrown. If a correct token is found, the claims object is added to the Http Request object and can be used in any REST endpoint (as shown in ApiController).
The heavy lifting for JWT signing is done by the more than excellent [Java JWT](https://github.com/jwtk/jjwt) library.
## Client Side: AngularJS
The simple Angular app shows a login page. On successful login it checks with 'the API' which roles are available (of which the 'foo' role doesn't exist for any user).
## Running
It is a standard Maven project and can be imported into your favorite IDE. You run the example by starting the WebApplication class (it has a main) and navigating to http://localhost:8080/. If everything is correct you should see a "Welcome to the JSON Web Token / AngularJR / Spring example!" message and a login form.
| 1 |
bclozel/spring-resource-handling | Spring Framework 4.1 Resource Handling example | null | Spring Resource Handling
========================
[](https://travis-ci.org/bclozel/spring-resource-handling)
This application demonstrates new resource handling features in Spring Framework 4.1.
It was originally developed for the talk [Resource Handling in Spring MVC 4.1](https://2014.event.springone2gx.com/schedule/sessions/resource_handling_in_spring_mvc_4_1.html) talk at SpringOne2GX 2014.
This projects requires a local install of node+npm (see [nvm](https://github.com/creationix/nvm)).
The easiest way to get started - from the project root - development version:
SPRING_PROFILES_ACTIVE=development ./gradlew :server:bootRun
Or the production version (more optimizations):
SPRING_PROFILES_ACTIVE=production ./gradlew :server:bootRun
Then go to:
* http://localhost:8080/ for an example with JMustache templating
* http://localhost:8080/groovy for an example with Groovy Template Engine
* http://localhost:8080/app for an example with an [HTML5 AppCache Manifest](http://www.html5rocks.com/en/tutorials/appcache/beginner/)
(you can check this in Chrome with chrome://appcache-internals/ )
* http://localhost:8080/less for an example with a LESS stylesheet; this page uses less files and the LESS JS transpiler
in development mode, and a transpiled version in production
* http://localhost:8080/jsp for a JSP example
* http://localhost:8080/velocity for a Velocity example
Interesting parts of the application:
* [configuring resource handlers with resource resolvers and resource transformers](https://github.com/bclozel/spring-resource-handling/blob/master/server/src/main/resources/application-production.properties)
* [a sample template file using JMustache](https://github.com/bclozel/spring-resource-handling/blob/master/server/src/main/resources/mustache/index.html)
and a [custom Mustache lambdas](https://github.com/bclozel/spring-resource-handling/blob/master/server/src/main/java/org/springframework/samples/resources/support/MustacheViewResolverCustomizer.java) to resolve URLs to static resources
| 1 |
dteleguin/beercloak | BeerCloak: a comprehensive Keycloak extension example | keycloak | # BeerCloak: a comprehensive Keycloak extension example
BeerCloak is a collection of different techniques for building custom admin resources in Keycloak.
* `BeerEntity` JPA entity + LiquiBase changelog;
* `BeerResource` realm REST resource with CRUD operations & more;
* Authorization:
* roles: `view-beer` and `manage-beer`;
* automatically created for each existing realm;
* automatically created for each newly added realm;
* automatically included into the master `admin` role;
* used for authorization on `BeerResource` and sub-resources;
* Event logging:
* AdminEventBuilder instance;
* custom resource and action types (not yet implemented)
* GUI extensions to the admin console.
The `beercloak.resources.AbstractAdminResource` is ready to be used as a base class for admin resources. It contains the code necessary to setup authorization and logging.
### Structure
`beercloak-core`: "core" module with some "business logic", to demonstrate packaging with dependencies
`beercloak-module`: main module actually containing providers and everything (depends on `beercloak-core`)
`beercloak-ear`: EAR packaging module to combine all the above into a deployable EAR
## Requirements
* Keycloak 3.4.0.Final
## Build
`mvn install`
## Installation
1. Copy `beercloak-ear/target/beercloak-XXX.ear` into Keycloak's `standalone/deployments` directory.
**Warning!** While Keycloak generally supports hot deployment of providers, this is *not supported* for EntityProviders.
That means, BeerCloak shouldn't be hot (re)deployed, otherwise you'll get exceptions and non-working code.
See [KEYCLOAK-5782](https://issues.jboss.org/browse/KEYCLOAK-5782) for more info.
2. Configure theme in your `standalone/configuration/standalone.xml`:
```xml
<subsystem xmlns="urn:jboss:domain:keycloak-server:1.1">
...
<theme>
<staticMaxAge>2592000</staticMaxAge>
<cacheThemes>true</cacheThemes>
<cacheTemplates>true</cacheTemplates>
<dir>${jboss.home.dir}/themes</dir>
<!-- Here we go -->
<modules>
<module>
deployment.beercloak
</module>
</modules>
<default>beer</default>
</theme>
...
</subsystem>
```
You can omit `<default>beer</default>`, but then you'll have to manually choose the "beer" theme in realm configuration → Themes → Admin console theme.
(Currently, if you ship a theme inside your module, you have to configure it manually in the XML config. This may change in the future with automatic deployment of themes, you can track progress under [KEYCLOAK-4547](https://issues.jboss.org/browse/KEYCLOAK-4547))
## Running example
Run Keycloak and log into the admin console. You should be able to access the "Beer" menu item.
| 1 |
jfaster/mango-example | Example of Mango Framework | null | [](https://travis-ci.org/jfaster/mango-example)
mango-doc示例代码 | 1 |
HUPO-PSI/mzML | Repository for mzML and the corresponding examples | null | ## mzML - Reporting Spectra Information in MS-based experiments
## General
Mass spectrometry is a popular method to analyse bio-molecules by measuring the intact mass-to-charge ratios of their in-situ generated ionised forms or the mass-to-charge ratios of in-situ-generated fragments of these ions. The resulting mass spectra are used for a variety of purposes, among which is the identification, characterization, and absolute or relative quantification of the analysed molecules. The processing steps to achieve these goals typically involve semi-automatic computational analysis of the recorded mass spectra and sometimes also of the associated metadata (e.g., elution characteristics if the instrument is coupled to a chromatography system). The result of the processing can be assigned a score, rank or confidence measure.
Differences inherent in the use of a variety of instruments, different experimental conditions under which analyses are performed, and potential automatic data preprocessing steps by the instrument software can influence the actual measurements and therefore the results after processing. Additionally, most instruments output their acquired data in a very specific and often proprietary format. These proprietary formats are then typically transformed into so-called peak lists to be analysed by identification and characterisation software. Data reduction such as peak centroiding and deisotoping is often performed during this transformation from proprietary formats to peak lists. In addition, these peak list file formats lack information about the precursor MS signals and about the associated metadata (i.e., instrument settings and description, acquisition mode, etc) compared to the files they were derived from. The peak lists are then used as inputs for subsequent analysis. The many different and often proprietary formats make integration or comparison of mass spectrometer output data difficult or impossible, and the use of the heavily processed and data-poor peak lists is often suboptimal.
This document addresses this problem with the presentation of the mzML XML format, which is designed to hold the data output of a mass spectrometer as well as a systematic description of the conditions under which this data was acquired and transformed. The following target objectives can be defined for the format:
1- The discovery of relevant results, so that, for example, data sets in a database or public repository that use a
particular technique or combination of techniques can be identified and studied by experimentalists during experiment
design or data analysis.
2- The sharing of best practice, whereby, for example, approaches that have been successful at analysing low abundance
analytes can be captured alongside the results produced.
3- The evaluation of results, whereby, for example, the number and quality of the spectra recorded from a sample can be
assessed in the light of the experimental conditions.
4- The sharing of data sets, so that, for example, public repositories can import or export data, multi-site projects
can share results to support integrated analysis, or meta-analyses can be performed by third parties from previously
published data.
5- The most comprehensive support of the instruments output, so that data can be captured in profile mode, centroid
mode, and other relevant forms of biomolecular mass spectrometry data representation
The primary focus of the model is to support long-term archiving and sharing, rather than day-to-day laboratory management, although the model is extensible to support context-specific details.
The description of mass spectrometry data output and its experimental context requires that models include: (i) the actual data acquired, to a sufficient precision, as well as its associated metadata; and (ii) an adequate description of the instrument characteristics, its configuration and possible preprocessing steps applied. This document details both these parts, as they are required to support the tasks T1 to T5 above.
This document defines a specification and is not a tutorial. As such, the presentation of technical details is deliberately direct. The role of the text is to describe the schema model and justify design decisions made. This document does not provide comprehensive examples of the schema in use. Example documents are provided separately and should be examined in conjunction with this document. It is anticipated that tutorial material will be developed in the future to aid implementation. Although the present specification document describes constraints and guidelines related to the content of an mzML document as well as the availability of tools helping to read and write mzML, it does not describe any implementation constraints or specifications such as coding language or operating system for software that will generate and/or read mzML data.
When you use mzML format, please cite the following publication:
Martens L., Chambers M., Sturm M., Kessner D., Levander F., Shofstahl J., Tang W.H., Römpp A., Neumann S., Pizarro A.D., Montecchi-Palazzi L., Tasman N., Coleman M., Reisinger F., Souda P., Hermjakob H., Binz P.A., Deutsch E.W..mzML--a community standard for mass spectrometry data. Mol Cell Proteomics. 2011 Jan;10(1):R110.000133
## Specification documents
**Version 1.1.0 (June 2014):**
> Specification document [docx](https://github.com/HUPO-PSI/mzML/blob/master/specification_document/mzML1.1.0_specificationDocument.doc),
## Example Files
Several example of the format can be download from the next link [Examples](https://github.com/HUPO-PSI/mzML/tree/master/examples)
## Tools, Libraries, readers and exporters
1- [jmzML](http://github.com/PRIDE-UTILITIES/jmzML/): Java API for reading and writing mzML (**IMPORT AND EXPORT**)
2- [ms-data-core-api](http://github.com/PRIDE-UTILITIES/ms-data-core-api/): Java API for reading PSI standard file formats.
| 0 |
hamen/rxjava-essentials | This repo is a reference for the topics and the examples in my RxJava Essentials | null | # RxJava Essentials by Ivan Morgillo
This repo is a reference for the topics and the examples in my RxJava Essentials by [Packt Publishing](http://bit.ly/rxjava-essentials).
## Slides ##
The slides provided here are free to use, but I'd like to be mentioned somehow if you use them for your talks.
## App ##
The app is the one used in the book examples. It provides a few basic scenarios for a few common RxJava operators.
| 0 |
eventuate-tram-examples/eventuate-tram-examples-micronaut-customers-and-orders | Microservices, Sagas, Choreography, Eventuate Tram, Micronaut | null | null | 0 |
marcojakob/tutorial-javafx-8 | Example Sources for JavaFX 8 Tutorial | null | # JavaFX 8 Tutorial - Sources
These are example sources for the [JavaFX 8 Tutorial](http://code.makery.ch/java/javafx-8-tutorial-intro/). | 1 |
jgasmi/jhipster-mt | JHipster Multitenant Example | null | README for JHipster Multitenant Example
==========================
This JHipster multi-tenant example is based on:
- Bien évidemment on JHipster: https://jhipster.github.io/
- https://www.youtube.com/watch?v=nBSHiUTHjWA
- https://code.google.com/p/ddd-cqrs-base-project/
- http://www.insaneprogramming.be/blog/2012/01/30/spring-liquibase/
You should modify the Jadira usertype version to 3.1.0.GA, because of the below bug:
https://jadira.atlassian.net/browse/JDF-81
| 1 |
fengcunhan/Hotpatch-Sample | The example of Hotpatch | null | null | 1 |
architjn/SharePanel | A small Behavior Example | null | # SharePanel
[](https://developer.android.com/about/versions/android-4.0.3.html)
[]()
A small library to show share buttons panel with coordinatorLayout with behaviour
Library supports OS on API 15 and above.

Try APK : [Download](demo.apk)
Add it in your root build.gradle at the end of repositories:
```groovy
allprojects {
repositories {
...
maven { url "https://jitpack.io" }
}
}
```
and then add dependency
```groovy
dependencies {
compile 'com.github.architjn:SharePanel:1.0'
}
```
##Usage
###XML
```xml
<com.architjn.sharepanel.SharePanel
android:layout_width="wrap_content"
android:layout_height="wrap_content"
app:app_layout_anchor="@id/collapsing_toolbar"
app:app_layout_anchorGravity="bottom|right|end">
<LinearLayout
android:layout_width="wrap_content"
android:layout_height="wrap_content">
<!-- Your Images Here -->
<com.varunest.sparkbutton.SparkButton
android:id="@+id/twitter_button"
android:layout_width="50dp"
android:layout_height="50dp"
app:sparkbutton_activeImage="@drawable/ic_twitter"
app:sparkbutton_iconSize="20dp"
app:sparkbutton_primaryColor="@color/twitter_primary_color"
app:sparkbutton_secondaryColor="@color/twitter_secondary_color" />
<com.varunest.sparkbutton.SparkButton
android:id="@+id/fb_button"
android:layout_width="50dp"
android:layout_height="50dp"
app:sparkbutton_activeImage="@drawable/ic_facebook"
app:sparkbutton_iconSize="20dp"
app:sparkbutton_primaryColor="@color/fb_primary_color"
app:sparkbutton_secondaryColor="@color/fb_secondary_color" />
</LinearLayout>
</com.architjn.sharepanel.SharePanel>
```
##Buttons Used
SparkButton : [https://github.com/varunest/SparkButton](https://github.com/varunest/SparkButton)
##License
Library falls under [Apache 2.0] (LICENSE.md)
| 1 |
Jaouan/Carousel-Browsing-Example | It's just an example of carousel browsing. | null | Android - Carousel browsing example
========
It's just an example of carousel browsing, a bit inspired by [Frank Lau's animation](https://dribbble.com/shots/2906536-animation).

License
========
[Apache License Version 2.0](LICENSE) | 1 |
indrabasak/spring-gateway-example | Spring Cloud Gateway Example | auth0-jwt oauth2-authentication okta spring-cloud-gateway | [![Build Status][travis-badge]][travis-badge-url]
[![Quality Gate][sonarqube-badge]][sonarqube-badge-url]
[![Technical debt ratio][technical-debt-ratio-badge]][technical-debt-ratio-badge-url]
[![Coverage][coverage-badge]][coverage-badge-url]

Spring Cloud Gateway Example
==============================
This project is example of using [Spring Cloud Gateway](https://spring.io/projects/spring-cloud-gateway) as an edge
service with a Spring Boot application. Spring Cloud Gateway provides means for routing an incoming request to a
matching downstream service.
Gateway is a suitable replacement for [Spring Cloud Netflix Zuul](https://spring.io/projects/spring-cloud-netflix) since
the latter module is now in maintenance mode starting Spring Cloud Greenwich (2.1.0) release train. Spring Cloud will
continue to support Zuul for a period of at least a year from the general availability of the Greenwich release train.
Putting a module in the maintenance mode means that the Spring Cloud will no longer add any new feature but will fix
blocker bugs and security issues.
## Introduction
- A **route** is the fundamental concept of Spring Cloud Gateway framework. A route contains a destination URL and a
collection of predicates and filters. An request is forwarded to a route if the result of logical _AND_ operation on
all its predicates is _true_.
- A **predicate** is boolean valued function.
- A **filter** provides a way of modifying incoming HTTP requests and outgoing HTTP responses.
### A Route Example
Here's a simple example of a route used in this project,
```yaml
spring:
cloud:
gateway:
routes:
- id: book-id
uri: http://localhost:8080
predicates:
- Path=/books/**
filters:
- PrefixPath=/public
- AddRequestHeader=X-Request-Foo, Bar
- AddRequestTimeHeaderPreFilter
- AddResponseHeader=X-Response-Bye, Bye
- AddResponseTimeHeaderPostFilter
```
## Project Synopsis
This example project consists of two modules:
- A **book-service** is a Spring Boot based REST service which provides creation and retrieval operation on a book
resource. It uses Basic Auth for authentication.
- An **edge-service** is a Spring Cloud Gateway and Spring Boot based Edge service. It routes incoming requests to the
backend book service. It uses both Basic Auth and OAuth2 for authenticating a request. Once the request is authenticated,
it forwards the request to the book service after replacing the authorization header with book service's basic
auth credentials.
Here is the flow of an incoming request and outgoing response in the example edge service.

### Types of Filter
Gateway filters can be classified into 3 groups:
- **Global Filters**: They are special filters that are conditionally applied to all routes. A good use of a global
filter can be authentication of an incoming request. Here are the global filters used in this example:
- **Authorization Filter**: A custom filter for authenticating a request. Authentication can be Basic Auth or OAuth2.
- **Basic Auth Token Relay Filter**: A custom filter whi h replaces the authorization header with basic auth credentials
specific to a route.
- **Pre Filters**: These filters are applied to incoming requests and are usually specific to a route. Here are the
pre filters used in this example:
- **Prefix Filter**: A built-in filter which adds a prefix to the incoming request before being forwarded.
- **Add Request Header Filter**: A built-in filter which adds a header to the incoming request before being forwarded.
- **Add Request Time Header Pre Filter**: A custom filter which add a timestamp header to the incoming request before being forwarded.
- **Post Filters**: These filters are applied to outgoing request and are usually specific to a route. Here are the
post filters used in this example:
- **Add Response Header Filter**: A built-in filter which adds a header to the outgoing response.
- **Add Response Time Header Post Filter**: A custom filter which add a timestamp header to the outgoing response.
## Security
This example didn't use Spring Security framework directly as typically used in a Spring Boot application by configuring the
service. However, it took advantage of the classes provided in the Spring Security libraries to come up with a custom security
framework. This example uses both **Basic Authentication** and **OAuth 2.0** for authentication.
### Basic Authentication
The basic authorization credentials are configured using `security.auth.user` and `security.auth.password` in the
`application.yml`.
### OAuth2 Authentication
The OAuth2 uses `JSON Web Token` (`JWT`) for authentication. The client of the Edge service uses `Client Credentials`
grant type to obtain an access token from an auth server. In this example, we used Auth0 server and Okta server for
obtaining an access token.
#### Auth0 Authorization Server
Here's an example to obtain an access token from an `Auth0` server:
```
curl --request POST \
--url https://ibasak.auth0.com/oauth/token \
--header 'content-type: application/json' \
--data '{
"client_id":"yHJiJecLn3bd8A2oummRa08jp9t0y1UL",
"client_secret":"YAeV4cYudc2Gyro6sR416-ehmiyxnJc7ErLDwDxNDaBmvGgCu1Y8hyG6Sa-tyfAY",
"audience":"https://quickstarts/api",
"grant_type":"client_credentials"
}'
```
A response usually looks something similar to this:
```json
{
"access_token": "eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsImtpZCI6IlEwWXlRVGMyTWpGRVJEQXhOa0ZHTVRGQlJqZzJNRVZGTUVFd01UZENOREV6TmpJd1JqTTFOQSJ9.eyJpc3MiOiJodHRwczovL2liYXNhay5hdXRoMC5jb20vIiwic3ViIjoieUhKaUplY0xuM2JkOEEyb3VtbVJhMDhqcDl0MHkxVUxAY2xpZW50cyIsImF1ZCI6Imh0dHBzOi8vcXVpY2tzdGFydHMvYXBpIiwiaWF0IjoxNTY3Nzg5MDQ2LCJleHAiOjE1Njc4NzU0NDYsImF6cCI6InlISmlKZWNMbjNiZDhBMm91bW1SYTA4anA5dDB5MVVMIiwiZ3R5IjoiY2xpZW50LWNyZWRlbnRpYWxzIn0.aFzEvDwsNvUge5yAkzLJfrlpjtxffO2M7V0q0sGF9udi99KVEK3vQ2KXZm_N7v-ASrm-LF7twgPzdiln6tVMWkGtvFmpKx2YQwmXsEDYZGfrHOwb5XjY2AF8eXXsiJQEyI_SOSb-CzoAxFL34eIPeFa77zR6nmcIZAJyCdTtrMd1S4XIENPW1aWvwK5BVqFk6VpJ33LdemQYthQkNMYJF_v8dgXHbqSIAkdOfg4CUKXRObABTc4LnARMiFGFa-c2aQBMj1vP6PRE7h41Fr6MTHkUSVfFFayyVUFI3mH3tfiNHTqQiUZIpNJNknRYCTXDJq2V4mLgWfH9BFjelP65dg",
"expires_in": 86400,
"token_type": "Bearer"
}
```
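The `access_token` above is a JWT: three base64url-encoded segments separated by dots. Its payload (claims such as `iss` and `aud`) can be inspected by decoding the middle segment. A minimal sketch for debugging (the token below is a toy, unsigned example; a real resource server must still verify the RS256 signature):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class JwtPayload {

    // Returns the decoded (but NOT verified) payload of a JWT.
    // For inspection only -- signature validation is still required.
    static String decodePayload(String jwt) {
        String payload = jwt.split("\\.")[1];              // header.payload.signature
        byte[] json = Base64.getUrlDecoder().decode(payload);
        return new String(json, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // Toy token with payload {"iss":"x"} -- not a real signed JWT.
        String token = "eyJhbGciOiJSUzI1NiJ9.eyJpc3MiOiJ4In0.sig";
        System.out.println(decodePayload(token));          // {"iss":"x"}
    }
}
```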
Before using an `Auth0` JWT, you need to configure the following properties in `application.yml`:
```yaml
spring:
security:
oauth2:
resourceserver:
jwt:
issuer-uri: https://ibasak.auth0.com/
audience: https://quickstarts/api
```
#### Okta Authorization Server
Here's an example to obtain an access token from an `Okta` server:
```
curl --request POST \
--url https://dev-461512.okta.com/oauth2/default/v1/token \
--header 'accept: application/json' \
--header 'authorization: Basic MG9hMWFhMXI0MzB3NEFLajczNTc6b0gwOTB0RlZFVG5iV0o4eElQX3BMOXV6SWRmV2h0MWNISjdWb1d0VQ==' \
--header 'cache-control: no-cache' \
--header 'content-type: application/x-www-form-urlencoded' \
--data grant_type=client_credentials \
--data scope=customScope
```
A typical response looks similar to this:
```json
{
"token_type":"Bearer",
"expires_in":3600,
"access_token":"eyJraWQiOiItM1N6UVhjWDNYc0lFMmNOSnZ6NGRZRzBZemdXVS1Od091THpxYmZ1cWQ0IiwiYWxnIjoiUlMyNTYifQ.eyJ2ZXIiOjEsImp0aSI6IkFULkluNHlfNVJQd0N4eHBURkVmVThGRERiSEQyZHF6Q1RiNjBreGxPaDhpMjgiLCJpc3MiOiJodHRwczovL2Rldi00NjE1MTIub2t0YS5jb20vb2F1dGgyL2RlZmF1bHQiLCJhdWQiOiJhcGk6Ly9kZWZhdWx0IiwiaWF0IjoxNTY3ODA2OTk0LCJleHAiOjE1Njc4MTA1OTQsImNpZCI6IjBvYTFhYTFyNDMwdzRBS2o3MzU3Iiwic2NwIjpbImN1c3RvbVNjb3BlIl0sInN1YiI6IjBvYTFhYTFyNDMwdzRBS2o3MzU3IiwiQ2xhaW0xIjpmYWxzZX0.TShTVtfRp8wU39NY40KpTo1PCLB8N2x3kuVdkgJVYvU5zd5yBkz3RZZLksqsWQEfirAKduBdSkF4aMQhBUo3tdDYefQ6TNqnun_Ung1f3TdUAalyqeUgpGGlbN2J93jv-djtF5O7ylElpKqvwXhZcwXhJb1HPJqLB_LP0XtaxDb5R8uPP56IhE6JEC8PCIvpMOM0gr9mYsJWxwTe-tVd5NHUTSIaDBtMCsFbcx8MkG6YXN0N-B1ZsyZJMHBA8nwWk1Fx7EbIyxTmpUQdnBmwP-YM1XNCvBZQkX9BhId6YnaAjmLhJ_SQB1VWew28oAHpeax9Lkj-R49rzqxsjcTvVA",
"scope":"customScope"
}
```
Before using an `Okta` JWT, you need to configure the following properties in `application.yml`:
```yaml
spring:
security:
oauth2:
resourceserver:
jwt:
issuer-uri: https://dev-461512.okta.com/oauth2/default
audience: api://default
```
## Build
To build the Spring Boot JARs as well as the Docker images, run `mvn clean install` from the parent directory:
```bash
$ mvn clean install
[INFO] Scanning for projects...
[INFO]
[INFO] ----------------------< com.basaki:edge-service >-----------------------
[INFO] Building edge-service 1.0.0
[INFO] --------------------------------[ jar ]---------------------------------
Downloading from iovation.central: https://maven.iovationnp.com/repository/public/net/minidev/json-smart/maven-metadata.xml
Downloading from maven-central: http://repo1.maven.org/maven2/net/minidev/json-smart/maven-metadata.xml
Downloaded from maven-central: http://repo1.maven.org/maven2/net/minidev/json-smart/maven-metadata.xml (849 B at 3.6 kB/s)
Downloaded from iovation.central: https://maven.iovationnp.com/repository/public/net/minidev/json-smart/maven-metadata.xml (895 B at 642 B/s)
[WARNING] The POM for com.sun.xml.bind:jaxb-osgi:jar:2.2.10 is invalid, transitive dependencies (if any) will not be available, enable debug logging for more details
[INFO]
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ edge-service ---
[INFO] Deleting /Users/jdoe/examples/spring-gateway-example/edge-service/target
...
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 19.569 s
[INFO] Finished at: 2019-09-06T15:04:05-07:00
[INFO] ------------------------------------------------------------------------
```
If the build is successful, it should create the following:
- Spring Boot JARs
- `edge-service-1.0.0.jar`
- `book-service-1.0.0.jar`
- Docker Images
  - `basaki/spring-gateway-edge:1.0.0`
- `basaki/spring-gateway-book:1.0.0`
You can list all the Docker images on your computer with the `docker images` command:
```
REPOSITORY TAG IMAGE ID CREATED SIZE
basaki/spring-gateway-edge 1.0.0 000e04f2ae53 2 minutes ago 530MB
basaki/spring-gateway-book 1.0.0 c27215b6c3b3 3 days ago 564MB
```
## Starting Applications
You can start both applications from a terminal. To start the edge service:
```
java -jar edge-service-1.0.0.jar
```
The edge service should start up on port `9080`. To start the book service:
```
java -jar book-service-1.0.0.jar
```
The book service should start up on port `8080`.
## Usage
### Basic Authentication
To create a book using basic authentication:
```
curl --request POST \
--url http://localhost:9080/books \
--header 'authorization: Basic amRvZTpoZWxsbw==' \
--header 'content-type: application/json' \
--data '{
"title": "Indra",
"author": "Indra'\''s Chronicle"
}'
```
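The `Basic amRvZTpoZWxsbw==` header above is simply `user:password` base64-encoded. A minimal sketch of building it (the credentials are the `jdoe`/`hello` pair that the header in the curl example decodes to):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class BasicAuthHeader {

    // Builds the value of the HTTP Authorization header for basic auth.
    static String encode(String user, String password) {
        String credentials = user + ":" + password;
        return "Basic " + Base64.getEncoder()
                .encodeToString(credentials.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) {
        System.out.println(encode("jdoe", "hello"));   // Basic amRvZTpoZWxsbw==
    }
}
```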
### OAuth2 Authentication
```
curl --request POST \
--url http://localhost:9080/books \
--header 'authorization: Bearer eyJraWQiOiItM1N6UVhjWDNYc0lFMmNOSnZ6NGRZRzBZemdXVS1Od091THpxYmZ1cWQ0IiwiYWxnIjoiUlMyNTYifQ.eyJ2ZXIiOjEsImp0aSI6IkFULmFIVU5iUHkyU0ZVN1NlOEF2VE5kOGtSQlBvdy1CSGVyQmo2VGZfcENfR2siLCJpc3MiOiJodHRwczovL2Rldi00NjE1MTIub2t0YS5jb20vb2F1dGgyL2RlZmF1bHQiLCJhdWQiOiJhcGk6Ly9kZWZhdWx0IiwiaWF0IjoxNTY3NjQxODYyLCJleHAiOjE1Njc2NDU0NjIsImNpZCI6IjBvYTFhYTFyNDMwdzRBS2o3MzU3Iiwic2NwIjpbImN1c3RvbVNjb3BlIl0sInN1YiI6IjBvYTFhYTFyNDMwdzRBS2o3MzU3IiwiQ2xhaW0xIjpmYWxzZX0.cxKztd_NIOBBHDoC0h6LFYUDCeevc_-DQrrUMrJ9K5tKKuzqtSJoVMCWcmreypsGf6fD7UTFX74FduVnR4sKShzvmB6PsrzGon0AOiJFJPvYYwUEl97sIGENbUHkkufcNubdTMk2D2OrHvdsxMk8f6vnB0min_X1d1tK1kCd5Pd0c-388soWSjfE_mjvYosqZFmRUR8e-MBBP2ZDp5wrP_rmqWEhze7uSk08KS6N9j3R2mZzUTjtNmX7Jf1KbvtFtsAlY_HvSSahf0dUDnwNeMaRrVeTJt5nToaa85Po44P1oKx4f9o3nAvkMO-OiU3PNFt7TlfT8MHt3nnoeupC_g' \
--header 'content-type: application/json' \
--data '{
"title": "Indra",
"author": "Indra'\''s Chronicle"
}'
```
Your response should look like this:
```json
{
"id": "d4c962fe-386d-4344-a2f4-6209dfac9382",
"title": "Indra",
"author": "Indra's Chronicle"
}
```
## Deploying in Kubernetes Cluster
Scripts for deploying the `book service` are located in `book-service/src/docker` folder while the scripts for the
`edge service` are located in `edge-service/src/docker` folder.
### Deploying Book Service
#### Create Namespace
```
$ kubectl create -f book-service/src/docker/namespace.yml
namespace/gateway-example created
```
#### Create Config Map
```
$ kubectl create -f book-service/src/docker/config.yml
configmap/spring-gateway-book-config created
```
#### Create Deployment
```
$ kubectl create -f book-service/src/docker/deployment.yml
deployment.apps/spring-gateway-book created
```
#### Create Service
```
$ kubectl create -f book-service/src/docker/service.yml
service/spring-gateway-book-service created
```
If the book service is deployed successfully, you can access it at `http://localhost:30080/public/books`
### Deploying Edge Service
You can skip the namespace creation as it's already created in the earlier step.
#### Create Config Map
```
$ kubectl create -f edge-service/src/docker/config.yml
configmap/spring-gateway-edge-config created
```
#### Create Deployment
```
$ kubectl create -f edge-service/src/docker/deployment.yml
deployment.apps/spring-gateway-edge created
```
#### Create Service
```
$ kubectl create -f edge-service/src/docker/service.yml
service/spring-gateway-edge-service created
```
If the edge service is deployed successfully, you can access it at `http://localhost:31080/books`
[travis-badge]: https://travis-ci.org/indrabasak/spring-gateway-example.svg?branch=master
[travis-badge-url]: https://travis-ci.org/indrabasak/spring-gateway-example/
[sonarqube-badge]: https://sonarcloud.io/api/project_badges/measure?project=com.basaki%3Aspring-gateway-example&metric=alert_status
[sonarqube-badge-url]: https://sonarcloud.io/dashboard/index/com.basaki:spring-gateway-example
[technical-debt-ratio-badge]: https://sonarcloud.io/api/project_badges/measure?project=com.basaki%3Aspring-gateway-example&metric=sqale_index
[technical-debt-ratio-badge-url]: https://sonarcloud.io/dashboard/index/com.basaki:spring-gateway-example
[coverage-badge]: https://sonarcloud.io/api/project_badges/measure?project=com.basaki%3Aspring-gateway-example&metric=coverage
[coverage-badge-url]: https://sonarcloud.io/dashboard/index/com.basaki:spring-gateway-example
| 1 |
varvet/BarcodeReaderSample | A barcode scanner example using Google Play services. | null | # BarcodeReaderSample
A barcode scanner example using Google Play services.
[This tutorial](https://www.varvet.com/blog/android-qr-code-reader-made-easy/) should be of interest :).
| 1 |
alejandro-du/vaadin-microservices-demo | A microservices example developed with Spring Cloud and Vaadin | demo eureka fault-tolerance high-availability java load-balancing microservice netflix-ribbon spring-boot vaadin zuul | # Microservices with Vaadin demo
A microservices demo implemented with [Spring Cloud Netflix](http://cloud.spring.io/spring-cloud-netflix/) and [Vaadin](https://vaadin.com).
If you are using Vaadin 8, checkout the [vaadin-8](https://github.com/alejandro-du/vaadin-microservices-demo/tree/vaadin-8) branch.
## Building the demo
Run the following from the command line:
```
git clone https://github.com/alejandro-du/vaadin-microservices-demo.git
cd vaadin-microservices-demo
mvn package
```
## Running the demo
Use seven separate terminals to perform the following steps:
**1) Start the `discovery-server` application (Eureka app):**
```
cd vaadin-microservices-demo/discovery-server
java -jar target/discovery-server-0.0.1-SNAPSHOT.jar
```
**2) Start the `config-server` application (Spring Cloud Config app):**
```
cd vaadin-microservices-demo/config-server
java -jar target/config-server-0.0.1-SNAPSHOT.jar
```
**3) Start an instance of the `biz-application` microservice (REST app):**
```
cd vaadin-microservices-demo/biz-application
java -jar target/biz-application-0.0.1-SNAPSHOT.jar
```
**4) Start an instance of the `admin-application` microservice (Vaadin app):**
```
cd vaadin-microservices-demo/admin-application
java -jar target/admin-application-0.0.1-SNAPSHOT.jar
```
**5) Start an instance of the `news-application` microservice (Vaadin app):**
```
cd vaadin-microservices-demo/news-application
java -jar target/news-application-0.0.1-SNAPSHOT.jar
```
**6) Start an instance of the `website-application` microservice (Vaadin app):**
```
cd vaadin-microservices-demo/website-application
java -jar target/website-application-0.0.1-SNAPSHOT.jar
```
**7) Start the `proxy-server` application (Zuul app):**
```
cd vaadin-microservices-demo/proxy-server
java -jar target/proxy-server-0.0.1-SNAPSHOT.jar
```
## Using the demo
**1) Point your browser to <http://localhost:8080>.**
You'll see the `website-application` embedding the `admin-application` and the `news-application` microservices.
This is the "edge service" implemented with Netflix Zuul. It acts as a reverse proxy, redirecting requests to the `website-application`, `news-application`, and `admin-application` instances using a load balancer provided by Netflix Ribbon with a _round robin_ strategy.
If you get a "Server not available" message, please wait until all the services are registered with the `discovery-server` (implemented with Netflix Eureka).
**2) Add, update, or delete data.**
Latest tweets from the companies you enter on the left (the `admin-application`) will be rendered on the right (the `news-application`).
The `admin-application`, and `news-application` instances (implemented with Vaadin) delegate CRUD operations to the `biz-application` (implemented with Spring Data Rest) using a load balancer (provided by Netflix Ribbon) with a _round robin_ strategy.
**3) Add microservice instances.**
You can horizontally scale the system by starting more instances of the `biz-application`, `admin-application`, `news-application`, and `website-application` microservices. Remember to specify an available port (using `-Dserver.port=NNNN`) when you start a new instance.
**4) Test high-availability.**
Make sure you are running two instances of the `admin-application`. Click the _+_ (Add) button and enter `Vaadin`
as the _name_, and `vaadin` as the _Twitter Username_. Don't click the _Add_ button yet.
Stop one of the instances of the `admin-application` and click the _Add_ button. The web application should remain functional and save the data you entered without losing the state of the UI thanks to the externalized HTTP Session (implemented with Spring Session and Hazelcast).
**5) Test system resilience.**
Stop all the instances of the `biz-application` microservice and refresh the browser to see the fallback mechanisms (implemented with Netflix Hystrix) in the `admin-application` and `news-application` microservices.
## Developing
You don't need to have all the infrastructure services running (`discovery-server`, `config-server`, and `proxy-server`) in order to develop individual microservices (`biz-application`, `admin-application`, `news-application`, and `website-application`). Activate the `development` Spring profile to use a local configuration (`application-development.properties`) that excludes external orchestration services.
For example, during development you can run the `biz-application` microservice using:
```
cd vaadin-microservices-demo/biz-application
java -Dspring.profiles.active=development -jar target/biz-application-0.0.1-SNAPSHOT.jar
```
For the `admin-application` and `news-application`, you need the REST web service provided by the `biz-application`. You can either run the `biz-application` in `development` mode or create a _mock_ REST web service. You can configure the endpoint with the `biz-application.url` property in `application-development.properties`.
| 1 |
openjfx/samples | JavaFX samples to run with different options and build tools | documentation eclipse examples gradle ide intellij java java-11 java-12 javafx javafx-11 javafx-12 maven modular netbeans non-modular openjfx | OpenJFX Docs Samples
===
Description
---
This repository contains a collection of HelloFX samples. Each one is a very simple
HelloWorld sample created with JavaFX that can be run with different options and build tools.
The related documentation for each sample can be found [here](https://openjfx.io/openjfx-docs/).
For more information go to https://openjfx.io.
Content
---
* [HelloFX samples](#HelloFX-Samples)
* [Command Line](#Command-Line)
- [_Modular samples_](#CLI-Modular-Samples)
- [_Non-modular samples_](#CLI-Non-Modular-Samples)
* [IDEs](#IDEs)
- [IntelliJ](#IntelliJ)
[_Modular samples_](#IntelliJ-Modular-Samples)
[_Non-modular samples_](#IntelliJ-Non-Modular-Samples)
- [NetBeans](#NetBeans)
[_Modular samples_](#NetBeans-Modular-Samples)
[_Non-modular samples_](#NetBeans-Non-Modular-Samples)
- [Eclipse](#Eclipse)
[_Modular samples_](#Eclipse-Modular-Samples)
[_Non-modular samples_](#Eclipse-Non-Modular-Samples)
- [Visual Studio Code](#VSCode)
[_Modular samples_](#VSCode-Modular-Samples)
[_Non-modular samples_](#VSCode-Non-Modular-Samples)
* [License](#License)
* [Contributing](#Contributing)
HelloFX samples<a name="HelloFX-Samples" />
---
Contains samples of a simple HelloFX class that can be run from command line, with
or without build tools.
Build Tool | Sample | Description
---------- | ------ | -----------
None | [HelloFX project](HelloFX/CLI) | Simple HelloFX class to run on command line.
Maven | [HelloFX project](HelloFX/Maven) | Simple HelloFX class to run with Maven.
Gradle | [HelloFX project](HelloFX/Gradle) | Simple HelloFX class to run with Gradle.
Command Line<a name="Command-Line" />
---
Contains samples of modular and non-modular projects that can be run from command
line, with or without build tools.
### _Modular samples_<a name="CLI-Modular-Samples" />
Build Tool | Sample | Description
---------- | ------ | -----------
None | [HelloFX project](CommandLine/Modular/CLI) | Modular project to run on command line.
Maven | [HelloFX project](CommandLine/Modular/Maven) | Modular project to run with Maven.
Gradle | [HelloFX project](CommandLine/Modular/Gradle) | Modular project to run with Gradle.
### _Non-modular samples_<a name="CLI-Non-Modular-Samples" />
Build Tool | Sample | Description
---------- | ------ | -----------
None | [HelloFX project](CommandLine/Non-modular/CLI) | Non-modular project to run on command line.
Maven | [HelloFX project](CommandLine/Non-modular/Maven) | Non-modular project to run with Maven.
Gradle | [HelloFX project](CommandLine/Non-modular/Gradle) | Non-modular project to run with Gradle.
IDEs<a name="IDEs" />
---
Contains samples of modular and non-modular projects that can be run from an IDE,
with or without build tools.
### IntelliJ<a name="IntelliJ" />
#### _Modular samples_<a name="IntelliJ-Modular-Samples" />
Build Tool | Sample | Description
---------- | ------ | -----------
Java | [HelloFX project](IDE/IntelliJ/Modular/Java) | Modular project to run from IntelliJ.
Maven | [HelloFX project](IDE/IntelliJ/Modular/Maven) | Modular project to run from IntelliJ, with Maven.
Gradle | [HelloFX project](IDE/IntelliJ/Modular/Gradle) | Modular project to run from IntelliJ, with Gradle.
#### _Non-modular samples_<a name="IntelliJ-Non-Modular-Samples" />
Build Tool | Sample | Description
---------- | ------ | -----------
Java | [HelloFX project](IDE/IntelliJ/Non-Modular/Java) | Non-modular project to run from IntelliJ.
Maven | [HelloFX project](IDE/IntelliJ/Non-Modular/Maven) | Non-modular project to run from IntelliJ, with Maven.
Gradle | [HelloFX project](IDE/IntelliJ/Non-Modular/Gradle) | Non-modular project to run from IntelliJ, with Gradle.
### NetBeans<a name="NetBeans" />
#### _Modular samples_<a name="NetBeans-Modular-Samples" />
Build Tool | Sample | Description
---------- | ------ | -----------
Java | [HelloFX project](IDE/NetBeans/Modular/Java) | Modular project to run from NetBeans.
Maven | [HelloFX project](IDE/NetBeans/Modular/Maven) | Modular project to run from NetBeans, with Maven.
Gradle | [HelloFX project](IDE/NetBeans/Modular/Gradle) | Modular project to run from NetBeans, with Gradle.
#### _Non-modular samples_<a name="NetBeans-Non-Modular-Samples" />
Build Tool | Sample | Description
---------- | ------ | -----------
Java | [HelloFX project](IDE/NetBeans/Non-Modular/Java) | Non-modular project to run from NetBeans.
Maven | [HelloFX project](IDE/NetBeans/Non-Modular/Maven) | Non-modular project to run from NetBeans, with Maven.
Gradle | [HelloFX project](IDE/NetBeans/Non-Modular/Gradle) | Non-modular project to run from NetBeans, with Gradle.
### Eclipse<a name="Eclipse" />
#### _Modular samples_<a name="Eclipse-Modular-Samples" />
Build Tool | Sample | Description
---------- | ------ | -----------
Java | [HelloFX project](IDE/Eclipse/Modular/Java) | Modular project to run from Eclipse.
Maven | [HelloFX project](IDE/Eclipse/Modular/Maven) | Modular project to run from Eclipse, with Maven.
Gradle | [HelloFX project](IDE/Eclipse/Modular/Gradle) | Modular project to run from Eclipse, with Gradle.
#### _Non-modular samples_<a name="Eclipse-Non-Modular-Samples" />
Build Tool | Sample | Description
---------- | ------ | -----------
Java | [HelloFX project](IDE/Eclipse/Non-Modular/Java) | Non-modular project to run from Eclipse.
Maven | [HelloFX project](IDE/Eclipse/Non-Modular/Maven) | Non-modular project to run from Eclipse, with Maven.
Gradle | [HelloFX project](IDE/Eclipse/Non-Modular/Gradle) | Non-modular project to run from Eclipse, with Gradle.
### Visual Studio Code<a name="VSCode" />
#### _Modular samples_<a name="VSCode-Modular-Samples" />
Build Tool | Sample | Description
---------- | ------ | -----------
Maven | [HelloFX project](IDE/VSCode/Modular/Maven) | Modular project to run from Visual Studio Code, with Maven.
Gradle | [HelloFX project](IDE/VSCode/Modular/Gradle) | Modular project to run from Visual Studio Code, with Gradle.
#### _Non-modular samples_<a name="VSCode-Non-Modular-Samples" />
Build Tool | Sample | Description
---------- | ------ | -----------
Java | [HelloFX project](IDE/VSCode/Non-Modular/Java) | Non-modular project to run from Visual Studio Code.
Maven | [HelloFX project](IDE/VSCode/Non-Modular/Maven) | Non-modular project to run from Visual Studio Code, with Maven.
Gradle | [HelloFX project](IDE/VSCode/Non-Modular/Gradle) | Non-modular project to run from Visual Studio Code, with Gradle.
License<a name="License" />
---
This project is licensed under [BSD 3-Clause](LICENSE).
Contributing<a name="Contributing" />
---
This project welcomes all types of contributions and suggestions.
We encourage you to report issues, create suggestions and submit
pull requests.
Contributions can be submitted via [pull requests](https://github.com/openjfx/samples/pulls/),
providing you have signed the [Gluon Individual Contributor License Agreement (CLA)](https://docs.google.com/forms/d/16aoFTmzs8lZTfiyrEm8YgMqMYaGQl0J8wA0VJE2LCCY).
Please go through the [list of issues](https://github.com/openjfx/samples/issues)
to make sure that you are not duplicating an issue.
| 0 |
dataArtisans/kafka-example | Simple example for reading and writing into Kafka | null | # kafka-example
Simple example for reading and writing into Kafka
# Set up Kafka
```bash
#get kafka
wget http://mirror.softaculous.com/apache//kafka/0.8.2.1/kafka_2.10-0.8.2.1.tgz
# unpack
tar xf kafka_2.10-0.8.2.1.tgz
cd kafka_2.10-0.8.2.1
# start zookeeper server
./bin/zookeeper-server-start.sh ./config/zookeeper.properties
# start broker
./bin/kafka-server-start.sh ./config/server.properties
# create topic "test"
./bin/kafka-topics.sh --create --topic test --zookeeper localhost:2181 --partitions 1 --replication-factor 1
# consume from the topic using the console consumer
./bin/kafka-console-consumer.sh --topic test --zookeeper localhost:2181
# produce something into the topic (write something and hit enter)
./bin/kafka-console-producer.sh --topic test --broker-list localhost:9092
```
Watch this YouTube video to see how this code is working with Kafka: https://www.youtube.com/watch?v=7RPQUsy4qOM
| 1 |
la-team/lightadmin-springboot | LightAdmin and Spring Boot integration example | null | LightAdmin and Spring Boot integration example
This application is a usual Spring Boot application to which LightAdmin was added.
Applying this to your own, existing project/application means:
* add LightAdmin, Tomcat and Jasper to your POM
* add some LightAdmin init code in (or around) your main code
It is really trivial.
Note: the ideal (and spring-boot like) solution would be to simply add a
`spring-boot-starter-lightadmin` to your POM. We are aware of this, but it's not
yet implemented.
| 1 |
jmgarridopaz/bluezone | An example application implementing Hexagonal Architecture | hexagonal-architecture java modules | # BlueZone
## An example application implementing Hexagonal Architecture
See the article series: [https://jmgarridopaz.github.io/content/hexagonalarchitecture-ig/intro.html]

__BlueZone__ allows car drivers to pay remotely for parking cars at zones in a city, instead of paying with coins using parking meters.
- Users (driver actors) of the application are _car drivers_ and _parking inspectors_.
- Car drivers will access the application using a Web UI (User Interface), and they can do the following:
- Query all the available rates in the city, in order to choose the one of the zone he wants to park the at.
- Purchase a parking ticket, paying an amount of money for parking a car at a zone, during a period of time. This period starts at current date-time. The ending date-time is calculated from the paid amount, according to the rate of the zone.
- Parking inspectors will access the application using a terminal with a CLI (Command Line Interface), and they can do the following:
- Check whether a car is illegally parked at a zone. This will happen if there is no valid ticket for the car and the rate of the zone. A ticket is valid if current date-time is between the starting and ending date-time of the ticket period.
- Driven actors needed by the application are: a rate repository, a ticket repository, and a payment service.
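The inspector's check above hinges on the hexagonal separation between the application core and its driven actors. A minimal sketch of that dependency inversion (all names here -- `ParkingCheck`, `TicketRepository` -- are illustrative, not taken from the repo, and ticket period start is omitted for brevity):

```java
import java.time.LocalDateTime;
import java.util.Optional;

public class ParkingCheck {

    // Driven port: the ticket repository the application core depends on.
    // Adapters (in-memory, JPA, ...) are plugged in from the outside.
    interface TicketRepository {
        // End of the ticket period for (car, rate), if a ticket exists.
        Optional<LocalDateTime> ticketEnd(String carPlate, String rateName);
    }

    private final TicketRepository tickets;

    public ParkingCheck(TicketRepository tickets) {
        this.tickets = tickets;
    }

    // A car is illegally parked if no ticket for (car, rate) is valid now.
    public boolean isIllegallyParked(String carPlate, String rateName, LocalDateTime now) {
        return tickets.ticketEnd(carPlate, rateName)
                .map(end -> end.isBefore(now))   // ticket exists but expired
                .orElse(true);                   // no ticket at all
    }

    public static void main(String[] args) {
        // An in-memory adapter is just a lambda implementing the port.
        ParkingCheck check = new ParkingCheck((car, rate) -> Optional.empty());
        System.out.println(check.isIllegallyParked("1234-ABC", "Blue", LocalDateTime.now())); // true
    }
}
```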
### Development environment:
- Java 11 (version "11.0.15.1" 2022-04-22 LTS)
- Maven 3.8.6
- IntelliJ IDEA 2021.3.3 (Community Edition)
- Ubuntu 20.04.4 LTS (Linux 5.13.0-40-generic)
### Instructions:
- Download and extract this github repo to a local directory on your computer ( `<bluezone_dir>` )
- Compile all modules (you need to do this just the first time before running):
~~~
cd <bluezone_dir>
./scripts/build.sh
~~~
- Select the adapters to be plugged-in at each port, editing the "ports-adapters.properties" file, located in the "<bluezone_dir>/scripts" directory.
- Run the entry point to the app:
~~~
cd <bluezone_dir>
./scripts/run_bluezone.sh
~~~
| 1 |
vladimirvivien/workbench | My code collection for testing new ideas, blog examples, etc | null | null | 0 |
tomsquest/java-agent-asm-javassist-sample | Sample maven project containing a Java agent and examples of bytecode manipulation with ASM and Javassist | null | # Sample Java Agent and Bytecode manipulation
Sample maven project containing a Java agent and examples of bytecode manipulation with ASM and Javassist.
See article on my blog : http://tomsquest.com/blog/2014/01/intro-java-agent-and-bytecode-manipulation/
## Build
```
$ # From the root dir
$ mvn package
```
## Run
```
$ # From the root dir
$ java -javaagent:agent/target/agent-0.1-SNAPSHOT.jar -jar other/target/other-0.1-SNAPSHOT.jar
```
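The `-javaagent` flag above makes the JVM invoke the agent's `premain` method before the application's `main`. A minimal sketch of such an entry point (the class and transformer names are illustrative, not the repo's actual code; the real agent class is declared via the `Premain-Class` attribute in the agent jar's manifest):

```java
import java.lang.instrument.Instrumentation;

public class SampleAgent {

    static String banner(String agentArgs) {
        return "Agent loaded with args: " + agentArgs;
    }

    // Called by the JVM before main() when started with -javaagent.
    public static void premain(String agentArgs, Instrumentation inst) {
        System.out.println(banner(agentArgs));
        // A real agent registers a ClassFileTransformer here that rewrites
        // bytecode with ASM or Javassist, e.g.:
        // inst.addTransformer(new MyAsmTransformer());  // hypothetical class
    }
}
```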
| 1 |
gemiusz/gatling-examples-maven-java | GeMi Gatling Examples in JAVA | gatling gatling-maven-plugin java load-testing maven performance-testing | GeMi Gatling Examples in JAVA [](https://github.com/gemiusz/gatling-examples-maven-java/actions/workflows/gatling_test_all_mine_after_push.yml?query=branch%3Amaster)
============================================
Gatling project in Java 21 showing working examples and solutions - Inspired by [Gatling Community](https://community.gatling.io)
<br><br>
It includes:
* [Maven Wrapper](https://maven.apache.org/wrapper/), so that you can immediately run Maven with `./mvnw` without having
to install it on your computer
* minimal `pom.xml`
* latest version of `io.gatling.highcharts:gatling-charts-highcharts`applied - [Maven Central Repository Search](https://search.maven.org/artifact/io.gatling.highcharts/gatling-charts-highcharts)
* latest version of `io.gatling:gatling-maven-plugin` applied - [Maven Central Repository Search](https://search.maven.org/artifact/io.gatling/gatling-maven-plugin)
* official examples: [ComputerDatabaseSimulation](src/test/java/computerdatabase/ComputerDatabaseSimulation.java), [BasicSimulation](src/test/java/computerdatabase/BasicSimulation.java), [AdvancedSimulationStep01](src/test/java/computerdatabase/advanced/AdvancedSimulationStep01.java), [AdvancedSimulationStep02](src/test/java/computerdatabase/advanced/AdvancedSimulationStep02.java), [AdvancedSimulationStep03](src/test/java/computerdatabase/advanced/AdvancedSimulationStep03.java), [AdvancedSimulationStep04](src/test/java/computerdatabase/advanced/AdvancedSimulationStep04.java), [AdvancedSimulationStep05](src/test/java/computerdatabase/advanced/AdvancedSimulationStep05.java)
* mine examples and solutions mostly based on cases from [Gatling Community](https://community.gatling.io)
* auto run using GitHub Actions ([push](https://github.com/gemiusz/gatling-examples-maven-java/actions/workflows/gatling_test_all_mine_after_push.yml), [pull](https://github.com/gemiusz/gatling-examples-maven-java/actions/workflows/gatling_test_all_mine_after_pull_request.yml)) of all mine examples after `push` and during `pull_request`
<br><br><br>
### Mine examples and solutions divided into cases:
* [**Case0001JMESPathSimulation**](src/test/java/pl/gemiusz/Case0001JMESPathSimulation.java) => [JmesPath is not finding a JSON Object](https://community.gatling.io/t/jmespath-is-not-finding-a-json-object/6995)
* [**Case0002PDFdownloadSimulation**](src/test/java/pl/gemiusz/Case0002PDFdownloadSimulation.java) => [How to ensure a pdf is downloaded during a loadtest?](https://community.gatling.io/t/how-to-ensure-a-pdf-is-downloaded-during-a-loadtest/3927)
* [**Case0003UnzipJsonForFeederSimulation**](src/test/java/pl/gemiusz/Case0003UnzipJsonForFeederSimulation.java) => [Unzipping json file for feeders](https://community.gatling.io/t/unzipping-json-file-for-feeders/6996)
* [**Case0004StatusCodeSimulation**](src/test/java/pl/gemiusz/Case0004StatusCodeSimulation.java) => [withDefault Check Transforming feature](https://community.gatling.io/t/withdefault-check-transforming-feature/7008)
* [**Case0005UUIDfeederSimulation**](src/test/java/pl/gemiusz/Case0005UUIDfeederSimulation.java) => [Is there an EL function to generate uuid using java in gatling](https://community.gatling.io/t/is-there-an-el-function-to-generate-uuid-using-java-in-gatling/7028)
* [**Case0006CommandLineParametersSimulation**](src/test/java/pl/gemiusz/Case0006CommandLineParametersSimulation.java) => [Cannot Grab Command Line Arguments](https://community.gatling.io/t/cannot-grab-command-line-arguments/7025) & [Assertion in parameter](https://community.gatling.io/t/assertion-in-parameter/7970)
* [**Case0007AsyncReqSimulation**](src/test/java/pl/gemiusz/Case0007AsyncReqSimulation.java) - using `repeat` => [How to simulate an asynchronous request executing many times?](https://community.gatling.io/t/how-to-simulate-an-asynchronous-request-executing-many-times/7031)
* [**Case0008AsyncReqResourcesSimulation**](src/test/java/pl/gemiusz/Case0008AsyncReqResourcesSimulation.java) - using `resources` => [How to simulate an asynchronous request executing many times?](https://community.gatling.io/t/how-to-simulate-an-asynchronous-request-executing-many-times/7031)
* [**Case0009SessionValuesSimulation**](src/test/java/pl/gemiusz/Case0009SessionValuesSimulation.java) => [Dynamically generating param values for an API and setting it using session](https://community.gatling.io/t/dynamically-generating-param-values-for-an-api-and-setting-it-using-session/7041)
* [**Case0010JsonEditVariableSimulation**](src/test/java/pl/gemiusz/Case0010JsonEditVariableSimulation.java) => [Java - edit variable received in JSON](https://community.gatling.io/t/java-edit-variable-received-in-json/7046)
* [**Case0011ProxyCommandLineParametersSimulation**](src/test/java/pl/gemiusz/Case0011ProxyCommandLineParametersSimulation.java) => [Gatling proxy configuration from command line](https://community.gatling.io/t/gatling-proxy-configuration-from-command-line/7072)
* [**Case0012DenySomeResourcesSimulation**](src/test/java/pl/gemiusz/Case0012DenySomeResourcesSimulation.java) => [Gatling Java - HttpProtocolBuilder DenyList](https://community.gatling.io/t/gatling-java-httpprotocolbuilder-denylist/7099)
* [**Case0013RequestBeforeSimulation**](src/test/java/pl/gemiusz/Case0013RequestBeforeSimulation.java) => [Best way of calling another API before the performance test](https://community.gatling.io/t/best-way-of-calling-another-api-before-the-performance-test/7116)
* [**Case0014Loop5times1RPSand3sPauseSimulation**](src/test/java/pl/gemiusz/Case0014Loop5times1RPSand3sPauseSimulation.java) => [Emulate load with few requests simultaneously that repeated after some period of time](https://community.gatling.io/t/emulate-load-with-few-requests-simultaneously-that-repeated-after-some-period-of-time/7155)
* [**Case0015UUIDfeederTwoRecordsAtTheSameTimeSimulation**](src/test/java/pl/gemiusz/Case0015UUIDfeederTwoRecordsAtTheSameTimeSimulation.java) => [Feed multiple n-rows from CSV to json payload](https://community.gatling.io/t/feed-multiple-n-rows-from-csv-to-json-payload/7160)
* [**Case0016ScenarioDurationSimulation**](src/test/java/pl/gemiusz/Case0016ScenarioDurationSimulation.java) => [How to get the duration of a specific scnario?](https://community.gatling.io/t/how-to-get-the-duration-of-a-specific-scnario/7220)
* [**Case0017ForeachAfterForeachSimulation**](src/test/java/pl/gemiusz/Case0017ForeachAfterForeachSimulation.java) => [Foreach loop after a foreach loop does not execute](https://community.gatling.io/t/foreach-loop-after-a-foreach-loop-does-not-execute/7277)
* [**Case0018GetTokenWhenStatus401Simulation**](src/test/java/pl/gemiusz/Case0018GetTokenWhenStatus401Simulation.java) => [Using a .doIf when token expired and need refresh](https://community.gatling.io/t/using-a-doif-when-token-expired-and-need-refresh/7303)
* [**Case0019WhenStatusCode400ThenFailSimulation**](src/test/java/pl/gemiusz/Case0019WhenStatusCode400ThenFailSimulation.java) => [Assertion on the HTTP status code](https://community.gatling.io/t/assertion-on-the-http-status-code/7355)
* [**Case0020ExitBlockOnFailSimulation**](src/test/java/pl/gemiusz/Case0020ExitBlockOnFailSimulation.java) => [Stop the current Iteration/loop and start the next Iteration/loop when request failed](https://community.gatling.io/t/stop-the-current-iteration-loop-and-start-the-next-iteration-loop-when-request-failed/7492)
* [**Case0021CheckResourcesResponseTimeSimulation**](src/test/java/pl/gemiusz/Case0021CheckResourcesResponseTimeSimulation.java) => [Track which queries are executed within specified time frame and which outside it](https://community.gatling.io/t/track-which-queries-are-executed-within-specified-time-frame-and-which-outside-it/7910)
* [**Case0022SetOrRefreshTokenSimulation**](src/test/java/pl/gemiusz/Case0022SetOrRefreshTokenSimulation.java) => [Token Refresh - Java - Example](https://community.gatling.io/t/token-refresh-java-example/7935)
* [**Case0023foreachFromUUIDfeederFiveRecordsAtTheSameTimeSimulation**](src/test/java/pl/gemiusz/Case0023foreachFromUUIDfeederFiveRecordsAtTheSameTimeSimulation.java) => [Feeder reading multiple lines and foreach](https://community.gatling.io/t/feeder-reading-multiple-lines-and-foreach/7947)
* [**Case0024IterationLoopCondition**](src/test/java/pl/gemiusz/Case0024IterationLoopCondition.java) => [Iteration and Looping conditions](https://community.gatling.io/t/iteration-and-looping-conditions/7984)
* [**Case0025JSONfeederRandomSimulation**](src/test/java/pl/gemiusz/Case0025JSONfeederRandomSimulation.java) => [JSON feeder with nested arrays; how to randomly select a record from the parent](https://community.gatling.io/t/json-feeder-with-nested-arrays-how-to-randomly-select-a-record-from-the-parent/8059)
* [**Case0026ResponseHeaderRegexSimulation**](src/test/java/pl/gemiusz/Case0026ResponseHeaderRegexSimulation.java) => [How to capture request params generated dynamically from GET request to correlate next POST request](https://community.gatling.io/t/how-to-capture-request-params-generated-dynamically-from-get-request-to-correlate-next-post-request/8276)
| 0 |
cescoffier/vertx-microservices-examples | Vert.x Microservices examples | null | # Vert.x Microservices examples
This repository demonstrates how to build two common microservice patterns with vert.x:
1. aggregation
2. pipeline
It uses vert.x service discovery, a circuit breaker and, if you run them on Openshift Origin, Kubernetes service discovery.
In an aggregation, a microservice aggregates the results from other microservices. In this example, A calls B, C
and D and returns the aggregated answer to the client.
In a pipeline, a microservice calls another one, which calls another one, and so on. In this example, A calls B, B calls C
and C calls D. The client gets the whole result.
In these examples, the microservices communicate using HTTP. However, this is not required; you can use asynchronous
service proxies, events, SOAP or whatever protocol you like.
## Run the demos locally
First, you need to build the projects, with:
```
mvn clean install
```
Be aware that the microservices are going to open the following ports: 8080 (A), 8081 (B), 8082 (C) and 8083 (D). This is
configurable in the configuration files.
### Aggregation example
First go into the `aggregation-http` directory, and open 4 terminals (one for each microservice):
```
cd aggregation-http
```
Then, launch the microservices:
```
cd A
java -Djava.net.preferIPv4Stack=true -jar target/aggregation-http-A-1.0-SNAPSHOT-fat.jar -cluster -cp ../etc -conf src/main/config/config.json
```
```
cd B
java -Djava.net.preferIPv4Stack=true -jar target/aggregation-http-B-1.0-SNAPSHOT-fat.jar -cluster -cp ../etc -conf src/main/config/config.json
```
```
cd C
java -Djava.net.preferIPv4Stack=true -jar target/aggregation-http-C-1.0-SNAPSHOT-fat.jar -cluster -cp ../etc -conf src/main/config/config.json
```
```
cd D
java -Djava.net.preferIPv4Stack=true -jar target/aggregation-http-D-1.0-SNAPSHOT-fat.jar -cluster -cp ../etc -conf src/main/config/config.json
```
Let's analyse these command lines:
* they force IPv4 just to avoid some networking issues
* they launch the microservice (a vert.x application) using the _fat jar_ built during the Maven build
* the vert.x application is started in cluster mode and gets some configuration data
The cluster is using Hazelcast and is configured in the `../etc/cluster.xml` file. By default it uses `127.0.0.1`.
Once everything is launched, open a browser at `http://localhost:8080/assets/index.html`. You should get a web page inviting you to
submit a form that will execute the application:

If everything is launched, you should get: `{"A":"Hello vert.x","B":"Hola vert.x","C":"No service available (no
record)","D":"Aloha vert.x"}`.
Now shut down one of the applications (B, C or D) by hitting `CTRL+C` in the corresponding terminal. Re-submit the form. You
should get: `{"A":"Hello vert.x","B":"Hola vert.x","C":"No service available (no record)","D":"Aloha vert.x"}` or
something similar.
When a microservice is shut down, it no longer replies to requests. The circuit breaker intercepts the error
and executes a fallback. If you restart the microservice, the output should be back to _normal_. This is because the circuit
breaker periodically tries to reset its state and checks whether or not things are back to _normal_.
### Pipeline example
First go into the `pipeline-http` directory, and open 4 terminals (one for each microservice):
```
cd pipeline-http
```
Then, launch the microservices:
```
cd A
java -Djava.net.preferIPv4Stack=true -jar target/pipeline-http-A-1.0-SNAPSHOT-fat.jar -cluster -cp ../etc -conf src/main/config/config.json
```
```
cd B
java -Djava.net.preferIPv4Stack=true -jar target/pipeline-http-B-1.0-SNAPSHOT-fat.jar -cluster -cp ../etc -conf src/main/config/config.json
```
```
cd C
java -Djava.net.preferIPv4Stack=true -jar target/pipeline-http-C-1.0-SNAPSHOT-fat.jar -cluster -cp ../etc -conf src/main/config/config.json
```
```
cd D
java -Djava.net.preferIPv4Stack=true -jar target/pipeline-http-D-1.0-SNAPSHOT-fat.jar -cluster -cp ../etc -conf src/main/config/config.json
```
Let's analyse these command lines:
* they force IPv4 just to avoid some networking issues
* they launch the microservice (a vert.x application) using the _fat jar_ built during the Maven build
* the vert.x application is started in cluster mode and gets some configuration data
The cluster is using Hazelcast and is configured in the `../etc/cluster.xml` file. By default it uses `127.0.0.1`.
Once everything is launched, open a browser at `http://localhost:8080/assets/index.html`. You should get a web page inviting you to
submit a form that will execute the application:

If everything is launched, you should get: `{"D":"Aloha vert.x","C":"Olá vert.x","B":"Hola vert.x","A":"Hello vert.x"}`.
Now shut down one of the applications (B, C or D) by hitting `CTRL+C` in the corresponding terminal. Re-submit the form. You
should get: `{"C":"No service available (fallback)","B":"Hola vert.x","A":"Hello vert.x"}` or
something similar.
When a microservice is shut down, it no longer replies to requests. The circuit breaker intercepts the error
and executes a fallback. If you restart the microservice, the output should be back to _normal_. This is because the circuit
breaker periodically tries to reset its state and checks whether or not things are back to _normal_.
## Run the demos in Openshift Origin (v3)
These demos can also be executed in Openshift.
### Prerequisites
You will need to have Openshift on your machine to run the demo.
Here is how to start Openshift using Docker on Linux:
```
docker rm origin
sudo docker run -d --name "origin" \
--privileged --pid=host --net=host \
-v /:/rootfs:ro -v /var/run:/var/run:rw -v /sys:/sys -v /var/lib/docker:/var/lib/docker:rw \
-v /var/lib/origin/openshift.local.volumes:/var/lib/origin/openshift.local.volumes \
openshift/origin start
docker logs -f origin
```
You will also need the `oc` command line tool. Download it from the Openshift web site.
### Login to openshift
Once launched, execute:
```
oc login
# credentials are admin / admin
```
Also connect with your browser to https://0.0.0.0:8443. The certificates are not valid; just force the access. You
should arrive at a page similar to this one:

### Aggregation example
**Step 1: Project creation**
You first need to create the Openshift project, and give some permissions:
```
oc new-project vertx-microservice-example-aggregation-http
oc policy add-role-to-user admin admin -n vertx-microservice-example-aggregation-http
oc policy add-role-to-group view system:serviceaccounts -n vertx-microservice-example-aggregation-http
```
Do **not** change the project name; the application's configuration is made for this name. See details below.
**Step 2: Run the microservices**
First go into the `aggregation-http` directory. Then, for each project (A, B, C and D), run:
```
mvn clean package docker:build fabric8:json fabric8:apply -Popenshift
```
It uses the Docker Maven plugin and the Fabric8 Maven plugin to build a Docker image containing the microservice. It
pushes the image to the Docker registry and creates the application in Openshift (using Kubernetes).
When you have deployed all components, you should have 4 pods in openshift, one for each service:

To access the application page, you need to get the IP of the service `A`. To get this IP, click on the service `A`
and get the IP in the right side panel:

Once you have the IP, open the page: http://$ip/assets/index.html
You can use the application just like the _local_ version. Then, go back to the Openshift overview page and scale down one
of the service pods (B, C or D):

Try the application again; the circuit breaker should detect the failure and use a fallback. If you restore the
destroyed pod, the application should act _normally_ again.
**Kubernetes and service discovery**
When running in Openshift, the Kubernetes services are imported in the vert.x discovery service, so they are
retrieved as _regular_ services by the application.
### Pipeline example
**Step 1: Project creation**
You first need to create the Openshift project, and give some permissions:
```
oc new-project vertx-microservice-example-pipeline-http
oc policy add-role-to-user admin admin -n vertx-microservice-example-pipeline-http
oc policy add-role-to-group view system:serviceaccounts -n vertx-microservice-example-pipeline-http
```
Do **not** change the project name; the application's configuration is made for this name. See details below.
**Step 2: Run the microservices**
First go into the `pipeline-http` directory. Then, for each project (A, B, C and D), run:
```
mvn clean package docker:build fabric8:json fabric8:apply -Popenshift
```
It uses the Docker Maven plugin and the Fabric8 Maven plugin to build a Docker image containing the microservice. It
pushes the image to the Docker registry and creates the application in Openshift (using Kubernetes).
When you have deployed all components, you should have 4 pods in openshift, one for each service:

To access the application page, you need to get the IP of the service `A`. To get this IP, click on the service `A`
and get the IP in the right side panel:

Once you have the IP, open the page: http://$ip/assets/index.html
You can use the application just like the _local_ version. Then, go back to the Openshift overview page and scale down one
of the service pods (B, C or D):

Try the application again; the circuit breaker should detect the failure and use a fallback. If you restore the
destroyed pod, the application should act _normally_ again.
### Trick to shutdown everything
To shut down Openshift and the deployed pods, use:
```
# On bash
docker stop `docker ps -qa`
docker rm -f `docker ps -qa`
# Fish
docker stop (docker ps -qa)
docker rm -f (docker ps -qa)
```
| 0 |
TheThinMatrix/OpenGL-Animation | A simple example of skeletal animation using OpenGL (and LWJGL). | null | A simple example of skeletal animation using OpenGL (and LWJGL).
To get the code working in an Eclipse project you need to set up a project with the lwjgl, lwjgl_utils, and PNGDecoder jars added to the build path, along with the relevant LWJGL natives.
In case you’ve forgotten how to set up a LWJGL project, you can find a tutorial on how to do that here: https://youtu.be/Jdkq-aSFEA0
You can download PNGDecoder here: http://twl.l33tlabs.org/dist/PNGDecoder.jar
| 1 |
OpenSharding/opensharding-spi-impl-example | ShardingSphere spi-impl example | null | # sharding-spi-impl-example
| 1 |
lievendoclo/cleanarch | Java example of the Clean Architecture | null | 
# Clean Architecture in Java
Some time ago Robert C. Martin published an interesting article about how
software architecture should be designed in such a way that technological
decisions can be deferred to a much later stage and to the edge of a system.
The concept of Clean Architecture was born.
## Why Clean Architecture?
> The center of your application is not the database. Nor is it one or more of the frameworks you may be using. **The center of your application is the use cases of your application** - _Unclebob_ ([source](https://blog.8thlight.com/uncle-bob/2012/05/15/NODB.html "NODB"))
Clean architecture helps us solve, or at least mitigate, these common problems with architecture:
* **Decisions are taken too early**, often at the beginning of a project, when we know the least about the problem that we have to solve
* **It's hard to change**, so when we discover new requirements we have to decide if we want to hack them in or go through an expensive and painful re-design. We all know which one usually wins. _The best architectures are the ones that allow us to defer commitment to a particular solution and let us change our mind_
* **It's centered around frameworks**. Frameworks are tools to be used, not architectures to be conformed to. Frameworks often require commitments from you, but they don’t commit to you. They can evolve in different directions, and then you’ll be stuck following their rules and quirks
* **It's centered around the database**. We often think about the database first, and then create a CRUD system around it. We end up using the database objects everywhere and treat everything in terms of tables, rows and columns
* **We focus on technical aspects** and when asked about our architecture we say things like “it’s servlets running in tomcat with an oracle db using spring”
* **It's hard to find things** which makes every change longer and more painful
* **Business logic is spread everywhere**, scattered across many layers, so when checking how something works our only option is to debug the whole codebase. Even worse, often it's duplicated in multiple places
* **Forces/Encourages slow, heavy tests**. Often our only choice for tests is to go through the GUI, either because the GUI has a lot of logic, or because the architecture doesn't allow us to do otherwise. This makes tests slow to run, heavy and brittle. It results in people not running them and the build being broken often
* **Infrequent deploys** because it's hard to make changes without breaking existing functionalities. People resort to long-lived feature branches that only get integrated at the end and result in big releases, rather than small incremental ones
Clean architecture gives us all these benefits:
* **Effective testing strategy** that follows the [testing pyramid](http://martinfowler.com/bliki/TestPyramid.html) and gives us a fast and reliable build
* **Frameworks are isolated** in individual modules so that when (not if) we change our mind we only have to change one place, with the rest of the app not even knowing about it
* **Independent from Database**, which is treated just like any other data provider. Our app has real use cases rather than being a CRUD system
* **Screaming architecture** a.k.a. it screams its intended usage. When you look at the package structure you get a feel for what the application does rather than seeing technical details
* **All business logic is in a use case** so it's easy to find and it's not duplicated anywhere else
* **Hard to do the wrong thing** because modules enforce compilation dependencies. If you try to use something that you're not meant to, the app doesn't compile
* **We're always ready to deploy** by leaving the wiring up of the objects for last or by using feature flags, so we get all the benefits of continuous integration (no need for feature branches)
* **Swarming on stories** so that different pairs can easily work on the same story at the same time to complete it quicker
* **Good monolith** with clear use cases that you can split into microservices later on, once you've learnt more about them
Of course, it comes at a cost:
* **Perceived duplication of code**. Entities might be represented differently when used in business logic, when dealing with the database and when presenting them in a json format. You might feel like you're duplicating code, but you're actually favouring _decoupling over DRY_
* **You need interesting business logic** to "justify" the structure. If all you do in your use case is a one-line method to read or save from a database, then maybe you can get away with something simpler
## Graphical representation of Clean Architecture


## What this implementation is trying to achieve
This implementation is far from perfect. What I'm trying to do here is to
provide code that adheres as close as possible to the tenets of Clean Architecture.
The package names have been chosen to represent the concepts of Clean Architecture
so that it becomes clear what is being implemented.
## Where I don't agree
Well, there is one part where I don't agree. I chose not to have the Presenter extend
a Boundary, because I believe this is a concept that just doesn't implement well.
Instead, my Boundary objects that return something accept a Consumer. This consumer
can then be implemented by a Presenter (in my case Presenter implementations are
just stateful Consumer implementations). This feels much more logical to me and
provides flexibility towards asynchronicity (queueing, reactive) that would otherwise leak into
the interfaces.
## Acknowledgements
https://github.com/mattia-battiston/clean-architecture-example | 1 |
nicolasgramlich/AndEngineRobotiumExtensionExample | AndEngine - Robotium Extension Example | null | null | 1 |
JJBRT/advanced-java-tutorials | A collection of examples about advanced Java programming | advanced collection example examples java tutorial | # Advanced Java tutorials
<a href="https://jjbrt.github.io/advanced-java-tutorials/">
<img src="https://raw.githubusercontent.com/JJBRT/advanced-java-tutorials/master/docs/Java-logo.png" alt="Java-logo.png" height="180px" align="right"/>
</a>
A collection of examples about advanced Java programming. Here you will find tutorials about:
* [how to create your own dependency injection framework](https://jim-jerald-burton.medium.com/how-to-create-your-own-dependency-injection-framework-in-java-12a6e52aeff9)
* [how to make applications created with old Java versions work on Java 9 and later versions](https://dev.to/bw_software/making-applications-created-with-old-java-versions-work-on-java-9-and-later-versions-19ld)
* [how to make reflection fully work on Java 9 and later](https://jim-jerald-burton.medium.com/making-reflection-fully-work-on-java-9-and-later-767320344d1d)
* [how to export all modules to all modules at runtime on Java 9 and later](https://jim-jerald-burton.medium.com/exporting-all-modules-to-all-modules-at-runtime-on-java-9-and-later-3517eb479701)
* [how to iterate collections and arrays in parallel by setting thread priority](https://dev.to/bw_software/iterating-collections-and-arrays-in-parallel-5acg)
* [how to configure host name resolution to use a universal custom host name resolver](https://dev.to/jjbrt/how-to-configure-hostname-resolution-to-use-a-universal-custom-hostname-resolver-in-java-14p0)
* [how to query and validate a JSON document](https://dev.to/jjbrt/querying-and-validating-a-json-document-in-java-323i)
<br/>
**Any instructions to make each project work are indicated in the possible README.md file inside it**.
| 0 |
jsvazic/GAHelloWorld | A simple example of a Genetic Algorithm that generates "Hello world!" | null | # Genetic Algorithm Hello World!
This is a simple project intended to showcase genetic algorithms with a well
known example for all new developers; namely the classic "Hello, world!"
example!
## Overview
The application simply "evolves" the string "Hello, world!" from a population
of random strings. It is intended to be a gentle introduction into the world
of genetic algorithms, using Java, Clojure, Common Lisp, Haskell,
Scala, Python and OCaml. The programs themselves are really quite
simple, and more complex topics like crossover selection using
roulette wheel algorithms, insertion/deletion mutation, etc, have not
been included.
### History
I've been working with Genetic Algorithms for a little while now and I
stumbled across a
[C++ implementation](http://www.generation5.org/content/2003/gahelloworld.asp)
a while ago. I decided to bring it back to life and migrate it to Java with
my own enhancements. This is far from ideal code, but it was designed to be
a gentle introduction for newcomers to genetic algorithms.
### But why the <i>net.auxesia</i> package/namespace?
[Auxesia](http://www.theoi.com/Ouranios/HoraAuxesia.html) is the greek
goddess of spring growth, so when dealing with evolutionary programming like
genetic algorithms, the name just seemed to fit. That and I was trying to be
witty with my naming, and [Dalek](http://en.wikipedia.org/wiki/Dalek) just
didn't seem right.
## Architecture
The overall architecture for each language is the same. The genetic algorithm
is broken up between two logical units: a <i>Chromosome</i> and a
<i>Population</i>. In some cases a separate driver is also added, but this is
just to keep the logic for the other two components separate and clean.
### Population
The Population has 3 key attributes (a crossover ratio, an
elitism ratio and a mutation ratio), along with a collection of Chromosome
instances, up to a pre-defined population size. There is also an evolve()
function that is used to "evolve" the members of the population.
#### Evolution
The evolution algorithm is simple in that it uses the various ratios during
the evolution process. First, the elitism ratio is used to copy over a
certain number of chromosomes unchanged to the new generation. The remaining
chromosomes are then either mated with other chromosomes in the population, or
copied over directly, depending on the crossover ratio. In either case, each
of these chromosomes is subject to random mutation, which is based on the
mutation ratio mentioned earlier.
The crossover algorithm used for mating is a very basic tournament selection
algorithm. See
[Tournament Selection](http://en.wikipedia.org/wiki/Tournament_selection) for
more details.
### Chromosome
Each chromosome has a gene that represents one possible solution to the given
problem. In our case, each gene represents a string that strives to match
"Hello, world!". Each chromosome also has a fitness attribute that is a
measure of how close the gene is to the target of "Hello, world!". This
measurement is just a simple sum of the absolute differences of each character
in the gene to the corresponding character in the target string above. Each
gene is simply a string of 13 ASCII characters from ASCII 32 to ASCII 121
inclusive.
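For illustration, the fitness measure described above can be sketched in Python (one of the languages this project ships). The names `TARGET` and `fitness` are chosen here and are not necessarily those used in the repository:

```python
TARGET = "Hello, world!"

def fitness(gene: str) -> int:
    # Sum of the absolute character-code differences between the gene
    # and the target string; a perfect match scores 0 (lower is better).
    return sum(abs(ord(g) - ord(t)) for g, t in zip(gene, TARGET))
```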
The functions operating on Chromosome include <i>mutate()</i> and
<i>mate()</i>, amongst others as necessary for the various language
implementations.
#### mutate()
The mutate() function will randomly replace one character in the given gene.
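A minimal sketch of this mutation, using the gene alphabet described above (ASCII 32 to 121 inclusive). The names are illustrative, not the repository's actual API:

```python
import random

# Allowed gene alphabet: ASCII 32 to 121 inclusive, as described above.
CHARS = [chr(c) for c in range(32, 122)]

def mutate(gene: str) -> str:
    # Replace one randomly chosen character with a random allowed character.
    i = random.randrange(len(gene))
    return gene[:i] + random.choice(CHARS) + gene[i + 1:]
```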
#### mate()
The mate() function will take another chromosome instance and return two new
chromosome instances. The algorithm is as follows:
1. Select a random pivot point for the genes.
2. For the first child, take the characters before the pivot from the first
parent, then the characters from the pivot to the end of the gene from the second
parent.
3. For the second child, repeat the same process, but use the characters before the
pivot from the second parent and the remaining characters from the
first parent.
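The pivot-based mating above amounts to single-point crossover. A Python sketch of the idea (the function name and signature are illustrative, not the repository's exact API):

```python
import random

def mate(parent1: str, parent2: str) -> tuple:
    # Single-point crossover: pick a random pivot, then swap the tails
    # of the two parent genes to produce two children.
    pivot = random.randrange(len(parent1))
    child1 = parent1[:pivot] + parent2[pivot:]
    child2 = parent2[:pivot] + parent1[pivot:]
    return child1, child2
```

Note that between them, the two children always contain exactly the characters of the two parents, just recombined around the pivot.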
### Driver
The driver code simply instantiates a new Population instance with a set of
values for the population size, crossover ratio, elitism ratio and mutation
ratio, as well as a maximum number of generations to create before exiting
the simulation, in order to prevent a potential infinite execution.
Depending on the implementation, this code may reside in its own source file.
## Usage
Take a look at the README files in:
* [Java](GAHelloWorld/tree/master/java)
* [Clojure](GAHelloWorld/tree/master/clojure)
* [Common Lisp](GAHelloWorld/tree/master/common-lisp)
* [Scala](GAHelloWorld/tree/master/scala)
* [Python](GAHelloWorld/tree/master/python)
* [OCaml](GAHelloWorld/tree/master/ocaml)
* [PHP](GAHelloWorld/tree/master/php)
for the specifics for each language.
### Unit tests
Each source implementation has unit tests to go along with the source code.
## Copyright and License
The MIT License
Copyright © 2011 John Svazic
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
| 1 |
vrudas/spring-framework-examples | An educational project with Spring Framework examples. Used for lectures at courses. | java jwt-authentication oauth2-client remember-me spring spring-boot spring-framework spring-framework-5 spring-guides spring-mvc spring-security | <p align="center">
<img src="https://github.com/vrudas/spring-framework-examples/assets/8240025/9d664274-e5b0-431b-8c09-9c29d9b92fa0" alt="Spring Logo" width="256"/>
</p>
<h1 align="center">
Spring Framework Examples
</h1>
<p align="center">
An educational project with Spring Framework examples. Used for lectures at courses.
</p>
## Related links
- [Spring Framework Documentation](https://docs.spring.io/spring-framework/docs/current/reference/html/)
- [Spring Core Technologies](https://docs.spring.io/spring-framework/docs/current/reference/html/core.html)
- [Spring Guides](https://spring.io/guides)
- [Spring Quickstart Guide](https://spring.io/quickstart)
- [Properties with Spring and Spring Boot](https://www.baeldung.com/properties-with-spring)
- [An Intro to the Spring DispatcherServlet](https://www.baeldung.com/spring-dispatcherservlet)
- [Design Pattern - Front Controller Pattern](https://www.tutorialspoint.com/design_pattern/front_controller_pattern.htm)
- [Introduction to Using Thymeleaf in Spring](https://www.baeldung.com/thymeleaf-in-spring-mvc)
- [Servlet Filter and Handler Interceptor](https://medium.com/techno101/servlet-filter-and-handler-interceptor-spring-boot-implementation-b58d397d9dbd)
- [Error Handling for REST with Spring](https://www.baeldung.com/exception-handling-for-rest-with-spring)
- [Spring 5, Embedded Tomcat 8, and Gradle: a Quick Tutorial](https://auth0.com/blog/spring-5-embedded-tomcat-8-gradle-tutorial/)
- [Spring Boot Reference Documentation](https://docs.spring.io/spring-boot/docs/current/reference/htmlsingle/)
## Spring Security 6 Migration links
- [Migrating to 6.0](https://docs.spring.io/spring-security/reference/migration/index.html)
- [Spring Security without the WebSecurityConfigurerAdapter](https://spring.io/blog/2022/02/21/spring-security-without-the-websecurityconfigureradapter)
## Important information
- The module [example-17-authorization](example-17-authorization) has an issue https://github.com/vrudas/spring-framework-examples/issues/101 that was caused because of update to Spring Security 6
## Example 21 - JWT Instructions
Please note that IntelliJ IDEA [HTTP Client](https://blog.jetbrains.com/idea/2020/09/at-your-request-use-the-http-client-in-intellij-idea-for-spring-boot-restful-web-services/) was used to perform requests in code snippets
Please follow the steps to perform a demo of how to get a JWT token for an existing user:
- Perform login action
```HTTP request
POST http://localhost:8080/login?username=user&password=user
Accept: application/json
```
- Extract the generated Bearer token from a response header `Authorization: Bearer <token>`
```
HTTP/1.1 200
Vary: Origin
Vary: Access-Control-Request-Method
Vary: Access-Control-Request-Headers
Authorization: Bearer eyJhbGciOiJIUzUxMiJ9.eyJzdWIiOiJ1c2VyIiwiZXhwIjoxNjc4ODM2OTY1fQ.Afagk8no-r2kUiDOdtjWMT06gYPHkrhCoOSoK5_X6k8BC8Lr6k5rB-9gyoE72-lkd0rx1sEPET-3Uf7KP-7BrQ
X-Content-Type-Options: nosniff
X-XSS-Protection: 0
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Pragma: no-cache
Expires: 0
X-Frame-Options: DENY
Content-Length: 0
Date: Tue, 14 Mar 2023 23:35:05 GMT
Keep-Alive: timeout=60
Connection: keep-alive
<Response body is empty>
```
- Use the generated Bearer token to perform the call to an endpoint by providing the `Authorization: Bearer <token>` header
```HTTP request
GET http://localhost:8080/users/me
Accept: application/json
Authorization: Bearer eyJhbGciOiJIUzUxMiJ9.eyJzdWIiOiJ1c2VyIiwiZXhwIjoxNjc4ODM1NjE5fQ.cRZ1ob4XZfG5RnU0jl2kdPihc9Ln-BlEOe7hbuwZJWp-UuQSGukI_57pWrBcdaCWPN-8luCF08YWU74tUErOFg
```
<h2 align="center">
Contribution statistic
</h2>
<p align="center">
<img src="https://repobeats.axiom.co/api/embed/ea96de66d99f0b7879faf1dd630824e3b2339f78.svg" alt="Repobeats analytics image"/>
</p>
| 0 |
eliostvs/clean-architecture-delivery-example | A example of clean architecture in Java 8 and Spring Boot 2.0 | clean-architecture java java-8 jwt-authentication spring-boot spring-security | [](https://travis-ci.org/eliostvs/clean-architecture-delivery-example)
# Clean Architecture Example
## Description
The architecture of the project follows the principles of Clean Architecture. It is a simple food delivery app. One can list stores, cuisines and products, and create food orders. JWT is used for authentication.
## Running
`./gradlew bootRun`
## Architecture
The project consists of 3 packages: *core*, *data* and *presenter*.
### *core* package
This module contains the domain entities and use cases.
This module contains the business rules that are essential for our application.
In this module, gateways for the repositories are also being defined.
There are no dependencies to frameworks and/or libraries and could be extracted to its own module.
### *data* package
### *presenter* package
## Diagram
Here is a flow diagram of the payment of an order.

| 1 |
mstahv/spring-boot-spatial-example | A Spring Boot example editing spatial data in MySQL | null | # A Spring Boot example editing spatial data in relational database

This is a small example app that shows how one can use:
* [Spring Boot](http://projects.spring.io/spring-boot/) and [Spring Data](https://spring.io/projects/spring-data)
* Latest [Hibernate](http://hibernate.org/orm/) with spatial features. At the application API, only standard JPA stuff (and Spring Data) is used.
* ~~The example also uses [QueryDSL](http://www.querydsl.com) spatial query as an example. QueryDSL contain excellent support for spatial types.~~ QueryDSL example replaced with plain JPQL(with Hibernate spatial extensions) as the latest version is not compatible with latest JTS/Hibernate. See https://github.com/querydsl/querydsl/issues/2404. If you want to see the example of QueryDSL usage in this setup, check out a bit older version of the example.
* Relational database, like PostGIS (default, Postgres + extensions), H2GIS or MySQL, which supports basic spatial types. The example automatically launches a Docker image with PostGIS for the demo using Testcontainers, if run via the TestApp class in src/test/java/org/vaadin/example. Note that Hibernate might need tiny adjustments for other databases.
* [Vaadin](https://vaadin.com/) and the [MapLibreGL add-on](https://vaadin.com/directory/component/maplibregl--add-on) to build the UI layer. The MapLibre add-on is a Vaadin wrapper for the [MapLibre GL JS](https://github.com/maplibre/maplibre-gl-js) slippy map widget and [mapbox-gl-draw](https://github.com/mapbox/mapbox-gl-draw). It provides Vaadin field implementations that make it dead simple to edit [JTS](https://locationtech.github.io/jts/) data types directly from the JPA entities.
* As base layer for maps, crisp vector format [OpenStreetMap](https://www.openstreetmap.org/) data via [MapTiler](https://www.maptiler.com) is used, but naturally any common background map can be used.
...to build a full-stack web app handling spatial data efficiently.
As the data is stored in an optimized form in the DB, it is possible to create efficient queries to the backend and, for example, only show features relevant to the current viewport of the map, or do whatever else spatial queries allow.
Enjoy!
| 1 |
microsoft/flink-on-azure | Examples of Flink on Azure | azure azuredatafactory azuredatalakegen2 azuresqldb big-data cdc client-go flink flink-examples flink-stream-processing golang hdfs helm java kubeflow kubernetes tensorflow | # Flink on Azure
Apache Flink is a framework and distributed processing engine for stateful computations over unbounded and bounded data streams. Flink has been designed to run in all common cluster environments, perform computations at in-memory speed and at any scale.
This repo provides examples of Flink integration with Azure, like Azure Kubernetes, Azure SQL Server, Azure Data Factory, etc.
## Examples
The table below outlines the examples in this repository.
| Example | Description | Pipeline Status |
|-|-|-|
| [Flink Streaming Examples](flink-streaming-example) | Examples for Flink Streaming, including custom source & sink | |
| [Flink Stream Batch Unified Examples](flink-stream-batch-unified-example) | Examples for Flink Stream Batch Unified Connector | |
| [Flink History Server](flink-history-server) | Examples for Flink History Server | |
| [Flink CDC SQL Server Examples](flink-cdc-sql-server-example) | Examples for Flink CDC SQL Server Connector | |
| [Flink on Native Azure Kubernetes](flink-on-native-azure-kubernetes) | Examples for Flink Job on Native Azure Kubernetes | |
| [Flink Azure Data Factory Cloud Native Extension](flink-adf-cloud-native-extension) | Flink Azure Data Factory Cloud Native Extension | |
| [Flink Deep Learning Tensorflow](flink-dl-tensorflow) | Flink Online & Offline Training, Tensorflow Integration | |
## Prerequisites
Basic:
* [Git](https://www.git-scm.com/downloads)
* [Java Development Kit (JDK) 1.8](https://www.oracle.com/java/technologies/javase/javase8u211-later-archive-downloads.html)
* [Apache Maven](http://maven.apache.org/download.cgi) and [install](http://maven.apache.org/install.html) a Maven binary archive
* [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli)
Azure:
* [Azure Container Registry](https://azure.microsoft.com/en-us/services/container-registry/)
* [Azure Kubernetes](https://azure.microsoft.com/en-us/services/kubernetes-service/)
* [Azure SQL Server](https://azure.microsoft.com/en-us/services/sql-database/)
* [Azure Data Factory](https://azure.microsoft.com/en-us/services/data-factory/)
* [Azure Data Lake Storage Gen2](https://azure.microsoft.com/en-us/services/storage/data-lake-storage/#overview)
* [Azure Blob Storage NFS Support](https://learn.microsoft.com/en-us/azure/storage/blobs/network-file-system-protocol-support)
* [Azure Storage Fuse](https://github.com/Azure/azure-storage-fuse)
Flink:
* [Flink](https://downloads.apache.org/flink)
Deep Learning:
* [Tensorflow](https://www.tensorflow.org/)
* [Kubeflow](https://www.kubeflow.org/)
## Contributing
This project welcomes contributions and suggestions. Most contributions require you to agree to a
Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us
the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.
When you submit a pull request, a CLA bot will automatically determine whether you need to provide
a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions
provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or
contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.
## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft
trademarks or logos is subject to and must follow
[Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general).
Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.
Any use of third-party trademarks or logos are subject to those third-party's policies.
| 0 |
mploed/event-driven-spring-boot | Example Application to demo various flavours of handling domain events in Spring Boot | atom event-driven event-sourcing events feed rest-api spring spring-boot spring-cloud-stream spring-data-jpa | # Event Driven Applications with Spring Boot
This project captures various options you have when dealing with event-driven Spring Boot applications.
The following Spring technologies are used:
- Spring Boot
- Spring Cloud Stream Rabbit
- Spring Data JPA
These examples contain various different ways to model and deal with events:
- Complete aggregates / entities in the events
- REST Resource URLs in events
- Partial parsing / handling of events in consumers
- Events as Atom Feeds
## Prerequisites
- You need to have Docker installed
## How to run and install the example
In the root directory you need to
1. Compile everything with ./mvnw package
2. Start everything up with docker-compose up --build
## Running on Kubernetes
Mind the KubernetesSetup.md file in the kubernetes directory
## URLs and Ports
Each of the modules is its own Spring Boot application, which can be accessed as follows:
<table>
<tr>
<th>Name</th>
<th>Port</th>
<th>URL</th>
</tr>
<tr>
<td>Application Process</td>
<td>9000</td>
<td>http://localhost:9000</td>
</tr>
<tr>
<td>Credit Application</td>
<td>9001</td>
<td>http://localhost:9001/credit-application</td>
</tr>
<tr>
<td>Customer</td>
<td>9002</td>
<td>http://localhost:9002/customer and http://localhost:9002/customer/feed</td>
</tr>
<tr>
<td>Scoring</td>
<td>9003</td>
<td>No UI</td>
</tr>
<tr>
<td>CreditDecision</td>
<td>9004</td>
<td>http://localhost:9004/credit-decision and http://localhost:9004/credit-decision/feed</td>
</tr>
</table>
## Messaging Infrastructure & Domain Events
### Public Events
#### CreditApplicationNumberGeneratedEvent
Source: application-process
Persisted in source: no
Consumers:
- credit-application
- credit-decision
Topic: CreditApplicationNumberGeneratedTopic
#### CreditApplicationEnteredEvent
Source: credit-application
Persisted in source: yes in its own Table via JPA
Consumers:
- application-process
- credit-decision
Topic: CreditApplicationEnteredTopic
#### CustomerCreatedEvent
Source: customer
Persisted in source: no
Consumers:
- application-process
- credit-decision
Topic: CustomerCreatedTopic
#### ScoringPositiveEvent
Source: scoring
Persisted in source: no
Consumers:
- application-process
- credit-decision
Topic: ScoringPositiveTopic
#### ScoringNegativeEvent
Source: scoring
Persisted in source: no
Consumers:
- application-process
- credit-decision
Topic: ScoringNegativeTopic
#### ApplicationDeclinedEvent
Source: credit-decision
Persisted in source: not as an event
Consumers:
- application-process
Topic: ApplicationDeclinedTopic
### Internal Events
#### Credit-Application
- CreditDetailsEnteredEvent
- FinancialSituationEnteredEvent
Both events are stored
Source: credit-application
Storage: Own Table via JPA
### Feeds
#### Customer Feed
Url: http://localhost:9002/customer/feed
Contains URLs to Customer Resources
#### Credit Decision Feed
Url: http://localhost:9004/credit-decision/feed
Contains Application Numbers that have been confirmed
## Event Types being used
This demo shows various event types: Events with all the data, Events with Resource URLs and "Events" as Feeds
#### Events with all the data
Especially the CreditApplicationEnteredEvent falls into this category: it contains all of the data for the credit application
such as the financial situation and the details of the actual credit. By consuming this event you will not need additional
roundtrips to upstream systems.
Other events that fall into this category are:
- ApplicationNumberGeneratedEvent
- ScoringNegativeEvent
- ScoringPositiveEvent
- ApplicationDeclinedEvent
##### Idea of Bounded Context:
Please take a close look at how the CreditApplicationEnteredEvent is being reflected in the scoring application. Yes, we
take in all the payload from the broker but the public model of the event has a clear focus on the scoring context's view
on the data.
#### Events with a Resource URL
These Events do not contain a lot of information. They may contain something like a business process identifier such as
the applicationNumber in this example, but for the purpose of this demo I refrained from doing that. So the CustomerCreatedEvent
only contains the URL to the Customer REST Resource from which interested contexts can obtain the payload.
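A hypothetical shape for such a URL-carrying event, sketched in plain Java (the class and field names are assumptions for illustration, not the project's actual types):

```java
// An event that carries only a link: consumers dereference the URL to fetch
// the full customer payload from the REST resource instead of receiving it inline.
public class CustomerCreatedEvent {

    private final String customerUrl;

    public CustomerCreatedEvent(String customerUrl) {
        this.customerUrl = customerUrl;
    }

    public String getCustomerUrl() {
        return customerUrl;
    }

    public static void main(String[] args) {
        CustomerCreatedEvent event = new CustomerCreatedEvent("http://localhost:9002/customer/42");
        System.out.println(event.getCustomerUrl());
    }
}
```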
#### "Events" via Feeds
Although the usage of feeds is not a plain and pure event-driven processing style, I think they come in handy when you
are dealing with situations like these:
- you have issues with your message broker and firewalls and these issues can't be resolved easily
- you need to have an event replay functionality in place that enables consumers to restore their replicated data
You can find "Events via Feeds" in the customer and the credit-decision (see Feeds) applications. | 1 |
peterl1084/cdiexample | Vaadin CDI example project | null | # Vaadin Java EE app example
This is a work in progress Vaadin CDI example project that could be a starting point for a larger Java EE app.
Stuff that this app is built on:
* [Vaadin](https://vaadin.com/)
* [Java EE](http://www.oracle.com/technetwork/java/javaee/overview/index.html)
* [JPA](http://en.wikipedia.org/wiki/Java_Persistence_API)
* [Apache Shiro](http://shiro.apache.org)
* [Vaadin CDI](http://vaadin.com/addon/vaadin-cdi)
* [CDI events](http://docs.oracle.com/javaee/6/tutorial/doc/gkhic.html)
* [MVP](http://en.wikipedia.org/wiki/Model–view–presenter)
* The [new Valo theme for Vaadin](https://vaadin.com/blog/-/blogs/7-series) to make it look modern
TODO:
* Limited UI for customers to edit their own details
* Localization
* Clean up, blog posts etc.
## To get started (playing with this app in your dev environment):
The build should be IDE/platform independent. So just
* Check out the project with `git clone https://github.com/peterl1084/cdiexample.git`.
* (OPTIONAL) define a datasource and configure it in ```backend/src/main/resources/META-INF/persistence.xml```. Development friendly Java EE servers like TomEE, WildFly and GlassFish will do this automatically for you, as we haven't defined ```<jta-data-source>``` in ```persistence.xml```
* Build + Run/Debug in your favorite IDE
* ... or use ```mvn install; cd ui; mvn tomee:run``` to launch it in TomEE without any configuration
| 1 |
wkrzywiec/library-hexagonal | An example application written in Hexagonal (Ports and Adapter) architecture | cqrs ddd docker docker-compose domain-driven-design hexagonal-architecture java-11 ports-and-adapters postgres spring-boot tdd | # Library
> written in *Hexagonal (Ports & Adapters) Architecture*
 [](https://sonarcloud.io/dashboard?id=wkrzywiec_library-hexagonal) [](https://sonarcloud.io/dashboard?id=wkrzywiec_library-hexagonal) [](https://opensource.org/licenses/MIT)
This is a small application that provides basic REST endpoints for managing library (add new book, reserve, borrow it, etc.).
The technology behind it:
* Java 11
* Postgres
* Spring Boot
## Installing / Getting started
#### Using `docker-compose`
In the terminal run the following command:
```console
$ docker-compose up
```
#### Using Maven (with H2 or local Postgres database)
First, compile the application:
```console
$ mvn clean package
```
Then you have two options: run it with an H2 database or with a local Postgres database. For the first approach, just run:
```console
$ mvn spring-boot:run
```
For the second option, check the configuration file `src/main/resources/application.yml` for the *local-postgres* profile to verify the connection details are correct; if so, run the command:
```console
$ mvn spring-boot:run -P local-postgres
```
#### Inside IntelliJ (with H2 or Postgres database)
First configure how you run the `LibraryHexagonalApplication.java` by adding `--spring.profiles.active=h2` (for H2 database) or `--spring.profiles.active=postgres` (for Postgres database) as a **Program argument**.
Then just run the `LibraryHexagonalApplication.java` class so it will use H2 database (you don't need to have postgres database up and running).
| 1 |
mapr-demos/finserv-application-blueprint | Example blueprint application for processing high-speed trading data. | null | BY USING THIS SOFTWARE, YOU EXPRESSLY ACCEPT AND AGREE TO THE TERMS OF THE AGREEMENT CONTAINED IN THIS GITHUB REPOSITORY. See the file EULA.md for details.
# An Example Application for Processing Stock Market Trade Data on the MapR Converged Data Platform
Stock exchanges offer real-time streams of information about the state of the market and trading activity. These data feeds channel a firehose of information that can be used to analyze market activity. However, these applications require a highly performant and reliable streaming infrastructure with accompanying services to process and store unbounded datasets. It can be challenging to build such infrastructures without incurring untenable operational expense and administrative burden.
This demo application focuses on interactive market analysis with a graphical user interface in Apache Zeppelin; however, our goal is to use this application to tell a larger story about how the MapR Converged Data Platform can be used to *cost effectively* explore other stream processing use-cases, such as analyzing stock exchange data feeds for algorithmic trading or automated market surveillance.
The intent of the application is to serve as a "blueprint" for building high-speed streaming applications on the MapR Converged Data Platform. You can use the code as a base for developing your own workflow, including producers, consumers and analytical engines, and run queries against the indexed topics.
## Overview
This project provides an engine for processing real-time streams of trading data from stock exchanges. The application consists of the following components:
- A Producer microservice that streams trades using the NYSE TAQ format. The data source is the Daily Trades dataset described [here](http://www.nyxdata.com/Data-Products/Daily-TAQ). The schema for our data is detailed in Table 6, "Daily Trades File Data Fields", on page 26 of [Daily TAQ Client Specification (from December 1st, 2013)](http://www.nyxdata.com/doc/212759).
- A multi-threaded Consumer microservice that indexes the trades by receiver and sender.
- Example Spark code for querying the indexed streams at interactive speeds, enabling Spark SQL queries.
- Example code for persisting the streaming data to MapR-DB
- Performance tests for benchmarking different configurations
- A supplementary python script to enhance the above TAQ dataset with "level 2" bid and ask data at a user-defined rate.
There are several beneficial aspects of the application that are worth highlighting:
- The Consumer microservice, performing the indexing, can be arbitrarily scaled simply by running more instances. See below in this README for how to start the application.
- Jackson annotations are provided for easy translation of the data structures to JSON and persistence to MapR-DB.
- The application can handle 300,000 entries/second on a 3-node cluster, which is suitable for testing. It does not require a large cluster, and takes advantage of the scaling properties of MapR Streams.
- The resulting index topics are small, and can be queried fast enough such that they can be used for interactive dashboards, such as in a Zeppelin notebook.
## Architecture
The application provides a financial intermediary service, running *bids* and *asks* between traders. Traders are identified with a unique ID and each bid and ask is sent from one trader to a set of N receiving traders.
The following diagram shows how data moves through the architecture. The rounded rectangles represent processes that produce and/or consume data from MapR Streams topics. Java based microservices are used to ingest and manipulate streaming data using the Kafka API. Spark and Apache Zeppelin are used to provide streaming analytics and batch oriented visualization.
<img src="https://github.com/mapr-demos/finserv-application-blueprint/blob/master/images/dataflow.gif" width="70%">
## Prerequisites
This application requires MapR 5.2 and Spark 2.0, which can be easily installed with the [MapR Ecosystem Pack 2.0](http://maprdocs.mapr.com/home/InteropMatrix/r_MEP_52.html).
You can use MapR's free [Converged Community Edition](http://mapr.com/download) or the [Converged Enterprise Edition](https://www.mapr.com/products/mapr-distribution-editions).
You will also need Git and Apache Maven in order to download and compile the provided source code.
## Building the application
Clone this repo and build the application with Maven. A pom.xml file is included in the base directory. The remainder of this guide will assume that you clone the package to /home/mapr/.
```
cd /home/mapr/
git clone http://github.com/mapr-demos/finserv-application-blueprint.git
cd finserv-application-blueprint
mvn clean install
```
At this point you should see the resulting jar file in the target/ directory: ```nyse-taq-streaming-1.0.jar```
Copy that jar package to the /home/mapr/ directory on each of your cluster nodes:
```
scp ./target/nyse-taq-streaming-1.0.jar mapr@<YOUR_MAPR_CLUSTER>:/home/mapr
```
### Step 1: Create the stream
A *stream* is a logical grouping of topics. They give us a way to group together topics and protect those topics with a single set of security permissions and other properties. MapR supports the Kafka API for interacting with streams. For more information on Streams, see [https://www.mapr.com/products/mapr-streams](https://www.mapr.com/products/mapr-streams).
Run the following command from any node in your MapR cluster:
```
maprcli stream create -path /user/mapr/taq -produceperm p -consumeperm p -topicperm p -ttl 900
```
In that command we created the topic with public permission since we want to be able to run producers and consumers from remote computers. Verify the stream was created with this command:
```
maprcli stream info -path /user/mapr/taq
```
### Step 2: Create the topics
We only need to create one topic to get started, the rest are created by the application. Topics are created with the `maprcli` tool. Run this command on a single node in the cluster:
```
maprcli stream topic create -path /user/mapr/taq -topic trades -partitions 3
```
Verify the topic was created successfully with this command:
```
maprcli stream topic list -path /user/mapr/taq
```
This enables 3 partitions in the topic for scaling across threads, more information on how partitions work can be found [here](http://maprdocs.mapr.com/51/MapR_Streams/concepts.html).
### Step 3: Start the "Fan Out" Consumer
We use a multi-threaded microservice that indexes the incoming information into separate topics by receiver and sender. We call this a "fan out" consumer, because it consumes tick data from the incoming stock exchange stream and copies each tick record into topics belonging to all the participants of a trade. So for example, if this consumer sees an offer by Sender X to sell shares to recipients A, B, and C, then this consumer will copy that tick to four new topics, identified as sender_X, receiver_A, receiver_B, and receiver_C. This relationship is illustrated below:
<img src="https://github.com/mapr-demos/finserv-application-blueprint/blob/master/images/fanout.png" width="40%">
A "tick" of this data consists of:
```
{time, sender, id, symbol, prices, ..., [recipient*]}
```
For each message in the stream there is a single sender and multiple possible recipients. The consumer will index these into separate topics so they can be queried.
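The fan-out of one tick into per-participant topic names can be sketched like this; the method follows the sender/receiver naming described above, though the actual implementation in `com.mapr.demo.finserv` may differ:

```java
import java.util.ArrayList;
import java.util.List;

public class FanOut {

    // One sender_<id> topic plus one receiver_<id> topic per recipient,
    // so a tick from X to A, B and C lands in four index topics.
    static List<String> topicsFor(String senderId, List<String> recipientIds) {
        List<String> topics = new ArrayList<>();
        topics.add("sender_" + senderId);
        for (String recipient : recipientIds) {
            topics.add("receiver_" + recipient);
        }
        return topics;
    }

    public static void main(String[] args) {
        System.out.println(topicsFor("X", List.of("A", "B", "C")));
    }
}
```

In the real consumer, each of these topic names would become the target of a Kafka-API send that copies the tick record.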
Run the following command to start the consumer:
```
java -cp `mapr classpath`:/home/mapr/nyse-taq-streaming-1.0.jar:/home/mapr/finserv-application-blueprint/src/test/resources com.mapr.demo.finserv.Run consumer /user/mapr/taq:trades 3
```
In this example we are starting 3 threads to handle the 3 partitions in topic, ```/user/mapr/taq:trades```.
### Step 4: Run the Producer
Run the producer with the following command. This will send all the trades contained in files under finserv-application-blueprint/data/ to `/user/mapr/taq:trades`, where '/user/mapr/taq' is the stream and 'trades' is the topic.
```
java -cp `mapr classpath`:/home/mapr/nyse-taq-streaming-1.0.jar com.mapr.demo.finserv.Run producer /home/mapr/finserv-application-blueprint/data/080449 /user/mapr/taq:trades
```
A small data file representing one second of trades, bids and asks (```data/080449```) is provided for convenience. To generate more data, see the section 'Generating Data' below.
You should see the producer running and printing throughput numbers:
```
Throughput = 0.00 Kmsgs/sec published. Threads = 1. Total published = 2.
Throughput = 202.78 Kmsgs/sec published. Threads = 1. Total published = 411107.
Throughput = 377.08 Kmsgs/sec published. Threads = 1. Total published = 1139858.
Throughput = 463.34 Kmsgs/sec published. Threads = 1. Total published = 1865937.
Throughput = 478.99 Kmsgs/sec published. Threads = 1. Total published = 2406537.
```
This simulates "live" bids, asks and trades streaming from an exchange.
### Step 5: Persist stream data in a database
We'll explain two ways in which streaming data can be persisted into long-term storage. First we'll see how this can be done with MapR-DB, then we'll see how this can be done with Apache Hive.
#### Persist stream data with MapR-DB
The ```Persister.java``` class uses Spark SQL to persist JSON records from MapR Streams into MapR-DB.
This class can be run with the following command:
```
java -cp `mapr classpath`:/home/mapr/finserv-application-blueprint/target/nyse-taq-streaming-1.0.jar com.mapr.demo.finserv.Persister -topics /user/mapr/taq:sender_0310,/user/mapr/taq:sender_0410 -table /user/mapr/ticktable -droptable -verbose
```
This creates a stream consumer that persists trades from senders #0310 and #0410 to MapR-DB in a table located at /mapr/my.cluster.com/user/mapr/ticktable (which will be overwritten if it already exists, per the ```-droptable``` option). That command will only see *new* messages in the trades topic because it tails the log, so run the following command to put more trade data into the stream:
```
java -cp `mapr classpath`:/home/mapr/nyse-taq-streaming-1.0.jar:/home/mapr/finserv-application-blueprint/src/test/resources com.mapr.demo.finserv.Run consumer /user/mapr/taq:trades 3 &
java -cp `mapr classpath`:/home/mapr/nyse-taq-streaming-1.0.jar com.mapr.demo.finserv.Run producer /home/mapr/finserv-application-blueprint/data/ /user/mapr/taq:trades
```
Here are some examples of how you can query the table created by the Persister:
Query the MapR-DB table with dbshell:
```
mapr dbshell
maprdb mapr:> find /user/mapr/ticktable
```
Query the MapR-DB table from the Apache Drill command line:
```
/opt/mapr/drill/drill-*/bin/sqlline -u jdbc:drill:
0: jdbc:drill:> SELECT * FROM dfs.`/user/mapr/ticktable` LIMIT 10;
```
Query the MapR-DB table from the Apache Drill web interface, as shown below:
<img src = "images/drill_query.png" width=600px>
#### Persist stream data with Apache Hive
The ```SparkStreamingToHive``` class uses the Spark Streaming API to copy messages from the tail of streaming topics to Hive tables that can be analyzed in Zeppelin. Zeppelin can't directly access stream topics, so we use this utility to access streaming data from Zeppelin. Here's how to run this class:
```
/opt/mapr/spark/spark-2.0.1/bin/spark-submit --class com.mapr.demo.finserv.SparkStreamingToHive /home/mapr/nyse-taq-streaming-1.0-jar-with-dependencies.jar --topics <topic1>,<topic2>... --table <destination Hive table>
```
That command will only see *new* messages in the taq:trades topic because it tails the log, so when it says "Waiting for messages" then run the following command to put more trade data into the stream. This command was described in [step 4](https://github.com/mapr-demos/finserv-application-blueprint#step-4-run-the-producer):
```
java -cp `mapr classpath`:/home/mapr/nyse-taq-streaming-1.0.jar com.mapr.demo.finserv.Run producer /home/mapr/finserv-application-blueprint/data/ /user/mapr/taq:trades
```
In the previous command, we consumed from the trades topic, which is the raw stream for every trader. If you want to save only trades from a specific trader, then run the following command. This will read messages from the topic associated with the trader called ```sender_0410``` and copy those messages to Hive. Remember, this is a tail operation, so it will wait for new messages on that topic.
```
/opt/mapr/spark/spark-2.0.1/bin/spark-submit --class com.mapr.demo.finserv.SparkStreamingToHive /home/mapr/nyse-taq-streaming-1.0-jar-with-dependencies.jar --topics /user/mapr/taq:sender_0410 --table ticks_from_0410
```
### Step 6: Build a Dashboard in Apache Zeppelin
There are many frameworks we could use to build an operational dashboard. [Apache Zeppelin](https://zeppelin.apache.org/) is a good choice because it supports a variety of ways to access data. Our goal is to build a dashboard that looks like this:
<img src = "images/zepdash.png" width=400px>
Here's what you need to do to setup a dashboard like that:
#### Install and Configure Zeppelin
We're going to assume you've already installed Apache Zeppelin and configured its Spark SQL interpreter to access Hive tables. If you haven't done that, then follow the instructions for installing Zeppelin in this blog post:
https://community.mapr.com/docs/DOC-2029-how-to-use-spark-pyspark-with-zeppelin-on-mapr-cdp-draft
#### Create Hive tables
Import the bank company names into a new hive table. Later, we'll map these names to bank IDs with a table join in Zeppelin.
```
hive
hive> CREATE TABLE banks(id int, name string) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';
hive> LOAD DATA LOCAL INPATH '/home/mapr/finserv-application-blueprint/resources/bank_list.csv' INTO TABLE banks;
```
Run this command to load streaming data into a Hive table:
```
/opt/mapr/spark/spark-2.0.1/bin/spark-submit --class com.mapr.demo.finserv.SparkStreamingToHive /home/mapr/nyse-taq-streaming-1.0.jar --topics /user/mapr/taq:trades --table streaming_ticks
```
That command will only see *new* messages in the trades topic because it tails the log, so when it says "Waiting for messages" then run the following command to put more trade data into the stream:
```
java -cp `mapr classpath`:/home/mapr/nyse-taq-streaming-1.0.jar com.mapr.demo.finserv.Run producer /home/mapr/finserv-application-blueprint/data/ /user/mapr/taq:trades
```
#### Create a Zeppelin notebook
Open Zeppelin in a web browser.
Zeppelin divides notebooks into subsections called *paragraphs*. Create a new notebook in Zeppelin, then in a new paragraph enter the following and press the 'Play' icon (or keyboard shortcut, Shift+Enter):
```
%sql show tables
```
You should see the ```banks``` and ```streaming_ticks``` tables in the list. If not, look for an error in the logs under `/opt/zeppelin/logs/`.
We've provided a sample Zeppelin notebook which includes some sample SQL queries and charts to get you started. Load the `finserv-application-blueprint/resources/sample_zeppelin_notebook.json` file with the Import feature in the Zeppelin web UI. After you've imported it, you should see a new notebook called "Stock Exchange Analysis" in Zeppelin, which looks like this:
<img src = "images/zepdash2.png" width=600px>
## (Optional) Generate More Data
You can download 500MB more trade data with the following commands. Note, these data files are kept in a separate GitHub repository in order to keep this one to a manageable size.
```
git clone https://github.com/mapr-demos/finserv-data-files
cd finserv-data-files
mkdir data
tar xvfz starter_datafiles.tar.gz -C data
```
You can then pass this ```data``` directory to the producer command described in [step 4](https://github.com/mapr-demos/finserv-application-blueprint#step-4-run-the-producer):
```
java -cp `mapr classpath`:/home/mapr/nyse-taq-streaming-1.0.jar com.mapr.demo.finserv.Run producer ./data/ /user/mapr/taq:trades
```
## (Optional) Clean Up
To save disk space, it's a good idea to remove the stream and Hive tables that you created in prior steps.
Here's how to delete a stream and all the associated topics:
```
maprcli stream delete -path /user/mapr/taq
```
Here's how to delete a Hive table:
```
rm -rf /mapr/my.cluster.com/user/hive/warehouse/streaming_ticks/
```
# Get Community Support!
Visit the [MapR Community](https://community.mapr.com/) pages where you can post questions and discuss your use case.
| 1 |
huaweicse/ServiceComb-Company-WorkShop | MicroService WorkShop Example for users to use ServiceComb | null | # ServiceComb Demo - Company [](https://travis-ci.org/ServiceComb/ServiceComb-Company-WorkShop)[](https://coveralls.io/github/ServiceComb/ServiceComb-Company-WorkShop)
## Purpose
In order for users to better understand how to develop microservices using ServiceComb, an easy-to-understand demo is provided.
## Architecture of Company
* Manager (API gateway)
* Doorman (authentication service)
* Worker (computing service)
* Beekeeper (computing service)
* Bulletin board (service registry)
* Project archive (request cache)
* Human resource (service governance)
Please read the [blog post](http://servicecomb.io/docs/linuxcon-workshop-demo/) on the detailed explanation of this project.
## Prerequisites
You will need:
1. [Oracle JDK 1.8+][jdk]
2. [Maven 3.x][maven]
3. [Docker][docker]
4. [Docker compose(optional)][docker_compose]
5. [Docker machine(optional)][docker_machine]
6. [curl][curl]
7. [MySQL][mysql]
[jdk]: http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html
[maven]: https://maven.apache.org/install.html
[docker]: https://www.docker.com/get-docker
[docker_compose]: https://docs.docker.com/compose/install/
[docker_machine]: https://docs.docker.com/machine/install-machine/
[curl]: https://curl.haxx.se
[mysql]: https://dev.mysql.com/downloads/
## Run Services
A `docker-compose.yaml` file is provided to start all services and their dependencies as docker containers.
1. Build all service images using command `mvn package -Pdocker`
2. Run all service images using command `docker-compose up`
If you are using [Docker Toolbox](https://www.docker.com/products/docker-toolbox), please add the extra profile `-Pdocker-machine`:
```
mvn package -Pdocker -Pdocker-machine
```
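The `docker-compose.yaml` at the repository root declares how the company's services are wired together. As a rough illustration of the pattern only — the service names, image tags, and ports below are assumptions for illustration, not the actual file contents — a compose file for this kind of setup might look like:

```yaml
version: '2'
services:
  bulletin-board:          # service registry; everything else registers here
    image: company/bulletin-board
    ports:
      - "30100:30100"
  manager:                 # API gateway, the only service exposed to the host
    image: company/manager
    ports:
      - "8083:8083"
    depends_on:
      - bulletin-board
  worker:                  # computing service, discovered via the bulletin board
    image: company/worker
    depends_on:
      - bulletin-board
```

`depends_on` only orders container startup; each service still retries its registration until the bulletin board is actually ready.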
## Run Integration Tests
```
mvn verify -Pdocker -Pdocker-machine
```
## Verify Services
You can verify the services using curl by the following steps:
1. Retrieve the manager's IP address
* If you use Docker Compose:
```bash
export HOST="127.0.0.1:8083"
```
* If you use Docker Machine (assuming your Docker Machine is named `default`):
```bash
export HOST=$(docker-machine ip default):8083
```
2. Log in and retrieve the token from the `Authorization` response header
```bash
curl -v -H "Content-Type: application/x-www-form-urlencoded" -d "username=jordan&password=password" -XPOST "http://$HOST/doorman/rest/login"
```
Then you can copy the token from the `Authorization` response header and use it to replace the `Authorization` header in the following requests.
3. Get the sixth fibonacci number from the worker service
```bash
curl -H "Authorization: replace_with_the_authorization_token" -XGET "http://$HOST/worker/fibonacci/term?n=6"
```
4. Get the number of the drone's ancestors at the 30th generation from the beekeeper service
```bash
curl -H "Authorization: replace_with_the_authorization_token" -XGET "http://$HOST/beekeeper/rest/drone/ancestors/30"
```
5. Get the number of the queen's ancestors at the 30th generation from the beekeeper service
```bash
curl -H "Authorization: replace_with_the_authorization_token" -XGET "http://$HOST/beekeeper/rest/queen/ancestors/30"
```
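The verification steps above can also be scripted. The fiddly part is pulling the token out of the login response; a small helper for that is sketched below, assuming the token comes back in the `Authorization` response header as described in step 2 (the parsing details are assumptions to adapt as needed):

```bash
# extract_token: pull the value of the Authorization header out of raw
# HTTP response headers piped in on stdin (as produced by `curl -i`).
extract_token() {
  awk -F': ' 'tolower($1) == "authorization" {print $2}' | tr -d '\r'
}

# Usage (requires the services to be running):
#   HOST="127.0.0.1:8083"
#   TOKEN=$(curl -s -i -H "Content-Type: application/x-www-form-urlencoded" \
#     -d "username=jordan&password=password" \
#     -XPOST "http://$HOST/doorman/rest/login" | extract_token)
#   curl -H "Authorization: $TOKEN" -XGET "http://$HOST/worker/fibonacci/term?n=6"
```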
## Auto deploy on [Huawei Cloud][huawei_cloud]
To automatically compile, build, deploy, and run this workshop demo on Huawei Cloud's [Service Stage platform][service_stage], you need the following:
1. A registered [Service Stage][service_stage] account.
2. Auto build and publish your Docker images to Huawei's Image Warehouse; for details, see the [auto publish guide][publish_guide].
3. Auto deploy using Huawei Cloud's orchestration feature; for details, see the [orchestration guide][orchestration_guide].
[huawei_cloud]: http://www.hwclouds.com
[publish_guide]: docs/how-to-auto-publish-images-to-huawei-cloud.md
[orchestration_guide]: docs/how-to-auto-deploy-on-huawei-cloud.md
## Auto deploy on Kubernetes cluster
To automatically pull the images from the ServiceComb repository on Docker Hub and run the demo on a Kubernetes cluster, whether on GCE or bare metal, see [Run Company on Kubernetes Cluster](kubernetes/README.md).
## Auto deploy on Huawei Cloud (Chinese documentation)
This section describes the steps to automatically compile, build, deploy, and run the demo on Huawei's microservice cloud application platform, [Service Stage][service_stage]. You need the following:
1. A registered [Service Stage][service_stage] account.
2. Automatically compile, build, and publish Docker images to Huawei's Image Warehouse; for details, see the [auto publish guide (Chinese)][publish_guide_cn].
3. Automatically deploy the microservices using Huawei Cloud's orchestration feature; for details, see the [auto deployment guide (Chinese)][orchestration_guide_cn].
[service_stage]: https://servicestage.hwclouds.com/servicestage
[publish_guide_cn]: docs/how-to-auto-publish-images-to-huawei-cloud-cn.md
[orchestration_guide_cn]: docs/how-to-auto-deploy-on-huawei-cloud-cn.md
| 1 |
bduncavage/recyclerViewToTheRescue | Examples of how to use RecyclerView to achieve complex list UIs | null | # RecyclerView to The Rescue
Examples of how to use RecyclerView to achieve complex list UIs
RecyclerView is the evolution of the legacy ListView and GridView in Android. If you aren't using it yet, you should be.
This repository contains a simple sample project that illustrates how to use RecyclerView to achieve simple and complex list UIs.
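One mechanism behind complex RecyclerView lists is the adapter's per-position view type: the adapter reports a type for each position, and RecyclerView inflates and recycles a different view holder for each type. The dispatch logic can be sketched in plain Java (the class and constant names below are stand-ins for illustration, not the Android SDK classes):

```java
import java.util.Arrays;
import java.util.List;

public class ViewTypeDispatch {
    // Stand-ins for the integer view types an adapter would declare
    public static final int TYPE_HEADER = 0;
    public static final int TYPE_ROW = 1;

    // Mirrors the role of RecyclerView.Adapter#getItemViewType(int position):
    // map each data item to the view type that should render it.
    public static int getItemViewType(List<?> items, int position) {
        return (items.get(position) instanceof String) ? TYPE_HEADER : TYPE_ROW;
    }

    public static void main(String[] args) {
        // A mixed list: String section headers interleaved with data rows
        List<Object> items = Arrays.asList("Section A", 1, 2, "Section B", 3);
        for (int i = 0; i < items.size(); i++) {
            System.out.println(i + " -> type " + getItemViewType(items, i));
        }
    }
}
```

In a real adapter, `onCreateViewHolder` receives this type and inflates the matching layout, which is how one list mixes headers, rows, and grid cells.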
## Simple List

## Complex Grid

| 0 |
k33ptoo/JavaFX-MySQL-Login | A JavaFX example SignIn/SignUp with MySQL database integration. | java javafx scenebuilder | # JAVAFX LOGIN UI DESIGN WITH MYSQL DB INTEGRATION
Focus areas
- UI Design
- DB integration
- Insert - Retrieve
A simple sample to get you started.
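A sign-in check against MySQL typically boils down to a parameterized JDBC query comparing a stored password hash. The sketch below is illustrative only — the `users` table, its column names, and the SHA-256 hashing scheme are assumptions, not this project's actual code:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class LoginCheck {

    // Hash the password before storing or comparing -- never keep plain text in the DB.
    public static String sha256(String input) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            byte[] digest = md.digest(input.getBytes(StandardCharsets.UTF_8));
            StringBuilder sb = new StringBuilder();
            for (byte b : digest) {
                sb.append(String.format("%02x", b));
            }
            return sb.toString();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    // Look up the user with a parameterized query to avoid SQL injection.
    public static boolean authenticate(Connection conn, String username, String password)
            throws Exception {
        String sql = "SELECT 1 FROM users WHERE username = ? AND password_hash = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, username);
            ps.setString(2, sha256(password));
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next();  // a matching row means valid credentials
            }
        }
    }

    public static void main(String[] args) {
        System.out.println(sha256("password"));
    }
}
```

In the JavaFX controller, `authenticate` would be called from the sign-in button handler with a `Connection` obtained via `DriverManager.getConnection(...)` and the MySQL JDBC driver on the classpath.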

 | 1 |
minhthuy30197/LearnSpringBoot | My example course code when learning Spring Boot | null | # SPRING BOOT LEARNING TUTORIAL
| 1 |