full_name stringlengths 7 104 | description stringlengths 4 725 ⌀ | topics stringlengths 3 468 ⌀ | readme stringlengths 13 565k ⌀ | label int64 0 1 |
|---|---|---|---|---|
akashbangad/NavigationView | Example app to show the implementation of NavigationView from the design support library | null | null | 1 |
mfaisalkhatri/selenium4poc | Learn Web Automation testing using Selenium Webdriver 4. | beginner-friendly examples hacktoberfest java selenium selenium-java selenium-webdriver test-automation testing tutorial webautomation webautomationtesting | 
[](https://opensource.org/licenses/Apache-2.0)
[](https://github.com/mfaisalkhatri/selenium4poc/actions/workflows/maven.yml)
[](https://github.com/mfaisalkhatri/selenium4poc/actions/workflows/codeql-analysis.yml)
## Don't forget to give a :star: to make the project popular.
## :question: What is this Repository about?
- This repo contains working code examples of Selenium 4 features.
- Websites used for testing are: [automationpractice.com](http://automationpractice.com/index.php), [saucedemo.com](https://www.saucedemo.com),
[the-internet](http://the-internet.herokuapp.com/), [owasp-juice-shop](https://github.com/juice-shop/juice-shop)
and [LambdaTest e-commerce Playground](https://ecommerce-playground.lambdatest.io/)
- This repo uses `Maven` as its build tool and the `TestNG` testing framework to run the tests.
## Scenarios covered in this project:
- I have tried to answer the questions below by providing working code examples in this repo:
1. How do I select a value from a table?
2. How do I tick and untick checkboxes using Selenium?
3. How do I right-click using Selenium?
4. How do I drag and drop using Selenium?
5. How do I write code to log in and log out using Selenium?
6. How do I pass multiple test data values using DataProvider in tests?
7. How do I mouse-hover over an element using Selenium?
8. How do I download a file using Selenium?
9. How do I upload a file using Selenium?
10. How do I press keys using Selenium?
11. How do I work with multiple tabs and windows in Selenium?
12. How do I work with iFrames using Selenium?
13. How do I double-click using Selenium WebDriver?
14. How do I check Chrome-generated logs when Selenium tests are run?
## :writing_hand: Blog Links
- [Selenium 4 WebDriver Hierarchy: A Detailed Explanation](https://medium.com/@iamfaisalkhatri/selenium-4-webdriver-hierarchy-a-detailed-explanation-lambdatest-18771c5fd3e9)
- [Selenium Manager in Selenium 4.11.0: New Features and Improvements](https://medium.com/@iamfaisalkhatri/selenium-manager-in-selenium-4-11-0-new-features-and-improvements-lambdatest-761593a7f009)
- [Different Types of Locators in Selenium WebDriver](https://www.lambdatest.com/blog/locators-in-selenium-webdriver-with-examples/)
- [How to Locate Elements Using CSS Selectors in Selenium](https://www.lambdatest.com/learning-hub/css-selectors)
- [How to Use @FindBy Annotation in Selenium Java](https://www.lambdatest.com/blog/findby-annotation-selenium-java/)
- [Understanding CSS Selectors in Selenium](https://medium.com/@iamfaisalkhatri/understanding-css-selectors-in-selenium-pcloudy-blog-3e4b09672264)
- [End to End testing using Selenium WebDriver and Java](https://medium.com/@iamfaisalkhatri/end-to-end-testing-using-selenium-webdriver-and-java-4ff8667716ca)
- [Writing Selenium Web Automation tests in Fluent way!](https://medium.com/@iamfaisalkhatri/writing-selenium-web-automation-tests-in-fluent-way-864db95ee67a)
- [How To Automate Shadow DOM In Selenium WebDriver?](https://medium.com/@iamfaisalkhatri/how-to-automate-shadow-dom-in-selenium-webdriver-lambdatest-blog-3884698b995)
- [How to setup GitHub Actions for Java with Maven project?](https://mfaisalkhatri.github.io/2022/04/26/githubactions-for-java-maven-project/)
- [How to Automate ServiceNow with Selenium](https://medium.com/@iamfaisalkhatri/how-to-automate-servicenow-with-selenium-511e41172161)
- [Cross browser testing in Selenium WebDriver](https://medium.com/@iamfaisalkhatri/cross-browser-testing-in-selenium-webdriver-pcloudy-blog-46e9d70fa13a)
- [How to Handle ElementClickInterceptedException in Selenium Java](https://www.lambdatest.com/blog/elementclickinterceptedexception-in-selenium-java/)
## :movie_camera: Tutorial Videos
[]( https://www.youtube.com/live/bhZX7apMqR8?si=4n0u5YiMuz5vTiHd)
[](https://youtu.be/uHLYoJmZxWc?si=3nevAn0Z3QZycdG8)
[](https://youtu.be/_hlXjVTa-jo?si=PfOfU7ihb8eEgduh)
[]( https://youtu.be/sVBgpz1z9Ts?si=azE1_vquOwT9jFT1d)
## End-to-End Tests for OWASP-Juice-Shop
- End-to-End tests for the Juice Shop website run against `http://localhost:3000` inside a container in GitHub Actions.
- GitHub Actions is used for setting up the CI/CD pipeline
### Following is the Automation Test Strategy used for writing End-to-End Tests:
1. The user will navigate to the website and close all the pop-ups first.
2. The user will click on the Login link, then on the `Not yet a customer` link, and register on the website.
3. Once registration is successful, the user will log in with that username and password.
4. After a successful login, the user will add AppleJuice and BananaJuice to the Basket.
5. After asserting the messages for items added to the basket, the user will check the count of items displayed on top
of the `Your Basket` link.
6. The user will click on the `Your Basket` link, check the order details and click on Checkout.
7. The user will enter a new delivery address and select it to proceed further.
8. The user will continue to the payment card section and select the saved card to make the payment.
9. On the Order Summary page, the user will verify all the details such as name, address, order details and the total
amount to be paid, and place the order.
10. The user will re-check the details on the Order Confirmation page and check for the `Thank You` message, order
confirmation and delivery message.
## End-to-End Tests for LambdaTest ECommerce Playground Website
### Following is the automation test strategy used for writing end-to-end tests:
1. The user will navigate to the website.
2. From the home page, the user will navigate to the Registration Page and register. Verification will be done by
asserting the registration-successful message.
3. The user will click on the Shop by Category option on the top left and select a category to pick the product to
purchase.
4. From the Product Page, the user will hover over any product they like and select the Add to Cart option. Once a
product is added to the cart, assertions will be performed on the success message displayed.
5. On the Checkout page, the user will provide the billing address details, and assertions will be made on the product
name and its respective price.
6. Once a product is checked out, the user lands on the Order Confirmation page, where the product name, price and
shipping address will be asserted, after which the order is marked as confirmed.
7. Finally, an order confirmation message is verified in the tests, which marks the end of the test journey.
## How to run the Tests?
### Running Juice Shop Tests on your local machine:
- Start the `Juice-Shop` website locally. To do this, we use `docker-compose-v3-juiceshop.yml`, which is
available in the root folder of this project.
- Open terminal/command prompt and navigate to the root folder of the project and run the following command:
`docker-compose -f docker-compose-v3-juiceshop.yml up -d`
- Once the `Juice-Shop` website is up and running, we are good to run the end-to-end tests against it.
- There are two ways to run the tests:
### 1. TestNG:
- Right-Click on the `test-suite\testng-juice-shop.xml` and select `Run ...\test-suite\testng-juice-shop.xml`
### 2. Maven:
- To run the tests in headless mode, set the `headless` property to `true`:
`mvn clean install -Dsuite-xml=test-suite\testng-juice-shop.xml -Dheadless=true`
- To run the tests with the browser visible, set the `headless` property to `false`:
`mvn clean install -Dsuite-xml=test-suite\testng-juice-shop.xml -Dheadless=false`
- To stop the Juice Shop website running locally:
`docker-compose -f docker-compose-v3-juiceshop.yml down`
### Running Selenium Grid locally and running tests on it
- Start Selenium Grid locally using the `docker-compose-v3-seleniumgrid.yml` file.
- Run the following command:
`docker-compose -f docker-compose-v3-seleniumgrid.yml up -d`
This starts Selenium Grid, which can be accessed at `http://localhost:4444`.
- To run the tests on Selenium Grid using `TestNG`:
Right click on `test-suite\testng-seleniumgrid-theinternet.xml` and
select `Run test-suite\testng-seleniumgrid-theinternet.xml`
- To run the tests on Selenium Grid using `Maven`:
`mvn clean install -Dsuite-xml=test-suite\testng-seleniumgrid-theinternet.xml`
- Stopping the Selenium Grid:
`docker-compose -f docker-compose-v3-seleniumgrid.yml down`
### Running all the tests in one go:
- Start the `Juice-Shop` website using the following command:
`docker-compose -f docker-compose-v3-juiceshop.yml up -d`
- Start `Selenium Grid` using the following command:
`docker-compose -f docker-compose-v3-seleniumgrid.yml up -d`
- Run the tests using `TestNG`:
Right click on `test-suite\testng.xml` and select `Run test-suite\testng.xml`
- Run the tests using `Maven` in headless mode:
`mvn clean install -Dheadless=true`
- Stopping the `Juice-Shop` website and `Selenium Grid`:
`docker-compose -f docker-compose-v3-juiceshop.yml down --remove-orphans`
### Running LambdaTest ECommerce Playground Tests on your local machine:
- There are two ways to run the tests:
### 1. TestNG:
- Right-Click on the `test-suite\testng-lambdatestecommerce.xml` and
select `Run ...\test-suite\testng-lambdatestecommerce.xml`
### 2. Maven:
- To run the tests in headless mode, set the `headless` property to `true`:
`mvn clean install -Dsuite-xml=test-suite\testng-lambdatestecommerce.xml -Dheadless=true`
- To run the tests with the browser visible, set the `headless` property to `false`:
`mvn clean install -Dsuite-xml=test-suite\testng-lambdatestecommerce.xml -Dheadless=false`
## :question: Need Assistance?
- Discuss your queries by writing to me @ `mohammadfaisalkhatri@gmail.com`
OR ping me on any of the social media sites using the below link:
- [Linktree](https://linktr.ee/faisalkhatri)
## :computer: Paid Trainings
- Contact me for Paid trainings related to Test Automation and Software Testing,
mail me @ `mohammadfaisalkhatri@gmail.com` or ping me on [LinkedIn](https://www.linkedin.com/in/faisalkhatri/)
## :thought_balloon: Checkout the blogs related to Testing written by me on the following links:
- [Medium Blogs](https://medium.com/@iamfaisalkhatri)
- [LambdaTest Blogs](https://www.lambdatest.com/blog/author/mfaisalkhatri/)
- [My Website](https://mfaisalkhatri.github.io)
## :bulb: Cloud platform supporter
### Big thanks to **LambdaTest** for supporting the project with their open-source license:
<a href="http://www.lambdatest.com?fp_ref=faisal58" target="_blank" style="outline:none;border:none;"><img src="https://d2gdx5nv84sdx2.cloudfront.net/uploads/n3ufe5o3/marketing_asset/banner/6476/728_x_90.png" alt="lambdatest" border="0"/></a>
| 0 |
Pragmatists/eventsourcing-java-example | A simplified (in memory) example of Event Sourcing implementation for banking domain. | ddd ddd-sample event-sourcing | # Event sourcing example in Java
A simplified (in memory) example of Event Sourcing implementation in Java for banking domain.
The repository is split into exercises that add more functionality step by step, working towards a good design of event sourcing with CQRS.
You can play around and try to implement the exercises, or you can check out the solution branches.
## Step 1 - In-memory implementation of event sourcing

- Provide a simple in-memory implementation of an Event Store
- Make all tests pass using event sourcing
#### solution
- branch [exercise_1_solution](https://github.com/michal-lipski/eventsourcing-example/tree/excercise_1_solution)
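The core of Step 1 can be sketched in a few lines of plain Java. This is a minimal, illustrative sketch (the class and event names are hypothetical, not the repository's actual code): the in-memory event store appends events to a list, and the aggregate rebuilds its state by replaying the stream.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of Step 1: an in-memory event store plus an
// event-sourced aggregate. Names are illustrative, not the repo's code.
interface DomainEvent {}

record Deposited(int amount) implements DomainEvent {}
record Withdrawn(int amount) implements DomainEvent {}

class InMemoryEventStore {
    private final List<DomainEvent> events = new ArrayList<>();
    void store(DomainEvent e) { events.add(e); }
    List<DomainEvent> eventStream() { return List.copyOf(events); }
}

class Account {
    private int balance;

    // Rebuild aggregate state by replaying the full event stream.
    static Account replay(List<DomainEvent> events) {
        Account a = new Account();
        events.forEach(a::apply);
        return a;
    }

    private void apply(DomainEvent e) {
        if (e instanceof Deposited d) balance += d.amount();
        if (e instanceof Withdrawn w) balance -= w.amount();
    }

    int balance() { return balance; }
}

public class EventSourcingSketch {
    static int replayedBalance() {
        InMemoryEventStore store = new InMemoryEventStore();
        store.store(new Deposited(100));
        store.store(new Withdrawn(30));
        return Account.replay(store.eventStream()).balance();
    }

    public static void main(String[] args) {
        System.out.println(replayedBalance()); // prints 70
    }
}
```

Replaying the whole stream on every load is fine for an in-memory exercise; the later steps (projections, optimistic locking, CQRS) build on exactly this shape.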
## Step 1a (optional) - Unit of work pattern
- Implement the [Unit of Work](https://martinfowler.com/eaaCatalog/unitOfWork.html) pattern, where events are stored outside of the aggregate
#### solution
- WIP
## Step 1b (optional) - Projections
- Implement projections on Account to get the number of transactions performed on an account
- The eventStore.store() method should accept an event payload instead of domain events
- What should the API of eventStream() be?
#### solution
- WIP
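A projection for Step 1b is just a fold over the same event stream. A minimal sketch, assuming string payloads and hypothetical names (the real exercise defines its own payload format):

```java
import java.util.List;

// Hypothetical sketch of a read-side projection (Step 1b): instead of
// rebuilding the aggregate, the projection folds the event stream into a
// single statistic, here the number of transactions on the account.
public class TransactionCountProjection {
    static long transactionCount(List<String> eventPayloads) {
        // With raw payloads instead of domain events, the projection only
        // inspects the event type encoded in each payload string.
        return eventPayloads.stream()
                .filter(p -> p.startsWith("Deposited") || p.startsWith("Withdrawn"))
                .count();
    }

    public static void main(String[] args) {
        System.out.println(transactionCount(
            List.of("Deposited(100)", "Withdrawn(30)", "Deposited(5)"))); // prints 3
    }
}
```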
## Step 2 (optional) - Optimistic locking
- add optimistic locking
#### solution
- WIP
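The idea behind Step 2 can be sketched as follows (hypothetical names; a real store would track a version per aggregate stream, not one global list): the writer states which version it last read, and a concurrent write in between makes that expected version stale, so the append is rejected.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of optimistic locking on an event stream.
class VersionedEventStore {
    private final List<String> events = new ArrayList<>();

    synchronized long version() { return events.size(); }

    synchronized void append(String event, long expectedVersion) {
        if (expectedVersion != events.size()) {
            throw new IllegalStateException("concurrent modification: expected "
                + expectedVersion + " but was " + events.size());
        }
        events.add(event);
    }
}

public class OptimisticLockingSketch {
    static boolean staleAppendRejected() {
        VersionedEventStore store = new VersionedEventStore();
        long seen = store.version();          // reader observes version 0
        store.append("Deposited(100)", seen); // first writer wins
        try {
            store.append("Withdrawn(30)", seen); // stale version, rejected
            return false;
        } catch (IllegalStateException expected) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println(staleAppendRejected()); // prints true
    }
}
```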
## Step 3 - new Aggregate extraction

- Refactor to move all money-transfer-related logic to a separate aggregate
- The new aggregate will also use the Event Store
#### solution
- WIP
## Step 4 - adding CQRS

- Apply CQRS and separate the command and query sides
- The solution will use an eventual-consistency approach
#### solution
- WIP
## Step 5
- Provide an additional (persistent, not transient) implementation of the Event Store (https://geteventstore.com/)
#### solution
- WIP
| 1 |
cristianprofile/spring-boot-mvc-complete-example | spring boot mvc complete example integration and unit test with @config classes | null | ## Spring Boot Maven/Gradle Java 1.8 (Spring MVC jsp and tiles, Spring Data Rest, Jenkins 2 ready to use with full support to Maven and Gradle)
[](https://coveralls.io/r/cristianprofile/spring-boot-mvc-complete-example) [](https://travis-ci.org/cristianprofile/spring-boot-mvc-complete-example)
[](https://gitter.im/cristianprofile/spring-boot-mvc-complete-example?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge)
If you don't have Gradle or Maven on your computer, you can use the Gradle wrapper (`gradlew`) to run the project:
```
gradle wrapper (run Gradle wrapper)
After this operation you can run every Gradle command of this guide with
./gradlew xxxxxtask (Unix, Linux)
gradlew.bat XXXtask (Windows)
Example
./gradlew clean compile (Unix, Linux)
gradlew.bat clean compile (Windows)
```
You can build this project with Maven or Gradle. Here you have several snippets about how to use them:
```
mvn clean install (install jar to your local m2 )
mvn spring-boot:run (run web app modules)
gradle build (build modules)
gradle bootRun (run web app modules)
```
**Important! If you use Maven, you must first run `mvn install` on the module `mylab-parent-pom` and then `mvn install` on the module `mylab-core`.**
## Spring Boot mvc web with tiles app
Run the spring-boot-mvc-web-example module with Maven: `mvn spring-boot:run` (or, if you want to use Gradle, run `gradle bootRun` inside spring-boot-mvc-web-example), then access http://localhost:9090/pizza
with user `admin@ole.com` and password `admin@ole.com`. You can create other users with ROLE_USER via the "add user" option
in the left menu.
- Spring boot MVC with Spring Security Access
- I18n
- Responsive Bootstrap CSS with Tiles 3
- Password encoding with Bcrypt [BCRYPT password encoding](http://www.baeldung.com/spring-security-registration-password-encoding-bcrypt "BCRYPT password encoding")
- Unit testing and integration testing with the spring-boot-starter-test dependency (all dependencies such as Mockito, JUnit, etc. are transitive)
## Rest service layer with Spring Boot Mvc
If you want to access the REST service of the Spring Boot module "spring-boot-mvc", first run `mvn spring-boot:run` (or, if you want to use Gradle, run `gradle bootRun`
inside the spring-boot-mvc-rest folder):
```
- http://localhost:9090/base (get list of all bases)
- http://localhost:9090/base/1 (get base info with id=1)
- http://localhost:9090/base/1 (delete base info with id=1)
- http://localhost:9090/base (post create new base sending json info. Example "name":"rolling pizza" )
- http://localhost:9090/base (put: update an existing base sending json info. Example {"name":"rolling pizza 2","id":1})
```
When you run the Spring Boot app, Spring Actuator adds features to monitor your services:
```
- (get) http://localhost:9091/manage/metrics (Spring Boot Actuator includes a metrics service with
“gauge” and “counter” support. A “gauge” records a single value; and a “counter” records a delta
(an increment or decrement). Metrics for all HTTP requests are automatically
recorded, so if you hit the metrics endpoint you should see a sensible response.)
- (get) http://localhost:9091/manage/health (you can check if your app is available)
- (get) http://localhost:9091/manage/mappings (list of your app HTTP endpoints)
- (post) http://localhost:9091/manage/shutdown (gracefully shuts down the application)
```

More info about Spring Actuator at: [Spring Actuator](https://github.com/spring-projects/spring-boot/tree/master/spring-boot-actuator "Spring Actuator")
## Rest service layer with Spring Data Rest
Spring Data REST builds on top of Spring Data repositories, analyzes your application’s domain model and exposes hypermedia-driven HTTP resources for aggregates contained in the model.
If you want to access the REST service of the Spring Boot module "spring-boot-data-rest", first run `mvn spring-boot:run` (or, if you want to use Gradle, run `gradle bootRun`
inside spring-boot-data-rest):
```
- http://localhost:9090/api/bases to get all bases (get list of all bases)
- http://localhost:9090/api/bases (post create new base sending json info. Example "name":"rolling pizza" )
- http://localhost:9090/api/browser/index.html#/api to access the HAL Browser for your REST API, served when you visit your application's root URI in a browser.
```

More info about Spring Data Rest at: [Spring Data Rest](http://projects.spring.io/spring-data-rest/ "Spring Data Rest")
## Git commit info in Spring boot and jar maven/gradle package
Maven and Gradle allow you to generate a git.properties file containing information about the state of your Git source-code repository when the project was built.
For Maven users the spring-boot-starter-parent POM includes a pre-configured plugin to generate a git.properties file. Simply add the following declaration to your POM:
```
<build>
<plugins>
<plugin>
<groupId>pl.project13.maven</groupId>
<artifactId>git-commit-id-plugin</artifactId>
</plugin>
</plugins>
</build>
```
Gradle users can achieve the same result using the gradle-git-properties plugin:
```
plugins {
id "com.gorylenko.gradle-git-properties" version "1.4.17"
}
```
***New in Spring 1.4:***
The git-commit-id plugin shows the complete Git commit information in the Actuator's "/info" endpoint. In the YAML properties file add:
```
management:
port: 9091
info:
git:
enabled: true
mode: full
```

You can read more about configuring Spring Boot here: [Spring Boot official documentation](http://docs.spring.io/spring-boot/docs/current/reference/html/howto-build.html#howto-git-info "Spring Boot official documentation")
Spring boot app screen-shots:



## Testing Spring mvc rest model views
It can sometimes be useful to contextually filter the objects serialized to the HTTP response body.
In order to provide such capabilities, Spring MVC has built-in support for Jackson's Serialization Views (as of Spring Framework 4.2, JSON Views are supported on @MessageMapping handler methods as well).
Model view Summary/Internal
```
package com.mylab.cromero.controller.view;

public class View {
    public static class Summary {}
    public static class Internal extends Summary {}
}
```
Json View model
```
package com.mylab.cromero.controller.view;

public class Message {

    @JsonView(View.Summary.class)
    private Long id;

    @JsonView(View.Summary.class)
    private String name;

    @JsonView(View.Internal.class)
    private String title;

    private String body;
}
```
An Example controller named "MessageController" has been created to be able to test this Spring feature (Spring boot mvc rest module)
[Message controller info](/spring-boot-mvc-rest/src/main/java/com/mylab/cromero/controller/MessageController.java#L32)

Screen-shots url view controller test:
Summary controller test: (http://localhost:9090/message/summary)

Internal controller test:(http://localhost:9090/message/internal)

Full controller test: (http://localhost:9090/message/full)

[Additional Spring official example](https://spring.io/blog/2014/12/02/latest-jackson-integration-improvements-in-spring "Additional Spring official example")
## Jenkins 2 support with jenkins file
Jenkins 2 automatic multibranch plugin mode with JenkinsFile file in main directory. More interesting information about new Jenkins 2 Pipeline script configuration at:
- [DZONE refcard jenkins pipeline](https://dzone.com/refcardz/continuous-delivery-with-jenkins-workflow "DZONE refcard jenkins pipeline")
- [Github examples](https://github.com/jenkinsci/pipeline-examples "Github examples")
Docker integration in feature branch called: docker_container_jenkins
- [Docker container feature branch](https://github.com/cristianprofile/spring-boot-mvc-complete-example/blob/feature/docker_container_jenkins/Jenkinsfile "Run IC in a Docker container")

## ELK SUPPORT IN WEB APP MODULE(Elasticsearch/Kibana/Logstash)
First of all you need an ELK stack installed on your machine. The easiest way is to use a Docker image (https://hub.docker.com/r/nshou/elasticsearch-kibana/):
- Start your container with Kibana and ElasticSearch.
- Edit spring-boot-mvc-web/src/main/resources/logstash/logstash-spring-boot-json.conf with your elasticsearch port
- Download Logstash and run the logstash command from the web app's root folder: "./logstash -vf spring-boot-mvc-web/src/main/resources/logstash/logstash-spring-boot-json.conf --debug"
- Run the Spring Boot web app: gradle bootRun or mvn spring-boot:run. Now your app will create 2 log files in the tmp folder: spring-boot-mvc.log and spring-boot-mvc.log.json
- Logstash monitors the .json file and creates a new document in Elasticsearch for each new line
- Go to your Kibana URL: it should be running at http://localhost:32771/.
First, you need to point Kibana to the Elasticsearch index(es) of your choice. Logstash creates indices with the name pattern logstash-YYYY.MM.DD. In Kibana Settings → Indices, configure the indices:
1. Index contains time-based events (select this option)
2. Use event times to create index names (select this option)
3. Index pattern interval: Daily
4. Index name or pattern: [logstash-]YYYY.MM.DD
5. Click on "Create Index"
6. Now click on "Discover" tab.
In my opinion, the "Discover" tab is really named incorrectly in Kibana - it should be labeled "Search" instead of "Discover", because it allows you to perform new searches and also to save/manage them. Log events should be showing up now in the main window. If they're not, double-check the time-period filter in the top-right corner of the screen. The table will have 2 columns by default: Time and _source. In order to make the listing more useful, we can configure the displayed columns. From the menu on the left, select level, class and logmessage.
Link to youtube video demo:
[](https://youtu.be/A64aO6_d8rw)
ScreenShots Images:




Additional info on ELK and Spring Boot: [Additional info ELK and Spring Boot](https://blog.codecentric.de/en/2014/10/log-management-spring-boot-applications-logstash-elastichsearch-kibana/ "Additional info ELK and Spring Boot")
Kibana Lucene query language syntax: [Kibana Lucene Query Language Syntax](https://www.elastic.co/guide/en/beats/packetbeat/current/_kibana_queries_and_filters.html "Kibana Lucene Query Language Syntax")
## Config logback with Spring boot web app modules
Logging in the Spring Boot web modules has been configured with Logback. Spring Boot supports profiles to set variables in logback-spring.xml:
```
<springProfile name="develop">
<logger name="com.mylab.cromero" level="DEBUG"/>
</springProfile>
```
If you want to log debug messages in our example app's package, you must use these commands with Maven/Gradle to activate the develop profile:
```
gradle -Dspring.profiles.active=develop bootRun
mvn -Dspring.profiles.active=develop spring-boot:run
```
Additional Spring Boot documentation: [Additional Spring Boot documentation](http://docs.spring.io/spring-boot/docs/current/reference/html/boot-features-logging.html "Additional Spring Boot documentation")
| 1 |
umermansoor/hadoop-java-example | A very simple example of using Hadoop's MapReduce functionality in Java. | null | ## Hadoop Map-Reduce Example in Java
**Get up and running in less than 5 minutes**
### Overview
This program demonstrates Hadoop's Map-Reduce concept in Java using a very simple example. The input is raw data files listing earthquakes by region, magnitude and other information.
> nc,71920701,1,”Saturday, January 12, 2013 19:43:18 UTC”,38.7865,-122.7630,**1.5**,1.10,27,**“Northern California”**
The fields in bold are magnitude of the quake and name of region where the reading was taken, respectively. The _goal_ is to process all input files to find the maximum magnitude quake reading for every region listed. The output is in the form:
"region_name" <maximum magnitude of earthquake recorded>
The raw data files are in the `input/` folder.
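The heart of the job, keeping the maximum magnitude per region, can be sketched in plain Java without any Hadoop dependency. The class and method names below are illustrative (the real reducer receives values already grouped by key from the framework), but the aggregation logic is the same:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Plain-Java sketch of the MapReduce job's core logic: the mapper emits
// (region, magnitude) pairs and the reducer keeps the max per region.
public class MaxMagnitudeSketch {
    static Map<String, Double> maxPerRegion(List<String[]> readings) {
        Map<String, Double> max = new HashMap<>();
        for (String[] r : readings) {              // r = {region, magnitude}
            double magnitude = Double.parseDouble(r[1]);
            max.merge(r[0], magnitude, Math::max); // reduce step: keep the max
        }
        return max;
    }

    public static void main(String[] args) {
        Map<String, Double> result = maxPerRegion(List.of(
            new String[]{"Northern California", "1.5"},
            new String[]{"Northern California", "2.9"},
            new String[]{"Alaska", "4.1"}));
        System.out.println(result);
    }
}
```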
### Instructions for Setting Up Hadoop
1. Download Hadoop 1.1.1 binary. [Mirror](http://mirror.csclub.uwaterloo.ca/apache/hadoop/common/hadoop-1.1.1/hadoop-1.1.1.tar.gz)
2. Extract it to a folder on your computer:
$ tar xvfz hadoop-1.1.1.tar.gz
3. Set up the JAVA_HOME environment variable to point to the directory where Java is installed. On my Mac OS X, I did the following:
$ export JAVA_HOME=/System/Library/Frameworks/JavaVM.framework/Versions/1.6.0/Home
Note: If you are running Lion, you may want to set JAVA_HOME using the `java_home` command, which outputs Java's home directory, that is,
$ export JAVA_HOME=$(/usr/libexec/java_home)
4. Set up the HADOOP_INSTALL environment variable to point to the directory where you extracted the Hadoop binary in step 2:
$ export HADOOP_INSTALL=/Users/umermansoor/Documents/hadoop-1.1.1
5. Edit the PATH environment variable:
$ export PATH=$PATH:$HADOOP_INSTALL/bin
> Or you can add these variables to your standard shell script. For example, checkout my Mac OSX's [`~/.bash_profile`](https://gist.github.com/4525814)
### Instructions for Running the Sample
1. Clone the project:
$ git clone git@github.com:umermansoor/hadoop-java-example.git
2. Change to the project directory:
$ cd hadoop-java-example
3. Build the project:
$ mvn clean install
4. Set up the HADOOP_CLASSPATH environment variable to tell Hadoop where to find the Java classes for the sample:
$ export HADOOP_CLASSPATH=target/classes/
5. Run the sample. The `output` directory shouldn't exist, otherwise this will fail.
$ hadoop com.umermansoor.App input/ output
> Note: the output will go to the `output/` folder which Hadoop will create when run. The output will be in a file called `part-r-00000`.
### Common Errors:
1. Exception: java.lang.NoClassDefFoundError
Cause: You didn't set up the HADOOP_CLASSPATH environment variable. You need to tell Hadoop where to find the Java classes.
Resolution: In this case, execute the following to set up the HADOOP_CLASSPATH variable to point to the `target/classes/` folder.
$ export HADOOP_CLASSPATH=target/classes/
2. Exception: org.apache.hadoop.mapred.FileAlreadyExistsException or 'Output directory output already exists'.
Cause: The output directory already exists. Hadoop requires that the output directory doesn't exist when run.
Resolution: Change the output directory or remove the existing one:
$ hadoop com.umermansoor.App input/input.csv output_new
> Note: Hadoop failing if the output folder already exists is a good thing: it ensures that you don't accidentally overwrite your previous output, as typical Hadoop jobs take hours to complete.
| 1 |
thundergolfer/example-bazel-monorepo | 🌿💚 Example Bazel-ified monorepo, supporting Golang, Java, Python, Scala, and Typescript | bazel bazel-monorepo blaze buck build-tool platform-engineering | <h1 align="center">Example Bazel Monorepo</h1>
<p align="center">
<a href="https://buildkite.com/thundergolfer-inc/the-one-true-bazel-monorepo">
<img src="https://badge.buildkite.com/aa36b75077a5c69156bc143b32c8c2db04c4b20b8706b8a99b.svg?branch=master">
</a>
</p>
----
> *Note:* Currently supporting the latest Bazel version as at mid June 2021, [4.1.0](https://github.com/bazelbuild/bazel/releases/tag/4.1.0)
Example Bazel-ified monorepo, supporting *Golang*, *Java*, *Python*, *Scala*, and *Typescript*.
Cloud Infrastructure-as-Code is done using _Terraform_.
I use this project to explore how Bazel works with different languages and
developer tools, and to keep a record of best practices I've learnt, so it is a work in progress.
Others can use it to check out the Bazel way of doing things and use parts
as a reference implementation.
Rather than the typical To-Do list, this project's code uses the contrived scenario of a book shop and reading catalogue website called *Antilibrary*. 📗📕📒📚
## Getting Started
#### Prerequisites:
- [**Install Bazel**](https://docs.bazel.build/versions/master/install.html) (Currently supporting ~= `4.x.x`)
- **Python 2 or 3**. Should only be required to [do some bootstrapping under-the-hood](https://github.com/bazelbuild/bazel/issues/8446).
- [**`yarn`**](https://yarnpkg.com/) or [**`npm`**](https://www.npmjs.com/) for the NodeJS and Typescript code
Bazel aims to be a 'build anything, anywhere' system, so building and testing should be as simple as `bazel test //...`. If it's not, please [create an issue](https://github.com/thundergolfer/example-bazel-monorepo/issues/new/choose).
## Why use a Monorepo?
The following few articles together provide a good overview of the
motivations behind maintaining a Monorepo. For heaps more information,
[korfuri/awesome-monorepo](https://github.com/korfuri/awesome-monorepo)
is a good place to go.
* [*Why Google Stores Billions of Lines in a Single
Repository*](http://delivery.acm.org/10.1145/2860000/2854146/p78-potvin.pdf?ip=60.240.50.147&id=2854146&acc=OA&key=4D4702B0C3E38B35%2E4D4702B0C3E38B35%2E4D4702B0C3E38B35%2E5945DC2EABF3343C&__acm__=1558760299_19ae56a814d1fe05de26b4844a658e52)
* [*Monorepos, please
do!*](https://medium.com/@adamhjk/monorepo-please-do-3657e08a4b70), by
Adam Jacob, former CTO of [Chef](https://www.chef.io/)
* [*Repo Style Wars: Mono vs. Multi*](https://gigamonkeys.com/mono-vs-multi/), by Peter Seibel
* [*Advantages of Monorepos*](https://danluu.com/monorepo/), by Dan Luu
### Related Projects
* [github.com/lucperkins/colossus](https://github.com/lucperkins/colossus) - A demo using Bazel in monorepo fashion. Compared with this project, it goes far deeper on microservice architecture components and Kubernetes, and is not focused on Bazel.
* [github.com/enginoid/monorepo-base](https://github.com/enginoid/monorepo-base) - Employs Bazel, gRPC, and Kubernetes like the above, and is similarly not as broad and deep on Bazel as this project.
## Project Structure
### *Golang* Support
There's Golang code in [`/cli`](/cli). It implements a simple CLI for the common 'Blind Date With a 📖' product.
##### Dependency Management
Third-party dependencies are managed in [`3rdparty/go_workspace.bzl`](/3rdparty/go_workspace.bzl).
### *Java* Support
There's a [Spring Boot](https://spring.io/projects/spring-boot) (with PostGres) application in [`/store-api`](/store-api) and some other Java code in [`/store/layoutsolver`](/store/layoutsolver).
##### Dependency Management
Its third-party dependencies are managed by [`rules_jvm_external`](https://blog.bazel.build/2019/03/31/rules-jvm-external-maven.html) in the [`WORKSPACE`](/WORKSPACE) (See the `# JAVA SUPPORT` section).
### *Scala* Support
There's Scala code contained in [`scala-book-sorting`](/scala-book-sorting).
##### Dependency Management
Its third-party dependencies are managed
by [`johnynek/bazel-deps`](https://github.com/johnynek/bazel-deps). The usage of that tool is wrapped up in a script
as [`tools/update_jvm_dependencies.sh`](tools/update_jvm_dependencies.sh).
To use it, you update [`tools/dependencies/jvm_dependencies.yaml`](tools/dependencies/jvm_dependencies.yaml) and then run the script.
### *Python* Support
There's Python code in the [`/book_sorting`](/book_sorting) and [`/scraping`](/scraping).
[`bazelbuild/rules_python`](https://github.com/bazelbuild/rules_python) is used for the core `py_*` rules.
##### Dependency Management
In order to add new third-party packages for Python, add them to [`3rdparty/requirements.in`](/3rdparty/requirements.in) and run `bazel run //3rdparty:requirements.update`.
##### Gradual Type-Checking (MyPy)
[thundergolfer/bazel-mypy-integration](https://github.com/thundergolfer/bazel-mypy-integration) is used to check any type annotations at `bazel build` time.
### Infrastructure-as-Code (Hashicorp Terraform)
The [`infrastructure/`](/infrastructure) top-level folder contains Terraform defining various AWS resources and their configuration.
----
## Development
### Build
`bazel build //...`
### Testing
`bazel test //...`
### Continuous Integration (CI)
This repository's CI is managed by [Buildkite](https://buildkite.com), the CI platform used by Pinterest and Canva to manage Bazel monorepos,
as well as being [used by the Bazel open-source project itself](https://buildkite.com/bazel).
### Deployment & Distribution
Deployable artifacts are pushed to S3 under commit-hash-versioned keys.
Currently only the `store-api` deploy/fat JAR is deployable.
[`graknlabs/bazel-distribution`](https://github.com/graknlabs/bazel-distribution) is used to publish Python packages to PyPi.
### Build Observability + Analysis
This project uses [Buildbuddy.IO](https://buildbuddy.io/). Every build, run locally or in CI, gets its own `https://app.buildbuddy.io/invocation/xyz123...` URL, which records and analyses the build's information.
### Linting
[thundergolfer/bazel-linting-system](https://github.com/thundergolfer/bazel-linting-system) is used. [`./tools/linting/lint.sh`](tools/linting/lint.sh) will lint all source-code in the repo and [`./tools/linting/lint_bzl_files.sh`](tools/linting/lint_bzl_files.sh) will lint all Bazel files.
| 1 |
interseroh/demo-gwt-springboot | Simple Example WebApp for GWT and Spring Boot | null | # demo-gwt-springboot
## Build Status
[](https://travis-ci.org/interseroh/demo-gwt-springboot)
## Table of Contents
- [Demo in Heroku](#demo-in-heroku)
- [Introduction](#introduction)
- [Architecture](#architecture)
- [Model for Services and Domains](#model-for-services-and-domains)
- [Architecture](#architecture)
- [Mock Mechanism](#mock-mechanism)
- [Run the WebApp for Development](#run-the-webapp-for-development)
- [Server: Start the WebApp with Spring Boot](#server-start-the-webapp-with-spring-boot)
- [Client: Start GWT SuperDev Mode transpiler](#client-start-gwt-superdev-mode-transpiler)
- [Browser: Call the WebApp demo-gwt-springboot from a web browser](#browser-call-the-webapp-demo-gwt-springboot-from-a-web-browser)
- [Heroku: Test the Webapp from Heroku](#heroku-test-the-webapp-from-heroku)
- [Logging](#logging)
- [Server: Logging at the Spring Boot Console](#server-logging-at-the-spring-boot-console)
- [Client: Logging at the Browser Console](#client-logging-at-the-browser-console)
- [Debugging](#debugging)
- [Server: Debugging Spring Boot](#server-debugging-spring-boot)
- [Client: Debugging with GWT SuperDev Mode](#client-gwt-debugging-with-gwt-superdev-mode)
- [Client: Debugging with Eclipse SDBG](#client-gwt-debugging-with-eclipse-sdbg)
- [Client: Debugging with IntelliJ IDEA](#client-gwt-debugging-with-intellij-idea)
- [Unit and Integration Testing](#unit-and-integration-testing)
- [Server: Spring Test](#server-spring-test)
- [Client: GWT Mockito](#client-gwt-mockito)
## Demo in Heroku
- [Spring GWT Demo in Heroku](https://demo-gwt-springboot.herokuapp.com/demogwt/index.html)
## Introduction
This is an example Maven project for the following frameworks:
- User Interfaces (Client):
- GWT
- GWTBootstrap3 for the UI
- RestyGWT for the RESTful access to backend services
- GIN for Dependency Injection
- GWT Event Binder for event bus
- GWT Mockito for UI logic test
- Controllers and Services (Server):
- KissMDA
- Spring Boot for business logic implementations
- All the standard stuff used with the Spring Framework
- Domains (Server):
- KissMDA
- JPA with Hibernate
The idea of this project is to offer a simple application template
for the frameworks mentioned above. If you need a more sophisticated GWT application
framework, you can use one of the following:
- ArcBees GWT-Platform: Model-View-Presenter Framework for GWT
- JBoss Errai Framework
- Sencha GXT
The development is based on Maven so this project can be used with Eclipse, IntelliJ or NetBeans.
## Architecture
### Model for Services and Domains
There are two services, *UserService* and *PersonService*, and two entities, *Person* and *Address*. The following diagram shows the structure of the services and the domains.

### Architecture
The following diagram shows the architecture of the **Microservice Demo**.
The naming of the packages *client*, *mock*, *server*, *shared* and *resource* (not shown in diagram) is based on this architecture.

#### Client
All the GWT (UI and REST client) classes should be located in this package. GWT transpiles all the Java sources into JavaScript sources.
#### Mock
The package consists of the mock implementations of the REST services on the client side (GWT). Instead of calling the real REST services,
it creates mock data. For this purpose you can use the *development-mock* Maven profile, which compiles the mock package
and uses the mock implementations to handle the services. If you want to call the real REST services, use the *development* profile
and the GWT transpiler will leave out the mock part. Please take a look at the mock mechanism below.
#### Shared
In this package you can put any classes which will be used on both sides: client and server. It is advisable to put the *constants* and *endpoints* of the RESTful services here so that client and server point to the same addresses. The *DTOs* (Data Transfer Objects) for the RESTful services should also be included in this package. GWT transpiles this package into JavaScript sources.
#### Server
All the *controller*, *service*, *repository* and *domain* classes - based on the Spring Framework - should reside in this package. This package will __not be included__ in the GWT transpilation.
#### Resource
All the themes for GWTBootstrap3 and general Bootstrap themes like Bootswatch should be located in this package.
You can take a look at the GWT [configuration file](https://github.com/lofidewanto/demo-gwt-springboot/blob/master/src/main/resources/com/lofidewanto/demo/DemoGwt.gwt.xml) to see which packages are included in the GWT transpilation.
### Mock Mechanism
The idea is to be able to develop the UI without any dependency on the functionality of the REST API. We should be
able to mock the data that comes from the REST API.
Following points are important to know:
- Each REST API call is first declared in a plain POJO interface, so this interface does not
extend the RestyGWT `RestService` interface. Example: [UserClient.java](https://github.com/interseroh/demo-gwt-springboot/blob/master/src/main/java/com/lofidewanto/demo/client/domain/UserClient.java).
- The mock implementation is [MockUserClient.java](https://github.com/interseroh/demo-gwt-springboot/blob/master/src/main/java/com/lofidewanto/demo/mock/domain/MockUserClient.java)
and the real implementation is [RestUserClient.java](https://github.com/interseroh/demo-gwt-springboot/blob/master/src/main/java/com/lofidewanto/demo/client/domain/RestUserClient.java).
- We also need to do the same thing for the [ServicePreparator](https://github.com/interseroh/demo-gwt-springboot/blob/master/src/main/java/com/lofidewanto/demo/client/common/ServicePreparator.java) class.
- We need a Maven profile `development-mock`. In this profile we call a special GWT module file which will be used to transpile the Java code:
- Maven [pom.xml](https://github.com/interseroh/demo-gwt-springboot/blob/master/pom.xml) with a profile: `development-mock`.
- GWT Module [DemoGwtDevelopmentMock](https://github.com/interseroh/demo-gwt-springboot/blob/master/src/main/resources/com/lofidewanto/demo/DemoGwtDevelopmentMock.gwt.xml).
In this GWT module file we define the source path so that the `mock` package gets transpiled.
We also define which EntryPoint class (`DemoGwtMockEntryPoint`) we want to use in this profile:
- Real EntryPoint: [DemoGwtEntryPoint.java](https://github.com/interseroh/demo-gwt-springboot/blob/master/src/main/java/com/lofidewanto/demo/client/DemoGwtEntryPoint.java)
- Mock EntryPoint: [DemoGwtMockEntryPoint.java](https://github.com/interseroh/demo-gwt-springboot/blob/master/src/main/java/com/lofidewanto/demo/mock/DemoGwtMockEntryPoint.java)
- In the Dependency Injection Gin module we instantiate the correct implementation
for the ["real - DemoGwtGinModule.java"](https://github.com/interseroh/demo-gwt-springboot/blob/master/src/main/java/com/lofidewanto/demo/client/DemoGwtGinModule.java)
or for the ["mock - DemoGwtMockWebAppGinModule.java"](https://github.com/interseroh/demo-gwt-springboot/blob/master/src/main/java/com/lofidewanto/demo/mock/DemoGwtMockWebAppGinModule.java).
With this mechanism we can develop the UI very quickly, without having to wait for the REST API to
be implemented.
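The pattern above can be sketched in plain Java. This is a minimal, hypothetical sketch with simplified names; the real interfaces and implementations live in the `client` and `mock` packages linked above:

```java
// Sketch of the mock-vs-real client pattern (hypothetical simplified names).
interface UserClient {                        // plain POJO interface -- no RestyGWT dependency
    String getUserName(String id);
}

class MockUserClient implements UserClient {  // compiled only in the development-mock profile
    @Override
    public String getUserName(String id) {
        return "Mock User " + id;             // canned data instead of an HTTP call
    }
}
```

The Gin module then binds `UserClient` either to the mock or to the RestyGWT-backed implementation, so the rest of the UI code never knows which one it is talking to.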
## Run the WebApp for Development
### Server: Start the WebApp with Spring Boot
Just run the class *DemoGwtSpringbootApplication*, or, if you are using Spring Tool Suite, run it from the Spring Boot Dashboard:

#### Tips and Tricks
##### JRebel
- If you are using JRebel you need to put the following parameter in the VM arguments, something like:
```
-javaagent:C:\progjava\jrebel\jrebel.jar
```
or the newer version of JRebel
```
-agentpath:C:\progjava\jrebel\lib\jrebel64.dll
```

- You also have to comment out the Spring Boot Dev Tools dependency in pom.xml.
```xml
<!-- Use this Spring Tool for restarting the app automatically -->
<!-- Only use this if you don't use JRebel! -->
<!--
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-devtools</artifactId>
</dependency>
-->
```
- To be able to generate the *rebel.xml* you need to compile the project with Maven profile *development*.

##### Spring Boot Dev Tools
Spring Boot Dev Tools restarts the Spring Boot app automatically if your code has changed.
You have to deactivate JRebel if you want to use this tool. This Spring Boot Dev Tools dependency should be activated:
```xml
<!-- Use this Spring Tool for restarting the app automatically -->
<!-- Only use this if you don't use JRebel! -->
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-devtools</artifactId>
</dependency>
```
### Client: Start GWT SuperDev Mode transpiler
To be able to test quickly you can use GWT SuperDev Mode. With this tool you can recompile changes to the GWT Java code into JavaScript without restarting the process.
Follow these steps:
#### Starting GWT SuperDev Mode
Start the GWT SuperDev Mode compiler from the command line or within the development environment with Maven:
```bash
mvn -P development gwt:run-codeserver
```
To start with Mock:
```bash
mvn -P development-mock gwt:run-codeserver
```
At the end you should see the following message:
```
...
[INFO] The code server is ready at http://localhost:9876/
...
```
#### Bookmark *Dev Mode On*
Now you can go to the given address and bookmark the *Dev Mode On* button by *dragging and dropping* it into your bookmarks bar.

That's it. You can just push *Dev Mode On* to run the transpiler directly and the WebApp will be reloaded automatically.

### Browser: Call the WebApp demo-gwt-springboot from a web browser
Go to the application URL with a web browser:
```
http://localhost:9014/demogwt/index.html
```
or just
```
http://localhost:9014/demogwt
```
### Heroku: Test the Webapp from Heroku
The webapp is installed at Heroku PaaS and you can test it from this address:
[Demo Webapp](https://demo-gwt-springboot.herokuapp.com/demogwt/)
## Logging
The GWT logging is activated (see [configuration file](https://github.com/lofidewanto/demo-gwt-springboot/blob/master/src/main/resources/com/lofidewanto/demo/DemoGwt.gwt.xml)) at both sides: Client and Server.
### Server: Logging at the Spring Boot Console

### Client: Logging at the Browser Console

## Debugging
### Server: Debugging Spring Boot
Debugging the Spring Boot part can be achieved easily by starting Spring Boot in debug mode.
### Client GWT: Debugging with GWT SuperDev Mode
You need to update the following file: [configuration file for development](https://github.com/lofidewanto/demo-gwt-springboot/blob/master/src/main/resources/com/lofidewanto/demo/DemoGwtDevelopment.gwt.xml)
```xml
<!-- Compiler agent - we only need to compile for one web browser in development -->
<!-- If you want to use SDBG for debugging you need to use Chrome == safari -->
<set-property name="user.agent" value="safari" />
```
For all debugging purposes you need to use Google Chrome as your browser.
### Client GWT: Debugging with Eclipse SDBG
Debugging the GWT part with Eclipse should be done by using [SDBG](https://sdbg.github.io/).
**Tips and Tricks for Optimizing Transpiler Speed**
There are two GWT configuration files: [_DemoGwtDevelopment.gwt.xml_](https://github.com/lofidewanto/demo-gwt-springboot/blob/master/src/main/resources/com/lofidewanto/demo/DemoGwtDevelopment.gwt.xml) and [_DemoGwt.gwt.xml_](https://github.com/lofidewanto/demo-gwt-springboot/blob/master/src/main/resources/com/lofidewanto/demo/DemoGwt.gwt.xml).
- _DemoGwtDevelopment.gwt.xml_: this config is used to make the GWT compile process faster. It only compiles for one web browser and uses INFO as the logging level.
- _DemoGwt.gwt.xml_: this config is used for production transpiling and is optimized for production purposes.
### Client GWT: Debugging with IntelliJ IDEA
For debugging GWT with IntelliJ IDEA, follow these steps.
#### Prerequisites
- JetBrains IntelliJ 2016 Ultimate (Community doesn't support it)
- Chrome browser
- [JetBrains IDE Support Chrome Browser Plugin](https://chrome.google.com/webstore/detail/jetbrains-ide-support/hmhgeddbohgjknpmjagkdomcpobmllji)
- Enabled GWT Plugin in IntelliJ
#### Overview
The following diagram shows the different parts of the setup:

#### Step by step
##### Open Project in IntelliJ

After this the project is loaded and the `DemoGwtSpringbootApplication` will be added to the `RunConfigurations` automatically.

##### Configure Web Facet
Open the `Project Structure` dialog from the `File` menu

Then add a `Web Facet` to the project under `Facets`

Add the facet to the `demo-gwt-springboot` module:

The path must be set to `src/main/resources/public` and the context must be `/demogwt`.
**Important**
Do not add the web.xml to git. Just ignore it.

###### Do not generate Artifacts

Close the `Project Structure` with `Ok` and reopen it. Now the `Web Facet` can be selected in the GWT Module.

After this you should select only the GWT Module `DemoGwtDevelopment`

##### GWT Configuration
Add a new Run Configuration

And a GWT Configuration:

After this, start the "Spring Boot Project" first, followed by the "GWT-Project" in debug mode.
##### Codeserver
Now you have to repeat the steps to configure the code server (see above).
##### Running the debugger with the IDE Support Plugin
You should see the alert that the »JetBrains IDE Support« is running in debug mode.
If you have any trouble connecting the browser with the IDE, please check the ports of the browser plugin and IntelliJ.
Right-click on the Live Edit extension and choose Options:

The default port is `63342`.

And check that the debugger port in IntelliJ IDEA is configured to the same value.

## Unit and Integration Testing
### Server: Spring Test
Examples of unit tests with POJOs and Mockito:
- [PersonImplTest.java](https://github.com/interseroh/demo-gwt-springboot/blob/master/src/test/java/com/lofidewanto/demo/server/domain/PersonImplTest.java)
- [PersonServiceImplTest.java](https://github.com/interseroh/demo-gwt-springboot/blob/master/src/test/java/com/lofidewanto/demo/server/service/person/PersonServiceImplTest.java)
Examples of integration tests with Spring and an in-memory database:
- [PersonServiceImplIT.java](https://github.com/interseroh/demo-gwt-springboot/blob/master/src/test/java/com/lofidewanto/demo/server/service/person/PersonServiceImplIT.java)
### Client: GWT Mockito
We use GWT Mockito for writing the GWT user-interface unit tests. The following are examples of GWT Mockito unit tests:
- [MainPanelViewTest.java](https://github.com/interseroh/demo-gwt-springboot/blob/master/src/test/java/com/lofidewanto/demo/client/ui/main/MainPanelViewTest.java)
- [PersonPanelViewTest.java](https://github.com/interseroh/demo-gwt-springboot/blob/master/src/test/java/com/lofidewanto/demo/client/ui/person/PersonPanelViewTest.java)
| 1 |
runabol/spring-security-passwordless | Passwordless authentication example application using Spring Boot and Spring Security | apache2 java passwordless-login springboot springsecurity | # Introduction
We all have a love/hate relationship with passwords. They protect our most valuable assets but they are so god damn hard to create and remember.
And just to make things even harder for us humans, more and more companies are now enforcing two-factor authentication (you know, the little phone pincode thing), making it even more complicated to log in to our accounts.
Despite advances in biometric authentication (fingerprint, face recognition etc.), passwords still remain the most ubiquitous form of authentication.
So what can we do to help our fellow users to access our application in an easier manner but without compromising security?
This is where passwordless login comes in.
How does it work?
If you ever went to a website, realized you forgot your password and then used their "Forgot Password" feature, then you already know what passwordless login is.
After you entered your email address on the Reset Password page, you were sent a "magic" link with a special code (a.k.a. "token") embedded in it which let you reset your password.
That website piggy-backed on your already-password-protected email address to create a secure, one-time-password "magic" link to your account.
Well, if we can do all that in a presumably safe way when a user loses their password, why can't we do it whenever a user wants to log in? Sure we can.
Oh, and just in case you're wondering, some big-name companies (Slack, Medium.com, Twitter) are already using this method of authentication.
Alright, let's get down to business then.
# The nitty gritty
1. Create a [sign-up/sign-in page](https://github.com/creactiviti/spring-security-passwordless/blob/master/src/main/resources/templates/signin.html). It basically needs only one field: email.
```html
<input type="email" name="email" class="form-control" placeholder="Email address" required autofocus>
```
2. Create an [endpoint](https://github.com/creactiviti/spring-security-passwordless/blob/master/src/main/java/com/creactiviti/spring/security/passwordless/web/SigninController.java#L35) to handle the form submission:
```java
private final TokenStore tokenStore;
private final Sender sender;
@PostMapping("/signin")
public String signin (@RequestParam("email") String aEmail) {
// verify that the user is in the database.
// ...
// create a one-time login token
String token = tokenStore.create(aEmail);
// send the token to the user as a "magic" link
sender.send(aEmail, token);
return "login_link_sent";
}
```
3. Create an [endpoint](https://github.com/creactiviti/spring-security-passwordless/blob/master/src/main/java/com/creactiviti/spring/security/passwordless/web/SigninController.java#L48) to authenticate the user based on the "magic" link:
```java
private final Authenticator authenticator;
@GetMapping("/signin/{token}")
public String signin (@RequestParam("uid") String aUid, @PathVariable("token") String aToken) {
try {
authenticator.authenticate(aUid, aToken);
return "redirect:/";
}
catch (BadCredentialsException aBadCredentialsException) {
return "invalid_login_link";
}
}
```
And that's about it.
# Securing the "magic" link.
There are a few precautions you should take to keep the "magic" link as secure as possible:
1. When sending the link to the user, communicate with your email server over SSL.
2. Tokens should only be usable once.
3. Tokens should not be easily guessable. Use a good, cryptographically strong random number generator, e.g.:
```java
SecureRandom random = new SecureRandom();
byte bytes[] = new byte[TOKEN_BYTE_SIZE];
random.nextBytes(bytes);
String token = String.valueOf(Hex.encode(bytes));
```
4. Tokens should expire after a reasonable amount of time (say 15 minutes). In this example I use an in-memory `TokenStore` implementation backed by a `SelfExpiringHashMap` which, as its name suggests, expires entries after a given amount of time. In a real-world scenario you will most likely use a database to store your generated tokens so your website can run on more than one machine and so the tokens survive a crash. But the principle is the same. You can have a `created_at` field which stamps the time the token was created so you can determine whether it has expired.
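As a rough sketch of that `created_at` approach (a hypothetical class, not part of this repo), the expiry check could look like this:

```java
import java.time.Duration;
import java.time.Instant;

// Hypothetical database-backed token record: the token is only valid
// within TTL of its created_at timestamp.
class TokenRecord {
    static final Duration TTL = Duration.ofMinutes(15);

    final String token;
    final Instant createdAt;

    TokenRecord(String token, Instant createdAt) {
        this.token = token;
        this.createdAt = createdAt;
    }

    // True once more than TTL has elapsed since creation.
    boolean isExpired(Instant now) {
        return now.isAfter(createdAt.plus(TTL));
    }
}
```

On lookup you would load the record by token, reject it if `isExpired(Instant.now())` returns true, and delete it after a successful use so it can only be used once.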
# Running the demo
1. Clone the repo:
```bash
git clone https://github.com/creactiviti/spring-security-passwordless.git
```
2. Build
```bash
mvn clean spring-boot:run -Dspring.mail.host=<SMTP HOST> -Dspring.mail.username=<SMTP USERNAME> -Dspring.mail.password=<SMTP PASSWORD> -Dpasswordless.email.from=<SENDER EMAIL ADDRESS>
```
3. Sign-in
Go to [http://localhost:8080/signin](http://localhost:8080/signin)
# License
Apache License version 2.0.
| 1 |
lomza/AppBar-ScrollFlags-Example | Examples of AppBarLayout's usage of layout_scrollFlags attribute | null | # AppBar-ScrollFlags-Example
Examples of AppBarLayout's usage of layout_scrollFlags attribute

| 0 |
thegreystone/java-svc | Java serviceability examples. Includes simple example apps for jmc, jfr, attach, jmx, jplis, jdi and perfcounters. | hacktoberfest hacktoberfest2021 | # java-svc
This repository contains a set of small examples that can be used to demonstrate various popular Java serviceability technologies. The examples are focused on making it easy to get going with the various serviceability technologies.
Note that there are already technology demonstrators for most technologies among the standard Java demos. The demos in this repository, however, focus on making it easier to get started. The examples are easy to build and run, and they are easily digested. Everyone should dare to experiment with these, even relatively inexperienced developers. Not to mention that I needed examples that fit on a slide for a talk. ;)
## Prerequisites
All projects can be built with JDK 11, and most will build on JDK 8 as well.
You will also need to have Maven 3.5.3+ installed.
## Building
To build all the projects in one go, ensure that you are using JDK 11, and simply run:
```bash
mvn package
```
The projects can also be built individually by entering the subprojects. Some projects may require a `mvn install` of a dependent project before they can be built in this way.
## Running the Projects
Check the README.md files in the subfolders for instructions on how to run the examples.
| 1 |
qct/swagger-example | Introduction and Example for OpenAPI specification & Swagger Open Source Tools, including swagger-editor, swagger-codegen and swagger-ui. Auto generation example for client SDKs, server code, asciidoctor and html documents. | asciidoc asciidoctor asciidoctor-converter asciidoctor-pdf spring-boot springfox swagger swagger-api swagger-codegen swagger-docs swagger-editor swagger-generator swagger-spec swagger-specification swagger-ui swagger2 swagger2markup | # Swagger Introduction & Examples
- [Quick Start](#quick-start)
- [OpenAPI & Swagger](#openapi--swagger)
* [OpenAPI](#openapi)
* [Swagger](#swagger)
* [Why Use OpenAPI?](#why-use-openapi)
- [Introduction to OpenAPI Specification](#introduction-to-openapi-specification)
* [Basic Structure](#basic-structure)
* [Metadata](#metadata)
* [Base URL](#base-url)
* [Consumes, Produces](#consumes-produces)
* [Paths](#paths)
* [Parameters](#parameters)
* [Responses](#responses)
* [Input and Output Models](#input-and-output-models)
* [Authentication](#authentication)
- [Introduction to Swagger Open Source Tools](#introduction-to-swagger-open-source-tools)
* [Swagger Editor](#swagger-editor)
* [Swagger Codegen](#swagger-codegen)
* [Swagger UI](#swagger-ui)
- [asciidoctor](#asciidoctor)
[中文版本 / Chinese version](README.zh-CN.md)
## Quick Start
1. Install: after cloning the repo, execute the command below in the root directory:
```bash
swagger-server/bin/install.sh
```
This will produce client SDKs, server code, and asciidoc and html documents, laid out like this:
```
+---asciidoc //asciidoc document
+---client //auto Generated client SDKs
| +---go //-- client SDK in go programming language
| +---html2 //-- html document
| \---java //-- client SDK in java programming language
+---docs //html document
| swagger-example.html
+---server //auto generated server code
| +---jaxrs-resteasy //-- jaxrs server code that uses resteasy
| \---spring //-- server code that uses spring mvc
\---swagger-server // example
```
2. Run swagger-server:
```bash
java -jar swagger-server/target/swagger-server-${version}.jar
```
3. Explore:
swagger.json: `http://127.0.0.1:8080/v2/api-docs`
swagger-ui: `http://127.0.0.1:8080/swagger-ui.html`
swagger-ui looks like this:

---
### ***Introduction to OpenAPI & Swagger Open Source Tools***
## OpenAPI & Swagger
### OpenAPI
**OpenAPI Specification** (formerly Swagger Specification) is an API description format for REST APIs. An OpenAPI file allows you to describe your entire API, including:
* Available endpoints (```/users```) and operations on each endpoint (```GET /users```, ```POST /users```)
* Operation parameters, input and output for each operation
* Authentication methods
* Contact information, license, terms of use and other information.
API specifications can be written in YAML or JSON. The format is easy to learn and readable to both humans and machines. The complete OpenAPI Specification can be found on GitHub:
[OpenAPI 2.0 Specification](https://github.com/OAI/OpenAPI-Specification/blob/master/versions/2.0.md),
[OpenAPI 3.0 Specification](https://github.com/OAI/OpenAPI-Specification/blob/master/versions/3.0.0.md)
### Swagger
Swagger is a set of open-source tools built around the OpenAPI Specification that can help you design, build, document and consume REST APIs. The major Swagger tools include:
* [Swagger Editor](http://editor.swagger.io/?_ga=2.27098621.139862542.1529283950-1958724428.1521772135) – browser-based editor where you can write OpenAPI specs.
* [Swagger Codegen](https://github.com/swagger-api/swagger-codegen) – generates server stubs and client libraries from an OpenAPI spec.
* [Swagger UI](https://swagger.io/swagger-ui/) – renders OpenAPI specs as interactive API documentation.
### Why Use OpenAPI?
The ability of APIs to describe their own structure is the root of all awesomeness in OpenAPI. Once written, an OpenAPI specification and Swagger tools can drive your API development further in various ways:
* Design-first users: use [Swagger Codegen](https://github.com/swagger-api/swagger-codegen) to **generate a server stub** for your API. The only thing left is to implement the server logic – and your API is ready to go live!
* Use [Swagger Codegen](https://github.com/swagger-api/swagger-codegen) to **generate client libraries** for your API in over 40 languages.
* Use [Swagger UI](https://swagger.io/swagger-ui/) to generate **interactive API documentation** that lets your users try out the API calls directly in the browser.
* Use the spec to connect API-related tools to your API. For example, import the spec to [SoapUI](https://soapui.org/) to create automated tests for your API.
* And more! Check out the [open-source tools](https://swagger.io/open-source-integrations/) that integrate with Swagger.
-----
## Introduction to OpenAPI Specification
### **Basic Structure**
Swagger can be written in JSON or YAML. In this guide, we only use YAML examples, but JSON works equally well. A sample Swagger specification written in YAML looks like:
```yaml
swagger: "2.0"
info:
title: Sample API
description: API description in Markdown.
version: 1.0.0
host: api.example.com
basePath: /v1
schemes:
- https
paths:
/users:
get:
summary: Returns a list of users.
description: Optional extended description in Markdown.
produces:
- application/json
responses:
200:
description: OK
```
### **Metadata**
Every Swagger specification starts with the Swagger version; this guide uses version 2.0 (the latest OpenAPI version is 3.0). The Swagger version defines the overall structure of an API specification -- what you can document and how you document it.
```yaml
swagger: "2.0"
```
Then, you need to specify the ```API info``` -- ```title```, ```description``` (optional), ```version``` (API version, not file revision or Swagger version).
```yaml
info:
title: Sample API
description: API description in Markdown.
version: 1.0.0
```
```version``` can be an arbitrary string. You can use major.minor.patch (as in [semantic versioning](http://semver.org/)), or another format like 1.0-beta or 2016.11.15.
```description``` can be [multiline](http://stackoverflow.com/a/21699210) and supports [GitHub Flavored Markdown](https://guides.github.com/features/mastering-markdown/) for rich text representation.
```info``` also supports other fields for contact information, license and other details. Reference: [Info Object](https://github.com/OAI/OpenAPI-Specification/blob/master/versions/2.0.md#infoObject).
### **Base URL**
The base URL for all API calls is defined using ```schemes```, ```host``` and ```basePath```:
```yaml
host: api.example.com
basePath: /v1
schemes:
- https
```
All API paths are relative to the base URL. For example, `/users` actually means `https://api.example.com/v1/users`.
*More info*: [API Host and Base URL](https://swagger.io/docs/specification/2-0/api-host-and-base-path/).
### **Consumes, Produces**
The ```consumes``` and ```produces``` sections define the MIME types supported by the API. The root-level definition can be overridden in individual operations.
```yaml
consumes:
- application/json
- application/xml
produces:
- application/json
- application/xml
```
*More info*: [MIME Types](https://swagger.io/docs/specification/2-0/mime-types/).
### **Paths**
The ```paths``` section defines individual endpoints (paths) in your API, and the HTTP methods (operations) supported by these endpoints. For example, ```GET /users``` can be described as:
```yaml
paths:
/users:
get:
summary: Returns a list of users.
description: Optional extended description in Markdown.
produces:
- application/json
responses:
200:
description: OK
```
*More info*: [Paths and Operations](https://swagger.io/docs/specification/2-0/paths-and-operations/).
### **Parameters**
Operations can have parameters that can be passed via URL path (```/users/{userId}```), query string (```/users?role=admin```), headers (```X-CustomHeader: Value```) and request body.
You can define the parameter types, format, whether they are required or optional, and other details:
```yaml
paths:
/users/{userId}:
get:
summary: Returns a user by ID.
parameters:
- in: path
name: userId
required: true
type: integer
minimum: 1
description: Parameter description in Markdown.
responses:
200:
description: OK
```
*More info*: [Describing Parameters](https://swagger.io/docs/specification/2-0/describing-parameters/).
### **Responses**
For each operation, you can define possible status codes, such as 200 OK or 404 Not Found, and ```schema``` of the response body.
Schemas can be defined inline or referenced from an external definition via ```$ref```. You can also provide example responses for different content types.
```yaml
paths:
/users/{userId}:
get:
summary: Returns a user by ID.
parameters:
- in: path
name: userId
required: true
type: integer
minimum: 1
description: The ID of the user to return.
responses:
200:
description: A User object.
schema:
type: object
properties:
id:
type: integer
example: 4
name:
type: string
example: Arthur Dent
400:
description: The specified user ID is invalid (e.g. not a number).
404:
description: A user with the specified ID was not found.
default:
description: Unexpected error
```
*More info*: [Describing Responses](https://swagger.io/docs/specification/2-0/describing-responses/).
### **Input and Output Models**
The global ```definitions``` section lets you define common data structures used in your API. They can be referenced via ```$ref```
whenever a ```schema``` is required -- both for request body and response body. For example, this JSON object:
```json
{
"id": 4,
"name": "Arthur Dent"
}
```
can be represented as:
```yaml
definitions:
User:
properties:
id:
type: integer
name:
type: string
# Both properties are required
required:
- id
- name
```
and then referenced in the request body schema and response body schema as follows:
```yaml
paths:
/users/{userId}:
get:
summary: Returns a user by ID.
parameters:
- in: path
name: userId
required: true
type: integer
responses:
200:
description: OK
schema:
$ref: '#/definitions/User'
/users:
post:
summary: Creates a new user.
parameters:
- in: body
name: user
schema:
$ref: '#/definitions/User'
responses:
200:
description: OK
```
### **Authentication**
The ```securityDefinitions``` and ```security``` keywords are used to describe the authentication methods used in your API.
```yaml
securityDefinitions:
BasicAuth:
type: basic
security:
- BasicAuth: []
```
Supported authentication methods are:
* [Basic authentication](https://swagger.io/docs/specification/2-0/authentication/basic-authentication/)
* [API key](https://swagger.io/docs/specification/2-0/authentication/api-keys/) (as a header or query parameter)
* OAuth 2 common flows (implicit, password, application and access code)
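For instance, an API key sent as a request header could be declared as follows — a sketch only; the scheme name `ApiKeyAuth` and header name `X-API-Key` are illustrative, not mandated by the spec:

```yaml
securityDefinitions:
  ApiKeyAuth:         # arbitrary name for the security scheme
    type: apiKey
    in: header        # the key is passed in a request header
    name: X-API-Key   # header name the server expects
security:
  - ApiKeyAuth: []    # apply the scheme to all operations by default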
*More info*: [Authentication](https://swagger.io/docs/specification/2-0/authentication/).
## Introduction to Swagger Open Source Tools
### **Swagger Editor**
Design, describe, and document your API on the first open source editor fully dedicated to OpenAPI-based APIs.
The Swagger Editor is great for quickly getting started with the OpenAPI Specification (formerly known as the Swagger Specification), with support for Swagger 2.0 and OpenAPI 3.0.
* Runs Anywhere: The Editor works in any development environment, be it locally or in the web
* Smart Feedback: Validate your syntax for OAS-compliance as you write it with concise feedback and error handling
* Instant Visualization: Render your API specification visually and interact with your API while still defining it
* Intelligent Auto-completion: Write syntax faster with a smart and intelligent auto-completion
* Fully Customizable: Easy to configure and customize anything, from line-spacing to themes
* All About Your Build: Generate server stubs and client libraries for your API in every popular language
### **Swagger Codegen**
Swagger Codegen can simplify your build process by generating server stubs and client SDKs for any API, defined with the OpenAPI (formerly known as Swagger) specification,
so your team can focus better on your API’s implementation and adoption.
* Generate Servers: Remove tedious plumbing and configuration by generating boilerplate server code in over 20 different languages
* Improve API Consumption: Generate client SDKs in over 40 different languages for end developers to easily integrate with your API
* Continuously Improved: Swagger Codegen is always updated with the latest and greatest changes in the programming world
### **Swagger UI**
Swagger UI allows anyone — be it your development team or your end consumers — to visualize and interact with the API’s resources without having any of the implementation logic in place.
It’s automatically generated from your OpenAPI (formerly known as Swagger) Specification, with the visual documentation making it easy for back end implementation and client side consumption.
* Dependency Free: The UI works in any development environment, be it locally or in the web
* Human Friendly: Allow end developers to effortlessly interact and try out every single operation your API exposes for easy consumption
* Easy to Navigate: Quickly find and work with resources and endpoints with neatly categorized documentation
* All Browser Support: Cater to every possible scenario with Swagger UI working in all major browsers
* Fully Customizable: Style and tweak your Swagger UI the way you want with full source code access
* Complete OAS Support: Visualize APIs defined in Swagger 2.0 or OAS 3.0
## **asciidoctor**
* asciidoc
* asciidoctor
[Asciidoctor](https://asciidoctor.org/) is a fast text processor and publishing toolchain for converting [AsciiDoc](https://asciidoctor.org/docs/what-is-asciidoc) content to HTML5, DocBook, PDF, and other formats.
Asciidoctor is written in Ruby, packaged and distributed as a gem to [RubyGems.org](https://rubygems.org/gems/asciidoctor), and packaged for popular Linux distributions, including Fedora, Debian, Ubuntu, and Alpine.
Asciidoctor can be run on the JVM using AsciidoctorJ and in all JavaScript environments using Asciidoctor.js. Asciidoctor is [open source software](https://github.com/asciidoctor/asciidoctor/blob/master/LICENSE)
and hosted on [GitHub](https://github.com/asciidoctor/asciidoctor).
| 1 |
Udinic/PerformanceDemo | Simple demonstrations of performance issues. Using these examples will allow practicing performance analyzing tools, such as Systrace, Traceview and more. | null | # PerformanceDemo
Simple demonstrations of performance issues. Using these examples will allow practicing performance analyzing tools, such as Systrace, Traceview and more.
## Perf Demo
This is the main app, showing a few options to simulate different performance issues.
## Keep Busy app
Simple app to keep the CPU busy. Useful for testing how other processes affect your app, and more.
Currently, the app creates 4 threads and takes roughly 8 seconds to complete. You can play with the values in the service to tweak that.
Running the process is possible through the adb command:
adb shell am broadcast -a com.udinic.keepbusyapp.ACTION_KEEP_BUSY
Note: you must open the app's activity at least once before sending the broadcast; Android's security measures prevent apps from responding to broadcasts before they have been opened for the first time.
| 0 |
luontola/cqrs-hotel | Example application about CQRS and Event Sourcing #NoFrameworks | null |
# CQRS Hotel
Example application demonstrating the use of [CQRS](http://martinfowler.com/bliki/CQRS.html) and [Event Sourcing](http://martinfowler.com/eaaDev/EventSourcing.html) within the domain of hotel reservations. #NoFrameworks
This project is a sandbox for exploring how CQRS+ES affects the design of a system. The hypothesis is that it will lead to a better design than a typical database-centric approach; a design that is easily testable and does not deteriorate as features are added. To answer that question, the problem being solved needs to be complex enough.
This project strives to differ from your typical toy examples in that *the problem domain is complex enough to warrant all the techniques being used.* The solution has been simplified, but the implemented features are production quality.
**Source Code:** <https://github.com/luontola/cqrs-hotel>
`master` branch | API container | Web container
:----------------:|:-------------:|:-------------:
[](https://travis-ci.org/luontola/cqrs-hotel) | [](https://hub.docker.com/r/luontola/cqrs-hotel-api/) | [](https://hub.docker.com/r/luontola/cqrs-hotel-web/)
## Project Status
- technical features
- [x] event store
- [x] aggregate roots (write model)
- [x] projections (read model)
- [ ] process managers
- [ ] GDPR compliance
- business features
- [x] making a reservation
- [ ] room allocation
- [ ] payment
- [ ] check-in, check-out
- [ ] changing the departure date
- [ ] changing the room
## Getting Started / Codebase Tour
Here are some pointers for where to look first in the code.
The [**web application's**](https://github.com/luontola/cqrs-hotel/tree/master/src/main/js) entry point is [index.js](https://github.com/luontola/cqrs-hotel/blob/master/src/main/js/index.js) and the entry points for each page are in [routes.js](https://github.com/luontola/cqrs-hotel/blob/master/src/main/js/routes.js). The UI is a single-page application which uses React and Redux but otherwise tries to avoid frameworks.
The [**backend application's**](https://github.com/luontola/cqrs-hotel/tree/master/src/main/java/fi/luontola/cqrshotel) main method is in [Application.java](https://github.com/luontola/cqrs-hotel/blob/master/src/main/java/fi/luontola/cqrshotel/Application.java) and the entry points for each operation are in [ApiController.java](https://github.com/luontola/cqrs-hotel/blob/master/src/main/java/fi/luontola/cqrshotel/ApiController.java). External dependencies are wired with Spring in `Application`, but the application core is wired in `ApiController` constructor. See there the command handlers and query handlers which are the entry point to the business logic.
The **framework** code is in the [fi.luontola.cqrshotel.framework package](https://github.com/luontola/cqrs-hotel/tree/master/src/main/java/fi/luontola/cqrshotel/framework). It contains in-memory and PostgreSQL implementations of the event store (the latter's PL/SQL scripts are in [src/main/resources/db/migration](https://github.com/luontola/cqrs-hotel/tree/master/src/main/resources/db/migration)), and base classes for aggregate roots and projections. CQRS with event sourcing requires very little infrastructure code, so you can easily write it yourself without external frameworks, which helps to reduce complexity.
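As a taste of how little such infrastructure is needed, here is a minimal event-sourced aggregate written from scratch — names and types are illustrative only, not this repo's actual `AggregateRoot` API. State is rebuilt by replaying past events, and newly applied events are collected separately for saving to the event store:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal event-sourced aggregate sketch: commands apply events,
// events mutate state, and history can be replayed to rebuild state.
class CounterAggregate {
    private final List<String> changes = new ArrayList<>(); // uncommitted events
    private int value = 0;

    // Command: records a new event and applies it to the state
    void increment() {
        apply("Incremented");
    }

    private void apply(String event) {
        changes.add(event); // collect for saving to the event store
        when(event);        // update in-memory state
    }

    // Single place where events mutate state
    private void when(String event) {
        if (event.equals("Incremented")) {
            value++;
        }
    }

    // Rehydrate an aggregate from stored history without re-recording events
    static CounterAggregate replay(List<String> history) {
        CounterAggregate agg = new CounterAggregate();
        for (String e : history) {
            agg.when(e);
        }
        return agg;
    }

    List<String> uncommittedChanges() { return changes; }
    int value() { return value; }
}
```

This separation (apply records, replay does not) is what makes the given/when/then tests mentioned below cheap: given a list of past events, when a command runs, assert on the newly produced events.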
To learn how the **write models** work, read how a reservation is made, starting from [SearchForAccommodationCommandHandler](https://github.com/luontola/cqrs-hotel/blob/master/src/main/java/fi/luontola/cqrshotel/reservation/commands/SearchForAccommodationCommandHandler.java) and [MakeReservationHandler](https://github.com/luontola/cqrs-hotel/blob/master/src/main/java/fi/luontola/cqrshotel/reservation/commands/MakeReservationHandler.java). The handlers contain no business logic. Instead, they delegate to [Reservation](https://github.com/luontola/cqrs-hotel/blob/master/src/main/java/fi/luontola/cqrshotel/reservation/Reservation.java) which does all the work. Read the [AggregateRoot](https://github.com/luontola/cqrs-hotel/blob/master/src/main/java/fi/luontola/cqrshotel/framework/AggregateRoot.java) base class, including its documentation, to understand how it should be used.
Of particular interest is how easy it is to **unit test** event sourced business logic. See [SearchForAccommodationTest](https://github.com/luontola/cqrs-hotel/blob/master/src/test/java/fi/luontola/cqrshotel/reservation/SearchForAccommodationTest.java) and [MakeReservationTest](https://github.com/luontola/cqrs-hotel/blob/master/src/test/java/fi/luontola/cqrshotel/reservation/MakeReservationTest.java). The given/when/then methods are in the simple [AggregateRootTester](https://github.com/luontola/cqrs-hotel/blob/master/src/test/java/fi/luontola/cqrshotel/framework/AggregateRootTester.java) base class.
To learn how the **read models** work, read [ReservationsView](https://github.com/luontola/cqrs-hotel/blob/master/src/main/java/fi/luontola/cqrshotel/reservation/queries/ReservationsView.java) and the base class [Projection](https://github.com/luontola/cqrs-hotel/blob/master/src/main/java/fi/luontola/cqrshotel/framework/Projection.java). Unit testing is again simple: [ReservationsViewTest](https://github.com/luontola/cqrs-hotel/blob/master/src/test/java/fi/luontola/cqrshotel/reservation/queries/ReservationsViewTest.java). Unlike aggregate roots, projections can listen to all events in the system; for example [CapacityView](https://github.com/luontola/cqrs-hotel/blob/master/src/main/java/fi/luontola/cqrshotel/capacity/CapacityView.java) is based on events from both Rooms and Reservations.
## Running
The easiest way to run this project is to use [Docker](https://www.docker.com/community-edition).
Start the application
docker-compose pull
docker-compose up -d
The application will run at http://localhost:8080/
View application logs (in `--follow` mode)
docker-compose logs -f api
Stop the application
docker-compose stop
Stop the application and remove all data
docker-compose down
## Developing
To develop this project, you must have installed recent versions of [Java (JDK)](http://www.oracle.com/technetwork/java/javase/downloads/), [Maven](https://maven.apache.org/), [Node.js](https://nodejs.org/), [Yarn](https://yarnpkg.com/) and [Docker](https://www.docker.com/community-edition). You can do a clean build with the `./build.sh` script. You can run this project's components individually with the following commands.
Start the database
docker-compose up -d db
Start the API backend (if not using an IDE)
mvn spring-boot:run
Start the web frontend (with live reloading)
yarn install
yarn start
The application will run at http://localhost:8080/
You may also start just the frontend or backend using Docker if you're developing only one layer of the application.
docker-compose up -d api
docker-compose up -d web
## More Resources
This example was mostly inspired by the following resources.
* [Greg Young's CQRS Class](https://goodenoughsoftware.net/online-videos/)
* [An older free video](https://www.youtube.com/watch?v=whCk1Q87_ZI) and [its documentation](https://cqrs.wordpress.com/documents/)
* [Simple CQRS example](https://github.com/gregoryyoung/m-r)
* [Building an Event Storage](https://cqrs.wordpress.com/documents/building-event-storage/)
For more resources visit [Awesome Domain-Driven Design](https://github.com/heynickc/awesome-ddd). Ask questions at the [DDD/CQRS discussion group](https://groups.google.com/forum/#!forum/dddcqrs).
| 1 |
jakubnabrdalik/hentai | Example of Hexagonal architecture with high cohesion modularization, CQRS and fast BDD tests in Java | null | # Example of Hexagonal architecture with high cohesion modularization, CQRS and fast BDD tests in Java
This repo is an example of Hexagonal architecture with sensible modularization on a package level, that provides high cohesion, low coupling, and allows for Behaviour Driven Development with:
- most tests runnable in milliseconds (unit tests without IO)
- no testing of a module's INTERNALS (avoiding the test-per-class mistake)
- tests that focus on the behaviour of each module (refactoring does not require changing tests)
- just enough integration/acceptance tests, with a focus on performance (minimum waiting for tests to pass)
- tests that describe requirements (living documentation)
- modules that have high cohesion (everything hidden except for APIs) and low coupling (modules connected via their APIs)
- an overall approach that is easy to explain, understand and follow
This example follows the type of code I write at work on a daily basis. So while this is an artificial example, all the rules and architecture approach are the effect of what works for my teams in real life projects.
I use this project to teach Behaviour Driven Development, Domain Driven Design, Command Query Responsibility Segregation and to show Spring live-coding.
Pull requests are welcome.
---
# The problem
Each project starts with a problem, from which we get a set of requirements. Here I'm using a task I once received as homework from a company that wanted to assess new candidates.
## Project – Video rental store
For a video rental store we want to create a system for managing the rental administration.
We want three primary functions.
- Have an inventory of films
- Calculate the price for rentals
- Keep track of the customers “bonus” points
## Price
The price of a rental is based on the type of film rented and how many days the film is rented for.
Customers state, when renting, how many days they want to rent for, and pay up front. If
the film is returned late, then rent for the extra days is charged on return.
## Film types
The store has three types of films.
- New releases – Price is <premium price> times the number of days rented.
- Regular films – Price is <basic price> for the first 3 days and then <basic price> times the number of days over 3.
- Old films – Price is <basic price> for the first 5 days and then <basic price> times the number of days over 5.
<premium price> is 40 SEK
<basic price> is 30 SEK
The program should expose a rest-ish HTTP API.
The API should (at least) expose operations for
- Renting one or several films and calculating the price.
- Returning films and calculating possible surcharges.
## Examples of price calculations
Matrix 11 (New release) 1 days 40 SEK
Spider Man (Regular rental) 5 days 90 SEK
Spider Man 2 (Regular rental) 2 days 30 SEK
Out of Africa (Old film) 7 days 90 SEK
Total price: 250 SEK
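The pricing rules above can be checked with a short sketch. The class, method, and type-tag names are illustrative, not from this repo:

```java
// Sketch of the film-type pricing rules (illustrative names, not the repo's API).
public class RentalPrice {
    static final int PREMIUM = 40; // SEK, <premium price>
    static final int BASIC = 30;   // SEK, <basic price>

    static int price(String type, int days) {
        switch (type) {
            case "NEW_RELEASE": // premium price times days rented
                return PREMIUM * days;
            case "REGULAR":     // flat basic price up to 3 days, then per extra day
                return BASIC + (days > 3 ? BASIC * (days - 3) : 0);
            case "OLD":         // flat basic price up to 5 days, then per extra day
                return BASIC + (days > 5 ? BASIC * (days - 5) : 0);
            default:
                throw new IllegalArgumentException("unknown film type: " + type);
        }
    }

    public static void main(String[] args) {
        // Reproduces the worked example: 40 + 90 + 30 + 90 = 250 SEK
        int total = price("NEW_RELEASE", 1) // Matrix 11
                  + price("REGULAR", 5)     // Spider Man
                  + price("REGULAR", 2)     // Spider Man 2
                  + price("OLD", 7);        // Out of Africa
        System.out.println("Total price: " + total + " SEK");
    }
}
```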
When returning films late
Matrix 11 (New release) 2 extra days 80 SEK
Spider Man (Regular rental) 1 days 30 SEK
Total late charge: 110 SEK
## Bonus points
Customers get bonus points when renting films. A new release gives 2 points and other films
give one point per rental (regardless of the time rented).
---
# Acceptance specifications
After gathering a problem description in a natural language, the next step is to create Specifications for our project. That is, to split our requirements into a set of scenarios that describe the behaviour of a system.
Years ago this used to be done using Use Cases. Later on, the industry simplified this to user stories, and now we follow the best practices of BDD. For this very simple project, we can create one main happy path specification. If this specification is implemented, our project brings money.
## Happy path scenario:
As a hipster-deviant, to satisfy my weird desires, I want to:
given inventory has an old film "American Clingon Bondage" and a new release of "50 shades of Trumpet"
when I go to /films
then I see both films
when I go to /points
then I see I have no points
when I post to /calculate with both films for 3 days
then I can see it will cost me 120 SEK for Trumpet and 90 SEK for Clingon
when I post to /rent with both films for 3 days
then I have rented both movies
when I go to /rent
then I see both movies are rented
when I go to /points
then I see I have 3 points
when I post to /return with Trumpet
then Trumpet is returned
when I go to /rent
then I see only Clingon is rented
---
# Modules
Now, let's do just enough design up front. Let's split the application into modules.
This is the list of our modules with their responsibilities
films
- list
- show
- add
rentals
- rent
- calculatePrice
- return
- list
points
- list
- addForRent
user
- getLoggedUser
We verify that our module design is solid by checking the number of communications between modules. High cohesion / low coupling means that modules do not talk too often with each other, and that our API stays small.
---
# Implementation
We are ready to actually implement something using BDD. We shall start with implementing the acceptance spec (the only integration test so far), and then the film module. Watch git history for more details about each step.
| 1 |
codetojoy/easter_eggs_for_java_9 | Basic examples for Java 9 modules. Usage of 'egg' here is SSCCE: http://sscce.org | null | ### Eggs for Java 9 Modules
* some basic examples for Java 9 modules
* usage of *'egg'* here is [SSCCE](http://sscce.org/) **not** a [hidden feature](https://en.wikipedia.org/wiki/Easter_egg_(media)) !
* see notes below to use JDK9 in Docker
* see README.md in each folder for steps to execute
### validation log
* confirmed 29-JUN-2017 with b175
* confirmed 01-JUN-2017 with b171 via Travis-CI
* confirmed 12-MAY-2017 with b169 via Docker automenta/javai [image](https://hub.docker.com/r/automenta/javai/)
* set JAVA_HOME for jlink
* confirmed 05-MAY-2017 with b168 via Docker automenta/javai [image](https://hub.docker.com/r/automenta/javai/)
* set JAVA_HOME for jlink
* confirmed 05-MAY-2017 with b167 via sdkman
* confirmed 03-APR-2017 with b161
* build tickle here: 01-JUN-2017
### Setup for Docker (optional)
* These instructions work for Mac OS X. Tweak as appropriate
* Open 'Docker Quick Start Terminal'
* set `MY_SRC_HOME` to be appropriate directory on your computer where this repo is located
* steps:
<pre>
docker pull automenta/javai:latest
cd $MY_SRC_HOME
docker run --rm -t -i -v $(pwd):/data automenta/javai bash
export JAVA_HOME=/j/jdk9/bin
java --version
cd /data
</pre>
| 0 |
LambdaTest/java-testng-selenium | Run TestNG and Selenium scripts on LambdaTest automation cloud. A sample repo to help you run TestNG framework based test scripts in parallel with LambdaTest | automation automation-testing cloud example examples java lambdatest selenium selenium-grid selenium-webdriver selenium4 test-automation testing testng | # Run Selenium Tests With TestNG On LambdaTest

<p align="center">
<a href="https://www.lambdatest.com/blog/?utm_source=github&utm_medium=repo&utm_campaign=Java-TestNG-Selenium" target="_bank">Blog</a>
⋅
<a href="https://www.lambdatest.com/support/docs/?utm_source=github&utm_medium=repo&utm_campaign=Java-TestNG-Selenium" target="_bank">Docs</a>
⋅
<a href="https://www.lambdatest.com/learning-hub/?utm_source=github&utm_medium=repo&utm_campaign=Java-TestNG-Selenium" target="_bank">Learning Hub</a>
⋅
<a href="https://www.lambdatest.com/newsletter/?utm_source=github&utm_medium=repo&utm_campaign=Java-TestNG-Selenium" target="_bank">Newsletter</a>
⋅
<a href="https://www.lambdatest.com/certification/?utm_source=github&utm_medium=repo&utm_campaign=Java-TestNG-Selenium" target="_bank">Certifications</a>
⋅
<a href="https://www.youtube.com/c/LambdaTest" target="_bank">YouTube</a>
</p>
 
 
 
*Learn how to use TestNG framework to configure and run your Java automation testing scripts on the LambdaTest platform*
[<img height="58" width="200" src="https://user-images.githubusercontent.com/70570645/171866795-52c11b49-0728-4229-b073-4b704209ddde.png">](https://accounts.lambdatest.com/register?utm_source=github&utm_medium=repo&utm_campaign=Java-TestNG-Selenium)
## Table Of Contents
* [Pre-requisites](#pre-requisites)
* [Run Your First Test](#run-your-first-test)
* [Parallel Testing With TestNG](#executing-parallel-tests-using-testng)
* [Local Testing With TestNG](#testing-locally-hosted-or-privately-hosted-projects)
## Pre-requisites
Before you can start performing Java automation testing with Selenium, you would need to:
- Install the latest **Java development environment** i.e. **JDK 1.6** or higher. We recommend using the latest version.
- Download the latest **Selenium Client** and its **WebDriver bindings** from the [official website](https://www.selenium.dev/downloads/). Latest versions of Selenium Client and WebDriver are ideal for running your automation script on LambdaTest Selenium cloud grid.
- Install **Maven** which supports **JUnit** framework out of the box. **Maven** can be downloaded and installed following the steps from [the official website](https://maven.apache.org/). Maven can also be installed easily on **Linux/MacOS** using [Homebrew](https://brew.sh/) package manager.
### Cloning Repo And Installing Dependencies
**Step 1:** Clone the LambdaTest’s Java-TestNG-Selenium repository and navigate to the code directory as shown below:
```bash
git clone https://github.com/LambdaTest/Java-TestNG-Selenium
cd Java-TestNG-Selenium
```
You can also run the command below to check for outdated dependencies.
```bash
mvn versions:display-dependency-updates
```
### Setting Up Your Authentication
Make sure you have your LambdaTest credentials with you to run test automation scripts. You can get these credentials from the [LambdaTest Automation Dashboard](https://automation.lambdatest.com/build?utm_source=github&utm_medium=repo&utm_campaign=Java-TestNG-Selenium) or by your [LambdaTest Profile](https://accounts.lambdatest.com/login?utm_source=github&utm_medium=repo&utm_campaign=Java-TestNG-Selenium).
**Step 2:** Set LambdaTest **Username** and **Access Key** in environment variables.
* For **Linux/macOS**:
```bash
export LT_USERNAME="YOUR_USERNAME"
export LT_ACCESS_KEY="YOUR ACCESS KEY"
```
* For **Windows**:
```bash
set LT_USERNAME="YOUR_USERNAME"
set LT_ACCESS_KEY="YOUR ACCESS KEY"
```
## Run Your First Test
>**Test Scenario**: The sample [TestNGTodo1.java](https://github.com/LambdaTest/Java-TestNG-Selenium/blob/master/src/test/java/com/lambdatest/TestNGTodo1.java) tests a sample to-do list app by marking a couple of items as done, adding a new item to the list and finally displaying the count of pending items as output.
### Configuring Your Test Capabilities
**Step 3:** In the test script, you need to update your test capabilities. In this code, we are passing browser, browser version, and operating system information, along with LambdaTest Selenium grid capabilities, via the capabilities object, which is defined as:
```java
DesiredCapabilities capabilities = new DesiredCapabilities();
capabilities.setCapability("browserName", "chrome");
capabilities.setCapability("version", "70.0");
capabilities.setCapability("platform", "win10"); // If this cap isn't specified, it will just get the any available one
capabilities.setCapability("build", "LambdaTestSampleApp");
capabilities.setCapability("name", "LambdaTestJavaSample");
```
You can generate capabilities for your test requirements with the help of our inbuilt [Desired Capability Generator](https://www.lambdatest.com/capabilities-generator/?utm_source=github&utm_medium=repo&utm_campaign=Java-TestNG-Selenium).
### Executing The Test
**Step 4:** The tests can be executed in the terminal using the following command.
```bash
mvn test -D suite=single.xml
```
Your test results would be displayed on the test console (or command-line interface if you are using terminal/cmd) and on LambdaTest automation dashboard.
## Run Parallel Tests Using TestNG
Here is an example `xml` file which would help you to run a single test on various browsers at the same time. You would also need to write a test case that makes use of **TestNG** framework parameters (`org.testng.annotations.Parameters`).
```xml title="testng.xml"
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE suite SYSTEM "http://testng.org/testng-1.0.dtd">
<suite thread-count="3" name="LambaTestSuite" parallel="tests">
<test name="WIN8TEST">
<parameter name="browser" value="firefox"/>
<parameter name="version" value="62.0"/>
<parameter name="platform" value="WIN8"/>
<classes>
<class name="LambdaTest.TestNGToDo"/>
</classes>
</test> <!-- Test -->
<test name="WIN10TEST">
<parameter name="browser" value="chrome"/>
<parameter name="version" value="79.0"/>
<parameter name="platform" value="WIN10"/>
<classes>
<class name="LambdaTest.TestNGToDo"/>
</classes>
</test> <!-- Test -->
<test name="MACTEST">
<parameter name="browser" value="safari"/>
<parameter name="version" value="11.0"/>
<parameter name="platform" value="macos 10.13"/>
<classes>
<class name="LambdaTest.TestNGToDo"/>
</classes>
</test> <!-- Test -->
</suite>
```
### Executing Parallel Tests Using TestNG
To run parallel tests using **TestNG**, we would have to execute the below commands in the terminal:
- For the above example code
```bash
mvn test
```
- For the cloned Java-TestNG-Selenium repo used to run our first sample test
```bash
mvn test -D suite=parallel.xml
```
## Testing Locally Hosted Or Privately Hosted Projects
You can test your locally hosted or privately hosted projects with LambdaTest Selenium grid using LambdaTest Tunnel. All you would have to do is set up an SSH tunnel using LambdaTest Tunnel and pass the toggle `tunnel = true` via desired capabilities. LambdaTest Tunnel establishes a secure SSH protocol based tunnel that allows you to test your locally hosted or privately hosted pages, even before they are live.
Refer our [LambdaTest Tunnel documentation](https://www.lambdatest.com/support/docs/testing-locally-hosted-pages/?utm_source=github&utm_medium=repo&utm_campaign=Java-TestNG-Selenium) for more information.
Here’s how you can establish LambdaTest Tunnel.
Download the binary file of:
* [LambdaTest Tunnel for Windows](https://downloads.lambdatest.com/tunnel/v3/windows/64bit/LT_Windows.zip)
* [LambdaTest Tunnel for macOS](https://downloads.lambdatest.com/tunnel/v3/mac/64bit/LT_Mac.zip)
* [LambdaTest Tunnel for Linux](https://downloads.lambdatest.com/tunnel/v3/linux/64bit/LT_Linux.zip)
Open command prompt and navigate to the binary folder.
Run the following command:
```bash
LT -user {user’s login email} -key {user’s access key}
```
So if your user name is lambdatest@example.com and key is 123456, the command would be:
```bash
LT -user lambdatest@example.com -key 123456
```
Once you are able to connect **LambdaTest Tunnel** successfully, you would just have to pass the tunnel capability in your code as shown below:
**Tunnel Capability**
```java
DesiredCapabilities capabilities = new DesiredCapabilities();
capabilities.setCapability("tunnel", true);
```
## Tutorials 📙
Check out our latest tutorials on TestNG automation testing 👇
* [JUnit 5 vs TestNG: Choosing the Right Framework for Automation Testing](https://www.lambdatest.com/blog/junit-5-vs-testng/?utm_source=github&utm_medium=repo&utm_campaign=Java-TestNG-Selenium)
* [How To Install TestNG?](https://www.lambdatest.com/blog/how-to-install-testng-in-eclipse-step-by-step-guide/?utm_source=github&utm_medium=repo&utm_campaign=Java-TestNG-Selenium)
* [Create TestNG Project in Eclipse & Run Selenium Test Script](https://www.lambdatest.com/blog/create-testng-project-in-eclipse-run-selenium-test-script/?utm_source=github&utm_medium=repo&utm_campaign=Java-TestNG-Selenium)
* [A Complete Guide for Your First TestNG Automation Script](https://www.lambdatest.com/blog/a-complete-guide-for-your-first-testng-automation-script/?utm_source=github&utm_medium=repo&utm_campaign=Java-TestNG-Selenium)
* [How to Automate using TestNG in Selenium?](https://www.lambdatest.com/blog/testng-in-selenium/?utm_source=github&utm_medium=repo&utm_campaign=Java-TestNG-Selenium)
* [How to Perform Parallel Test Execution in TestNG with Selenium](https://www.lambdatest.com/blog/parallel-test-execution-in-testng/?utm_source=github&utm_medium=repo&utm_campaign=Java-TestNG-Selenium)
* [Creating TestNG XML File & Execute Parallel Testing](https://www.lambdatest.com/blog/create-testng-xml-file-execute-parallel-testing/?utm_source=github&utm_medium=repo&utm_campaign=Java-TestNG-Selenium)
* [Speed Up Automated Parallel Testing in Selenium with TestNG](https://www.lambdatest.com/blog/speed-up-automated-parallel-testing-in-selenium-with-testng/?utm_source=github&utm_medium=repo&utm_campaign=Java-TestNG-Selenium)
* [Automation Testing With Selenium, Cucumber & TestNG](https://www.lambdatest.com/blog/automation-testing-with-selenium-cucumber-testng/?utm_source=github&utm_medium=repo&utm_campaign=Java-TestNG-Selenium)
* [How to Run JUnit Selenium Tests using TestNG](https://www.lambdatest.com/blog/test-example-junit-and-testng-in-selenium/?utm_source=github&utm_medium=repo&utm_campaign=Java-TestNG-Selenium)
* [How to Group Test Cases in TestNG [With Examples]](https://www.lambdatest.com/blog/grouping-test-cases-in-testng/?utm_source=github&utm_medium=repo&utm_campaign=Java-TestNG-Selenium)
* [How to Set Test Case Priority in TestNG with Selenium](https://www.lambdatest.com/blog/prioritizing-tests-in-testng-with-selenium/?utm_source=github&utm_medium=repo&utm_campaign=Java-TestNG-Selenium)
* [How to Use Assertions in TestNG with Selenium](https://www.lambdatest.com/blog/asserts-in-testng/?utm_source=github&utm_medium=repo&utm_campaign=Java-TestNG-Selenium)
* [How to Use DataProviders in TestNG [With Examples]](https://www.lambdatest.com/blog/how-to-use-dataproviders-in-testng-with-examples/?utm_source=github&utm_medium=repo&utm_campaign=Java-TestNG-Selenium)
* [Parameterization in TestNG - DataProvider and TestNG XML [With Examples]](https://www.lambdatest.com/blog/parameterization-in-testng-dataprovider-and-testng-xml-examples/?utm_source=github&utm_medium=repo&utm_campaign=Java-TestNG-Selenium)
* [TestNG Listeners in Selenium WebDriver [With Examples]](https://www.lambdatest.com/blog/testng-listeners-in-selenium-webdriver-with-examples/?utm_source=github&utm_medium=repo&utm_campaign=Java-TestNG-Selenium)
* [TestNG Annotations Tutorial with Examples for Selenium Automation](https://www.lambdatest.com/blog/complete-guide-on-testng-annotations-for-selenium-webdriver/?utm_source=github&utm_medium=repo&utm_campaign=Java-TestNG-Selenium)
* [How to Use TestNG Reporter Log in Selenium](https://www.lambdatest.com/blog/how-to-use-testng-reporter-log-in-selenium/?utm_source=github&utm_medium=repo&utm_campaign=Java-TestNG-Selenium)
* [How to Generate TestNG Reports in Jenkins](https://www.lambdatest.com/blog/how-to-generate-testng-reports-in-jenkins/?utm_source=github&utm_medium=repo&utm_campaign=Java-TestNG-Selenium)
## Documentation & Resources :books:
Visit the following links to learn more about LambdaTest's features, setup and tutorials around test automation, mobile app testing, responsive testing, and manual testing.
* [LambdaTest Documentation](https://www.lambdatest.com/support/docs/?utm_source=github&utm_medium=repo&utm_campaign=Java-TestNG-Selenium)
* [LambdaTest Blog](https://www.lambdatest.com/blog/?utm_source=github&utm_medium=repo&utm_campaign=Java-TestNG-Selenium)
* [LambdaTest Learning Hub](https://www.lambdatest.com/learning-hub/?utm_source=github&utm_medium=repo&utm_campaign=Java-TestNG-Selenium)
## LambdaTest Community :busts_in_silhouette:
The [LambdaTest Community](https://community.lambdatest.com/?utm_source=github&utm_medium=repo&utm_campaign=Java-TestNG-Selenium) allows people to interact with tech enthusiasts. Connect, ask questions, and learn from tech-savvy people. Discuss best practices in web development, testing, and DevOps with professionals from across the globe 🌎
## What's New At LambdaTest ❓
To stay updated with the latest features and product add-ons, visit [Changelog](https://changelog.lambdatest.com)
## About LambdaTest
[LambdaTest](https://www.lambdatest.com/?utm_source=github&utm_medium=repo&utm_campaign=Java-TestNG-Selenium) is a leading test execution and orchestration platform that is fast, reliable, scalable, and secure. It allows users to run both manual and automated testing of web and mobile apps across 3000+ different browsers, operating systems, and real device combinations. Using LambdaTest, businesses can ensure quicker developer feedback and hence achieve faster go to market. Over 500 enterprises and 1 Million + users across 130+ countries rely on LambdaTest for their testing needs.
### Features
* Run Selenium, Cypress, Puppeteer, Playwright, and Appium automation tests across 3000+ real desktop and mobile environments.
* Real-time cross browser testing on 3000+ environments.
* Test on Real device cloud
* Blazing fast test automation with HyperExecute
* Accelerate testing, shorten job times and get faster feedback on code changes with Test At Scale.
* Smart Visual Regression Testing on cloud
* 120+ third-party integrations with your favorite tool for CI/CD, Project Management, Codeless Automation, and more.
* Automated Screenshot testing across multiple browsers in a single click.
* Local testing of web and mobile apps.
* Online Accessibility Testing across 3000+ desktop and mobile browsers, browser versions, and operating systems.
* Geolocation testing of web and mobile apps across 53+ countries.
* LT Browser - for responsive testing across 50+ pre-installed mobile, tablets, desktop, and laptop viewports
[<img height="58" width="200" src="https://user-images.githubusercontent.com/70570645/171866795-52c11b49-0728-4229-b073-4b704209ddde.png">](https://accounts.lambdatest.com/register?utm_source=github&utm_medium=repo&utm_campaign=Java-TestNG-Selenium)
## We are here to help you :headphones:
* Got a query? We are available 24x7 to help. [Contact Us](mailto:support@lambdatest.com)
* For more info, visit - [LambdaTest](https://www.lambdatest.com?utm_source=github&utm_medium=repo&utm_campaign=Java-TestNG-Selenium)
| 1 |
JeroenMols/ArtifactoryExample | Example code to upload and use artefacts from Artifactory | null | # ArtifactoryExample
This repository demonstrates how you can easily generate Maven artifacts from an Android library and upload them to your own private repository (based on Artifactory). The precise details of how everything works can be found here:
- For AwesomeLibrary and AwesomeApplication: [this blogpost](https://jeroenmols.github.io/blog/2015/08/06/artifactory/).
- For AwesomeAdvancedLibrary and AwesomeAdvancedApplication: [this blogpost](https://jeroenmols.github.io/blog/2015/08/13/artifactory2/).
## Usage
Make sure you have your own private Artifactory repository running on your local machine. You can set up one by following the instructions in [this blogpost](https://jeroenmols.github.io/blog/2015/08/06/artifactory/).
Clone the entire repository to your local machine:
```shell
git clone git@github.com:JeroenMols/ArtifactoryExample.git
```
Open the `AwesomeLibrary` or `AwesomeAdvancedLibrary` project, compile a release version and upload the artifacts to your Artifactory repository:
```shell
gradle assembleRelease artifactoryPublish
```
Open the `AwesomeApplication` or `AwesomeAdvancedApplication` project in Android Studio and run it on a connected device. Gradle will now download the dependency you just created from the Artifactory repository and build your project.
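For reference, a consuming project's `build.gradle` typically wires this up with a repository block pointing at the Artifactory instance plus an ordinary dependency declaration. The URL and coordinates below are illustrative placeholders, not the actual values used by these example apps:

```groovy
// Illustrative only: replace the URL and coordinates with the values
// used when publishing the library to your own Artifactory instance.
repositories {
    maven { url "http://localhost:8081/artifactory/libs-release-local" }
}

dependencies {
    compile 'com.example:awesomelibrary:1.0.0'
}
```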
That's it, you're done!
## Questions
@molsjeroen
| 1 |
loopj/proguard-gradle-example | Example app showing how to use proguard with gradle | null | Example Java App with Proguard
==============================
Building
--------
```shell
./gradlew clean build proguard
```
Running
-------
```shell
java -jar build/libs/proguard-gradle-example.jar
```
Outputs
-------
- `build/libs/proguard-gradle-example.jar` - main package
- `build/libs/proguard-gradle-example.out.jar` - obfuscated main package
- `proguard.map` - proguard obfuscation mapping file
| 1 |
coi-gov-pl/spring-clean-architecture | An example web app structured using Clean Architecture, implemented using Spring Framework. | null | # spring-clean-architecture
[](https://travis-ci.org/coi-gov-pl/spring-clean-architecture)
An example web app structured using [Clean Architecture][clean-arch],
implemented using [Spring Framework][spring].

Watch youtube video of Robert C. Martin "Uncle Bob" on Clean Architecture and Design:
[](https://www.youtube.com/watch?v=Nsjsiz2A9mg)
[clean-arch]: https://8thlight.com/blog/uncle-bob/2012/08/13/the-clean-architecture.html
[spring]: https://projects.spring.io/spring-framework/
### Un-strict mode
If you feel that external configuration is a bit of a hassle, you can see an example of integrating spring-context into the domain logic. It's less flexible and surely not advised by Uncle Bob, but if you feel that's your thing, have a look at the branch `spring-in-domain-logic` (see the diff here: https://github.com/coi-gov-pl/spring-clean-architecture/compare/spring-in-domain-logic).
| 1 |
jamesjieye/netty-socketio.spring | An example of real-time chat application built with netty-socketio and Spring Boot. | netty-socketio real-time-chat socket-io spring-boot | # An example of real-time chat application
Built with
- [netty-socketio 1.7.17](https://github.com/mrniko/netty-socketio)
- Spring Boot 2.1.1.RELEASE
- socket.io-client 2.2.0
In this example, `SocketIONamespace` is used for declaring modules.
This example project is inspired by the following projects.
- https://github.com/Heit/netty-socketio-spring
- https://github.com/mrniko/netty-socketio-demo
# Usage
## Server end
- Run server by command `mvn spring-boot:run`
- Or build a single executable jar file with `mvn package` and run the jar: `java -jar rt-server.jar`
## Client end
- Put the `/client` directory under an HTTP server. Then open it from the browser after starting the server side.
# License
MIT | 1 |
AkramChauhan/WhatsApp-Stickers-using-Flutter | This App is complete example of how to create WhatsApp Sticker Application using Flutter. | null | # WhatsApp Sticker App using Flutter

# Available in PlayStore
<a href="https://play.google.com/store/apps/details?id=com.gamacrack.trending_stickers">
</a>
| 1 |
benelog/lambda-resort | examples of filtering, sorting, mapping by Java, Groovy, Scala, Kotlin, Xtend, Ceylon | groovy java lambda xtend |
# Filtering, sorting, mapping
### Backgrounds
[Guest.java](src/main/java/com/naver/helloworld/resort/domain/Guest.java)
```java
public class Guest {
private final int grade;
private final String name;
private final String company;
...
}
```
[GuestRepository.java](src/main/java/com/naver/helloworld/resort/repository/GuestRepository.java)
```java
import java.util.List;
public interface GuestRepository {
public List<Guest> findAllGuest ();
}
```
[ResortService.java](src/main/java/com/naver/helloworld/resort/service/ResortService.java)
```java
public interface ResortService {
public List<String> findGuestNamesByCompany (String company);
}
```
## Implementations by classic Java
### JDK Collections framework
[ClassicJavaResort.java](src/main/java/com/naver/helloworld/resort/service/ClassicJavaResort.java)
```java
public List<String> findGuestNamesByCompany(String company) {
List<Guest> all = repository.findAllGuest();
List<Guest> filtered = filter(all, company);
sort(filtered);
return mapNames(filtered);
}
private List<Guest> filter(List<Guest> guests, String company) {
List<Guest> filtered = new ArrayList<>();
for(Guest guest : guests ) {
if (company.equals(guest.getCompany())) {
filtered.add(guest);
}
}
return filtered;
}
private void sort(List<Guest> guests) {
Collections.sort(guests, new Comparator<Guest>() {
public int compare(Guest o1, Guest o2) {
return Integer.compare(o1.getGrade(), o2.getGrade());
}
});
}
private List<String> mapNames(List<Guest> guests) {
List<String> names = new ArrayList<>();
for(Guest guest : guests ) {
names.add(guest.getName());
}
return names;
}
```
### [Guava](https://github.com/google/guava)
[GuavaResort.java](src/main/java/com/naver/helloworld/resort/service/GuavaResort.java)
```java
public List<String> findGuestNamesByCompany(final String company) {
List<Guest> all = repository.findAll();
List<Guest> sorted = FluentIterable.from(all)
.filter(new Predicate<Guest>() {
public boolean apply(Guest g) {
return company.equals(g.getCompany());
}
})
.toSortedList(Ordering.natural().onResultOf(
new Function<Guest, Integer>() {
public Integer apply(Guest g) {
return g.getGrade();
}
}));
return FluentIterable.from(sorted)
.transform(new Function<Guest, String>() {
public String apply(Guest g) {
return g.getName();
}
})
.toList();
}
```
### [Totally Lazy](http://totallylazy.com/)
[TotallyLazyResort.java](src/main/java/com/naver/helloworld/resort/service/TotallyLazyResort.java)
```java
public List<String> findGuestNamesByCompany(final String company) {
List<Guest> all = repository.findAll();
return Sequences.sequence(all)
.filter(new Predicate<Guest>() {
public boolean matches(Guest g) {
return company.equals(g.getCompany());
}
})
.sortBy(new Callable1<Guest, Integer>(){
public Integer call(Guest g) {
return g.getGrade();
}
})
.map(new Callable1<Guest, String>(){
public String call(Guest g) {
return g.getName();
}
})
.toList();
}
```
### [GS Collections](https://github.com/goldmansachs/gs-collections)
[GsCollectoinsResort.java](src/main/java/com/naver/helloworld/resort/service/GsCollectionsResort.java)
```java
public List<String> findGuestNamesByCompany(final String company) {
List<Guest> all = repository.findAll();
return FastList.newList(all)
.select(new Predicate<Guest>() {
public boolean accept(Guest g) {
return company.equals(g.getCompany());
}
})
.sortThisBy(new Function<Guest, Integer>() {
public Integer valueOf(Guest g) {
return g.getGrade();
}
})
.collect(new Function<Guest, String> () {
public String valueOf(Guest g) {
return g.getName();
}
});
}
```
### [Bolts](https://bitbucket.org/stepancheg/bolts/wiki/Home)
[BoltsResort.java](src/main/java/com/naver/helloworld/resort/service/BoltsResort.java)
```java
public List<String> findGuestNamesByCompany(final String company) {
List<Guest> all = repository.findAllGuest();
return Cf.list(all)
.filter(new Function1B<Guest>() {
public boolean apply(Guest g) {
return company.equals(g.getCompany());
}
})
.sortBy(new Function<Guest, Integer>() {
public Integer apply(Guest g) {
return g.getGrade();
}
})
.map(new Function<Guest, String>() {
public String apply(Guest g) {
return g.getName();
}
});
}
```
### [Op4j](http://www.op4j.org/)
[Op4JResort.java](src/main/java/com/naver/helloworld/resort/service/Op4JResort.java)
```java
public List<String> findGuestNamesByCompany(final String company) {
List<Guest> all = repository.findAllGuest();
return Op.on(all)
.removeAllFalse(new IFunction<Guest, Boolean>() {
public Boolean execute(Guest g, ExecCtx ctx) throws Exception {
return company.equals(g.getCompany());
}
})
.sortBy(new IFunction<Guest, Integer>() {
public Integer execute(Guest g, ExecCtx ctx) throws Exception {
return g.getGrade();
}
})
.map(new IFunction<Guest, String>() {
public String execute(Guest g, ExecCtx ctx) throws Exception {
return g.getName();
}
}).get();
}
```
### [Lambdaj](https://code.google.com/p/lambdaj)
[LambdaJResort.java](src/main/java/com/naver/helloworld/resort/service/LambdaJResort.java)
```java
import static ch.lambdaj.Lambda.*;
import static org.hamcrest.Matchers.*;
...
public List<String> findGuestNamesByCompany(final String company) {
List<Guest> all = repository.findAll();
return LambdaCollections.with(all)
.retain(having(on(Guest.class).getCompany(), equalTo(company)))
.sort(on(Guest.class).getGrade())
.extract(on(Guest.class).getName());
}
```
### [Functional Java](http://functionaljava.org/)
[FunctionalJavaResort.java](src/main/java/com/naver/helloworld/resort/service/FunctionalJavaResort.java)
```java
public List<String> findGuestNamesByCompany(String company) {
List<Guest> all = repository.findAll();
Collection<String> mapped = Stream.iterableStream(all)
.filter(new F<Guest, Boolean>() {
public Boolean f(Guest g){
return company.equals(g.getCompany());
}
})
.sort(Ord.ord(
new F<Guest, F<Guest, Ordering>>() {
public F<Guest, Ordering> f(final Guest a1) {
return new F<Guest, Ordering>() {
public Ordering f(final Guest a2) {
int x = Integer.compare(a1.getGrade(), a2.getGrade());
return x < 0 ? Ordering.LT : x == 0 ? Ordering.EQ : Ordering.GT;
}
};
}
}))
.map(new F<Guest, String>() {
public String f(Guest g) {
return g.getName();
}
})
.toCollection();
return new ArrayList<String>(mapped);
}
```
### [Apache Commons Collections](http://commons.apache.org/proper/commons-collections/)
[CommonsCollectionsResort.java](src/main/java/com/naver/helloworld/resort/service/CommonsCollectionsResort.java)
```java
public List<String> findGuestNamesByCompany(final String company) {
List<Guest> all = repository.findAll();
List<Guest> filtered = ListUtils.select(all, new Predicate<Guest>() {
public boolean evaluate(Guest g) {
return company.equals(g.getCompany());
}
});
Collections.sort(filtered, new Comparator<Guest>() {
public int compare(Guest o1, Guest o2) {
return Integer.compare(o1.getGrade(), o2.getGrade());
}
});
Collection<String> names = CollectionUtils.collect(filtered, new Transformer<Guest, String>(){
public String transform(Guest g) {
return g.getName();
}
});
return new ArrayList<>(names);
}
```
### [Jedi](http://jedi.codehaus.org/)
[JediResort.java](src/main/java/com/naver/helloworld/resort/service/JediResort.java)
```java
public List<String> findGuestNamesByCompany(final String company) {
List<Guest> all = repository.findAll();
List<Guest> filtered = FunctionalPrimitives.select(all, new Filter<Guest>() {
public Boolean execute(Guest g) {
return company.equals(g.getCompany());
}
});
List<Guest> sorted = Comparables.sort(filtered, new Functor<Guest, Integer>() {
public Integer execute(Guest g) {
return g.getGrade();
}
});
return FunctionalPrimitives.map(sorted, new Functor<Guest, String>() {
public String execute(Guest g) {
return g.getName();
}
});
}
```
## Implementations by other JVM languages
- Groovy : 2.3.9
- Scala : 2.11.4
- Kotlin : 0.10.195
- Xtend : 2.7
- Ceylon : 1.1.0
### [Groovy](http://groovy.codehaus.org/)
[GroovyAdvancedResort.groovy](src/main/groovy/com/naver/helloworld/resort/service/GroovyAdvancedResort.groovy)
```groovy
List<String> findGuestNamesByCompany(String company) {
List<Guest> all = repository.findAll()
all.findAll { it.company == company }
.sort { it.grade }
.collect { it.name }
}
```
### [Scala](http://www.scala-lang.org/)
[ScalaAdvancedResort.scala](src/main/scala/com/naver/helloworld/resort/service/ScalaAdvancedResort.scala)
```scala
import scala.collection.JavaConversions._
...
def findGuestNamesByCompany(company: String): java.util.List[String] = {
val all = repository.findAll
all.filter ( _.getCompany == company)
.sortBy ( _.getGrade )
.map ( _.getName )
}
```
### [Kotlin](http://kotlinlang.org)
[KotlinAdvancedResort.kt](src/main/kotlin/com/naver/helloworld/resort/service/KotlinAdvancedResort.kt)
```kotlin
override fun findGuestNamesByCompany(company: String): List<String> {
val all = repository.findAll()
return all.filter { it.getCompany() == company }
.sortBy { it.getGrade() }
.map { it.getName() }
}
```
### [Xtend](http://www.eclipse.org/xtend/)
[XtendAdvancedResort.xtend](src/main/xtend/com/naver/helloworld/resort/service/XtendAdvancedResort.xtend)
```xtend
override findGuestNamesByCompany(String aCompany) {
val all = repository.findAll()
all.filter [company == aCompany]
.sortBy[grade]
.map[name]
}
```
### [Ceylon](http://ceylon-lang.org/)
[resort.ceylon](src/main/ceylon/com/naver/helloworld/resort/service/resort.ceylon)
```ceylon
import ceylon.interop.java { CeylonIterable }
import java.util {JList = List, JArrayList = ArrayList }
import java.lang {JString = String}
...
shared actual JList<JString> findGuestNamesByCompany(String company) {
value all = repository.findAll() ;
value names = CeylonIterable(all)
.filter((Guest g) => g.company == company)
.sort(byIncreasing((Guest g) => g.grade.intValue()))
.map((Guest g) => g.name);
value jnames = JArrayList<JString>();
for (name in names) {jnames.add(JString(name));}
return jnames;
}
```
## Implementations by modern Java
[ModernJavaAdvancedResort.java](src/main/java/com/naver/helloworld/resort/service/ModernJavaAdvancedResort.java)
```java
public List<String> findGuestNamesByCompany(String company) {
List<Guest> guests = repository.findAll();
return guests.stream()
.filter(g -> company.equals(g.getCompany()))
.sorted(Comparator.comparing(Guest::getGrade))
.map(Guest::getName)
.collect(Collectors.toList());
}
```
# Refactoring by lambda expressions
## Async Servlet
### Classic Java
[ClassicAsyncServlet.java](src/main/java/com/naver/helloworld/web/ClassicAsyncServlet.java)
```java
public void doGet(final HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
final AsyncContext asyncContext = request.startAsync();
asyncContext.start(new Runnable() {
public void run() {
// long running job
asyncContext.dispatch("/status.jsp");
}
});
}
```
### Modern Java
[ModernAsyncServlet.java](src/main/java/com/naver/helloworld/web/ModernAsyncServlet.java)
```java
public void doGet(final HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
AsyncContext asyncContext = request.startAsync();
asyncContext.start(() -> {
// long running job
asyncContext.dispatch("/status.jsp");
});
}
```
## Spring JDBC
### Classic Java
[ClassicJdbcRepository.java](src/main/java/com/naver/helloworld/resort/repository/ClassicJdbcRepository.java)
```java
public List<Guest> findAll() {
return jdbcTemplate.query(SELECT_ALL, new RowMapper<Guest>(){
public Guest mapRow(ResultSet rs, int rowNum) throws SQLException {
return new Guest (
rs.getInt("id"),
rs.getString("name"),
rs.getString("company"),
rs.getInt("grade")
);
}
});
}
```
### Modern Java
[ModernJdbcRepository.java](src/main/java/com/naver/helloworld/resort/repository/ModernJdbcRepository.java)
```java
public List<Guest> findAll() {
return jdbcTemplate.query(SELECT_ALL,
(rs, rowNum) ->new Guest (
rs.getInt("id"),
rs.getString("name"),
rs.getString("company"),
rs.getInt("grade")
)
);
}
```
## Event bindings in Android
### Classic Java
[ClassicFragment.java](src/main/java/com/naver/helloworld/resort/android/ClassicFragment.java)
```java
Button calcButton = (Button) view.findViewById(R.id.calcBtn);
Button sendButton = (Button) view.findViewById(R.id.sendBtn);
calcButton.setOnClickListener(new OnClickListener() {
public void onClick(View view) {
calculate();
}
});
sendButton.setOnClickListener(new OnClickListener() {
public void onClick(View view) {
send();
}
});
```
### Modern Java
[ModernFragment.java](src/main/java/com/naver/helloworld/resort/android/ModernFragment.java)
```java
Button calcButton = (Button) view.findViewById(R.id.calcBtn);
Button sendButton = (Button) view.findViewById(R.id.sendBtn);
calcButton.setOnClickListener(v -> calculate());
sendButton.setOnClickListener(v -> send());
```
# Frameworks using lambda expressions
### [Lambda Behave](http://richardwarburton.github.io/lambda-behave/)
[ResortServiceSpec.java](src/test/java/com/naver/helloworld/resort/service/ResortServiceSpec.java)
```java
@RunWith(JunitSuiteRunner.class)
public class ResortServiceSpec {{
GuestRepository repository = new MemoryRepository();
ResortService service = new ModernJavaResort(repository);
describe("ResortService with modern Java", it -> {
it.isSetupWith(() -> {
repository.save(
new Guest(1, "jsh", "Naver", 15),
new Guest(2, "hny", "Line", 10),
new Guest(3, "chy", "Naver", 5)
);
});
it.isConcludedWith(repository::deleteAll);
it.should("find names of guests by company ", expect -> {
List<String> names = service.findGuestNamesByCompany("Naver");
expect.that(names).isEqualTo(Arrays.asList("chy","jsh"));
});
});
}}
```
### [Jinq](http://www.jinq.org/)
[JinqResort.java](src/main/java/com/naver/helloworld/resort/service/JinqResort.java)
```java
private EntityManager em;
@Autowired
public JinqResort(EntityManager em) {
this.em = em;
}
private <T> JinqStream<T> stream(Class<T> clazz) {
return new JinqJPAStreamProvider(em.getEntityManagerFactory()).streamAll(em, clazz);
}
public List<String> findGuestNamesByCompany(String company) {
return stream(Guest.class)
.where(g -> g.getCompany().equals(company))
.sortedBy(Guest::getGrade)
.select(Guest::getName)
.toList();
}
```
A query generated by JinqResort
```sql
Hibernate: select guest0_.id as id1_0_, guest0_.company as company2_0_, guest0_.grade as grade3_0_, guest0_.name as name4_0_ from guest guest0_ where guest0_.company=? order by guest0_.grade ASC limit ?
```
### [Spark](http://www.sparkjava.com/)
[SparkServer.java](src/main/java/com/naver/helloworld/web/SparkServer.java)
```java
import static spark.Spark.*;
import com.naver.helloworld.resort.service.ResortService;
public class SparkServer {
public static void main(String[] args) {
get("/guests/:company", (request, response) -> {
String company = request.params(":company");
return "No guests from " + company;
});
}
}
```
[ResortServer.java](src/main/java/com/naver/helloworld/resort/ResortServer.java) (Spark + Spring)
```java
@SpringBootApplication
public class ResortServer {
@Autowired
private ResortService service;
public void start() {
get("/guests/:company", (request, response) -> {
String company = request.params(":company");
List<String> names = service.findGuestNamesByCompany(company);
return "Guests from " + company + " : " + names;
});
}
public static void main(String[] args) {
ApplicationContext context = SpringApplication.run(ResortServer.class);
context.getBean(ResortServer.class).start();
}
}
```
| 0 |
stream-iori/vertx-rpc-example | The Example of vertx-rpc | null | # vertx-rpc-example
The Example of vertx-rpc
| 1 |
xtext/maven-xtext-example | An Xtext language and example usage of it built with Maven | null | [](https://github.com/xtext/maven-xtext-example/actions?query=workflow%3ABuild)
# An Xtext Language Built with Maven
A small example to show how to configure a Maven build for an Xtext language and how to use it from Maven and Gradle.
## Language Build
If you use Xtext 2.9 or higher, the Maven build for your language is auto-generated. Just skip ahead to the usage section.
- see my.mavenized.herolanguage.* projects
- Language plug-ins, updatesite and Eclipse feature built via Maven/Tycho
- Xtext Code Generation (Language infrastructure generated from grammar)
- Xtend Code Generation
## Language Usage
- example-project
- example-project-gradle
- Example Language (herolanguage) Code Generation
- Xtend Code Generation
Try it out!
# Steps
## 1. Increase memory
```bash
export MAVEN_OPTS="-Xmx512m"
```
## 2. Build the language
```bash
mvn clean install
```
## 3. Build the example projects
```bash
cd ../example-project/
mvn clean install
```
```bash
cd ../example-project-gradle/
./gradlew build
```
# Builds
We now have automatic builds:
https://github.com/xtext/maven-xtext-example/actions?query=workflow%3ABuild
# Maven Archetype
There is also a Maven Archetype available that automatically creates your new project based on this example:
https://github.com/fuinorg/emt-xtext-archetype
# Known Issues
## 1. Build fails due to version conflicts
The build will fail immediately because of version conflicts. A possible error might look similar to the following:
* ```No versions available for org.eclipse.emf:org.eclipse.emf.mwe2.runtime:jar:[2.9.1.201705291010] within specified range```
Even if the specified version (see the pom) is available on the central Maven repository, updating the related snapshots will most likely fix the problem:
* ```mvn clean install -U```
| 1 |
bzdgn/spring-boot-restful-web-service-example | A detailed Standalone RESTful web service example application with the use of Spring Boot framework | h2-database h2-embedded-database java jersey jersey-spring-hibernate microservice microservices restful-api restful-webservices spring-boot spring-framework standalone uberjar | Spring Boot RESTful Web Service Example
=======================================
The main purpose of this sample project is to demonstrate the capabilities of Spring Boot.
Additionally, I want to show challenging problems that can occur during development
while using Spring Boot.
The first goal is to show how easy it is to start a web service with an embedded Tomcat and an
embedded H2 database. This is the main goal of the project.
The second goal concerns dependency injection. Here is a challenging problem with dependency
injection: assume that you have two implementations ready for one interface. How are you going
to select between them? I'll explain several ways, but I'll also demonstrate how we can select
our implementation via external configuration, so that we can update the configuration without
touching the code; we just restart the jar file and that's all.
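As a plain-Java illustration of that selection problem (all class names below are hypothetical and are not part of this project's code), the choice between two implementations of one interface can be reduced to a single configuration value; in Spring, the same effect can be achieved with qualifiers, profiles, or conditional beans:

```java
// Hypothetical sketch of the "two implementations, one interface" problem.
// In Spring the selection could be driven by @Qualifier, @Profile or an
// external property; here a plain string stands in for that property.
interface ConsultantFormatter {
    String format(String name);
}

class UpperCaseFormatter implements ConsultantFormatter {
    public String format(String name) { return name.toUpperCase(); }
}

class LowerCaseFormatter implements ConsultantFormatter {
    public String format(String name) { return name.toLowerCase(); }
}

class FormatterSelector {
    // The value of 'impl' would come from an external configuration file,
    // so switching implementations needs no code change, only a restart.
    static ConsultantFormatter fromConfig(String impl) {
        if ("upper".equals(impl)) {
            return new UpperCaseFormatter();
        }
        return new LowerCaseFormatter();
    }
}
```

Swapping the value of `impl` in an external properties file then switches the implementation without recompiling.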
Thirdly, I have also demonstrated how to use Java application configuration within the
two-implementations-for-one-interface scenario explained above.
Lastly, I will explain all the deployment details and the main configuration of the whole
project, including the H2 database configuration.
Moreover, I will also demonstrate how to test your RESTful application with the
Postman tool.
TOC
---
- [0 Prerequisite And Demo App](#0-prerequisite-and-demo-app) <br/>
- [1 About Spring Boot](#1-about-spring-boot) <br/>
- [2 Create Spring Boot Project With Maven](#2-create-spring-boot-project-with-maven) <br/>
- [3 Spring Boot Dependencies](#3-spring-boot-dependencies) <br/>
- [4 Making Uber Jar](#4-making-uber-jar) <br/>
- [5 Project Overview](#5-project-overview) <br/>
- [6 External Configuration Example](#6-external-configuration-example) <br/>
- [7 Application Properties](#7-application-properties) <br/>
- [8 H2 Database Preparation](#8-h2-database-preparation) <br/>
- [9 Sending And Receiving JSONs With Postman](#9-sending-and-receiving-jsons-with-postman) <br/>
* [9-a- Test](#9-a-test) <br/>
* [9-b- List](#9-b-list) <br/>
* [9-c- Create](#9-c-create) <br/>
* [9-d- Retrieve](#9-d-retrieve) <br/>
* [9-e- Update](#9-e-update) <br/>
* [9-f- Delete](#9-f-delete) <br/>
- [10 Building And Running The Standalone Application](#10-building-and-running-the-standalone-application) <br/>
0 Prerequisite And Demo App
----------------------------
To use this project, you are going to need;
- Java JDK 8 (1.8)
- Maven compatible with JDK 8
- Any Java IDE
- [Postman tool](https://www.getpostman.com/) (optional, will be used for testing web service)
We are going to build a demo app named consultant-api. This will be a simple web service with
basic CRUD operations. I'm going to demonstrate default and external configuration, and how to
use multiple implementations and autowire them, both within the code and outside the code with
an external configuration file. Our app will be a standalone application that we can run
independently, using an embedded Tomcat and an embedded H2 database.
[Go back to TOC](#toc)
1 About Spring Boot
--------------------
Whenever there is a new framework in town, you should ask two things. One: why should I use this
framework? In other words, what are the benefits of this framework, or what does this framework
solve? Two: when should I use this framework? That is, in which specific scenarios is this
framework useful, or what is the problem domain of this framework?
When we make a web service with the Spring framework, we have to generate a war file and configure
web.xml, and if we are going to use a connection pool, the configuration is costly. All of this
costs time. So instead of writing your code and doing your development, a lot of time
is wasted on configuration. This is where Spring Boot comes into play. Spring Boot simplifies
configuration and reduces boilerplate code that adds no value to your software development.
So, what Spring Boot solves is the time lost on configuration. For example, you can create a web
service with Spring Boot that runs on an embedded Tomcat server which is automatically configured,
so you don't have to deal with the configuration. You can set all your configuration parameters
via default application properties. You can also connect to an embedded H2 database; the same
applies for its configuration.
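As an illustration, a couple of standard Spring Boot keys in `src/main/resources/application.properties` are enough to tune the embedded server (the values below are examples, not this project's actual settings):

```
# Port of the embedded Tomcat server
server.port=8080
# Context path under which the service is exposed
server.context-path=/
```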
Secondly, you don't have to generate a war file. All Spring Boot applications run as standalone
Java jar files. Where is it useful, then? If you are using a microservice architecture, especially
one running on a cloud (but not necessarily), then you can easily do your development with Spring
Boot. In my opinion, Spring Boot is one of the best frameworks to use in such a scenario and
architecture. You can easily create simple web services, put them inside a Docker container (which
is not a part of this tutorial) and run them on Amazon Web Services or any cloud environment.
Additionally, you don't have to track the versioning of your dependencies. Normally, if you are
using a particular version of Spring, you have to use the appropriate versions of the other
dependencies that depend on it. With Spring Boot, you don't have to check which version of
Jackson is compatible with your version of Jersey. You don't even need to write the versions of
your dependencies. I'm going to demonstrate all of these benefits.
[Go back to TOC](#toc)
2 Create Spring Boot Project With Maven
----------------------------------------
What do we need to set up a Spring Boot project? Although there are other ways (like Spring
Initializr), I'll go with setting up our project with Maven.
Because we are creating a web application here, we will first create a Maven project from an
archetype, then we will add the Spring Boot dependencies.
You can use the following Maven command to create a project. In this project, I've used this exact
command to create our project:
```
mvn archetype:generate -DgroupId=com.levent.consultantapi -DartifactId=consultant-api -DarchetypeArtifactId=maven-archetype-quickstart -DinteractiveMode=false
```
[Go back to TOC](#toc)
3 Spring Boot Dependencies
---------------------------
How do we make our project a Spring Boot project? We simply define a parent project in our POM file,
just as below;
```
<!-- Spring Parent Project -->
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>1.3.1.RELEASE</version>
</parent>
```
This parent makes our project Spring Boot aware. Then, since we are going to create a web
application, we add the Spring Boot starter dependencies. For this project, it is vital to add
the dependency below;
```
<!-- Spring boot starter web: integrates and auto-configures -->
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
```
Notice that we did not use the version attribute in the dependency block. That's something Spring
Boot handles for us: by just adding this starter dependency, we get everything we need, like
Jackson, the embedded Tomcat, and the rest. The versioning is also managed by Spring Boot, so we
don't have to check whether the versions of our transitive dependencies are compatible with each other.
Moreover, in this project we will need an H2 embedded database, so we add the following dependency
to our dependencies block;
```
<!-- H2 Embedded Database -->
<dependency>
<groupId>com.h2database</groupId>
<artifactId>h2</artifactId>
</dependency>
```
The last thing, in addition to the dependencies in the POM file, is to use the @SpringBootApplication
annotation on our [EntryPoint](https://github.com/bzdgn/spring-boot-restful-web-service-example/blob/master/src/main/java/com/levent/consultantapi/EntryPoint.java) class.
With this, the Spring Boot configuration is complete.
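A minimal sketch of such an entry-point class is below. It compiles only with the Spring Boot dependencies above on the classpath, and the package name simply mirrors the repo's layout;

```
package com.levent.consultantapi;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

// @SpringBootApplication enables component scanning and auto-configuration;
// SpringApplication.run() boots the application on the embedded Tomcat.
@SpringBootApplication
public class EntryPoint {

    public static void main(String[] args) {
        SpringApplication.run(EntryPoint.class, args);
    }
}
```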
[Go back to TOC](#toc)
4 Making Uber Jar
------------------
When we develop a web service, say with the Jersey framework in a plain Spring context, we build
a .war file and deploy it to a container. Spring Boot, however, is a containerless framework. We
do not need a web container, which also means we won't generate a .war file. What we do instead
is pack all the libraries and frameworks used in our project into one big jar file, the Uber Jar
(a.k.a. Fat Jar).
To do so, our build tool Maven has a plugin named the Maven Shade Plugin. We define it within the
build block of our POM file. You can see a sample build block below;
```
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<configuration>
<source>1.8</source>
<target>1.8</target>
</configuration>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-shade-plugin</artifactId>
<executions>
<execution>
<phase>package</phase>
<goals>
<goal>shade</goal>
</goals>
<configuration>
<transformers>
<transformer
implementation="org.apache.maven.plugins.shade.resource.AppendingTransformer">
<resource>META-INF/spring.handlers</resource>
</transformer>
<transformer
implementation="org.springframework.boot.maven.PropertiesMergingResourceTransformer">
<resource>META-INF/spring.factories</resource>
</transformer>
<transformer
implementation="org.apache.maven.plugins.shade.resource.AppendingTransformer">
<resource>META-INF/spring.schemas</resource>
</transformer>
<transformer
implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer" />
<transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
<mainClass>com.levent.consultantapi.EntryPoint</mainClass>
</transformer>
</transformers>
</configuration>
</execution>
</executions>
</plugin>
</plugins>
</build>
```
As you can see, two plugins are used in the plugins block of the build block above. I've used the
Maven Compiler Plugin so that I can define the source and target Java versions. The other plugin
is the Maven Shade Plugin, which we use to pack our Uber Jar.
In the Maven Shade Plugin block, we define our mainClass. Because our application is a standalone
Java application, we have to define the entry point, the starter class of our application. The
name of this class is arbitrary.
You can check out the full POM file: [Project Object Model](https://github.com/bzdgn/simple-grizzly-standalone-restful-webservice-example/blob/master/pom.xml)
[Go back to TOC](#toc)
5 Project Overview
-------------------
Our project consists of several layers. For simplicity, if we exclude the [EntryPoint](https://github.com/bzdgn/spring-boot-restful-web-service-example/blob/master/src/main/java/com/levent/consultantapi/EntryPoint.java) class,
which is located in the top-level package, we have the following packages;
- [controller package](https://github.com/bzdgn/spring-boot-restful-web-service-example/tree/master/src/main/java/com/levent/consultantapi/controller)
- [service package](https://github.com/bzdgn/spring-boot-restful-web-service-example/tree/master/src/main/java/com/levent/consultantapi/service)
- [repository package](https://github.com/bzdgn/spring-boot-restful-web-service-example/tree/master/src/main/java/com/levent/consultantapi/repository)
- [model package](https://github.com/bzdgn/spring-boot-restful-web-service-example/tree/master/src/main/java/com/levent/consultantapi/model)
You can see the logical representation of these packages below;

The controller package has one controller class, [ConsultantController](https://github.com/bzdgn/spring-boot-restful-web-service-example/blob/master/src/main/java/com/levent/consultantapi/controller/ConsultantController.java). In [ConsultantController](https://github.com/bzdgn/spring-boot-restful-web-service-example/blob/master/src/main/java/com/levent/consultantapi/controller/ConsultantController.java),
all the RESTful methods are defined. This class needs a service-layer implementation, thus
[ConsultantController](https://github.com/bzdgn/spring-boot-restful-web-service-example/blob/master/src/main/java/com/levent/consultantapi/controller/ConsultantController.java) holds a [ConsultantService](https://github.com/bzdgn/spring-boot-restful-web-service-example/blob/master/src/main/java/com/levent/consultantapi/service/ConsultantService.java) interface field, and we use the @Autowired annotation
so that the Spring context will find the appropriate implementation. In our case, we have only
one service implementation, [ConsultantServiceImpl](https://github.com/bzdgn/spring-boot-restful-web-service-example/blob/master/src/main/java/com/levent/consultantapi/service/consultant/impl/ConsultantServiceImpl.java).
You can also see the [InfoService](https://github.com/bzdgn/spring-boot-restful-web-service-example/blob/master/src/main/java/com/levent/consultantapi/service/InfoService.java) interface wrapped inside the [ConsultantController](https://github.com/bzdgn/spring-boot-restful-web-service-example/blob/master/src/main/java/com/levent/consultantapi/controller/ConsultantController.java) class, also
marked with the @Autowired annotation. I'll explain it later; for now, our focus in this project
overview is the main structure of this simple application.
At the service layer, we have the [ConsultantService](https://github.com/bzdgn/spring-boot-restful-web-service-example/blob/master/src/main/java/com/levent/consultantapi/service/ConsultantService.java) interface and one implementation of it:
[ConsultantServiceImpl](https://github.com/bzdgn/spring-boot-restful-web-service-example/blob/master/src/main/java/com/levent/consultantapi/service/consultant/impl/ConsultantServiceImpl.java). You will see that [ConsultantServiceImpl](https://github.com/bzdgn/spring-boot-restful-web-service-example/blob/master/src/main/java/com/levent/consultantapi/service/consultant/impl/ConsultantServiceImpl.java) holds a [ConsultantRepository](https://github.com/bzdgn/spring-boot-restful-web-service-example/blob/master/src/main/java/com/levent/consultantapi/repository/ConsultantRepository.java)
interface field, also marked with the @Autowired annotation.
At the repository layer, we have a different situation. There are two implementations of the
[ConsultantRepository](https://github.com/bzdgn/spring-boot-restful-web-service-example/blob/master/src/main/java/com/levent/consultantapi/repository/ConsultantRepository.java) interface;
- [ConsultantStubImpl](https://github.com/bzdgn/spring-boot-restful-web-service-example/blob/master/src/main/java/com/levent/consultantapi/repository/impl/ConsultantStubImpl.java)
- [ConsultantJPARepositoryImpl](https://github.com/bzdgn/spring-boot-restful-web-service-example/blob/master/src/main/java/com/levent/consultantapi/repository/impl/ConsultantJPARepositoryImpl.java)
So, how does Spring handle two implementation classes that implement one interface with the
@Autowired annotation? How is Spring going to choose between the candidates? Normally, Spring
gets confused, throws an exception, and you will find the following message inside the stack
trace;
```
Caused by: org.springframework.beans.factory.NoUniqueBeanDefinitionException: No qualifying bean of type [com.levent.consultantapi.repository.ConsultantRepository] is defined: expected single matching bean but found 2: consultantJPARepositoryImpl,consultantStubImpl
```
Because there are two candidate implementations for one single interface, the autowiring functionality
of the Spring context will fail. The solution is the @Primary annotation. You can see it inside
[ConsultantJPARepositoryImpl](https://github.com/bzdgn/spring-boot-restful-web-service-example/blob/master/src/main/java/com/levent/consultantapi/repository/impl/ConsultantJPARepositoryImpl.java). When there are multiple implementations of the same autowired interface,
you have to put @Primary on one of them.
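As a sketch, the fix looks like the snippet below. Whether the actual classes carry @Repository, @Component, or another stereotype annotation is an assumption; only the @Primary placement matters here;

```
@Repository
@Primary
public class ConsultantJPARepositoryImpl implements ConsultantRepository {
    // preferred candidate: injected wherever a ConsultantRepository is autowired
}

@Repository
public class ConsultantStubImpl implements ConsultantRepository {
    // still a valid bean, but no longer an ambiguous candidate
}
```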
But what if we want to keep both implementations and switch between them without changing the
code? Then we can use an external configuration, which I'm going to explain in the next chapter.
[Go back to TOC](#toc)
6 External Configuration Example
---------------------------------
Normally, Spring Boot configuration is defined in [application.properties](https://github.com/bzdgn/spring-boot-restful-web-service-example/blob/master/config/application.properties). Its default location is either
the src/main/resources folder or a subfolder named "config" under the current directory. I'm using the second way: I created
a config folder under the project and put the main [application.properties](https://github.com/bzdgn/spring-boot-restful-web-service-example/blob/master/config/application.properties) file there.
However, I also want to keep the common properties in [application.properties](https://github.com/bzdgn/spring-boot-restful-web-service-example/blob/master/config/application.properties) and, for specific
purposes, use a secondary properties file in addition to it. In this chapter, I'm going to demonstrate how to use an
externalized custom properties file.
But first, let's go back to the problem I talked about earlier. The scenario is as follows: I have two or more
implementations of a single interface injected with the @Autowired annotation, and I don't want to change any code when I
switch between the implementations. Spring's answer to the ambiguity was the @Primary annotation, so that the Spring context does not throw an exception when it cannot decide which implementation to use.
Here is my solution;
- I created an externalized configuration file named [implementation.properties](https://github.com/bzdgn/spring-boot-restful-web-service-example/blob/master/config/implementation.properties)
- I created the [AppConfig](https://github.com/bzdgn/spring-boot-restful-web-service-example/blob/master/src/main/java/com/levent/consultantapi/config/AppConfig.java) class to read the custom configuration file
If you look at the [AppConfig](https://github.com/bzdgn/spring-boot-restful-web-service-example/blob/master/src/main/java/com/levent/consultantapi/config/AppConfig.java) class under the config package, you will see that the class is marked with two annotations.
First, it is marked with @Configuration, because it needs to be processed before the injection of the autowired variable.
Second, it is marked with @PropertySource, which points to the specific property file we are going
to use.
The process is as follows;
- The Spring context searches for the @Configuration annotation and finds [AppConfig](https://github.com/bzdgn/spring-boot-restful-web-service-example/blob/master/src/main/java/com/levent/consultantapi/config/AppConfig.java)
- Via [AppConfig](https://github.com/bzdgn/spring-boot-restful-web-service-example/blob/master/src/main/java/com/levent/consultantapi/config/AppConfig.java), it reads the [implementation.properties](https://github.com/bzdgn/spring-boot-restful-web-service-example/blob/master/config/implementation.properties) file
- The greeter.implementation property's value is loaded into the impl variable of [AppConfig](https://github.com/bzdgn/spring-boot-restful-web-service-example/blob/master/src/main/java/com/levent/consultantapi/config/AppConfig.java)
- Via the #getImplementationFromPropertiesFile method, the implementation bean is created based on the config file
- The context now has the right implementation based on the configuration
- During the creation of the [ConsultantController](https://github.com/bzdgn/spring-boot-restful-web-service-example/blob/master/src/main/java/com/levent/consultantapi/controller/ConsultantController.java) class, the Spring context finds the autowired greeter field, whose type is the [InfoService](https://github.com/bzdgn/spring-boot-restful-web-service-example/blob/master/src/main/java/com/levent/consultantapi/service/InfoService.java) interface
- The implementation bean is autowired into this greeter field
You can see an overview of this process in the diagram below;

[Go back to TOC](#toc)
7 Application Properties
-------------------------
Spring Boot solves our configuration problem with auto-configuration, as we use an embedded Tomcat and
an embedded H2 database. But how do we specify the port the Tomcat container runs on, the target
database, the connection pool parameters, and so on?
Spring Boot provides a default configuration properties file called [application.properties](https://github.com/bzdgn/spring-boot-restful-web-service-example/blob/master/config/application.properties). Within this file,
there are hundreds of configuration parameters we can use. You can see the detailed parameter list via the following
link;
[Spring Boot Application Properties Reference](https://docs.spring.io/spring-boot/docs/current/reference/html/common-application-properties.html)
The default location of the application.properties file is either somewhere on the classpath, for example
under src/main/resources in a Maven project, or inside a config folder under the current working directory. It is
better to put the file under the config folder, which makes it easy to deploy inside a Docker container, but
the choice is yours. I placed [application.properties](https://github.com/bzdgn/spring-boot-restful-web-service-example/blob/master/config/application.properties) under the config folder.
Because we are using a server, an H2 database, a datasource, a DB connection pool and, lastly, Hibernate,
we should define the corresponding parameters in this [application.properties](https://github.com/bzdgn/spring-boot-restful-web-service-example/blob/master/config/application.properties) file, based on the [documented reference list](https://docs.spring.io/spring-boot/docs/current/reference/html/common-application-properties.html).
Let's take a detailed look;
- Server Configuration
The default port is 8080; however, we may want to change this, so I add the port configuration
explicitly, as below;
```
#server port
server.port=8080
```
- H2 Database Configuration
We also need to enable the H2 console, so that we can use the H2 database via the console to create
our tables and initialize our DB entries.
```
#H2 configuration
spring.h2.console.enabled=true
spring.h2.console.path=/h2
```
- DataSource Configuration
Instead of writing a connection string in code, we define the datasource parameters via our properties file, as below.
Notice that we defined our database as a file and that the name of the database is "consultantapi". We are going to use
this when we connect to the database via the console;
```
#Data source configuration
spring.datasource.url=jdbc:h2:file:~/consultantapi
spring.datasource.username=sa
spring.datasource.password=
spring.datasource.driver-class-name=org.h2.Driver
```
- Connection Pool Configuration
Here we define the connection pool parameters;
```
#DB Pool conf
spring.datasource.max-active=10
spring.datasource.max-idle=8
spring.datasource.max-wait=10000
spring.datasource.min-evictable-idle-time-millis=1000
spring.datasource.min-idle=8
spring.datasource.time-between-eviction-runs-millis=1
```
- Hibernate Configuration
We don't want Hibernate to delete or recreate our database entries on every restart of the server, so we turn schema
generation off. Note that a `#` comment in a .properties file must sit on its own line, as a trailing comment would become
part of the value; the documented "do nothing" value for this property is `none`;
```
#Hibernate Config
#keep the schema as-is, for a persistent database
spring.jpa.hibernate.ddl-auto=none
```
You can check our project's application properties file via here: [application.properties](https://github.com/bzdgn/spring-boot-restful-web-service-example/blob/master/config/application.properties)
[Go back to TOC](#toc)
8 H2 Database Preparation
--------------------------
Before using our database implementation instead of the stub implementation, we need to prepare the table
and the initial entries in the database. We have two things to do;
1. We need to create a CONSULTANT table to store our consultant model, defined in the [Consultant](https://github.com/bzdgn/spring-boot-restful-web-service-example/blob/master/src/main/java/com/levent/consultantapi/model/Consultant.java) class.
2. We need to insert some initial entries into the table.
The related SQL commands are located under the [consultant.sql](https://github.com/bzdgn/spring-boot-restful-web-service-example/blob/master/src/main/resources/consultant.sql)
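For orientation, a DDL sketch of what such a script might contain is below; the exact column names and types are assumptions mirroring the Consultant fields, and the repo's [consultant.sql](https://github.com/bzdgn/spring-boot-restful-web-service-example/blob/master/src/main/resources/consultant.sql) file is authoritative;

```
-- hypothetical sketch; see consultant.sql in the repo for the real commands
CREATE TABLE CONSULTANT (
    ID        BIGINT PRIMARY KEY,
    FIRSTNAME VARCHAR(64),
    LASTNAME  VARCHAR(64),
    AGE       INT,
    CLIENT    VARCHAR(64),
    ASSIGNED  BOOLEAN
);

INSERT INTO CONSULTANT VALUES (1, 'Levent', 'Divilioglu', 36, NULL, FALSE);
```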
As written in the previous chapter, we configured the H2 console under the "/h2" path, and we defined our server
port as "8080", so the console URL will be [http://localhost:8080/h2/](http://localhost:8080/h2/). We can connect to our H2
database console as below;

After connecting to the database via the console, we can easily run the SQL commands defined in [consultant.sql](https://github.com/bzdgn/spring-boot-restful-web-service-example/blob/master/src/main/resources/consultant.sql) as below;

Now our database is ready to go!
[Go back to TOC](#toc)
9 Sending And Receiving JSONs With Postman
-------------------------------------------
In this part, I'm going to demonstrate all the operations defined in our [ConsultantController](https://github.com/bzdgn/spring-boot-restful-web-service-example/blob/master/src/main/java/com/levent/consultantapi/controller/ConsultantController.java) class and how to test them,
either with a web browser or with the [Postman tool](https://www.getpostman.com/). We can test the GET methods with any web browser, but
operations that use the HTTP POST, PUT and DELETE methods cannot be executed from a plain browser, so I'm
going to use [Postman](https://www.getpostman.com/) for those.
You can download Postman via [this link](https://www.getpostman.com/)
I've provided the CRUD operations as a Postman collection, so that you can load all the prepared operations into the
Postman tool. You can find it under the misc directory;
[Postman Collection](https://github.com/bzdgn/spring-boot-restful-web-service-example/blob/master/misc/Consultant_API.postman_collection.json)
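If you prefer the command line to Postman, the same CRUD operations can be exercised with curl against a locally running instance (the JSON payloads are the samples from the subsections below);

```
curl http://localhost:8080/api/v1/test
curl http://localhost:8080/api/v1/consultants
curl -X POST -H "Content-Type: application/json" \
     -d '{"id":4,"firstName":"John","lastName":"Doe","age":99,"client":"Example Tech","assigned":true}' \
     http://localhost:8080/api/v1/consultants
curl http://localhost:8080/api/v1/consultants/4
curl -X PUT -H "Content-Type: application/json" \
     -d '{"id":4,"firstName":"Jayne","lastName":"Smith","age":66,"client":"Example New Company","assigned":true}' \
     http://localhost:8080/api/v1/consultants/4
curl -X DELETE http://localhost:8080/api/v1/consultants/4
```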
9-a Test
---------
```
Sub Path: /test
Full URL: http://localhost:8080/api/v1/test
Method: GET
Sends: N/A
Receives: Text
Sample Input: N/A
Sample Output;
Consultant-Api Version: 1.0.0 Written by: Levent Divilioglu
```
We can simply use our web browser and receive the text output as below;

But take into consideration that this response depends on which implementation we select in our custom configuration
file, [implementation.properties](https://github.com/bzdgn/spring-boot-restful-web-service-example/blob/master/config/implementation.properties).
You can select one of the four different implementations via the configuration file and see the results. For
more information, you can go back to the [6 External Configuration Example](#6-external-configuration-example) section.
[Go back to Sending And Receiving JSONs With Postman](#9-sending-and-receiving-jsons-with-postman) <br/>
[Go back to TOC](#toc)
9-b List
---------
```
Sub Path: /consultants
Full URL: http://localhost:8080/api/v1/consultants
Method: GET
Sends: N/A
Receives: JSON
Sample Input: N/A
Sample Output;
[{
"id": 1,
"firstName": "Levent",
"lastName": "Divilioglu",
"age": 36,
"client": null,
"assigned": false
},
{
"id": 2,
"firstName": "Altug",
"lastName": "Timuroglu",
"age": 41,
"client": "Altinorda IT",
"assigned": true
},
{
"id": 3,
"firstName": "Bugra",
"lastName": "Cengizoglu",
"age": 37,
"client": "KizilTug TECH",
"assigned": true
}]
```
Again, we can use a web browser to get the results as below;

[Go back to Sending And Receiving JSONs With Postman](#9-sending-and-receiving-jsons-with-postman) <br/>
[Go back to TOC](#toc)
9-c Create
-----------
```
Sub Path: /consultants
Full URL: http://localhost:8080/api/v1/consultants
Method: POST
Sends: JSON
Receives: JSON
Sample Input;
{
"id": 4,
"firstName": "John",
"lastName": "Doe",
"age": 99,
"client": "Example Tech",
"assigned": true
}
Sample Output;
{
"id": 4,
"firstName": "John",
"lastName": "Doe",
"age": 99,
"client": "Example Tech",
"assigned": true
}
```
This time, the operation is a POST, so we cannot do it with a browser; we have to use our tool,
Postman. Here is how I create my HTTP request;

With the '+' sign at the top of the Postman screen, we create our HTTP request. Then we select POST as the HTTP method,
paste the URL, and select "raw" and JSON for our input content. Then we select the "Body" tab and paste the content
we want to POST. In the lower half of the screen, on the "Body" tab, we will see our JSON response.
As you can see, the created content is returned. We can also verify the result using our list service via Postman (or a
web browser);

[Go back to Sending And Receiving JSONs With Postman](#9-sending-and-receiving-jsons-with-postman) <br/>
[Go back to TOC](#toc)
9-d Retrieve
-------------
```
Sub Path: /consultants
Full URL: http://localhost:8080/api/v1/consultants/{id}
Method: GET
Sends: N/A
Receives: JSON
Sample Input: N/A
Sample Output;
{
"id": 4,
"firstName": "John",
"lastName": "Doe",
"age": 99,
"client": "Example Tech",
"assigned": true
}
```
It is a simple GET again, so let's use our browser for testing. We are going to use a path
parameter, which will be "4"; the full path is as below;
```
http://localhost:8080/api/v1/consultants/4
```
The output will be as follows;

[Go back to Sending And Receiving JSONs With Postman](#9-sending-and-receiving-jsons-with-postman) <br/>
[Go back to TOC](#toc)
9-e Update
-----------
```
Sub Path: /consultants
Full URL: http://localhost:8080/api/v1/consultants/{id}
Method: PUT
Sends: JSON
Receives: JSON
Sample Input;
{
"id": 4,
"firstName": "Jayne",
"lastName": "Smith",
"age": 66,
"client": "Example New Company",
"assigned": true
}
Sample Output;
{
"id": 4,
"firstName": "Jayne",
"lastName": "Smith",
"age": 66,
"client": "Example New Company",
"assigned": true
}
```
In order to update, we use the PUT method, so we will use Postman again.
For the update method, we have to give the id in the URL as a path parameter, so the URL will be as below;
```
http://localhost:8080/api/v1/consultants/4
```

As you can see, the updated content is returned. We can also verify the result using our retrieve service via Postman (or a
web browser);

[Go back to Sending And Receiving JSONs With Postman](#9-sending-and-receiving-jsons-with-postman) <br/>
[Go back to TOC](#toc)
9-f Delete
-----------
```
Sub Path: /consultants
Full URL: http://localhost:8080/api/v1/consultants/{id}
Method: DELETE
Sends: N/A
Receives: JSON
Sample Input: N/A
Sample Output;
{
"id": 4,
"firstName": "Jayne",
"lastName": "Smith",
"age": 66,
"client": "Example New Company",
"assigned": true
}
```
Again, we will use Postman for the DELETE operation. As shown above, we don't have to provide
a body for this operation, but we do have to provide the id of the consultant to be deleted in the
URL, as below;
```
http://localhost:8080/api/v1/consultants/4
```

As you can see, the deleted content is returned. We can also verify the deletion using our retrieve service via Postman (or a
web browser);

[Go back to Sending And Receiving JSONs With Postman](#9-sending-and-receiving-jsons-with-postman) <br/>
[Go back to TOC](#toc)
10 Building And Running The Standalone Application
---------------------------------------------------
Now we can demonstrate how to run our consultant-api as a standalone application. First we must build it with the following
Maven command;
```
mvn clean package
```
This command collects all the needed jars and packs them into an Uber Jar (a.k.a. Fat Jar). We can find this
Uber Jar under the "target" folder; the name of the file will be "consultant-api-1.0-SNAPSHOT.jar".
We take this file and copy it to another arbitrary folder. Remember that we also need the two configuration files
located under the config folder; we copy those config files to our arbitrary folder as well.
I copied all the files mentioned above to the folder "D:\consultant-api\"; the structure is as follows;
```
D:\consultant-api
|
|___ consultant-api-1.0-SNAPSHOT.jar
|___ config
|______ application.properties
|______ implementation.properties
```
If the structure of the arbitrary folder (here it is 'consultant-api') looks like the above, we can try to run our
standalone application to see if it is working. Here is a successful output; for simplicity, I'm not going to paste the
whole log;
```
D:\consultant-api>java -jar consultant-api-1.0-SNAPSHOT.jar
. ____ _ __ _ _
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v1.0-SNAPSHOT)
2018-07-01 16:16:08.705 INFO 6616 --- [ main] com.levent.consultantapi.EntryPoint : Starting EntryPoint v1.0-SNAPSHOT on LEVASUS with PID 6616 (D:\consultant-api\consultant-api-1.0-SNAPSHOT.jar started by Levent in D:\consultant-api)
2018-07-01 16:16:08.711 INFO 6616 --- [ main] com.levent.consultantapi.EntryPoint : No active profile set, falling back to default profiles: default
2018-07-01 16:16:08.785 INFO 6616 --- [ main] ationConfigEmbeddedWebApplicationContext : Refreshing org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext@27808f31: startup date [Sun Jul 01 16:16:08 CEST 2018]; root of context hierarchy
2018-07-01 16:16:10.487 INFO 6616 --- [ main] o.s.b.f.s.DefaultListableBeanFactory : Overriding bean definition for bean 'beanNameViewResolver' with a different definition: replacing [Root bean: class [null]; scope=; abstract=false; lazyInit=false; autowireMode=3; dependencyCheck=0; autowireCandidate=true; primary=false; factoryBeanName=org.springframework.boot.autoconfigure.web.ErrorMvcAutoConfiguration$WhitelabelErrorViewConfiguration; factoryMethodName=beanNameViewResolver; initMet
```
Then we can test whether our standalone application is working with a web browser;

Yes, our standalone application works fine. We can easily deploy it in a Docker container, or just run it as
it is.
[Go back to TOC](#toc)
| 1 |
jghoman/finagle-java-example | Quick example of a Java Thrift server and client using Finagle | null | null | 1 |
cryxli/sr201 | Protocol example for SR-201 Series network relay | null | # Simple Client Application for SR-201 Ethernet Relay Board
Recently I ordered a little board, 50mm x 70mm, with two relays. They can be switched by sending commands over TCP or UDP. The only problem with it is that the code examples and instruction manual are entirely written in Chinese. Therefore, I created this repo to keep track of my findings regarding the SR-201-2.
## Models
The same idea, switching relays over ethernet, resulted in at least four different models of the SR-201:
* SR-201-1CH - Cased, single relay
* SR-201-2 - Plain board, two relays (mine)
* SR-201-RTC - Cased, four relays
* SR-201-E8B - Plain board, eight relays
They all seem to work with the same chip and software. Although, e.g., the SR-201-2 only has two relays, it also has an extension port with another 6 pins which can be switched, too.
## Protocols and Ports
The board supports the protocols ARP, ICMP, IP, TCP, UDP. Or short, everything needed to allow TCP and UDP connections.
When connected over TCP (port **6722**), the board will only accept 6 connections at a time. To prevent starvation, it will close TCP connections after they have been idle for 15 seconds.
Since UDP (port **6723**) is not an end-to-end connection, there are no such restrictions. It is noteworthy, though, that the board will execute UDP commands but never answer them. Therefore, querying the state of the relays has to be done over TCP.
The board also listens on TCP port **5111**. Over this connection the board can be configured; e.g., its static IP address can be changed.
## Factory Defaults
* Static IP address : 192.168.1.100
* Subnet mask : 255.255.255.0
* Default Gateway : 192.168.1.1
* Persistent relay state when power is lost : off
* Cloud service password : 000000
* DNS Server : 192.168.1.1
* Cloud service : connect.tutuuu.com
* Cloud service enabled: false
## Example Code
This repo contains the following modules:
* sr201-config-client - Client to read and change the config of the board.
* sr201-client - Simple client with 8 toggle buttons to change the state of the relays.
* sr201-server - REST interface to change the state of the relays.
* sr201-php-cloud-service - Example implementation of a cloud service back-end in PHP provided by [hobpet](https://github.com/hobpet) following the findings of [anakhaema](https://github.com/anakhaema).
Maven will create an executable JAR in each module's target directory.
## Scripts
In addition to my Java code examples, which are clearly intended as a replacement for the default VB and Delphi programs, I added a scripts directory that contains simpler, more pragmatic approaches to the SR-201 communication scheme.
* perl-config-script - A PERL script to manipulate the board's configuration by Christian DEGUEST.
* python-config-script - A python script to manipulate the board's configuration.
Many thanks to anyone who contributed to this knowledge base!
## Own Scripts
If you want to quickly set up your SR-201 without even starting a script or anything else, just check the protocol's [Config commands](https://github.com/cryxli/sr201/wiki/Config-commands) page and, e.g., send a command via netcat:
printf "#11111;" | nc [yourip] 5111
Note: It is crucial to use printf here, as newlines are treated as errors. It drove me crazy to find this out.
| 1 |
DataStax-Examples/spring-k8s-cassandra-microservices | Example microservices with Spring, Kubernetes, and Cassandra | cassandra kubernetes microservices spring spring-boot spring-cloud spring-data-cassandra | # Microservices with Spring, Kubernetes, and Cassandra
This repository contains sample inventory microservices that demonstrate how to use Spring, Kubernetes and Cassandra together as a single stack.

#### Contributors:
- [Cedrick Lunven](https://github.com/clun) - Twitter handle [@clun](https://twitter.com/clunven)
- [Chris Splinter](https://github.com/csplinter)
- [Frank Moley](https://github.com/fpmoles) - twitter handle [@fpmoles](https://twitter.com/fpmoles)
#### Modules:
- [`microservice-spring-boot`](microservice-spring-boot): Service for Products
- **Persistence Layer** : uses Cassandra Java driver's `CqlSession` directly for queries to products table
- **Exposition Layer** : uses `spring-web` `@Controller`
- [`microservice-spring-data`](microservice-spring-data): Service for Orders
- **Persistence Layer** : uses Spring Data Cassandra for data access to orders table
- **Exposition Layer** : uses Spring Data REST for API generation
- [`gateway-service`](gateway-service): Spring Cloud Gateway to route to the microservices
## 1. Objectives
Show a working set of microservices illustrating how to build Spring microservices with Kubernetes and Cassandra.
This repo leverages Spring modules:
- `spring-data`
- `spring-boot`
- `spring-data-rest`
- `spring-web`
- `spring-cloud-kubernetes`
- `spring-cloud-gateway`
## 2. How this Works
The primary mode of deployment is on a local Kubernetes cluster, though each service can be run standalone or in Docker.
The purpose is to show the many utilities of Spring in Kubernetes with Cassandra as the backing storage tier.
The business domain is an inventory / ecommerce application.
## 3. Setup and Running
### 3.a - Prerequisites
The prerequisites required for this application to run:
* Docker
* Kubernetes
* JDK 11+
* Maven
### 3.b - Setup
Clone the current repository
```
git clone https://github.com/DataStax-Examples/spring-k8s-cassandra-microservices.git
```
Start minikube
```
# use docker as the virtualization driver
minikube start --driver=docker --extra-config=apiserver.authorization-mode=RBAC,Node
# tell minikube to use local docker registry
eval `minikube docker-env`
```
Build the services
```
# from the spring-k8s-cassandra-microservices directory
mvn package
```
Build the docker images
```
cd microservice-spring-boot; docker build -t <your-docker-username>/spring-boot-service:1.0.0-SNAPSHOT .
cd microservice-spring-data; docker build -t <your-docker-username>/spring-data-service:1.0.0-SNAPSHOT .
cd gateway-service; docker build -t <your-docker-username>/gateway-service:1.0.0-SNAPSHOT .
```
Alter deployment.yml files with your docker username
```
# replace image name in deploy/spring-boot/spring-boot-deployment.yml
# replace image name in deploy/spring-data/spring-data-deployment.yml
# replace image name in deploy/gateway/gateway-deployment.yml
```
Create namespaces
```
kubectl create ns cass-operator
kubectl create ns spring-boot-service
kubectl create ns spring-data-service
kubectl create ns gateway-service
```
### 3.c - Setup DataStax Astra or Cassandra Kubernetes Operator
#### DataStax Astra
Create a free tier database in [DataStax Astra](https://astra.datastax.com/) with keyspace name `betterbotz`
Download the secure connect bundle from the Astra UI ([docs](https://docs.datastax.com/en/astra/aws/doc/dscloud/astra/dscloudObtainingCredentials.html))
Create secrets for the Astra username/password and secure connect bundle
```
DB_USER=<astra-db-user>
DB_PASSWORD=<astra-db-password>
SECURE_CONNECT_BUNDLE_PATH=<path-to-secure-connect-bundle>
```
```
kubectl -n spring-boot-service create secret generic db-secret --from-literal=username=$DB_USER --from-literal=password=$DB_PASSWORD
kubectl -n spring-boot-service create secret generic astracreds --from-file=secure-connect-bundle=$SECURE_CONNECT_BUNDLE_PATH
```
```
kubectl -n spring-data-service create secret generic db-secret --from-literal=username=$DB_USER --from-literal=password=$DB_PASSWORD
kubectl -n spring-data-service create secret generic astracreds --from-file=secure-connect-bundle=$SECURE_CONNECT_BUNDLE_PATH
```
Change Spring Boot [ConfigMap](deploy/spring-boot/spring-boot-service-configmap.yml) to use secure connect bundle
```
apiVersion: v1
kind: ConfigMap
metadata:
name: spring-boot-service
data:
application.yml: |-
astra.secure-connect-bundle: /app/astra/creds
```
Change Spring Data [ConfigMap](deploy/spring-data/spring-data-service-configmap.yml) to use secure connect bundle
```
apiVersion: v1
kind: ConfigMap
metadata:
name: spring-data-service
data:
application.yml: |-
astra.secure-connect-bundle: /app/astra/creds
```
Uncomment the following lines in [Spring Boot Deployment.yml](deploy/spring-boot/spring-boot-deployment.yml) and [Spring Data Deployment.yml](deploy/spring-data/spring-data-deployment.yml)
```
volumes:
- name: astravol
secret:
secretName: astracreds
items:
- key: secure-connect-bundle
path: creds
...
volumeMounts:
- name: astravol
mountPath: "/app/astra"
readOnly: true
```
You're ready to go!
#### Cassandra Kubernetes Operator
Start the Cassandra operator
```
# create the storage class for the database
kubectl -n cass-operator apply -f deploy/storage-class.yml
# apply the operator manifest
kubectl -n cass-operator apply -f https://raw.githubusercontent.com/DataStax-Academy/kubernetes-workshop-online/master/1-cassandra/11-install-cass-operator-v1.1.yaml
# start a single C* 4.0 node
kubectl -n cass-operator apply -f deploy/cassandra-4.0.0-1node.yml
```
Create the Kubernetes Secrets for database username and password
```
# get the username and password from the secret
DB_USER=$(kubectl -n cass-operator get secret cluster1-superuser -o yaml | grep username | cut -d " " -f 4 | base64 -d)
DB_PASSWORD=$(kubectl -n cass-operator get secret cluster1-superuser -o yaml | grep password | cut -d " " -f 4 | base64 -d)
# create k8s secrets for the services (skip cmd for Spring Boot service if using Astra)
kubectl -n spring-boot-service create secret generic db-secret --from-literal=username=$DB_USER --from-literal=password=$DB_PASSWORD
kubectl -n spring-data-service create secret generic db-secret --from-literal=username=$DB_USER --from-literal=password=$DB_PASSWORD
```
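The grep/cut/base64 pipeline above just pulls the base64-encoded fields out of the secret and decodes them. A rough Python equivalent — with a made-up secret payload purely for illustration, since Kubernetes stores secret `data` values base64-encoded — looks like:

```python
import base64

def decode_secret_field(encoded: str) -> str:
    """Decode a base64-encoded Kubernetes secret field back to plain text."""
    return base64.b64decode(encoded).decode("utf-8")

# Hypothetical data section of `kubectl get secret cluster1-superuser -o yaml`
secret_data = {
    "username": base64.b64encode(b"cluster1-superuser").decode("ascii"),
    "password": base64.b64encode(b"example-password").decode("ascii"),
}

db_user = decode_secret_field(secret_data["username"])
db_password = decode_secret_field(secret_data["password"])
```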
### Running
Start the services
```
# from the spring-k8s-cassandra-microservices directory
kubectl -n spring-boot-service apply -f deploy/spring-boot
kubectl -n spring-data-service apply -f deploy/spring-data
kubectl -n gateway-service apply -f deploy/gateway
```
Expose the Gateway endpoint
```
# get the gateway-service pod
GATEWAY_POD=$(kubectl -n gateway-service get pods | tail -n 1 | cut -f 1 -d ' ')
# forward the port
kubectl -n gateway-service port-forward $GATEWAY_POD 8080:8080
```
Optionally expose the Spring Boot service endpoints (useful for testing)
```
# get the spring-boot-service pod
BOOT_SERVICE_POD=$(kubectl -n spring-boot-service get pods | tail -n 1 | cut -f 1 -d ' ')
# forward the port
kubectl -n spring-boot-service port-forward $BOOT_SERVICE_POD 8083:8083
```
Optionally expose the Spring Data service endpoints (useful for testing)
```
# get the spring-data-service pod
DATA_SERVICE_POD=$(kubectl -n spring-data-service get pods | tail -n 1 | cut -f 1 -d ' ')
# forward the port
kubectl -n spring-data-service port-forward $DATA_SERVICE_POD 8081:8081
```
#### Gateway Service endpoints
The Spring Cloud Gateway is running on port 8080 and forwards requests
to the Spring Boot and Spring Data endpoints below. To test that this is
working, replace the hosts in the URLs below with `localhost:8080` and run
the same curl commands.
#### Spring Boot service endpoints
Explore the endpoints with Swagger (only works if endpoints exposed above): http://localhost:8083/swagger-ui.html

Add products
```
curl -X POST -H "Content-Type: application/json" -d '{"name": "mobile", "id":"123e4567-e89b-12d3-a456-556642440000", "description":"iPhone", "price":"500.00"}' http://localhost:8083/api/products/add
curl -X POST -H "Content-Type: application/json" -d '{"name": "mobile", "id":"123e4567-e89b-12d3-a456-556642440001", "description":"Android", "price":"600.00"}' http://localhost:8083/api/products/add
```
Get products with name = mobile
```
curl http://localhost:8083/api/products/search/mobile
```
Get products with name = mobile and id = 123e4567-e89b-12d3-a456-556642440001
```
curl http://localhost:8083/api/products/search/mobile/123e4567-e89b-12d3-a456-556642440001
```
Delete product with name = mobile and id = 123e4567-e89b-12d3-a456-556642440001
```
curl -X DELETE http://localhost:8083/api/products/delete/mobile/123e4567-e89b-12d3-a456-556642440001
```
#### Spring Data service endpoints
Add orders
```
curl -H "Content-Type: application/json" -d '{"key": {"orderId":"123e4567-e89b-12d3-a456-556642440000", "productId":"123e4567-e89b-12d3-a456-556642440000"}, "productName":"iPhone", "productPrice":"500.00", "productQuantity":1, "addedToOrderTimestamp": "2020-04-12T11:21:59.001+0000"}' http://localhost:8081/api/orders/add
curl -H "Content-Type: application/json" -d '{"key": {"orderId":"123e4567-e89b-12d3-a456-556642440000", "productId":"123e4567-e89b-12d3-a456-556642440001"}, "productName":"Android", "productPrice":"600.00", "productQuantity":1, "addedToOrderTimestamp": "2020-04-12T11:22:59.001+0000"}' http://localhost:8081/api/orders/add
```
Get orders with order_id = 123e4567-e89b-12d3-a456-556642440000
```
curl http://localhost:8081/api/orders/search/order-by-id?orderId=123e4567-e89b-12d3-a456-556642440000
```
Get order with order_id = 123e4567-e89b-12d3-a456-556642440000 and product_id = 123e4567-e89b-12d3-a456-556642440000
```
curl "http://localhost:8081/api/orders/search/order-by-product-id?orderId=123e4567-e89b-12d3-a456-556642440000&productId=123e4567-e89b-12d3-a456-556642440000"
```
Get only the product name and price of order_id = 123e4567-e89b-12d3-a456-556642440000
```
curl http://localhost:8081/api/orders/search/name-and-price-only?orderId=123e4567-e89b-12d3-a456-556642440000
```
Shows how to use a projection with Spring Data REST
```
curl "http://localhost:8081/api/orders/search/name-and-price-only?orderId=123e4567-e89b-12d3-a456-556642440000&projection=product-name-and-price"
```
Delete order with order_id = 123e4567-e89b-12d3-a456-556642440000 and product_id = 123e4567-e89b-12d3-a456-556642440000
```
curl -X DELETE "http://localhost:8081/api/orders/delete/product-from-order?orderId=123e4567-e89b-12d3-a456-556642440000&productId=123e4567-e89b-12d3-a456-556642440000"
```
Delete order with order_id = 123e4567-e89b-12d3-a456-556642440000
```
curl -X DELETE "http://localhost:8081/api/orders/delete/order?orderId=123e4567-e89b-12d3-a456-556642440000"
```
| 1 |
lurbas/ViperArchitectureExample | viper architecture example | null | # Search
[](http://android-arsenal.com/details/3/2448)
viper architecture example
This is an example of an application built with the VIPER architecture. It's built on top of sockeqwe's [Mosby](https://github.com/sockeqwe/mosby).
### Diagram

### Viper
I encourage you to read more about this pattern [here](https://speakerdeck.com/sergigracia/clean-architecture-viper) (the slide above is from the same presentation)
### License
Copyright 2015 Lucas Urbas
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
| 1 |
noveogroup-amorgunov/spring-mvc-react | Example of using spring 4 (rest full api with hibernate) + react.js (client) | mysql react spring spring-mvc | # Spring 4 MVC + ReactJS

A very light version of [stackoverflow](http://stackoverflow.com/) built with [ReactJS](https://facebook.github.io/react/) (client-side) and [Spring 4](https://spring.io/) (server-side).
## Features
- Authorization system (by [json web token](https://jwt.io/))
- Questions, answers, users, reputation, tags and votes!
- Localization in react using [localizify](https://github.com/noveogroup-amorgunov/localizify)
## Installation
**0** Clone repository!
```shell
$ git clone https://github.com/noveogroup-amorgunov/spring-mvc-react.git
```
**1** Change database driver (by default set for MySQL) and connections parameters (url, user and password) in `src/main/resources/app.properties`
**2** Change the `jwt` secret key in `src/main/resources/app.properties` too (optional)
**3** Create the schema. After the application runs, tables will be created automatically. Follow the example for MySQL:
```sql
CREATE SCHEMA `spring-mvc-react` DEFAULT CHARACTER SET utf8 ;
```
**4** Install and build frontend dependencies
```shell
$ cd src/main/webapp
$ npm install
$ npm install webpack -g # install webpack globally
$ npm run build # build bundle.js file
```
Use `npm run watch` to work in watch mode. When you change a javascript file, a new bundle.js will be built.
**5** Run server
```shell
$ mvn jetty:run
```
Access ```http://localhost:4017/spring4ajax```
To import this project into Eclipse IDE:
1. ```$ mvn eclipse:eclipse```
2. Import into Eclipse via **existing projects into workspace** option.
3. Done.
| 1 |
FledgeXu/NeutrinoSourceCode | It's Neutrino Project's example Code. | null | null | 1 |
balaji-k13/Navigation-drawer-page-sliding-tab-strip | Example which integration of Navigation Drawer and Page Sliding Tab Strip , like google play music app | null | # Android Navigation Drawer with Page (Pager) Sliding tab Strip
This sample is the result of integrating the latest navigation drawer (v4 lib) with a tab strip similar to the one used in the Google Play Music app. (https://play.google.com/store/apps/details?id=com.google.android.music)
Please check out the APK in the root folder of the project; below is a screenshot.
<a href="http://i.imgur.com/TRzIca6.png" alt="Screenshot">
<img src="http://i.imgur.com/TRzIca6.png">
</a>
## Acknowledgements
This sample uses many great open-source libraries from the Android dev community:
* [ActionBarSherlock (Tag 4.2.0)](https://github.com/JakeWharton/ActionBarSherlock)
* [nested-fragments](https://github.com/marsucsb/nested-fragments)
* [PagerSlidingTabStrip (Tag 1.0.1) ](https://github.com/astuetz/PagerSlidingTabStrip/releases/tag/v1.0.1)
* [NavigationDrawer](http://developer.android.com/training/implementing-navigation/nav-drawer.html)
* Latest Support v4 library
# Steps to compile the project in Eclipse
* Download PagerSlidingTabStrip - Tag 1.0.1
* Import to eclipse
* Add latest android-support-v4.jar
* Download ActionBarSherlock - Tag 4.2.0
* Import to eclipse
* Add/replace android-support-v4.jar if there are any jar issues
* Add the above libraries to main project
* Clean and compile
I hope this helps you in building your next Android app.
| 1 |
seflerZ/javaworker | A simple example of the Java worker pattern | null | null | 1 |
zcox/akka-zeromq-java | Examples of using Akka and 0MQ in Java, separately and together. | null | null | 0 |
ebean-orm-demo/demo-order | Example showing some features of Ebean ORM | null | null | 1 |
SaravananSubramanian/dicom | This repository contains all the code that I have used in my DICOM articles on my blog (Java examples are included) | dicom dicom-standard healthcare imaging-informatics java radiology | null | 0 |
plluke/tof | Example of grabbing ToF data from Samsung S10 5G | null | # Time of Flight Camera Example
<p><img src="docs/demo.gif"/></p>
This is an example app that demonstrates how to capture and process data from a Time of Flight camera, specifically the front-facing "3D Camera" on the Samsung S10 5G.
## How to Use
Clone and run on a Samsung S10 5G device from Android Studio (3.5.1 at the time of publishing).
There is also an associated post [here](https://medium.com/@lukesma/working-with-the-3d-camera-on-the-samsung-s10-5g-4782336783c).
| 1 |
halide/CVPR2015 | Example code used in the CVPR 2015 tutorial | null | To get started, download a binary release of Halide from halide-lang.org and untar/unzip it into this directory. | 1 |
jgribonvald/demo-spring-cas-angular | Example to use CAS auth with jhipster app | null | README for demo
==========================
Set these properties in application-dev.yml:
```
server:
address: LOCAL_ADDRESS (to your local address that your CAS can resolve)
port: LOCAL_PORT (keep 8080)
app:
service:
home: http://LOCAL_ADDRESS:LOCAL_PORT/
security: http://LOCAL_ADDRESS:LOCAL_PORT/j_spring_cas_security_check
cas:
url:
prefix: https://your.cas.domain/cas/
login: https://your.cas.domain/cas/login
logout: https://your.cas.domain/cas/logout
```
and run:
```
mvn spring-boot:run
```
Go to http://LOCAL_ADDRESS:LOCAL_PORT/index.html#/
| 1 |
apollographql/federation-jvm-spring-example | Apollo Federation JVM example implementation using Spring for GraphQL | apollo-federation graphql java spring-graphql | # Federation JVM Spring Example
[Apollo Federation JVM](https://github.com/apollographql/federation-jvm) example implementation using [Spring for GraphQL](https://docs.spring.io/spring-graphql/docs/current/reference/html/).
If you want to discuss the project or just say hi, stop by [the Apollo community forums](https://community.apollographql.com/).
The repository contains two separate projects:
1. `products-subgraph`: A Java GraphQL service providing the federated `Product` type
2. `reviews-subgraph`: A Java GraphQL service that extends the `Product` type with `reviews`
See individual projects READMEs for detailed instructions on how to run them.
Running the demo
----
1. Start `products-subgraph` by running the `ProductsApplication` Spring Boot app from the IDE or by running `./gradlew :products-subgraph:bootRun` from the root project directory
2. Start `reviews-subgraph` by running the `ReviewsApplication` Spring Boot app from the IDE or `./gradlew :reviews-subgraph:bootRun` from the root project directory
3. Start Federated Router
1. Install [rover CLI](https://www.apollographql.com/docs/rover/getting-started)
2. Start router and compose products schema using [rover dev command](https://www.apollographql.com/docs/rover/commands/dev)
```shell
# start up router and compose products schema
rover dev --name products --schema ./products-subgraph/src/main/resources/graphql/schema.graphqls --url http://localhost:8080/graphql
```
3. In **another** shell run `rover dev` to compose reviews schema
```shell
rover dev --name reviews --schema ./reviews-subgraph/src/main/resources/graphql/schema.graphqls --url http://localhost:8081/graphql
```
4. Open http://localhost:3000 for the query editor
Example federated query
```graphql
query ExampleQuery {
products {
id
name
description
reviews {
id
text
starRating
}
}
}
```
## Other Federation JVM examples
* [Netflix DGS Federation Example](https://github.com/Netflix/dgs-federation-example)
* [GraphQL Java Kickstart Federation Example](https://github.com/setchy/graphql-java-kickstart-federation-example)
| 1 |
wangshaolei/NumberKeyboard | For example meituan's mechant NumberKeyboard | null | # NumberKeyboard
## Custom keyboard view in two ways:
1. Override the system's sys.xml of KeyboardView
2. Customize the layout and solve the touch listener issues, and so on...
### For example: the number keyboard in Meituan's Merchant or Nuomi's Merchant app
#### Selected parts of the code:
```java
public class MainActivity extends AppCompatActivity implements View.OnClickListener, NumberKeyboardUtil.OnPopuWindowListener {
private void initView(){
etCode = ButterKnife.findById(inputLayout, R.id.et_code);
keyboardPopupwindow = NumberKeyboardPopupWindow.getInstance(this).onCreate(this);
NumberKeyboardUtil.getInstance().setOnTouchListener(etCode, keyboardPopupwindow, this);
NumberKeyboardUtil.getInstance().disableCopyAndPaste(etCode);
}
@Override
public void showPopuWindow() {
etCode.requestFocus();
keyboardPopupwindow.showAsDropDown(llTop);
}
@Override
public void dismiss() {
etCode.getText().clear();
etCode.clearFocus();
keyboardPopupwindow.dismiss();
}
@Override
public void insertStr(String str) {
int index = etCode.getSelectionStart();
if (index < 0 || index >= etCode.getText().toString().length()) {
etCode.append(str);
} else {
etCode.getEditableText().insert(index, str);
}
}
@Override
public void check() {
Toast.makeText(this, "check", Toast.LENGTH_SHORT).show();
}
```
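The cursor handling in `insertStr` above — append when there is no valid selection index, otherwise insert at the caret — can be sketched as plain logic. Here is a Python rendering of the same branch structure, for illustration only:

```python
def insert_str(text: str, cursor: int, s: str) -> str:
    """Mirror of insertStr: append when the cursor index is invalid, else insert at the caret."""
    if cursor < 0 or cursor >= len(text):
        return text + s                        # etCode.append(str)
    return text[:cursor] + s + text[cursor:]   # getEditableText().insert(index, str)
```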
# Thanks:
[Jakewharton-Butterknife](https://github.com/JakeWharton/butterknife)
 
| 1 |
doyleyoung/vertx-graphql-example | GraphQL Async example using Vert.x | null | # Vert.x GraphQL Example

When it comes to performance and scalability, Vert.x has always been hard to beat and version 3 just made it much easier to develop and deploy.
This simple application is used to demonstrate:
- that Java CompletableFuture, Vert.x Futures and RxJava can be easily combined
- that Vert.x micro-services are easy to develop and deploy through Docker containers
The goal of this application is to exercise graphql-java async (non-blocking) with Vert.x.
In addition it also uses:
- [graphql-apigen](https://github.com/bmsantos/graphql-apigen/tree/async) - to facilitate the graphql schema generation
- [vertx-dataloader](https://github.com/engagingspaces/vertx-dataloader) - to ensure a consistent API data fetching between the different resources
## System Architecture
```text
.---------. .-----------.
POST /graphql --> | GraphQL | | Customer |
| Service | ----> | Service |
'---------' | '-----------'
| .-----------.
| | Vehicle |
|-> | Service |
| '-----------'
| .-----------.
| | Rental |
'-> | Service |
'-----------'
```
## Before you start
```graphql-java-async``` is not out yet. In order to build this project you need to:
1. ```graphql-java``` - Checkout and build Dmitry's [async branch](https://github.com/dminkovsky/graphql-java/tree/async)
1. ```graphql-apigen``` - Checkout and build the [eb_graphql branch](https://github.com/bmsantos/graphql-apigen/tree/eb_graphql) of my fork of [Distelli/graphql-apigen](https://github.com/Distelli/graphql-apigen)
## Build:
After building the async branches of both graphql-java and graphql-apigen do:
```sh
mvn clean package
```
## Execute:
```sh
./docker/run.sh
```
## Test
The graphql-service exposes a POST endpoint. You can use CURL but it is recommended to use [Graphiql App](https://github.com/skevy/graphiql-app).
Sample queries to use on a POST to http://localhost:8080/graphql.
### Querying for a single rental entry:
```graphql
{
rental(id: 1) {
id
customer {
id
name
address
city
state
country
contact {
phone
type
}
}
vehicle {
id
brand
model
type
year
mileage
extras
}
}
}
```
### Querying for all active rentals:
```graphql
{
rentals {
id
customer {
id
name
address
city
state
country
contact {
phone
type
}
}
vehicle {
id
brand
model
type
year
mileage
extras
}
}
}
```
### Example using CURL:
```bash
curl -k -X POST -d '{ "operationName": null, "query": "{ rentals { customer { name } vehicle { brand model } } }", "variables": "{}" }' http://localhost:8080/graphql
```
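The request body the curl call sends can also be assembled programmatically. A minimal Python sketch — the payload shape (string-valued `variables`, null `operationName`) is taken directly from the curl example above; actually POSTing it to http://localhost:8080/graphql is left out:

```python
import json

def graphql_payload(query: str, variables=None) -> str:
    """Build the JSON body shown in the curl example for the /graphql endpoint."""
    return json.dumps({
        "operationName": None,
        "query": query,
        "variables": json.dumps(variables or {}),  # curl sends variables as a JSON string
    })

body = graphql_payload("{ rentals { customer { name } vehicle { brand model } } }")
```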
| 1 |
jasebell/mlbook | Example code for the Wiley book Machine Learning - Hands On for Developers and Technical Professionals"" | null | null | 1 |
bezkoder/spring-boot-security-login | Spring Boot + Spring Security: Login and Registration example with JWT, H2 Database and HttpOnly Cookie | authentication authorization httponly-cookie jwt jwt-auth jwt-authentication jwt-token login registration spring-boot spring-security | # Spring Boot Security Login example with JWT and H2 example
- Appropriate Flow for User Login and Registration with JWT and HttpOnly Cookie
- Spring Boot Rest Api Architecture with Spring Security
- How to configure Spring Security to work with JWT
- How to define Data Models and association for Authentication and Authorization
- Way to use Spring Data JPA to interact with H2 Database
## User Registration, Login and Authorization process.

## Spring Boot Server Architecture with Spring Security
You can have an overview of our Spring Boot Server with the diagram below:

For more detail, please visit:
> [Spring Boot Security Login example with JWT and H2 example](https://www.bezkoder.com/spring-boot-security-login-jwt/)
> [For MySQL/PostgreSQL](https://www.bezkoder.com/spring-boot-login-example-mysql/)
> [For MongoDB](https://www.bezkoder.com/spring-boot-jwt-auth-mongodb/)
Working with Front-end:
> [Angular 12](https://www.bezkoder.com/angular-12-jwt-auth-httponly-cookie/) / [Angular 13](https://www.bezkoder.com/angular-13-jwt-auth-httponly-cookie/) / [Angular 14](https://www.bezkoder.com/angular-14-jwt-auth/) / [Angular 15](https://www.bezkoder.com/angular-15-jwt-auth/) / [Angular 16](https://www.bezkoder.com/angular-16-jwt-auth/) / [Angular 17](https://www.bezkoder.com/angular-17-jwt-auth/)
> [React](https://www.bezkoder.com/react-login-example-jwt-hooks/) / [React Redux](https://www.bezkoder.com/redux-toolkit-auth/)
## Dependency
– If you want to use PostgreSQL:
```xml
<dependency>
<groupId>org.postgresql</groupId>
<artifactId>postgresql</artifactId>
<scope>runtime</scope>
</dependency>
```
– or MySQL:
```xml
<dependency>
<groupId>com.mysql</groupId>
<artifactId>mysql-connector-j</artifactId>
<scope>runtime</scope>
</dependency>
```
## Configure Spring Datasource, JPA, App properties
Open `src/main/resources/application.properties`
- For PostgreSQL:
```
spring.datasource.url=jdbc:postgresql://localhost:5432/testdb
spring.datasource.username=postgres
spring.datasource.password=123
spring.jpa.properties.hibernate.jdbc.lob.non_contextual_creation=true
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.PostgreSQLDialect
# Hibernate ddl auto (create, create-drop, validate, update)
spring.jpa.hibernate.ddl-auto=update
# App Properties
bezkoder.app.jwtSecret= ======================BezKoder=Spring===========================
bezkoder.app.jwtExpirationMs= 86400000
```
- For MySQL
```
spring.datasource.url=jdbc:mysql://localhost:3306/testdb?useSSL=false
spring.datasource.username=root
spring.datasource.password=123456
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQLDialect
spring.jpa.hibernate.ddl-auto=update
# App Properties
bezkoder.app.jwtSecret= ======================BezKoder=Spring===========================
bezkoder.app.jwtExpirationMs= 86400000
```
## Run Spring Boot application
```
mvn spring-boot:run
```
## Run following SQL insert statements
```
INSERT INTO roles(name) VALUES('ROLE_USER');
INSERT INTO roles(name) VALUES('ROLE_MODERATOR');
INSERT INTO roles(name) VALUES('ROLE_ADMIN');
```
## Refresh Token
[Spring Boot JWT Refresh Token example](https://www.bezkoder.com/spring-security-refresh-token/)
## More Practice:
> [Spring Boot File upload example with Multipart File](https://bezkoder.com/spring-boot-file-upload/)
> [Exception handling: @RestControllerAdvice example in Spring Boot](https://bezkoder.com/spring-boot-restcontrolleradvice/)
> [Spring Boot Repository Unit Test with @DataJpaTest](https://bezkoder.com/spring-boot-unit-test-jpa-repo-datajpatest/)
> [Spring Boot Rest Controller Unit Test with @WebMvcTest](https://www.bezkoder.com/spring-boot-webmvctest/)
> [Spring Boot Pagination & Sorting example](https://www.bezkoder.com/spring-boot-pagination-sorting-example/)
> Validation: [Spring Boot Validate Request Body](https://www.bezkoder.com/spring-boot-validate-request-body/)
> Documentation: [Spring Boot and Swagger 3 example](https://www.bezkoder.com/spring-boot-swagger-3/)
> Caching: [Spring Boot Redis Cache example](https://www.bezkoder.com/spring-boot-redis-cache-example/)
Associations:
> [JPA/Hibernate One To Many example in Spring Boot](https://www.bezkoder.com/jpa-one-to-many/)
> [JPA/Hibernate Many To Many example in Spring Boot](https://www.bezkoder.com/jpa-many-to-many/)
> [JPA/Hibernate One To One example in Spring Boot](https://www.bezkoder.com/jpa-one-to-one/)
Deployment:
> [Deploy Spring Boot App on AWS – Elastic Beanstalk](https://www.bezkoder.com/deploy-spring-boot-aws-eb/)
> [Docker Compose Spring Boot and MySQL example](https://www.bezkoder.com/docker-compose-spring-boot-mysql/)
## Fullstack Authentication
> [Spring Boot + Vue.js JWT Authentication](https://bezkoder.com/spring-boot-vue-js-authentication-jwt-spring-security/)
> [Spring Boot + Angular 8 JWT Authentication](https://bezkoder.com/angular-spring-boot-jwt-auth/)
> [Spring Boot + Angular 10 JWT Authentication](https://bezkoder.com/angular-10-spring-boot-jwt-auth/)
> [Spring Boot + Angular 11 JWT Authentication](https://bezkoder.com/angular-11-spring-boot-jwt-auth/)
> [Spring Boot + Angular 12 JWT Authentication](https://www.bezkoder.com/angular-12-spring-boot-jwt-auth/)
> [Spring Boot + Angular 13 JWT Authentication](https://www.bezkoder.com/angular-13-spring-boot-jwt-auth/)
> [Spring Boot + Angular 14 JWT Authentication](https://www.bezkoder.com/angular-14-spring-boot-jwt-auth/)
> [Spring Boot + Angular 15 JWT Authentication](https://www.bezkoder.com/angular-15-spring-boot-jwt-auth/)
> [Spring Boot + Angular 16 JWT Authentication](https://www.bezkoder.com/angular-16-spring-boot-jwt-auth/)
> [Spring Boot + Angular 17 JWT Authentication](https://www.bezkoder.com/angular-17-spring-boot-jwt-auth/)
> [Spring Boot + React JWT Authentication](https://bezkoder.com/spring-boot-react-jwt-auth/)
## Fullstack CRUD App
> [Vue.js + Spring Boot + H2 Embedded database example](https://www.bezkoder.com/spring-boot-vue-js-crud-example/)
> [Vue.js + Spring Boot + MySQL example](https://www.bezkoder.com/spring-boot-vue-js-mysql/)
> [Vue.js + Spring Boot + PostgreSQL example](https://www.bezkoder.com/spring-boot-vue-js-postgresql/)
> [Angular 8 + Spring Boot + Embedded database example](https://www.bezkoder.com/angular-spring-boot-crud/)
> [Angular 8 + Spring Boot + MySQL example](https://www.bezkoder.com/angular-spring-boot-crud/)
> [Angular 8 + Spring Boot + PostgreSQL example](https://www.bezkoder.com/angular-spring-boot-postgresql/)
> [Angular 10 + Spring Boot + MySQL example](https://www.bezkoder.com/angular-10-spring-boot-crud/)
> [Angular 10 + Spring Boot + PostgreSQL example](https://www.bezkoder.com/angular-10-spring-boot-postgresql/)
> [Angular 11 + Spring Boot + MySQL example](https://www.bezkoder.com/angular-11-spring-boot-crud/)
> [Angular 11 + Spring Boot + PostgreSQL example](https://www.bezkoder.com/angular-11-spring-boot-postgresql/)
> [Angular 12 + Spring Boot + Embedded database example](https://www.bezkoder.com/angular-12-spring-boot-crud/)
> [Angular 12 + Spring Boot + MySQL example](https://www.bezkoder.com/angular-12-spring-boot-mysql/)
> [Angular 12 + Spring Boot + PostgreSQL example](https://www.bezkoder.com/angular-12-spring-boot-postgresql/)
> [Angular 13 + Spring Boot + H2 Embedded Database example](https://www.bezkoder.com/spring-boot-angular-13-crud/)
> [Angular 13 + Spring Boot + MySQL example](https://www.bezkoder.com/spring-boot-angular-13-mysql/)
> [Angular 13 + Spring Boot + PostgreSQL example](https://www.bezkoder.com/spring-boot-angular-13-postgresql/)
> [Angular 14 + Spring Boot + H2 Embedded Database example](https://www.bezkoder.com/spring-boot-angular-14-crud/)
> [Angular 14 + Spring Boot + MySQL example](https://www.bezkoder.com/spring-boot-angular-14-mysql/)
> [Angular 14 + Spring Boot + PostgreSQL example](https://www.bezkoder.com/spring-boot-angular-14-postgresql/)
> [Angular 15 + Spring Boot + H2 Embedded Database example](https://www.bezkoder.com/spring-boot-angular-15-crud/)
> [Angular 15 + Spring Boot + MySQL example](https://www.bezkoder.com/spring-boot-angular-15-mysql/)
> [Angular 15 + Spring Boot + PostgreSQL example](https://www.bezkoder.com/spring-boot-angular-15-postgresql/)
> [Angular 16 + Spring Boot + H2 Embedded Database example](https://www.bezkoder.com/spring-boot-angular-16-crud/)
> [Angular 16 + Spring Boot + MySQL example](https://www.bezkoder.com/spring-boot-angular-16-mysql/)
> [Angular 16 + Spring Boot + PostgreSQL example](https://www.bezkoder.com/spring-boot-angular-16-postgresql/)
> [Angular 17 + Spring Boot + H2 Embedded Database example](https://www.bezkoder.com/spring-boot-angular-17-crud/)
> [Angular 17 + Spring Boot + MySQL example](https://www.bezkoder.com/spring-boot-angular-17-mysql/)
> [Angular 17 + Spring Boot + PostgreSQL example](https://www.bezkoder.com/spring-boot-angular-17-postgresql/)
> [React + Spring Boot + MySQL example](https://www.bezkoder.com/react-spring-boot-crud/)
> [React + Spring Boot + PostgreSQL example](https://www.bezkoder.com/spring-boot-react-postgresql/)
> [React + Spring Boot + MongoDB example](https://www.bezkoder.com/react-spring-boot-mongodb/)
Run both Back-end & Front-end in one place:
> [Integrate Angular with Spring Boot Rest API](https://www.bezkoder.com/integrate-angular-spring-boot/)
> [Integrate React.js with Spring Boot Rest API](https://www.bezkoder.com/integrate-reactjs-spring-boot/)
> [Integrate Vue.js with Spring Boot Rest API](https://www.bezkoder.com/integrate-vue-spring-boot/)
| 1 |
YongHuiLuo/Learning-Rxandroid | Learn and use RxAndroid together by PPT and Example | null | # Learning-Rxandroid
---
I created this project out of interest in RxJava, RxAndroid, and RxBinding. RxAndroid makes the code for asynchronous operations more concise, and its observer-pattern design has been very helpful both for my projects and for improving my coding skills. While learning, I put together several examples of commonly used methods, and I will keep updating them. I also created an About RxAndroid.PPT document based on online learning materials and my own understanding. I hope it helps anyone who wants to learn about RxJava and RxAndroid. The commonly used methods are covered by the following examples:
>* RxBaseExampleActivity
>* UsageExampleActivity
>* SchedulerExampleActivity
>* MapExampleActivity
>* FlatMapExampleActivity
>* ThrottleFirstExampleActivity
>* LiftExampleActivity
>* SchedulerMultiExampleActivity
>* ComposeExampleActivity
If you want to see the slides, the project contains a document named About RxJava.pptx that presents my understanding of RxJava and RxAndroid.
Project-related class introduction
---
### RxBaseExampleActivity
Shows two different ways to create ``Observable`` and ``Subscriber`` objects, how ``OnSubscribe`` implements the subscription between the observer and the observable, and how the ``ActionX`` interfaces are used.
### UsageExampleActivity
Uses the ``from`` and ``create`` methods to build an ``Observable``, traverses a collection without a for loop, and reads a resource file and displays its contents.
### SchedulerExampleActivity
Uses ``subscribeOn`` and ``observeOn`` to schedule work between threads; a single line of code is enough to move UI operations onto the main thread.
``` java
.subscribeOn(Schedulers.io())
.observeOn(AndroidSchedulers.mainThread())
```
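The thread hand-off that ``subscribeOn``/``observeOn`` perform can be sketched in plain Java with two executors. This is only an analogy (the executor roles and the ``delivered:`` prefix are invented for illustration); RxAndroid does this wiring for you:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Supplier;

public class SchedulerSketch {

    // Runs `work` on a background executor (standing in for Schedulers.io())
    // and hands the result to a second executor (standing in for the Android
    // main thread). The names are purely illustrative.
    public static String runAndDeliver(Supplier<String> work) {
        ExecutorService io = Executors.newSingleThreadExecutor();
        ExecutorService main = Executors.newSingleThreadExecutor();
        try {
            String result = io.submit(work::get).get();              // background work
            return main.submit(() -> "delivered: " + result).get();  // "UI" update
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            io.shutdown();
            main.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println(runAndDeliver(() -> "loaded data"));
    }
}
```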
### MapExampleActivity
Uses ``map`` to perform a one-to-one transformation, one of RxJava's major features. ``map`` turns one object into another object that is finally delivered to the observer. The example converts a ``String`` file name into a ``Bitmap`` and displays it in an ``ImageView``; a few further examples illustrate the same point.
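The one-to-one idea behind ``map`` can be sketched without any Android dependencies. The snippet below is a plain-Java analogy using ``java.util.stream``; the ``Bitmap(...)`` string is a made-up stand-in for a real ``android.graphics.Bitmap`` so the sketch stays runnable off-device:

```java
import java.util.List;
import java.util.stream.Collectors;

public class MapSketch {

    // Each emitted item is transformed one-to-one before reaching the
    // observer, just like RxJava's map operator.
    public static List<String> fileNamesToBitmaps(List<String> fileNames) {
        return fileNames.stream()
                .map(name -> "Bitmap(" + name + ")")   // the 1-to-1 transformation
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(fileNamesToBitmaps(List.of("a.png", "b.png")));
    }
}
```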
###FlatMapExampleActivity
Methods flatMap, is another way to achieve transformation. Not accurate to say that he achieved one to more transformation, for example, a student can have multiple output course, this method return value is an Observable object about his interpretation are described in ppt document.
(译)方法flatMap,是实现变换的另外一个方法。不准确的说他实现了one to more的变换,比如一个学生可以有多门课程的输出,此方法返回值是一个Observable对象,关于他的解释在ppt文档中有说明。
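The one-to-many behaviour can likewise be sketched in plain Java, with the inner collections flattened into a single stream. The student and course data below are invented for illustration:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class FlatMapSketch {

    // One student maps to several courses; flatMap flattens the inner
    // lists into a single stream of course names.
    static final Map<String, List<String>> COURSES = Map.of(
            "alice", List.of("Math", "Physics"),
            "bob", List.of("History"));

    public static List<String> allCourses(List<String> students) {
        return students.stream()
                .flatMap(s -> COURSES.getOrDefault(s, List.of()).stream())
                .sorted()
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(allCourses(List.of("alice", "bob")));
    }
}
```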
### ThrottleFirstExampleActivity
An RxBinding example that prevents a ``View`` from being clicked repeatedly in quick succession, a small feature I am personally fond of. The code looks like this:
``` java
RxView.clicks(click_me)
.throttleFirst(3000, TimeUnit.MILLISECONDS)
.subscribe(new Action1<Void>() {
@Override
public void call(Void aVoid) {
Toast.makeText(getBaseContext(), "Clicking", Toast.LENGTH_LONG).show();
}
});
```
RxBinding offers many more convenient and practical methods for working with ``View``s, and I will keep updating the examples. If you are interested, you are welcome to help improve the project so that more people can learn about and use RxBinding.
### LiftExampleActivity
Shows how ``lift`` implements transformation under the hood; both ``map`` and ``flatMap`` ultimately call ``lift``. The principle behind ``lift`` is explained in the PPT document.
### SchedulerMultiExampleActivity
A more involved example that combines transformation with thread scheduling, involving several transformations and several thread switches. The thread ID after each transformation is shown in a ``TextView`` to make the flow easy to follow. It uses the ``Schedulers`` class, which can create the following common schedulers:
>>* Schedulers.io()
>>* Schedulers.newThread()
>>* Schedulers.computation()
>>* Schedulers.immediate()
>>* Schedulers.trampoline()
>>* AndroidSchedulers.mainThread()
### ComposeExampleActivity
The ``compose`` method simplifies the code when several ``lift`` calls would otherwise be chained.
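A plain-Java analogy: ``compose`` is essentially function composition, applying one reusable transform in place of several individual operators. The names below are illustrative:

```java
import java.util.function.Function;

public class ComposeSketch {

    // Two small transforms bundled into one reusable pipeline, analogous to
    // packaging several lifts into a single Transformer passed to compose().
    public static Function<Integer, String> pipeline() {
        Function<Integer, Integer> doubled = x -> x * 2;        // first transform
        Function<Integer, String> describe = x -> "value=" + x; // second transform
        return doubled.andThen(describe);
    }

    public static void main(String[] args) {
        System.out.println(pipeline().apply(21));
    }
}
```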
Related open source projects
---
https://github.com/ReactiveX/RxAndroid
https://github.com/JakeWharton/RxBinding
Blog references and learning materials
---
- http://gank.io/post/560e15be2dca930e00da1083 《给 Android 开发者的 RxJava详解》 (RxJava in Detail for Android Developers): the blog post I benefited from the most while learning the Rx family of projects; much of the related PPT content is based on it. Many thanks to its author, 抛物线.
- http://blog.csdn.net/lzyzsd/article/details/41833541 深入浅出RxJava (Part 1: Basics)
- http://blog.csdn.net/lzyzsd/article/details/44094895 深入浅出RxJava (Part 2: Operators)
- http://blog.csdn.net/lzyzsd/article/details/44891933 深入浅出RxJava (Part 3: The Benefits of Being Reactive)
- http://blog.csdn.net/lzyzsd/article/details/45033611 深入浅出RxJava (Part 4: Reactive Programming in Android)
- The source code in the project
- Other related blog posts | 1 |
lgvalle/FragmentSharedFabTransition | Example of how to use shared element transitions within Fragments | null | # FragmentSharedFabTransition

| 1 |
okkam-it/flink-mongodb-test | Flink 0.7 MongoDB example (for Hadoop2) | null | Accessing Data Stored in MongoDB with Apache Flink 0.7+!
===================
Starting from the post at https://flink.incubator.apache.org/news/2014/01/28/querying_mongodb.html here at [Okkam](http://www.okkam.it) we played around with the new **Apache Flink APIs (0.7+)** and managed to put together a simple MapReduce example.
----------
pom.xml
-------------
```xml
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>org.okkam.flink</groupId>
<artifactId>flink-mongodb-test</artifactId>
<version>1.0-SNAPSHOT</version>
<properties>
<flink.version>0.7.0-hadoop2-incubating</flink.version>
<mongodb.hadoop.version>1.3.0</mongodb.hadoop.version>
<hadoop.version>2.4.0</hadoop.version>
</properties>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>3.1</version>
<configuration>
<source>1.7</source>
<target>1.7</target>
</configuration>
</plugin>
</plugins>
</build>
<dependencyManagement>
<dependencies>
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-common</artifactId>
<version>${hadoop.version}</version>
</dependency>
</dependencies>
</dependencyManagement>
<dependencies>
<!-- Force dependency management for hadoop-common -->
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-common</artifactId>
<version>${hadoop.version}</version>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-hadoop-compatibility</artifactId>
<version>${flink.version}</version>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-java</artifactId>
<version>${flink.version}</version>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-clients</artifactId>
<version>${flink.version}</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.mongodb</groupId>
<artifactId>mongo-hadoop-core</artifactId>
<version>${mongodb.hadoop.version}</version>
</dependency>
</dependencies>
</project>
```
> **Note:**
> - Change ``dbname`` and ``collectioname`` accordingly to your database
> - In the map function read fields you need (e.g. ``jsonld``)
> - Change the output coordinates of the job (default ``test.testData``)
----------
Java code
-------------------
This simple example connects to a local MongoDB instance:
```java
package org.okkam.flink.mongodb.test;
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.hadoopcompatibility.mapred.HadoopInputFormat;
import org.apache.flink.hadoopcompatibility.mapred.HadoopOutputFormat;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.bson.BSONObject;
import com.mongodb.BasicDBObject;
import com.mongodb.BasicDBObjectBuilder;
import com.mongodb.DBObject;
import com.mongodb.hadoop.io.BSONWritable;
import com.mongodb.hadoop.mapred.MongoInputFormat;
import com.mongodb.hadoop.mapred.MongoOutputFormat;
import com.mongodb.hadoop.util.MongoConfigUtil;
public class MongodbExample {
public static void main(String[] args) throws Exception {
// set up the execution environment
final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
// create a MongodbInputFormat, using a Hadoop input format wrapper
HadoopInputFormat<BSONWritable, BSONWritable> hdIf =
new HadoopInputFormat<BSONWritable, BSONWritable>(new MongoInputFormat(),
BSONWritable.class, BSONWritable.class, new JobConf());
// specify connection parameters
hdIf.getJobConf().set("mongo.input.uri",
"mongodb://localhost:27017/dbname.collectioname");
DataSet<Tuple2<BSONWritable, BSONWritable>> input = env.createInput(hdIf);
// a little example how to use the data in a mapper.
DataSet<Tuple2< Text, BSONWritable>> fin = input.map(
new MapFunction<Tuple2<BSONWritable, BSONWritable>,
Tuple2<Text,BSONWritable> >() {
private static final long serialVersionUID = 1L;
@Override
public Tuple2<Text,BSONWritable> map(
Tuple2<BSONWritable, BSONWritable> record) throws Exception {
BSONWritable value = record.getField(1);
BSONObject doc = value.getDoc();
BasicDBObject jsonld = (BasicDBObject) doc.get("jsonld");
String id = jsonld.getString("@id");
DBObject builder = BasicDBObjectBuilder.start()
.add("id", id)
.add("type", jsonld.getString("@type"))
.get();
BSONWritable w = new BSONWritable(builder);
return new Tuple2<Text,BSONWritable>(new Text(id), w);
}
});
// emit result (this works only locally)
// fin.print();
MongoConfigUtil.setOutputURI( hdIf.getJobConf(),
"mongodb://localhost:27017/test.testData");
// write the result to MongoDB
fin.output(new HadoopOutputFormat<Text,BSONWritable>(
new MongoOutputFormat<Text,BSONWritable>(), hdIf.getJobConf()));
// execute program
env.execute("Mongodb Example");
}
}
```
----------
Run the project
-------------------
The easiest way to test the program is to clone the git repository, import the project into Eclipse, and run the **MongodbExample** class
Written by Okkam s.r.l. [@okkamit](https://twitter.com/okkamit)
| 1 |
florina-muntenescu/ReactiveBurgers | Code example for "The ABCs of RxJava" | null | # ReactiveBurgers
Code example for "The ABCs of RxJava" talk.
It contains different ways of creating RxJava ``Observable``s:
* from a static list - where every element of the list will be an emission of an ``Observable``
* from a file - where every line of the file will be an emission of an ``Observable``
* from click events, with the help of ``Subject``s
* from multiple ``Observable``s
Following the burger example from "The ABCs of RxJava", the slices of tomatoes are created from a static list,
the buns from file, the meat from click events and the burgers by ``zip``ing the tomato, meat and bun ``Observable``s.
Also, examples of manipulating a stream of data using the ``filter``, ``map`` and ``flatmap`` operators are given.
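The zipping described above can be sketched in plain Java. As with ``Observable.zip``, the n-th tomato, bun, and piece of meat are paired up by index, and zipping stops when the shortest source runs out; the ``Burger(...)`` string format is invented for illustration:

```java
import java.util.ArrayList;
import java.util.List;

public class ZipSketch {

    // Pairs up the i-th element of each source, like Observable.zip does
    // with emissions; stops when the shortest source is exhausted.
    public static List<String> zipBurgers(List<String> tomatoes, List<String> buns, List<String> meats) {
        int n = Math.min(tomatoes.size(), Math.min(buns.size(), meats.size()));
        List<String> burgers = new ArrayList<>();
        for (int i = 0; i < n; i++) {
            burgers.add("Burger(" + tomatoes.get(i) + "," + buns.get(i) + "," + meats.get(i) + ")");
        }
        return burgers;
    }

    public static void main(String[] args) {
        System.out.println(zipBurgers(List.of("t1", "t2"), List.of("b1"), List.of("m1", "m2")));
    }
}
```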
The code follows the Model-View-ViewModel architecture pattern:
* the view is the ``BurgerActivity``.
* the view model is the ``BurgerViewModel`` - it exposes the tomato, bun and burger stream of data to the view and allows the view to notify about new pieces of raw meat available. The view model gets the tomato and bun streams of data from the data model. The view model also creates the burger stream of data.
* the data model is the ``DataModel`` - it creates the stream of tomato slices based on a list of tomatoes and the stream of buns based on the number of lines in a file that contain the word "bun".
To showcase the testability of RxJava, unit tests for the ``BurgerViewModel`` class are provided.
| 1 |
jvirtanen/coinbase-fix-example | Simple example application for Coinbase Pro FIX API | bitcoin coinbase coinbase-pro finance fixprotocol java trading | # Coinbase Pro FIX Example
This is a simple example application that demonstrates how to connect to
[Coinbase Pro][] using the [FIX API][] and [Philadelphia][], an open source
FIX engine for the JVM.
[Coinbase Pro]: https://pro.coinbase.com
[FIX API]: https://docs.pro.coinbase.com/#fix-api
[Philadelphia]: https://github.com/paritytrading/philadelphia
Building and running this application requires Java Development Kit (JDK) 11
or newer and Maven.
## Usage
To build and run the application, follow these steps:
1. Build the application:
```shell
mvn package
```
2. Create a configuration file, `etc/example.conf`:
```shell
cp etc/example.conf.template etc/example.conf
```
3. Fill in the API passphrase, key, and secret in the configuration file,
`etc/example.conf`.
4. Run the application:
```shell
java -jar coinbase-fix-example.jar etc/example.conf
```
The application logs onto Coinbase Pro and immediately logs out.
## License
Copyright 2017 Jussi Virtanen.
Released under the Apache License, Version 2.0. See `LICENSE.txt` for details.
| 1 |
SomMeri/antlr-step-by-step | Example project for ANTLR tutorial blog post. | null | null | 1 |
Rapter1990/SpringBootMicroservices | Spring Boot Microservice Example(Eureka Server, Config Server, API Gateway, Services , RabbitMq, Keycloak) | api-gateway configserver docker docker-compose eureka-server java keycloak microservices rabbitmq service spring-boot spring-cloud spring-security | # Spring Boot Microservice Example(Eureka Server, Config Server, API Gateway, Services , RabbitMq, Keycloak)
<img src="screenshots/springbootmicroservices.drawio_image.png" alt="Main Information" width="800" height="900">
# About the project
<ul style="list-style-type:disc">
<li>User can register and login through Keycloak</li>
<li>Admin can create, update, and delete an advertisement, get an advertisement by its id, and get all advertisements from the management service to the advertisement service through the API Gateway</li>
<li>Admin can approve and reject an advertisement from the advertisement service to the report service by using the management service through the API Gateway</li>
<li>User can get an advertisement by its id and get all advertisements from the management service to the advertisement service through the API Gateway</li>
<li>The view count of an approved advertisement increases when a user views it</li>
</ul>
7 services whose name are shown below have been devised within the scope of this project.
- Config Server
- Eureka Server
- API Gateway
- User Service
- Management Service
- Advertisement Service
- Report Service
### 🔨 Run the App
<b>Docker</b>
<b>1 )</b> Install <b>Docker Desktop</b>. Here is the installation <b>link</b> : https://docs.docker.com/docker-for-windows/install/
<b>2 )</b> Open <b>Terminal</b> under <b>resources</b> folder to run <b>Keycloak</b> and <b>RabbitMq</b> on <b>Docker</b> Container
```
docker-compose up -d
```
<b>3 )</b> Implement Keycloak Settings
```
1 ) Open Keycloak on the Browser through localhost:8181
2 ) Enter username and password (admin : admin)
3 ) Create Client named for spring-boot-microservice-keycloak and define it in Keycloak config of user service
4 ) Change client's access type from public to confidential
5 ) Get secret key to define clientSecret in Keycloak config of user service
6 ) Define roles for Admin and User as ROLE_ADMIN and ROLE_USER
```
<b>4 )</b> Implement Rabbitmq Settings
```
1 ) Open Rabbitmq on the Browser through http://localhost:15672
2 ) Enter username and password (rabbitmq : 123456)
3 ) Open Admin section in the navbar
4 ) Define a new user named guest with username and password (guest : guest , role : administrator) , then grant all permissions (Virtual host : "/" , regexp : ".*")
```
<b>Maven</b>
<b>1 )</b> Start Keycloak and Rabbit through Docker
<b>2 )</b> Implement their settings
<b>3 )</b> Download the project from this link `https://github.com/Rapter1990/SpringBootMicroservices`
<b>4 )</b> Go to the project's home directory : `cd SpringBootMicroservices`
<b>5 )</b> Create a jar file with this command `mvn clean install`
<b>6 )</b> Run the project with this command `mvn spring-boot:run`
### To execute the APIs through the gateway
1) http://localhost:8600/api/v1/users/signup
2) http://localhost:8600/api/v1/users/login
3) http://localhost:8600/api/v1/users/info
4) http://localhost:8600/api/v1/management/admin_role/create/{user_id}
5) http://localhost:8600/api/v1/management/admin_role/alladvertisements
6) http://localhost:8600/api/v1/management/admin_role/alladvertisements/{advertisement_id}
7) http://localhost:8600/api/v1/management/admin_role/update/{advertisement_id}
8) http://localhost:8600/api/v1/management/admin_role/delete/{advertisement_id}
9) http://localhost:8600/api/v1/management/admin_role/advertisement/{advertisement_id}/approve
10) http://localhost:8600/api/v1/management/admin_role/advertisement/{advertisement_id}/reject
11) http://localhost:8600/api/v1/management/user_role/alladvertisements
12) http://localhost:8600/api/v1/management/user_role/advertisement/{advertisement_id}
Explore Rest APIs
<table style="width:100%">
<tr>
<th>Method</th>
<th>Url</th>
<th>Description</th>
<th>Valid Request Body</th>
<th>Valid Request Params</th>
<th>Valid Request Params and Body</th>
<th>No Request or Params</th>
</tr>
<tr>
<td>POST</td>
<td>signup</td>
<td>Sign Up for User and Admin</td>
<td><a href="README.md#signup">Info</a></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>POST</td>
<td>login</td>
<td>Login</td>
<td><a href="README.md#login">Info</a></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>GET</td>
<td>info</td>
<td>Get User's Role Information (ROLE_USER or ROLE_ADMIN)</td>
<td></td>
<td></td>
<td></td>
<td><a href="README.md#info">Info</a></td>
</tr>
<tr>
<td>POST</td>
<td>create/{user_id}</td>
<td>Create Advertisement for User</td>
<td></td>
<td></td>
<td><a href="README.md#create">Info</a></td>
<td></td>
</tr>
<tr>
<td>GET</td>
<td>alladvertisements</td>
<td>Get all advertisements From Admin</td>
<td></td>
<td></td>
<td></td>
<td><a href="README.md#alladvertisementsFromAdmin">Info</a></td>
</tr>
<tr>
<td>GET</td>
<td>alladvertisements/{advertisement_id}</td>
<td>Get advertisement by Id From Admin</td>
<td></td>
<td><a href="README.md#advertisementById">Info</a></td>
<td></td>
<td></td>
</tr>
<tr>
<td>PUT</td>
<td>update/{advertisement_id}</td>
<td>Update advertisement by Id</td>
<td></td>
<td></td>
<td><a href="README.md#update">Info</a></td>
<td></td>
</tr>
<tr>
<td>DELETE</td>
<td>delete/{advertisement_id} </td>
<td>Delete advertisement by Id</td>
<td></td>
<td><a href="README.md#delete">Info</a></td>
<td></td>
<td></td>
</tr>
<tr>
<td>GET</td>
<td>advertisement/{advertisement_id}/approve</td>
<td>Approve advertisement By Id</td>
<td></td>
<td><a href="README.md#approve">Info</a></td>
<td></td>
<td></td>
</tr>
<tr>
<td>GET</td>
<td>advertisement/{advertisement_id}/reject</td>
<td>Reject advertisement By Id</td>
<td></td>
<td><a href="README.md#reject">Info</a></td>
<td></td>
<td></td>
</tr>
<tr>
<td>GET</td>
<td>alladvertisements</td>
<td>Get all advertisements From User</td>
<td></td>
<td></td>
<td></td>
<td><a href="README.md#alladvertisementsFromUser">Info</a></td>
</tr>
<tr>
<td>GET</td>
<td>alladvertisements</td>
<td>alladvertisements/{advertisement_id}</td>
<td></td>
<td></td>
<td></td>
<td><a href="README.md#advertisementByIdFromUser">Info</a></td>
</tr>
</table>
### Used Dependencies
* Core
* Spring
* Spring Boot
* Spring Security
* Spring Web
* RestTemplate
* Spring Data
* Spring Data JPA
* Spring Cloud
* Spring Cloud Gateway Server
* Spring Cloud Config Server
* Spring Cloud Config Client
* Netflix
* Eureka Server
* Eureka Client
* Database
* Mysql
* Message Broker
* RabbitMQ
* Security
* Keycloak Server
* Keycloak OAuth2
* Keycloak REST API
## Valid Request Body
##### <a id="signup">Sign Up for User and Admin
```
http://localhost:8600/api/v1/users/signup
{
"username" : "springbootmicroserviceuser",
"password" : "user123456",
"name" : "Micro User",
"surname" : "User Surname",
"phoneNumber" : "123456789",
"email" : "springbootmicroserviceuser@user.com",
"role" : "ROLE_USER"
}
http://localhost:8600/api/v1/users/signup
{
"username" : "springbootmicroserviceadmin",
"password" : "admin123456",
"name" : "Micro Admin",
"surname" : "Admin Surname",
"phoneNumber" : "123456789",
"email" : "springbootmicroserviceadmin@admin.com",
"role" : "ROLE_ADMIN"
}
```
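For readers who prefer calling the API from Java, the signup request can be built with the JDK's own `java.net.http` API. The sketch below only constructs the request (sending it requires the stack to be running, so the `send` call is left as a comment); the URL and body come from the documentation above:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class SignupRequestSketch {

    // Builds (but does not send) the signup POST request. To actually send it,
    // pass the request to HttpClient.newHttpClient().send(...) once the
    // gateway is running at localhost:8600.
    public static HttpRequest signupRequest(String jsonBody) {
        return HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8600/api/v1/users/signup"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(jsonBody))
                .build();
    }

    public static void main(String[] args) {
        HttpRequest req = signupRequest("{\"username\":\"springbootmicroserviceuser\"}");
        System.out.println(req.method() + " " + req.uri());
        // HttpClient.newHttpClient().send(req, HttpResponse.BodyHandlers.ofString());
    }
}
```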
##### <a id="login">Login
```
http://localhost:8600/api/v1/users/login
Bearer Token : Access Token of User from Keycloak
{
"username" : "springbootmicroserviceuser",
"password" : "user123456"
}
http://localhost:8600/api/v1/users/login
Bearer Token : Access Token of Admin from Keycloak
{
"username" : "springbootmicroserviceadmin",
"password" : "admin123456"
}
```
## Valid Request Params
##### <a id="advertisementById">Get advertisement by Id From Admin
```
http://localhost:8600/api/v1/management/admin_role/alladvertisements/{advertisement_id}
Bearer Token : Access Token of Admin from Keycloak
```
##### <a id="delete">Delete advertisement by Id
```
http://localhost:8600/api/v1/management/admin_role/delete/{advertisement_id}
Bearer Token : Access Token of Admin from Keycloak
```
##### <a id="approve">Approve advertisement By Id
```
http://localhost:8600/api/v1/management/admin_role/advertisement/{advertisement_id}/approve
Bearer Token : Access Token of Admin from Keycloak
```
##### <a id="reject">Reject advertisement By Id
```
http://localhost:8600/api/v1/management/admin_role/advertisement/{advertisement_id}/reject
Bearer Token : Access Token of Admin from Keycloak
```
##### <a id="advertisementByIdFromUser">Get advertisement by Id From User
```
http://localhost:8600/api/v1/management/user_role/alladvertisements/{advertisement_id}
Bearer Token : Access Token of User from Keycloak
```
## Valid Request Params and Body
##### <a id="create">Create Advertisement for User
```
http://localhost:8600/api/v1/management/admin_role/create/{user_id}
Bearer Token : Access Token from Keycloak
{
"title" : "Advertisement 1 for User 1",
"price" : 200
}
```
##### <a id="update">Update Advertisement by Id
```
http://localhost:8600/api/v1/management/admin_role/update/{advertisement_id}
Bearer Token : Access Token from Keycloak
{
"title" : "Advertisement 1 for User 1 Updated",
"price" : 300
}
```
## No Request or Params
##### <a id="info"> Get User's Role Information (ROLE_USER or ROLE_ADMIN)
```
http://localhost:8600/api/v1/users/info
Bearer Token : Access Token of Admin or User from Keycloak
```
##### <a id="alladvertisementsFromAdmin"> Get all advertisements From Admin
```
http://localhost:8600/api/v1/management/admin_role/alladvertisements
Bearer Token : Access Token of Admin from Keycloak
```
##### <a id="alladvertisementsFromUser"> Get all advertisements From User
```
http://localhost:8600/api/v1/management/user_role/alladvertisements
Bearer Token : Access Token of User from Keycloak
```
### Screenshots
<details>
<summary>Click here to show the screenshots of project</summary>
<p> Figure 1 </p>
<img src ="screenshots/keycloak_1.PNG">
<p> Figure 2 </p>
<img src ="screenshots/keycloak_2.PNG">
<p> Figure 3 </p>
<img src ="screenshots/keycloak_3.PNG">
<p> Figure 4 </p>
<img src ="screenshots/keycloak_4.PNG">
<p> Figure 5 </p>
<img src ="screenshots/keycloak_5.PNG">
<p> Figure 6 </p>
<img src ="screenshots/keycloak_6.PNG">
<p> Figure 7 </p>
<img src ="screenshots/keycloak_7.PNG">
<p> Figure 8 </p>
<img src ="screenshots/rabbitmq_1.PNG">
<p> Figure 9 </p>
<img src ="screenshots/rabbitmq_2.PNG">
<p> Figure 10 </p>
<img src ="screenshots/rabbitmq_3.PNG">
<p> Figure 11 </p>
<img src ="screenshots/rabbitmq_4.PNG">
</details> | 0 |
kbastani/order-delivery-microservice-example | This repository contains a functional example of an order delivery service similar to UberEats, DoorDash, and Instacart. | null | # Order Delivery Microservice Example
In an event-driven microservices architecture, the concept of a domain event is central to the behavior of each service. Popular practices such as _CQRS_ (Command Query Responsibility Segregation) in combination with _Event Sourcing_ are becoming more common in applications as microservice architectures continue to rise in popularity.
This reference architecture and sample project demonstrates an event-driven microservice architecture that use Spring Boot and Spring Cloud.
Demonstrated concepts:
- Event Sourcing
- Event Stream Processing
- Change Data Capture (CDC)
- Change Data Analytics
- Hypermedia Event Logs
- Real-time Analytics Dashboards
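As a minimal illustration of the event-sourcing idea, the sketch below derives an order's current status by replaying an ordered event log rather than reading a stored state column. The event names are invented for the example and are not this service's actual schema:

```java
import java.util.List;

public class OrderEventLog {

    // State is never stored directly: the current status of an order is
    // derived by replaying its ordered event log from the beginning.
    public static String replayStatus(List<String> events) {
        String status = "CREATED";
        for (String event : events) {
            if (event.equals("OrderPreparing")) status = "PREPARING";
            else if (event.equals("DriverPickedUp")) status = "IN_TRANSIT";
            else if (event.equals("OrderDelivered")) status = "DELIVERED";
            // unknown event types are ignored during replay
        }
        return status;
    }

    public static void main(String[] args) {
        System.out.println(replayStatus(List.of("OrderPreparing", "DriverPickedUp", "OrderDelivered")));
    }
}
```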
***

## Use cases
This application is a work in progress. The full list of initial requirements are listed below. This application is intended to show a modern microservice architecture that requires real-time analytics and change data capture.
### Order Service
API usage information for the `order-web` service can be found [here](order/README.md).
- Includes an order web service that tracks new order deliveries.
- Includes a load simulator that realistically simulates a fleet of drivers delivering restaurant orders to customers.
- Uses a list of real Starbucks restaurants to simulate order life cycles across all locations in the United States.
- Generates fake delivery locations within 30 miles (ca. 48 km) of each Starbucks.
- Generates realistic delivery scenarios and simulates supply/demand based on pre-seeded variables for restaurant locations.
- Generates semi-realistic geospatial updates that tracks the location of an order as it makes its way to a customer’s delivery location.
- Simulates driver availability based on location and distance from a restaurant location.
### Dashboards
- Real-time geospatial dashboard of current deliveries
- Show current deliveries by restaurant id
- Show current deliveries by restaurant city
## System architecture

## Build and run
JDK 16+ is required to build all the project artifacts for this example. Use the following terminal commands to build and launch a docker compose recipe for this example.
```bash
$ mvn clean verify
```
After you have successfully built the project and Docker containers, you can run the example on a single machine in one of two modes.
The two recipes below for running this example on a single machine have very different system resource requirements. For most developers, it's recommended that you use the **light mode** recipe to get up and running without any performance issues.
Before running either of the modes, make sure that you create the following Docker network using the following terminal command.
```bash
$ docker network create PinotNetwork
```
### Light mode
```bash
$ docker-compose -f docker-compose-light.yml up -d
$ docker-compose -f docker-compose-light.yml logs -f --tail 100 load-simulator
```
The `docker-compose-light.yml` is configured to use fewer containers and compute resources, but does not include a Superset deployment that visualizes the CDC event data for order deliveries. To visualize the event data, you can use http://kepler.gl by exporting CSV datasets from queries executed in the Apache Pinot query console.
#### Usage
The current log output from your terminal should be targeted on the `load-simulator` application. By default, you will see a list of restaurants that are configured to start fulfilling order delivery requests. The load simulator is a high-throughput realistic state machine and conductor for driving the state of a restaurant, drivers, and order deliveries to a customer's location. Documentation on the load simulator and how it works will be made available in the future.
At the point where you begin to see a flurry of log output from the `load-simulator` that tracks the state of orders and their state change events, you'll know that your cluster is fully up and running. Before we can see any of the event data being produced by the `order-delivery-service` we need to configure a Debezium connector to start sending event table updates to an Apache Kafka topic. The following shell script will fully bootstrap your cluster to enable CDC outbox messages from MySQL to Kafka, as well as configure Apache Pinot to start consuming and ingesting those events for running real-time analytical queries.
```bash
$ sh ./bootstrap-light.sh
```
After this script finishes its tasks, you will now be able to use Apache Pinot to query the real-time stream of order delivery events that are generated from MySQL. A new browser window should be opened and navigated to http://localhost:9000.
To start querying data, navigate to the query console and click the `orders` table to execute your first query. If everything worked correctly, you should see at least ten rows from the generated SQL query. Should you run into any issues, please create an issue here to get assistance.
#### Kepler.gl Visualizations
The SQL query below can be used to create a http://kepler.gl geospatial visualization using CSV export directly from the Pinot query console UI.
```sql
SELECT orderId as id, lat as point_latitude_2, lon as point_longitude_2, restaurantLat as point_latitude_1, restaurantLon as point_longitude_1, lastModified as start_time, status, restaurantId, accountId
FROM orders
WHERE ST_DISTANCE(location_st_point, ST_Point(-122.44469, 37.75680, 1)) < 6500
LIMIT 100000
option(skipUpsert=true)
```
Notice that in this SQL query I've disabled upserts using `skipUpsert=true`. This means that I want to see the full log of `order` events for each `orderId`. If I were to remove this option or set it to `false`, then I would only get back the most recent state of the `order` object with the primary key `orderId`. This is a very useful feature, as there are many types of analytical queries where we only want to see the current state of a single aggregate. For the purposes of a good geospatial visualization, we'll want to capture all of the geolocation updates as a driver navigates from a restaurant to a delivery location.
You can play around with this query to generate different result sets. In the `WHERE` clause, I've used a Pinot UDF that only fetches order delivery data that is within a 6.5km radius of the specified GPS coordinate. _The coordinate I've provided is located at the center of San Francisco._
### Heavy mode
Running the example in heavy mode requires at least 16GB of system memory, and it's recommended that your development machine have at least 32GB of memory and at least 12 CPU cores. Please use the light mode recipe above if your system doesn't meet these resource requirements. If you have previously started the **light mode** recipe, please make sure you destroy that cluster before proceeding.
```bash
$ docker-compose -f docker-compose-light.yml down
$ docker volume create --name=db_data
$ docker-compose -f docker-compose.yml up -d
$ docker-compose -f docker-compose.yml logs -f --tail 100 load-simulator
```
#### Usage
After building and launching the docker compose recipe, you'll be able to launch a real-time dashboard of a simulated order delivery scenario using Superset.
```bash
$ open http://localhost:8088
```
Sign in to the Superset web interface using the credentials *admin/admin* and navigate to the order delivery dashboard. To see order delivery data after first launching the simulation, remove the default filter for order status. This will show all orders and their status in real time as they change. You can also set the dashboard's refresh interval to *10s* via the configuration button at the top right of the dashboard page.
## Change Data Capture
This section provides a collection of useful commands for interacting with and exploring the CDC features of this example application, which are implemented with Debezium.
### Useful commands
Getting a shell in MySQL:
```
$ docker run --tty --rm -i \
--network PinotNetwork \
debezium/tooling:1.1 \
bash -c 'mycli mysql://mysqluser@mysql:3306/orderweb --password mysqlpw'
```
Listing all topics in Kafka:
```
$ docker-compose exec kafka /kafka/bin/kafka-topics.sh --zookeeper zookeeper:2181 --list
```
Reading contents of the "order" topic:
```
$ docker run --tty --rm \
--network PinotNetwork \
debezium/tooling:1.1 \
kafkacat -b kafka:9092 -C -o beginning -q \
-t debezium.Order
```
Registering the Debezium MySQL connector (this is configured in the `bootstrap.sh` script):
Create a connector for the `order_events` table.
```
$ curl -i -X PUT -H "Accept:application/json" -H "Content-Type:application/json" \
http://localhost:8083/connectors/order/config -d @debezium-mysql-connector-order-outbox.json
```
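For orientation, a Debezium MySQL connector config of this shape typically looks roughly like the following. This is an illustrative sketch only — the concrete values here are assumptions, and the real settings ship with the repo in `debezium-mysql-connector-order-outbox.json`:

```json
{
  "connector.class": "io.debezium.connector.mysql.MySqlConnector",
  "database.hostname": "mysql",
  "database.port": "3306",
  "database.user": "mysqluser",
  "database.password": "mysqlpw",
  "database.server.id": "1",
  "database.server.name": "debezium",
  "table.whitelist": "orderweb.order_events",
  "database.history.kafka.bootstrap.servers": "kafka:9092",
  "database.history.kafka.topic": "dbhistory.order"
}
```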
Getting status of "order" connector:
```
$ curl -i -X GET -H "Accept:application/json" -H "Content-Type:application/json" \
http://localhost:8083/connectors/order/status
```
Create a connector for the `driver_events` table:
```
$ curl -i -X PUT -H "Accept:application/json" -H "Content-Type:application/json" \
http://localhost:8083/connectors/driver/config -d @debezium-mysql-connector-driver-outbox.json
```
Getting status of "driver" connector:
```
$ curl -i -X GET -H "Accept:application/json" -H "Content-Type:application/json" \
http://localhost:8083/connectors/driver/status
```
It's possible that the MySQL database may have too many active connections for the Debezium connectors to properly start. If this is the case, simply restart the Debezium Connect container.
```
# docker-compose exec mysql bash -c 'mysql -u root -p$MYSQL_ROOT_PASSWORD orderweb -e "SET GLOBAL max_connections = 10000;"'
$ docker-compose -f docker-compose-light.yml restart connect
```
When the container is started and ready, recreate the `order` and `driver` connectors using the `curl` commands above.
## License
This project is an open source product licensed under Apache License v2.
| 1 |
Cadiboo/Example-Mod | An example mod created to try and get new modders to use good code practices | example-code minecraft-forge-mod | # [Example Mod](https://github.com/Cadiboo/Example-Mod)
### An example mod created by [Cadiboo](https://github.com/Cadiboo) to try and get new modders to use good code practices
##### View the tutorials for this at [https://cadiboo.github.io/tutorials/](https://cadiboo.github.io/tutorials/)
This contains the basic setup for a normal mod, and the included files use good code practices.
### All credits for Forge, FML etc go to their respective owners.
Any code written by me is free for any use at all. Some credit would be nice though :)
| 1 |
binblee/dubbo-docker | Example of running Dubbo in Docker, packaged as a springboot application, running on Kubernetes. | null | # Dubbo in Docker Example
Dubbo running in Docker, packaged as a Spring Boot application.
## Services
This demo consists of three services:
- a zookeeper instance
- a service producer
- a service consumer
The service producer exposes a `Greeting` service through RPC, and the service consumer accesses the producer through it.
## Zookeeper
Zookeeper runs directly from a public Docker image; no custom build is required.
## Service Producer
Code in [service-producer](service-producer). API defined in [service-api](service-api).
Build docker image:
```
cd service-producer
mvn package
docker build -t producer .
```
## Service Consumer
Code in [service-consumer](service-consumer).
Build docker image:
```
cd service-consumer
mvn package
docker build -t consumer .
```
## Run
Use docker-compose command to run it.
```
cd docker
docker-compose up -d
```
Verify that everything works:
```
$ curl http://localhost:8899/
Greetings from Dubbo Docker
```
## Run it on Alibaba Cloud
Use [docker/docker-compose-acs.yml](docker/docker-compose-acs.yml) to deploy this application to
Aliyun Container Service (Alibaba Cloud) swarm cluster.
2017.11.30 Update:
Add compose v3 sample yml file: [docker/docker-compose-v3.yml](docker/docker-compose-v3.yml)
### Deploy the application to Kubernetes
2018.8.17 Update:
You can use Helm to install this sample in a Kubernetes cluster.
```
$ cd docker
$ helm install -n dubbo-sample dubbo-sample
```
Check helm status
```
$ helm list
NAME REVISION UPDATED STATUS CHART NAMESPACE
dubbo-sample 1 Fri Aug 17 07:27:00 2018 DEPLOYED dubbo-sample-0.0.1 default
```
Check kubernetes and service status:
```
$ kubectl get po,svc
NAME READY STATUS RESTARTS AGE
pod/consumer-749bf8484d-js6wf 1/1 Running 0 7m
pod/producer-b4f76b6c7-b8jhg 1/1 Running 0 7m
pod/zookeeper-8455f4fdc9-ht9ms 1/1 Running 0 7m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/consumer ClusterIP 172.19.10.188 <none> 8899/TCP 7m
service/kubernetes ClusterIP 172.19.0.1 <none> 443/TCP 45d
service/zookeeper ClusterIP 172.19.10.253 <none> 2181/TCP 7m
```
Expose the consumer with a public IP or add an ingress for it, then visit the consumer in a browser and you will see the greeting.
```
Greetings from Dubbo Docker
```
This sample was tested on Aliyun Container Service for Kubernetes. | 1 |
mechero/completable-future-example | Example project comparing Java's CompletableFuture and Future implementations | completablefuture future java thepracticaldeveloper | # CompletableFuture, Future and Streams
This project uses a sample use case to compare Java's `CompletableFuture` with plain `Future` and other approaches like plain Java and the Stream API.
There is a story behind this code to make it more fun and, at the same time, to give a goal to the sample code so it's easier to compare. Welcome to the Java Bank Robbery with CompletableFutures, Futures and Streams.

## Blog
The comparison between these approaches, and a good introduction to `CompletableFuture`, is available at [The Practical Developer Site](https://thepracticaldeveloper.com/?p=1027). I recommend reading that guide before diving into the codebase.
## Code
The code is split into three main parts:
* The main application class `App`, which runs the different code alternatives and shows the result.
* The `objects` used to represent this story: `Actions`, `Loot` and `Thief`.
* The alternatives used to execute the story:
* `SingleThreadOpenSafeLock` contains two approaches, both single-threaded: plain, imperative Java and Stream API based.
* `FutureOpenSafeLock` implements the plan using Java Futures, also in an imperative-ish way.
* `CompletableFutureOpenSafeLock` uses a few important methods of the `CompletableFuture` API to demonstrate how powerful it is to solve composed, multi-threaded problems.
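The repo's code is Java, but the underlying "run steps concurrently, then combine the results" idea can be sketched quickly outside it (Python's `concurrent.futures` used purely for illustration; the names below are made up, not the repo's classes):

```python
# Rough analogue of the CompletableFuture composition pattern: submit
# several independent tasks, then join their results, much like
# CompletableFuture.allOf(...) followed by join(). All names are made up.
from concurrent.futures import ThreadPoolExecutor

def crack_safe(name):  # stand-in for one thief working on a safe
    return f"{name}: loot"

with ThreadPoolExecutor(max_workers=3) as pool:
    futures = [pool.submit(crack_safe, n) for n in ("Joe", "Mike", "Sam")]
    loot = [f.result() for f in futures]  # join in submission order

print(loot)
```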
Remember that the conclusions of the comparison are also included in the guide, so [check it out now!](https://thepracticaldeveloper.com/?p=1027)
| 1 |
jpatanooga/Caduceus | Set of example algorithm implementations focused on statistics and machine learning | null | null | 1 |
birchsport/titanic | Example code for solving the Titanic prediction problem found on Kaggle. | null | # titanic #
Example code for solving the Titanic prediction problem (https://www.kaggle.com/c/titanic-gettingStarted) found on Kaggle. This example uses the Weka Data Mining Libraries to perform our classifications and predictions. Note, we are using Weka version 3.6.9.
## Data Cleanup/Initialization ##
Before we begin, we have to clean up the data files provided by Kaggle (these cleanup steps have already been performed on the committed files). The first step is to remove the nested '""' (quotation marks) from the files. This was simply a straight search and replace operation in my editor.
The next step is to convert the CSV formatted files into the ARFF format. The ARFF format provides more detailed information about the type of data in the CSV files. To perform this conversion, you can use the CSVLoader from the Weka libraries.
```
java -cp lib/weka.jar weka.core.converters.CSVLoader test.csv > test.arff
java -cp lib/weka.jar weka.core.converters.CSVLoader train.csv > train.arff
```
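Conceptually, the conversion just infers an `@attribute` declaration per column and prepends a header to the data. A toy Python sketch of the idea (illustrative only — Weka's CSVLoader does real type inference and quoting; note the toy treats all-digit columns like `survived` as numeric, whereas the hand-edited ARFF below declares it nominal):

```python
import csv, io

def csv_to_arff(csv_text, relation="titanic"):
    """Toy CSV -> ARFF converter: all-digit columns become numeric
    attributes, everything else becomes a nominal attribute."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    header, data = rows[0], rows[1:]
    lines = [f"@relation {relation}", ""]
    for i, name in enumerate(header):
        values = [r[i] for r in data]
        if all(v.replace(".", "", 1).isdigit() for v in values):
            lines.append(f"@attribute {name} numeric")
        else:
            lines.append("@attribute %s {%s}" % (name, ",".join(sorted(set(values)))))
    lines += ["", "@data"] + [",".join(r) for r in data]
    return "\n".join(lines)

sample = "survived,sex,age\n1,female,38\n0,male,22\n"
print(csv_to_arff(sample))
```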
Once we have created the ARFF files, we need to clean them up a little bit. First, we identify any 'string' column to be of type string, and not nominal. Then we ensure that nominal values are in the same order for both files (VERY IMPORTANT!). Here is what the header section of the ARFF file should look like:
```
@attribute survived {0,1}
@attribute pclass numeric
@attribute name string
@attribute sex {male,female}
@attribute age numeric
@attribute sibsp numeric
@attribute parch numeric
@attribute ticket string
@attribute fare numeric
@attribute cabin string
@attribute embarked {Q,S,C}
```
## Training, Predicting, and Verifying the data ##
Now that we have cleaned up our data, we are ready to run the code. I have included the Eclipse project files to make it easy for anyone to import this project into Eclipse and go. I have also included an Ant build file to compile and run everything as well. If you don't have either of those options, you are on your own.
### Training ###
To train the classifier, execute the 'titanic.weka.Train' class or run 'ant train' in a terminal. This will load the training data, create and train a Classifier, and write the Classifier to disk.
### Predicting ###
To create a prediction, execute the 'titanic.weka.Predict' class or run 'ant predict'. This will load the test data, read the trained Classifier from disk, and produce a 'predict.csv'. This CSV file is in a suitable format to submit to Kaggle.
### Verifying ###
To verify our predictions, execute the 'titanic.weka.Verify' class or run 'ant verify'. This will load our prediction results, read the trained Classifier from disk, then evaluate the classification performance. You will see output similar to this:
```
Correctly Classified Instances 418 100 %
Incorrectly Classified Instances 0 0 %
Kappa statistic 1
Mean absolute error 0.1409
Root mean squared error 0.1986
Relative absolute error 30.3515 %
Root relative squared error 41.2246 %
Total Number of Instances 418
```
| 1 |
stevenalexander/docker-authentication-authorisation | Example microservice authentication and authorisation solution using Docker containers | null | # Docker authentication and authorisation images
This is a sample implementation of the microservice authentication and authorisation pattern I described in previous
blog posts ([here](https://stevenwilliamalexander.wordpress.com/2014/04/24/microservice-authentication-and-authorisation/)
for the pattern, [here](https://stevenwilliamalexander.wordpress.com/2015/03/12/microservice-authentication-and-authentication-scaling/)
for how it could scale). It uses [Nginx](http://nginx.org/) with [Lua](http://wiki.nginx.org/HttpLuaModule) and
[Dropwizard](http://www.dropwizard.io/) for the microservices, provisioned into containers using [Docker](https://www.docker.com/).
Requires:
* [Docker](https://www.docker.com/)
* [Boot2Docker](http://boot2docker.io/)
* [Docker-compose](http://docs.docker.com/compose/)
* JDK (to compile java file locally)
* [Gradle](https://gradle.org/) (for build automation)
I created this project to test using Docker as part of the development process to reduce the separation between
developers and operations. The idea being that developers create and maintain both the code and the containers that
their code will run in, including scripts/tools used to configure and setup those containers. Hopefully this will reduce
the knowledge gap that forms a barrier between developers and operations in projects, causing problems when developers
push code that breaks in production ("throwing over the wall" at operations).
I'm aware that Docker and containers in general are not a cure-all for 'devOps', they are only an abstraction that
tries to make your applications run in an environment as similar to production as possible and make deployment/setup
more consistent. Containers running locally or on a test environment are not the same as the solution running on production. There are
concerns about performance/networking/configuration/security which developers need to understand in order to produce
truly production ready code that de-risks regular releases. Creating a 'devOps' culture to decrease the time necessary
to release and increase quality requires a change in process and thinking, not just technology.
## Running the containers
```
# Build microservices and copy their files into volume directories
gradle buildJar
# Run containers with dev architecture
docker-compose -f dev-docker-compose.yml up
# curl your boot2docker VM IP on port 8080 to get the login page, logs are stored in docker/volume-logs
```
## Details
The solution is composed of microservices, using [nginx](http://nginx.org/) as a reverse proxy and
[Lua](http://wiki.nginx.org/HttpLuaModule) scripts to control authentication/sessions. Uses [Docker](https://www.docker.com/)
and [Docker Compose](https://docs.docker.com/compose/) to build container images which are deployed onto a Docker host
VM.
### Microservices
The solution is split into small web services focused on a specific functional area so they can be developed and
maintained individually. Each one has its own data store and can be deployed or updated without affecting the others.
- [Authentication](https://github.com/stevenalexander/docker-authentication-authorisation/tree/master/microservices/authentication) - used to authenticate users against a set of stored
credentials
- [Authorisation](https://github.com/stevenalexander/docker-authentication-authorisation/tree/master/microservices/authorisation) - used to check authenticated users permissions to perform
actions
- [Frontend](https://github.com/stevenalexander/docker-authentication-authorisation/tree/master/microservices/frontend) - HTML UI wrapper for the login/person functionality
- [Person](https://github.com/stevenalexander/docker-authentication-authorisation/tree/master/microservices/person) - used to retrieve and update person details, intended as a simple example
of an entity focused microservice which links into the Authorisation microservice for permissions
- [Session](https://github.com/stevenalexander/docker-authentication-authorisation/tree/master/microservices/session) - used to create and validate accessTokens for authenticated users
There is an [Api](https://github.com/stevenalexander/docker-authentication-authorisation/tree/master/microservices/api) library containing objects used by multiple services (for real solution
should be broken up into API specific versioned libraries for use in various clients, e.g. personApi, authorisationApi).
### Nginx reverse proxy
Nginx is used as the reverse proxy to access the Frontend microservice and it also wires together the authentication and
session management using Lua scripts. To provision the Nginx container I created a Dockerfile which installs Nginx with
[OpenResty](http://openresty.org/)
- [Dockerfile](https://github.com/stevenalexander/docker-authentication-authorisation/tree/master/docker/image-nginx-lua/Dockerfile) - defines the Nginx container image, with modules for Lua
scripting
- [nginx.conf](https://github.com/stevenalexander/docker-authentication-authorisation/tree/master/docker/volume-nginx-conf.d/nginx.conf) - main config for Nginx, defines the endpoints
available and calls the Lua scripts for access and authentication
- [access.lua](https://github.com/stevenalexander/docker-authentication-authorisation/tree/master/docker/volume-nginx-conf.d/access.lua) - run anytime a request is received, defines a list of
endpoints which do not require authentication and for other endpoints it checks for accessToken cookie in the request
header then validates it against the Session microservice
- [authenticate.lua](https://github.com/stevenalexander/docker-authentication-authorisation/tree/master/docker/volume-nginx-conf.d/authenticate.lua) - run when a user posts to /login, calls
the Authentication microservice to check the credentials, then calls the Session microservice to create an accessToken
for the new authenticated session and finally returns a 302 response with the accessToken in a cookie for future
authenticated requests.
- [logout.lua](https://github.com/stevenalexander/docker-authentication-authorisation/tree/master/docker/volume-nginx-conf.d/logout.lua) - run when a user calls /logout, calls the Session
microservice to delete the user's accessToken
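The gatekeeping logic in `access.lua` boils down to a small decision function, sketched here in Python for readability (the path whitelist and names are illustrative — the real script validates the cookie with an HTTP call to the Session microservice):

```python
# Sketch of the per-request decision access.lua makes: whitelisted paths
# pass through; everything else needs an accessToken cookie that the
# Session service recognises. All names below are illustrative.
PUBLIC_PATHS = {"/", "/login", "/logout"}

def allow_request(path, cookies, validate_token):
    """validate_token stands in for the call to the Session service."""
    if path in PUBLIC_PATHS:
        return True  # whitelisted endpoints need no authentication
    token = cookies.get("accessToken")
    return token is not None and validate_token(token)

sessions = {"abc123"}  # toy stand-in for the Session service's token store
check = lambda t: t in sessions

print(allow_request("/person", {"accessToken": "abc123"}, check))  # True
print(allow_request("/person", {}, check))                         # False
```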
#### Authentication and authorisation sequence diagram

### Docker containers and volumes
The interesting thing about using Docker with microservices is that you can define a container image per microservice
then host those containers in various arrangements of Docker host machines to make your architecture. The containers can
be created/destroyed easily, give guarantees of isolation from other containers and only expose what you define
(ports/folders etc.). This makes them easily portable between hosts compared to something like a puppet module that
needs more care and configuration to ensure it can operate on the puppet host.
To develop and test the solution locally I used a development architecture defined in a
[Docker Compose](http://docs.docker.com/compose/) yaml file ([here](https://github.com/stevenalexander/docker-authentication-authorisation/tree/master/dev-docker-compose.yml)). This created a
number of containers with volumes and exposed ports then links them together appropriately.
Below shows architectures which can be built using the containers.
#### Development architecture

This is a small scale architecture intended for local development. Developers can spin this up quickly and work on the
full application stack. It uses a single Docker host (the boot2docker VM) with single containers for each microservice.
This means that if any of the containers or services fail there is no redundancy.
#### Small scaled architecture

This is a larger scale architecture, using HAProxy to load balance and introduce redundancy. This architecture allows
scaling the business microservices to handle increasing/decreasing load.
#### Large scaling architecture

This is an example production architecture, running on multiple Docker hosts with redundancy for all microservices and
load balancing for the web servers. The number of hosts you have per container can be increased/decreased dynamically
based on the individual load on each service and each container can be updated without downtime by rolling updates.
On a real production architecture you would want to include:
- Healthchecks
- Monitoring (e.g. Dropwizard Metrics pushing to Graphite)
- Dynamic scaling based on load monitoring
- Periodic backups of persisted data
- Security testing
## Conclusions
I found working with Docker extremely easy, the tooling and available images made it simple to create containers to do
what I needed. For development, the speed at which I could create and start containers for the microservices was amazing: 5 seconds
to spin up the entire 6-container solution with Docker Compose. Compared to development using individual VMs provisioned
by Puppet and Vagrant this was lightning fast. Accessing the data/logs on the containers was also simple, making debugging a
lot easier, and remote debugging by opening ports was also possible.
I still have some concerns about how production-ready my containers would be and what I would need to do to make them
secure. I did not touch on a lot of the work which would be necessary to create and provision the Docker hosts
themselves, including configuration of the microservices and Nginx containers per host. For a reasonable sized
architecture this would require a tool like Puppet anyway so would not save much effort on the operations side.
I would like a chance to use some sort of containerisation in a real project and see how it works out on the development
side, in operations for deployment to environments, and in actual production use. For now I'd definitely recommend that
developers try it out for defining and running their local development environments as an alternative to complex
Boxen/Vagrant setups.
## Additions
### Google Cloud with Kubernetes
- [Publishing a custom docker image to Google private repository and running in a cluster as a single pod](https://github.com/stevenalexander/docker-authentication-authorisation/blob/master/kubernetes-nginx-lua.md)
- [Running the solution as a single pod](https://github.com/stevenalexander/docker-authentication-authorisation/blob/master/kubernetes-single-pod.md)
- [Persisting data](https://github.com/stevenalexander/docker-authentication-authorisation/blob/master/kubernetes-persistent-disks.md)
| 1 |
khmarbaise/jdk9-jlink-jmod-example | Example for using maven-jmod-plugin / maven-jlink-plugin | java jdk9 jigsaw maven maven-plugin | Maven JDK9 Jigsaw Example
=========================
Status
------
* Currently not more than a Proof of Concept
* Everything here is speculative!
Overview
--------
* Example of how jmod/jlink could work together in a Maven build
(using [`maven-jlink-plugin`](https://github.com/apache/maven-jlink-plugin/)
and [`maven-jmod-plugin`](https://github.com/apache/maven-jmod-plugin/)
Maven [plugins](http://maven.apache.org/plugins/)).
| 1 |
JoelPM/BidiThrift | An example of how to use Thrift for bi-directional async RPC | null | null | 1 |
redis-developer/redis-ai-resources | ✨ A curated list of awesome community resources, integrations, and examples of Redis in the AI ecosystem. | ai awesome-list ecosystem feature-store machine-learning redis vector-database vector-search | <img align="right" src="assets/redis-logo.svg" style="width: 130px">
# Redis: AI Resources
✨ A curated list of awesome community resources including content, integrations, documentation and examples for Redis in the AI ecosystem.
## Table of Contents
- Redis as a [Vector Database](#vector-database)
- Redis as a [Feature Store](#feature-store)
----------
## Vector Database
The following list provides resources, integrations, and examples for **Redis as a Vector Database**.
### Integrations/Tools
- [⭐ RedisVL](https://github.com/RedisVentures/redisvl) - a dedicated Python client lib for Redis as a Vector DB.
- [⭐ LangChain Python](https://github.com/langchain-ai/langchain) - popular Python client lib for building LLM applications powered by Redis.
- [⭐ LangChain JS](https://github.com/langchain-ai/langchainjs) - popular JS client lib for building LLM applications powered by Redis.
- [⭐ LlamaIndex](https://gpt-index.readthedocs.io/en/latest/examples/vector_stores/RedisIndexDemo.html) - LlamaIndex Integration for Redis as a vector Database (formerly GPT-index).
- [Semantic Kernel](https://github.com/microsoft/semantic-kernel/tree/main) - popular lib by MSFT to integrate LLMs with plugins.
- [Metal](https://getmetal.io/) - an all-inclusive LLM development platform for building RAG applications. Built on top of Redis as a vector database and high performance data layer.
- [RelevanceAI](https://relevance.ai/) - Platform to tag, search and analyze unstructured data faster, built on Redis.
- [DocArray](https://docarray.jina.ai/advanced/document-store/redis/) - DocArray Integration of Redis as a VectorDB by Jina AI.
- [ChatGPT Memory](https://github.com/continuum-llms/chatgpt-memory) - contextual and adaptive memory for ChatGPT
- [Haystack Example](https://github.com/artefactory/redis-player-one/blob/main/askyves/redis_document_store.py) - Haystack Integration (example) of Redis as a VectorDB.
- [Mantium AI](https://mantiumai.com/)
### Examples
#### Quickstarts
| Resource | Description |
| --- | --- |
| [⭐ Hands-On Redis Workshops](https://github.com/Redislabs-Solution-Architects/Redis-Workshops) | Hands-on workshops for Redis JSON, Search, and VSS / Gen AI. |
| [⭐ Redis VSS Getting Started - 3 Ways](https://github.com/Redislabs-Solution-Architects/financial-vss) | Getting started VSS demo covering RedisVL, Redis Python, and LangChain |
| [⭐ OpenAI Cookbook Examples](https://github.com/openai/openai-cookbook/tree/main/examples/vector_databases) | OpenAI Cookbook examples using Redis as a vector database |
| [Redis VSS - Simple Streamlit Demo](https://github.com/antonum/Redis-VSS-Streamlit) | Streamlit demo of Redis Vector Search |
| [Redis VSS - LabLab AI Quickstart](https://github.com/lablab-ai/Vector-Similarity-Search-with-Redis-Quickstart-Notebook) | Quickstart notebook sponsored by LabLab AI for their AI hackathons. |
| [Redis VSS Documentation Quickstart](https://github.com/RedisVentures/redis-vss-getting-started) | Redis.io VSS Quickstart code. |
#### Question & Answer
| Resource | Description | Stars |
| --- | --- | --- |
| [⭐ ArxivChatGuru](https://github.com/RedisVentures/ArxivChatGuru) | Streamlit demo of QnA over Arxiv documents with Redis & OpenAI | ![redis-openai-qna-streamlit-demo-stars] |
| [⭐ Azure OpenAI Embeddings Q&A](https://github.com/ruoccofabrizio/azure-open-ai-embeddings-qna) | OpenAI and Redis as a Q&A service on Azure | ![azure-openai-embeddings-qna-stars] |
| [LLM Document Chat](https://github.com/RedisVentures/LLM-Document-Chat) | Using LlamaIndex and Redis to chat with Documents | ![llm-document-chat-stars] |
| [GCP Vertex AI "Chat Your PDF"](https://github.com/RedisVentures/gcp-redis-llm-stack/tree/main/examples/chat-your-pdf) | Chat with a PDF using Redis & VertexAI LLMs | |
| [LLMChat](https://github.com/c0sogi/LLMChat) | Full-stack implementation using FastAPI, Redis, OpenAI and Flutter. | ![llmchat-stars] |
| [Example eCommerce Chatbot](https://github.com/RedisVentures/redis-langchain-chatbot) | eCommerce Chatbot with Redis, LangChain, and OpenAI | ![redis-langchain-chatbot-stars] |
| [Food-GPT](https://github.com/DevSamurai/food-gpt) | Food-GPT is a QnA Chat System | ![food-gpt-stars] |
| [Redis vector bot](https://github.com/aetherwu/redis-vector-bot) | Redis vector bot for Ecommerce QnA | ![redis-vector-bot-stars] |
| [Local Model QnA Example](https://github.com/cxfcxf/embeddings) | Local LLMs embeddings with Redis as vector db | ![local-model-qna-example-stars] |
#### NLP & Information Retrieval
| Resource | Description | Stars |
| --- | --- | --- |
| [⭐ ChatGPT Retrieval Plugin](https://github.com/openai/chatgpt-retrieval-plugin) | ChatGPT plugin for retrieving personal documents | ![chatgpt-retrieval-plugin-stars] |
| [⭐ Auto-GPT](https://github.com/Torantulino/Auto-GPT) | Experimental OSS app showcasing GPT-4 with Redis as a vectorized memory store | ![auto-gpt-stars] |
| [⭐ arXiv Paper Search](https://github.com/RedisVentures/redis-arXiv-search) | Semantic search over arXiv scholarly papers | ![redis-arxiv-search-stars] |
| [⭐ Motörhead](https://github.com/getmetal/motorhead) | Rust-based IR server for LLMs backed by Redis | ![motorhead-stars] |
| [Financial News Demo](https://github.com/RedisAI/financial-news) | Sentiment analysis and Semantic similarity in Financial News articles | ![financial-news-demo-stars] |
| [Romeo GPT](https://github.com/fmanrique8/romeo-gpt) | AI Document management assistant | ![romeo-gpt-stars] |
| [The Pattern](https://github.com/applied-knowledge-systems/the-pattern) | CORD19 medical NLP pipeline with Redis | ![the-pattern-stars] |
| [GPT Vectors Example](https://github.com/gbaeke/gpt-vectors) | Code associated with the blog post below: "Storing and querying embeddings with Redis" | ![gpt-vectors-stars] |
| [Azure OpenAI Redis Deployment Template](https://github.com/RedisVentures/azure-openai-redis-deployment) | Terraform template automates the end-to-end deployment of Azure OpenAI applications using Redis Enterprise as a vector database | ![azure-openai-redis-deployment-stars] |
| [VSS for Finance](https://github.com/redislabs-training/redisfi-vss) | Searching through SEC filings with Redis VSS | ![redisfi-vss-stars] |
#### Recommendation Systems
| Resource | Description | Stars |
| --- | --- | --- |
| [⭐ Redis Merlin RecSys](https://github.com/RedisVentures/Redis-Recsys) | 3 Redis & NVIDIA Merlin Recommendation System Architectures | ![redis-recsys-stars] |
| [⭐ Visual Product Search](https://github.com/RedisVentures/redis-product-search) | eCommerce product search (with image and text) | ![redis-product-search-stars] |
| [Product Recommendations with DocArray / Jina](https://github.com/jina-ai/product-recommendation-redis-docarray) | Content-based product recommendations with Redis and DocArray | ![jina-product-recommendations-stars] |
| [Amazon Berkeley Product Dataset Demo](https://github.com/RedisAI/vecsim-demo) | Redis VSS demo on Amazon Berkeley product dataset | ![redis-vecsim-demo-stars] |
#### Other
| Resource | Description | Stars |
| --- | --- | --- |
| [VectorVerse](https://github.com/abhishek-ch/VectorVerse) | Vector Database comparison app | ![vectorverse-stars] |
| [Simple Vector Similarity Intro](https://github.com/RedisVentures/simple-vecsim-intro) | Dockerized Jupyter Notebook & Streamlit demo of Redis Vector Search | ![redis-vecsim-intro-stars] |
| [Redis Solution Architects VSS Examples](https://github.com/Redislabs-Solution-Architects/vss-ops) | Examples of VSS in Python | ![vss-ops-stars] |
| [TopVecSim](https://github.com/team-castle/topvecsim/) | Topic Similarity with Redis VSS | ![top-vecsim-stars] |
| [Java Demo](https://github.com/RedisAI/Java-VSS-demo) | Redis VSS demo in Java | ![java-demo-stars] |
| [Redis VSS Go template](https://github.com/dathan/go-vector-embedding) | Redis VSS template in Go | ![redis-vss-go-template-stars] |
| [Redis VSS Demo](https://github.com/bsbodden/roms-vss-celebs) | Redis VSS demo with celebrity faces | ![celeb-faces-stars] |
#### [Redis Vector Search Engineering Lab Submissions](https://github.com/RedisVentures/RedisVentures.github.io/issues/1) - Submissions to the first Redis VSS hackathon.
| Resource | Description | Stars |
| --- | --- | --- |
| [arXiv CoPilot](https://github.com/artefactory/redisventures-hackunamadata) | Chrome extension that finds relevant/similar academic papers while performing research | ![arxiv-copilot-stars] |
| [AskYeves Question & Answer App](https://github.com/artefactory/redis-player-one) | QA & Search Engine modeled after the infamous Yves Saint Laurent | ![askyeves-stars] |
| [Darwinian Paper Explorer App](https://github.com/artefactory/AreYouRedis) | Explore arXiv scholarly papers over time with topic evolution and search | ![darwinian-paper-explorer-stars] |
| [PapersWithCode Browser Extension](https://github.com/ilhamfp/simpa) | Chrome extension for the PapersWithCode site that finds relevant/similar papers | ![paperswithcode-stars] |
| [Document Search + CLI](https://github.com/artefactory/redis-team-THM) | Search engine for documents with a CLI | ![document-search-cli-stars] |
### RediSearch Clients
| Client | Language | License | Stars |
| --- | --- | --- | --- |
| [Redis-Py](https://github.com/redis/redis-py) | Python | MIT | ![redis-py-stars] |
| [RedisVL](https://github.com/RedisVentures/redisvl) | Python (*Alpha*) | MIT | ![redisvl-stars] |
| [jedis][jedis-url] | Java | MIT | ![Stars][jedis-stars] |
| [node-redis][node-redis-url] | Node.js | MIT | ![Stars][node-redis-stars] |
| [nredisstack][nredisstack-url] | .NET | MIT | ![Stars][nredisstack-stars] |
| [redisearch-go][redisearch-go-url] | Go | BSD | [![redisearch-go-stars]][redisearch-go-url] |
| [redisearch-api-rs][redisearch-api-rs-url] | Rust | BSD | [![redisearch-api-rs-stars]][redisearch-api-rs-url] |
For a full list of RediSearch clients, see [RediSearch Clients](https://redis.io/docs/stack/search/clients/).
For a full list of Redis Clients see [Redis Clients](https://redis.io/resources/clients/).
### Content
- [⭐ NVIDIA Developer Blog -- Offline to Online: Feature Storage for Real Time Recommendation Systems with NVIDIA Merlin](https://developer.nvidia.com/blog/offline-to-online-feature-storage-for-real-time-recommendation-systems-with-nvidia-merlin/)
- [Vector Similarity Search: From Basics to Production](https://mlops.community/vector-similarity-search-from-basics-to-production/) - Introductory blog post to VSS and Redis as a VectorDB.
- [AI-Powered Document Search](https://datasciencedojo.com/blog/ai-powered-document-search/) - Blog post covering AI Powered Document Search Use Cases & Architectures.
- [Vector Search on Azure](https://techcommunity.microsoft.com/t5/azure-developer-community-blog/vector-similarity-search-with-azure-cache-for-redis-enterprise/ba-p/3822059) - Using Azure Redis Enterprise for Vector Search
- [Vector Databases and Large Language Models](https://youtu.be/GJDN8u3Y-T4) - Talk given at LLMs in Production Part 1 by Sam Partee.
- [Vector Databases and AI-powered Search Talk](https://www.youtube.com/watch?v=g2bNHLeKlAg) - Video "Vector Databases and AI-powered Search" given by Sam Partee at SDSC 2023.
- [Engineering Lab Review](https://mlops.community/redis-vector-search-engineering-lab-review/) - Review of the first Redis VSS Hackathon.
- [Real-Time Product Recommendations](https://jina.ai/news/real-time-product-recommendation-using-redis-and-docarray/) - Content-based recsys design with Redis and DocArray.
- [Redis as a Vector Database](https://vishnudeva.medium.com/redis-as-a-vector-database-rediscloud-2a444c478f3d) - Hackathon review blog post covering Redis as a VectorDB.
- [LabLab AI Redis Tech Page](https://lablab.ai/tech/redis)
- [Storing and querying for embeddings with Redis](https://blog.baeke.info/2023/03/21/storing-and-querying-for-embeddings-with-redis/)
- [Building Intelligent Apps with Redis Vector Similarity Search](https://redis.com/blog/build-intelligent-apps-redis-vector-similarity-search/)
- [Rediscovering Redis for Vector Similarity](https://redis.com/blog/rediscover-redis-for-vector-similarity-search/)
- [VSS Cheat Sheet](https://drive.google.com/file/d/10O52YXE1-x9jUTv2G-iJUHFSbthWAcyy/view?usp=share_link) - Redis Vector Search Cheat Sheet by Datascience Dojo.
- [RedisDays Keynote](https://www.youtube.com/watch?v=EEIBTEpb2LI) - Video "Infuse Real-Time AI Into Your "Financial Services" Application".
- [RedisDays Trading Signals](https://www.youtube.com/watch?v=_Lrbesg4DhY) - Video "Using AI to Reveal Trading Signals Buried in Corporate Filings".
- [LLM Stack Hackathon writeup](https://medium.com/@sonam.gupta1105/equipping-with-llm-stack-mlops-community-hackathon-fd0505762c85) - Building a QnA Slack bot for the MLOps Community Hackathon with OpenAI and Redis
### Benchmarks
- [Vector Database Benchmarks](https://jina.ai/news/benchmark-vector-search-databases-with-one-million-data/) - Jina AI VectorDB benchmarks comparing Redis against others.
- [ANN Benchmarks](https://ann-benchmarks.com) - Standard ANN Benchmarks site. *Only using single Redis OSS instance/client.*
### Documentation
- [Redis Vector Database QuickStart](https://redis.io/docs/get-started/vector-database/)
- [Redis Vector Similarity Docs](https://redis.io/docs/interact/search-and-query/advanced-concepts/vectors/) - Official Redis literature for Vector Similarity Search.
- [Redis-py Search Docs](https://redis.readthedocs.io/en/latest/redismodules.html#redisearch-commands) - Redis-py client library docs for RediSearch.
- [Redis-py General Docs](https://redis.readthedocs.io/en/latest/) - Redis-py client library documentation.
- [Redis Stack](https://redis.io/docs/stack/) - Redis Stack documentation.
- [Redis Clients](https://redis.io/docs/clients/) - Redis client list.
[openai-cookbook-stars]: https://img.shields.io/github/stars/openai/openai-cookbook?style=social
[redis-openai-qna-streamlit-demo-stars]: https://img.shields.io/github/stars/RedisVentures/redis-openai-qna?style=social
[redis-py-stars]: https://img.shields.io/github/stars/redis/redis-py?style=social
[redisvl-stars]: https://img.shields.io/github/stars/RedisVentures/redisvl?style=social
[redis-py-url]: https://github.com/redis/redis-py
[jedis-url]: https://github.com/redis/jedis
[jedis-stars]: https://img.shields.io/github/stars/redis/jedis.svg?style=social&label=Star&maxAge=2592000
[nredisstack-url]: https://github.com/redis/nredisstack
[nredisstack-stars]: https://img.shields.io/github/stars/redis/nredisstack.svg?style=social&label=Star&maxAge=2592000
[node-redis-url]: https://github.com/redis/node-redis
[node-redis-stars]: https://img.shields.io/github/stars/redis/node-redis.svg?style=social&label=Star&maxAge=2592000
[redisearch-go-url]: https://github.com/RediSearch/redisearch-go
[redisearch-go-stars]: https://img.shields.io/github/stars/RediSearch/redisearch-go.svg?style=social&label=Star&maxAge=2592000
[redisearch-api-rs-url]: https://github.com/RediSearch/redisearch-api-rs
[redisearch-api-rs-stars]: https://img.shields.io/github/stars/RediSearch/redisearch-api-rs.svg?style=social&label=Star&maxAge=2592000
[java-demo-stars]: https://img.shields.io/github/stars/RedisAI/Java-VSS-demo.svg?style=social&label=Star&maxAge=2592000
[top-vecsim-stars]: https://img.shields.io/github/stars/team-castle/topvecsim.svg?style=social&label=Star&maxAge=2592000
[document-search-cli-stars]: https://img.shields.io/github/stars/artefactory/redis-team-THM.svg?style=social&label=Star&maxAge=2592000
[paperswithcode-stars]: https://img.shields.io/github/stars/ilhamfp/simpa.svg?style=social&label=Star&maxAge=2592000
[darwinian-paper-explorer-stars]: https://img.shields.io/github/stars/artefactory/AreYouRedis.svg?style=social&label=Star&maxAge=2592000
[askyeves-stars]: https://img.shields.io/github/stars/artefactory/redis-player-one.svg?style=social&label=Star&maxAge=2592000
[arxiv-copilot-stars]: https://img.shields.io/github/stars/artefactory/redisventures-hackunamadata.svg?style=social&label=Star&maxAge=2592000
[the-pattern-stars]: https://img.shields.io/github/stars/applied-knowledge-systems/the-pattern.svg?style=social&label=Star&maxAge=2592000
[financial-news-demo-stars]: https://img.shields.io/github/stars/RedisAI/financial-news.svg?style=social&label=Star&maxAge=2592000
[redis-vecsim-intro-stars]: https://img.shields.io/github/stars/RedisVentures/simple-vecsim-intro.svg?style=social&label=Star&maxAge=2592000
[redis-vss-streamlit-demo-stars]: https://img.shields.io/github/stars/antonum/Redis-VSS-Streamlit.svg?style=social&label=Star&maxAge=2592000
[redis-arxiv-search-stars]: https://img.shields.io/github/stars/RedisVentures/redis-arXiv-search.svg?style=social&label=Star&maxAge=2592000
[azure-openai-embeddings-qna-stars]: https://img.shields.io/github/stars/ruoccofabrizio/azure-open-ai-embeddings-qna.svg?style=social&label=Star&maxAge=2592000
[redis-recsys-stars]: https://img.shields.io/github/stars/redisventures/redis-recsys.svg?style=social&label=Star&maxAge=2592000
[redis-product-search-stars]: https://img.shields.io/github/stars/redisventures/redis-product-search.svg?style=social&label=Star&maxAge=2592000
[jina-product-recommendations-stars]: https://img.shields.io/github/stars/jina-ai/product-recommendation-redis-docarray.svg?style=social&label=Star&maxAge=2592000
[redis-vecsim-demo-stars]: https://img.shields.io/github/stars/redisai/vecsim-demo.svg?style=social&label=Star&maxAge=2592000
[chatgpt-retrieval-plugin-stars]: https://img.shields.io/github/stars/openai/chatgpt-retrieval-plugin?style=social
[motorhead-stars]: https://img.shields.io/github/stars/getmetal/motorhead?style=social
[redis-langchain-chatbot-stars]: https://img.shields.io/github/stars/RedisVentures/redis-langchain-chatbot?style=social
[gpt-vectors-stars]: https://img.shields.io/github/stars/gbaeke/gpt-vectors?style=social
[vss-ops-stars]: https://img.shields.io/github/stars/Redislabs-Solution-Architects/vss-ops?style=social
[lablab-vss-quickstart]: https://img.shields.io/github/stars/lablab-ai/Vector-Similarity-Search-with-Redis-Quickstart-Notebook?style=social
[auto-gpt-stars]: https://img.shields.io/github/stars/Torantulino/Auto-GPT?style=social
[romeo-gpt-stars]: https://img.shields.io/github/stars/fmanrique8/romeo-gpt?style=social
[celeb-faces-stars]: https://img.shields.io/github/stars/bsbodden/roms-vss-celebs?style=social
[redis-vector-bot-stars]: https://img.shields.io/github/stars/aetherwu/redis-vector-bot?style=social
[redis-vss-go-template-stars]: https://img.shields.io/github/stars/dathan/go-vector-embedding?style=social
[redisfi-vss-stars]: https://img.shields.io/github/stars/redislabs-training/redisfi-vss?style=social
[llm-document-chat-stars]: https://img.shields.io/github/stars/RedisVentures/llm-document-chat?style=social
[food-gpt-stars]: https://img.shields.io/github/stars/DevSamurai/food-gpt?style=social
[llmchat-stars]: https://img.shields.io/github/stars/c0sogi/llmchat?style=social
[vectorverse-stars]: https://img.shields.io/github/stars/abhishek-ch/vectorverse?style=social
[local-model-qna-example-stars]: https://img.shields.io/github/stars/cxfcxf/embeddings?style=social
[azure-openai-redis-deployment-stars]: https://img.shields.io/github/stars/RedisVentures/azure-openai-redis-deployment?style=social
____
## Feature Store
The following list provides resources, integrations, and examples for **Redis as a Feature Store**.
### Examples
#### Recommendation Systems
| Resource | Description | Stars |
| --- | --- | --- |
| [⭐ Redis Merlin RecSys](https://github.com/RedisVentures/Redis-Recsys) | Redis & NVIDIA Merlin Recommendation System architectures | ![redis-recsys-stars] |
| [Market-basket-analysis](https://github.com/RedisLabs-Field-Engineering/demo-market-basket-analysis) | An example of predicting shopping baskets based on past purchases | ![market-basket-analysis-stars] |
#### Life Sciences / Healthcare
| Resource | Description | Stars |
| --- | --- | --- |
| [⭐ Redis Vaccine Forecaster](https://github.com/RedisVentures/redis-feast-gcp) | End-to-end ML system to predict vaccine demand deployed in GCP with Redis, Feast, Triton, and Vertex AI. | ![redis-vaccine-forecaster-stars] |
#### Image/Video
| Resource | Description | Stars |
| --- | --- | --- |
| [Animal Recognition Demo](https://github.com/RedisGears/AnimalRecognitionDemo) | An example of using Redis Streams, RedisGears and RedisAI for Realtime Video Analytics (i.e. filtering cats) | ![animal-recog-stars] |
| [Realtime Video Analytics](https://github.com/RedisGears/EdgeRealtimeVideoAnalytics) | An example of using Redis Streams, RedisGears, RedisAI and RedisTimeSeries for Realtime Video Analytics (i.e. counting people) | ![realtime-video-analytics-stars] |
#### Finance
| Resource | Description | Stars |
| --- | --- | --- |
| [Redis + Feast + Ray Demo](https://github.com/RedisVentures/redis-feast-ray) | A demo pipeline using Redis as an online feature store with Feast for orchestration and Ray for training and model serving | ![redis-feast-ray-demo-stars] |
| [⭐ Loan Prediction Example](https://github.com/RedisVentures/loan-prediction-microservice) | Loan prediction example with Redis as the feature store and serving layer. | ![load-prediction-example-stars] |
#### Other
| Resource | Description | Stars |
| --- | --- | --- |
| [Redis SQL](https://github.com/redis-field-engineering/redis-sql-trino) | Indexed SQL queries on Redis data using Trino | ![redis-sql-stars] |
| [Redis GraphQL](https://github.com/redis-field-engineering/redis-graphql) | GraphQL queries on Redis data | ![redis-graphql-stars] |
| [RedisAI Examples](https://github.com/RedisAI/redisai-examples) | A collection of examples using RedisAI | ![redisai-examples-stars] |
### Materialization and Orchestration
| Resource | Description | Stars |
| --- | --- | --- |
| [⭐ Spark-Redis](https://github.com/RedisLabs/spark-redis) | Spark-Redis is a connector that allows you to stream data from Spark to Redis | ![spark-redis-stars] |
| [⭐ Feast](https://github.com/feast-dev/feast) | Feast feature orchestration system framework | ![feast-stars] |
| [Feathr](https://github.com/linkedin/feathr) | Feathr is a feature orchestration framework created by Linkedin | ![feathr-stars] |
| [Redis Kafka](https://github.com/redis-field-engineering/redis-kafka-connect) | Redis Kafka Connect is a connector that allows you to stream data from Kafka to Redis | ![redis-kafka-connect-stars] |
### Content
- [What is a Feature Store?](https://www.tecton.ai/blog/what-is-a-feature-store/) - introductory blog post on feature stores
- [Building a Gigascale Feature Store with Redis](https://doordash.engineering/2020/11/19/building-a-gigascale-ml-feature-store-with-redis/) - blog post on DoorDash's feature store architecture
- [Feature Store Comparison](https://mlops.community/learn/feature-store/) - comparison between a few feature store options.
- [Feature Storage with Feast and Redis](https://redis.com/blog/building-feature-stores-with-redis-introduction-to-feast-with-redis/) - blog post outlining basic Redis+Feast usage.
### Benchmarks
- [Feast Feature Serving Benchmarks](https://feast.dev/blog/feast-benchmarks/) - Feast-published benchmarks on Redis vs DynamoDB vs Datastore for feature retrieval.
### Documentation
- [Redis-py General Docs](https://redis.readthedocs.io/en/latest/) - Redis-py client library documentation.
- [RedisJSON](https://github.com/RedisJSON) - RedisJSON Module.
- [RedisAI](https://github.com/RedisAI/RedisAI) - RedisAI Module.
- [RedisTimeSeries](https://github.com/RedisTimeSeries/RedisTimeSeries) - Redis Time Series Module.
- [RedisConnect](https://github.com/redis-field-engineering/redis-connect-dist) - a distributed platform that enables real-time event streaming, transformation, and propagation of changed-data events from heterogeneous data platforms to Redis.
### Integrations
- [FeatureForm](https://www.featureform.com/?gclid=Cj0KCQjw_r6hBhDdARIsAMIDhV_lhReZdfM66Z5gE5yJCtDsSb3WeLhHjtI4AFokk_cjKC54vRDXN7waAq3HEALw_wcB) - open-source Feature Store orchestration framework.
- [Feast](https://docs.feast.dev/reference/online-stores/redis) - open-source Feature Store orchestration framework.
- [Feathr](https://github.com/feathr-ai/feathr) - open-source Feature Store orchestration framework pioneered by LinkedIn.
- [Tecton](https://www.tecton.ai/blog/announcing-support-for-redis/) - fully-managed Feature Store service.
[redis-graphql-stars]: https://img.shields.io/github/stars/redis-field-engineering/redis-graphql.svg?style=social&label=Star&maxAge=2592000
[spark-redis-stars]: https://img.shields.io/github/stars/RedisLabs/spark-redis.svg?style=social&label=Star&maxAge=2592000
[feathr-stars]: https://img.shields.io/github/stars/linkedin/feathr.svg?style=social&label=Star&maxAge=2592000
[feast-stars]: https://img.shields.io/github/stars/feast-dev/feast.svg?style=social&label=Star&maxAge=2592000
[load-prediction-example-stars]: https://img.shields.io/github/stars/RedisVentures/loan-prediction-microservice.svg?style=social&label=Star&maxAge=2592000
[redis-feast-ray-demo-stars]: https://img.shields.io/github/stars/RedisVentures/redis-feast-ray.svg?style=social&label=Star&maxAge=2592000
[redis-vaccine-forecaster-stars]: https://img.shields.io/github/stars/RedisVentures/redis-feast-gcp.svg?style=social&label=Star&maxAge=2592000
[redis-kafka-connect-stars]: https://img.shields.io/github/stars/redis-field-engineering/redis-kafka-connect.svg?style=social&label=Star&maxAge=2592000
[redisai-examples-stars]: https://img.shields.io/github/stars/RedisAI/redisai-examples.svg?style=social&label=Star&maxAge=2592000
[realtime-video-analytics-stars]: https://img.shields.io/github/stars/RedisGears/EdgeRealtimeVideoAnalytics.svg?style=social&label=Star&maxAge=2592000
[animal-recog-stars]: https://img.shields.io/github/stars/RedisGears/AnimalRecognitionDemo.svg?style=social&label=Star&maxAge=2592000
[market-basket-analysis-stars]: https://img.shields.io/github/stars/RedisLabs-Field-Engineering/demo-market-basket-analysis.svg?style=social&label=Star&maxAge=2592000
[redis-sql-stars]: https://img.shields.io/github/stars/redis-field-engineering/redis-sql-trino.svg?style=social&label=Star&maxAge=2592000
----
*Have other contributions? [Checkout our contributing guidelines](contributing.md).*
| 0 |
chaostheory/jibenakka | A basic set of akka helper java classes and examples such as map reduce | null | jibenakka
=============
This is a set of basic Java examples for [akka](http://akka.io/).
## Getting Started
You will need to install [Apache Maven](http://maven.apache.org/). If you're using
Eclipse, it is recommended that you install the [m2e plugin](http://www.eclipse.org/m2e/).
Once you've properly installed and configured Maven, you can then
use it to easily download all of jibenakka's needed dependencies.
The actual sample code can be found in the `sample` package, while supporting classes
are located in the adjoining packages. Currently there are the following samples:
Word Count Map Reduce
-----------
This sample app performs map reduce to count words in files using a combination
of akka Actors and Futures. It can be found under the `mapreduce` package within
the `sample` package.
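The map/reduce shape of that sample can be pictured without any akka dependency. The sketch below uses plain `java.util.concurrent` Futures to parallelize the map phase and a sequential merge as the reduce phase; the class and method names here are illustrative and are not the ones used in this repository.

```java
import java.util.*;
import java.util.concurrent.*;

// Dependency-free sketch of the word-count map/reduce shape, using plain
// java.util.concurrent Futures instead of akka Actors/Futures.
public class WordCountSketch {

    // "Map" phase: count words in a single chunk of text.
    static Map<String, Integer> mapChunk(String chunk) {
        Map<String, Integer> counts = new HashMap<>();
        for (String word : chunk.toLowerCase().split("\\W+")) {
            if (!word.isEmpty()) {
                counts.merge(word, 1, Integer::sum);
            }
        }
        return counts;
    }

    // "Reduce" phase: merge two partial count maps.
    static Map<String, Integer> reduce(Map<String, Integer> a, Map<String, Integer> b) {
        Map<String, Integer> merged = new HashMap<>(a);
        b.forEach((word, n) -> merged.merge(word, n, Integer::sum));
        return merged;
    }

    // Run the map phase in parallel (one Future per chunk), then reduce.
    public static Map<String, Integer> countWords(List<String> chunks) {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        try {
            List<Future<Map<String, Integer>>> futures = new ArrayList<>();
            for (String chunk : chunks) {
                futures.add(pool.submit(() -> mapChunk(chunk)));
            }
            Map<String, Integer> total = new HashMap<>();
            for (Future<Map<String, Integer>> f : futures) {
                total = reduce(total, f.get());
            }
            return total;
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }
}
```

The akka version in the `mapreduce` package follows the same structure, with Actors doing the mapping work and Futures collecting the partial results.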
Supervisor Hierarchy Fault Tolerance
-----------
This sample app demonstrates creating a hierarchy of Actors. This is currently a
work in progress. It can be found under the `fault` package within
the `sample` package.
| 0 |
thomasdarimont/spring-boot-admin-keycloak-example | Example for protecting Spring Boot Admin & Spring Boot Actuator endpoints with Keycloak | null | null | 1 |
srecon/ignite-book-code-samples | All code samples, scripts and more in-depth examples for the book High performance in-memory computing with Apache Ignite. Please use the repository "the-apache-ignite-book" for Ignite version 2.6 or above. | bigdata cache gridgain high-performance ignite in-memory nosql | # High performance in-memory data grid with Apache Ignite
All code samples, scripts and more in-depth examples for the book **High performance in-memory computing with Apache Ignite**.
[](http://leanpub.com/ignite)
| 0 |
vaadin/base-starter-flow-quarkus | A project base/example for using Vaadin with Quarkus | java quarkus vaadin | # Project Base for Vaadin Flow and Quarkus
This project can be used as a starting point to create your own Vaadin Flow application for Quarkus. It contains all the necessary configuration with some placeholder files to get you started.
Quarkus 3.0+ requires Java 17.
Starter is also available for [gradle](https://github.com/vaadin/base-starter-flow-quarkus/tree/gradle)
## Running the Application
Import the project to the IDE of your choosing as a Maven project.
Run application using `mvnw` (Windows), or `./mvnw` (Mac & Linux).
Open [http://localhost:8080/](http://localhost:8080/) in browser.
If you want to run your app locally in production mode, call `mvnw package -Pproduction` (Windows), or `./mvnw package -Pproduction` (Mac & Linux)
and then
```
java -jar target/quarkus-app/quarkus-run.jar
```
### Including vaadin-jandex for Pro components
If you are using Pro components such as GridPro, you need to provide the Jandex index for them as well.
This can be achieved by adding their names one by one in `application.properties`, similar to the following example:
```properties
quarkus.index-dependency.vaadin-grid-pro.group-id=com.vaadin
quarkus.index-dependency.vaadin-grid-pro.artifact-id=vaadin-grid-pro-flow
```
Vaadin recommends using the official Jandex index for the Pro components which is published as part of the platform:
```xml
<dependency>
<groupId>com.vaadin</groupId>
<artifactId>vaadin-jandex</artifactId>
</dependency>
```
The above dependency has already been added to the `pom.xml`; all you need to do is uncomment it if needed.
| 0 |
jotorren/microservices-transactions-tcc | Example of "composite" transactions managed by an Atomikos TCC rest coordinator | null | # Microservices and data consistency (I)
As you can read in [Christian Posta's excellent article](http://blog.christianposta.com/microservices/the-hardest-part-about-microservices-data/), when designing a microservices-based solution **our first choice to solve consistency between bounded contexts will be to communicate boundaries with immutable point in time events** (by means of a messaging queue/listener, a dedicated event store/publish-subscribe topic or a database/replicated log/event processor).
But how should we deal with situations where, inevitably, we must update data from different contexts in a single transaction, either across a single database or multiple databases? A combination of JPA 2.1 unsynchronized persistence contexts, JPA Entity listeners, Kafka and [Atomikos TCC](https://www.atomikos.com/Blog/TransactionManagementAPIForRESTTCC) could fit like a glove ;-)
Let's describe that approach. We will start by introducing all the actors:
- **Domain Services**. Each of the stateless and autonomous pieces that the whole system has been divided into.
- **Composite Services**. Coarse-grained service operations which are composed by many calls to one or more domain services.
- **Command**. Data describing a persistence operation performed by a domain service: "*an operation on a given entity within certain context*"
- **Composite transaction**. Set of commands that must be grouped and carried out together.
- **Coordinator**. Service to manage composite transactions lifecycle, deciding whether or not changes (commands) must be applied to the corresponding underlying repositories.
- **TCC Service**. *Try*-*Cancel*/*Confirm* protocol implementation. It handles all TCC remote calls verifying no transaction timeout has been exceeded.
- **Distributed, replicated event log**. Distributed store of composite transactions accessible by any service instance (domain, composite or coordinator)
I would like to point out that Domain, Composite, Coordinator and TCC services have no 2PC/XA support and they can be dynamically allocated/destroyed.
Regarding the sequence of actions:
1. A client makes a remote call to a composite service
2. The composite service knows which domain services needs to invoke and passes that information to the coordinator
3. The coordinator creates a composite transaction or, in other words, a persistent topic for each domain service involved in the operation. Every topic will be uniquely identified by a string that can be interpreted as a *partial transaction id* (partial because a topic will store only commands for instances of a single domain service)
4. The composite service calls each domain service using its respective *partial transaction id*
5. A domain service performs persistence operations through a JPA unsynchronized persistence context and publishes appropriate commands to the topic identified by the given *partial transaction id*

1. If all domain services calls succeed, the composite service signals the coordinator to commit the changes
- The coordinator calls the confirm operation on the TCC service
- The TCC service calls the confirm operation on each domain service passing the correct *partial transaction id*
- Each domain service reads all commands from the given topic, executes them through a JPA unsynchronized persistence context and finally applies the derived changes to the underlying repository.
	- If all commit calls succeed the business operation ends successfully, otherwise the operation ends with a heuristic failure
2. If a domain service call fails, the composite service signals the coordinator to rollback the changes
- The coordinator calls the cancel operation on the TCC service
- The TCC service calls the cancel operation on each domain service passing the correct *partial transaction id*
- The business operation ends with error

## Build
```shell
# clone this repo
# --depth 1 removes all but one .git commit history
git clone --depth 1 https://github.com/jotorren/microservices-transactions-tcc.git my-project
# change directory to your project
cd my-project
# build artifacts
mvn clean install
```
## Run
First of all you must download and install Zookeeper & Kafka servers. Please follow guidelines described in:
- https://zookeeper.apache.org/doc/r3.1.2/zookeeperStarted.html
- https://kafka.apache.org/quickstart
Once both servers are up and running you can start all services:
- Composite service to create source code items and discussion boards + TCC Service
```shell
# inside your project home folder
cd rahub-composite-service
mvn spring-boot:run
# default port 8090
```
- Domain service to create/query pieces of source code
```shell
# inside your project home folder
cd rahub-source-code-service
mvn spring-boot:run
# default port 8091
```
- Domain service to create/query discussion boards about source code items
```shell
# inside your project home folder
cd rahub-forum-service
mvn spring-boot:run
# default port 8092
```
## Available services
- `/api`: http://localhost:8090/api/api-docs?url=/api/swagger.json

- `/api/coordinator`: http://localhost:8090/api/api-docs?url=/swagger-tcc.json
In the current example TCC service runs on the same JAX-RS container as the composite does, but it will be preferable to deploy it on its own instance.

- `/content`: http://localhost:8091/index.html?url=/content/swagger.json

- `/forum`: http://localhost:8092/index.html?url=/forum/swagger.json

## Considerations
#### REST implementation
In the example we use Jersey for Domain Services whilst Composite and TCC services rely on CXF. With regard to swagger ui, the former contain required static resources inside `src/main/resources/static` while the latter only depend on a [webjar](http://www.webjars.org/) and have an empty static folder.
#### Repositories
Our sample Domain Services use an embedded H2 file-based database. You can check the configuration by looking at their respective `src/main/resources/application.properties`. By default, both data models are initialized on startup, but that behavior can be disabled by uncommenting the following lines:
```properties
#spring.jpa.generate-ddl: false
#spring.jpa.hibernate.ddl-auto: none
```
Additionally, H2 web console is enabled in both cases and can be accessed through the URI `/h2/console`.
## Components

Pink classes are provided by [Atomikos](https://www.atomikos.com/Blog/TransactionManagementAPIForRESTTCC) and contain the TCC protocol implementation. Green ones are generic and reusable components to isolate and hide the complexity of composite transactions management.
## Implementation key aspects
#### 1. Transactional persistence operations: unsynchronized persistence contexts
Persistence operations executed inside a Composite Transaction are delegated to *unsynchronized entity manager*s: you can create, change and delete entities without making any change to the repository until you force the `EntityManager` to join an existing `LOCAL/JTA` transaction (note the `@Transactional` annotation on the `commit()` method).
```java
@Repository
@Scope("prototype")
public class CompositeTransactionParticipantDao {

	@PersistenceContext(type = PersistenceContextType.EXTENDED,
			synchronization = SynchronizationType.UNSYNCHRONIZED)
	private EntityManager em;

	@Transactional(readOnly = false)
	public void commit() {
		em.joinTransaction();
	}

	public void save(Object entity) {
		em.persist(entity);
	}

	public <T> T saveOrUpdate(T entity) {
		return em.merge(entity);
	}

	public void remove(Object entity) {
		em.remove(entity);
	}

	public <T> T findOne(Class<T> entityClass, Object pk) {
		return em.find(entityClass, pk);
	}
}
As stated in [Spring ORM documentation](http://docs.spring.io/spring/docs/current/spring-framework-reference/html/orm.html):
> `PersistenceContextType.EXTENDED` is a completely different affair: This results in a so-called extended EntityManager, which is *not thread-safe* and hence must not be used in a concurrently accessed component such as a Spring-managed singleton bean
This is the reason why we set `prototype` as scope for any `DAO` with an *unsynchronized persistence context* injected into it.
And some final aspects to be aware of:
Any call to the `executeUpdate()` method of a `Query` created through an *unsynchronized* `EntityManager` will fail reporting `javax.persistence.TransactionRequiredException: Executing an update/delete query`. Consequently, bulk update/delete operations are not supported.
On the other hand, it is possible to create/execute a `Query` to look for data but, in that case, only already persisted (committed) entries are searchable. If you want to retrieve entities that have not yet been saved (committed) you must use `EntityManager` `find()` methods.
Keep in mind that any repository constraint will be checked only when the `EntityManager` joins the transaction (that is during the *commit* phase). Therefore it will be preferable to implement as many validations as possible out of the repositories. In doing so, we can detect potential problems in a very early stage, increasing the overall performance and consistency of the system.
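Stripped of JPA, the behavior described in this section can be modeled as a unit of work that buffers operations and only touches the backing store on commit. The following stand-in is plain Java and not part of the project; it mimics what an unsynchronized `EntityManager` does: `save()`/`remove()` only record intent, and nothing reaches the "database" until `commit()` is called, mirroring `em.joinTransaction()` inside the `@Transactional` `commit()` method shown earlier.

```java
import java.util.*;
import java.util.function.Consumer;

// Plain-Java illustration (no JPA) of an UNSYNCHRONIZED persistence context:
// operations are recorded, but the backing store is only touched when the
// context finally "joins" the transaction and commits.
public class DeferredUnitOfWork {

    private final Map<String, String> store;                       // stands in for the database
    private final List<Consumer<Map<String, String>>> pending = new ArrayList<>();

    public DeferredUnitOfWork(Map<String, String> store) {
        this.store = store;
    }

    public void save(String id, String value) {
        pending.add(db -> db.put(id, value));                      // nothing hits the store yet
    }

    public void remove(String id) {
        pending.add(db -> db.remove(id));
    }

    // Equivalent of em.joinTransaction() inside a @Transactional commit():
    // only now are the buffered operations applied to the backing store.
    public void commit() {
        pending.forEach(op -> op.accept(store));
        pending.clear();
    }
}
```

This also makes it obvious why queries only see already-committed data and why repository constraints are checked so late: until `commit()` runs, the store simply has not been touched.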
#### 2. From persistence operation to Command: JPA entity listeners and callback methods
*Default entity listeners* are listeners that should be applied to all entity classes. Currently, they can only be specified in a mapping XML that can be found in `src/main/resources/META-INF/orm.xml`
*Callback* methods are user defined methods that are attached to entity lifecycle events and are invoked automatically by JPA when these events occur:
- `@PrePersist` - before a new entity is persisted (added to the `EntityManager`).
- `@PostPersist` - after storing a new entity in the database (during *commit* or *flush*).
- `@PostLoad` - after an entity has been retrieved from the database.
- `@PreUpdate` - when an entity is identified as modified by the `EntityManager`.
- `@PostUpdate` - after updating an entity in the database (during *commit* or *flush*).
- `@PreRemove` - when an entity is marked for removal in the `EntityManager`.
- `@PostRemove` - after deleting an entity from the database (during *commit* or *flush*).
(For further details see http://www.objectdb.com/java/jpa/persistence/event)
If we want to find out which entities have been created, updated or removed through an *unsynchronized entity manager*, we only need *@Pre\* callback* methods:
```java
public class ChangeStateJpaListener {

	@PrePersist
	void onPrePersist(Object o) {
		enlist(o, EntityCommand.Action.INSERT);
	}

	@PreUpdate
	void onPreUpdate(Object o) {
		enlist(o, EntityCommand.Action.UPDATE);
	}

	@PreRemove
	void onPreRemove(Object o) {
		enlist(o, EntityCommand.Action.DELETE);
	}

	private void enlist(Object entity, EntityCommand.Action action) {
		EntityCommand<Object> command = new EntityCommand<Object>();
		command.setEntity(entity);
		command.setAction(action);
		command.setTimestamp(System.currentTimeMillis());
		// send command to some store/queue
	}
}
```
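For completeness, registering `ChangeStateJpaListener` as a *default* listener in `src/main/resources/META-INF/orm.xml` might look like the sketch below; the package name `com.example.listener` is a placeholder, check the repository for the actual one.

```xml
<!-- Sketch of META-INF/orm.xml declaring a default entity listener,
     i.e. one applied to every entity in the persistence unit. -->
<entity-mappings xmlns="http://xmlns.jcp.org/xml/ns/persistence/orm"
                 version="2.1">
    <persistence-unit-metadata>
        <persistence-unit-defaults>
            <entity-listeners>
                <entity-listener class="com.example.listener.ChangeStateJpaListener"/>
            </entity-listeners>
        </persistence-unit-defaults>
    </persistence-unit-metadata>
</entity-mappings>
```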
#### 3. Commands persistence and distribution
At this point we know how persistence operations executed by a service are translated into Commands, but once instantiated we need to save and distribute them to all service instances. This is accomplished by using Kafka persistent topics. Let's have a deeper look at the proposed mechanism:
When a Composite Service asks the Coordinator (`TccRestCoordinator`) to open a new Composite Transaction, the first thing the latter does is to generate a UUID to uniquely identify that transaction. Then it creates as many topics as different Domain Services must be coordinated, assigning them a name that results from concatenating the UUID and an internal sequence number (building the so-called *partial transaction id*). Once all resources have been allocated, it returns to the Composite Service a `CompositeTransaction` object that includes the transaction global UUID and all partial ids. From this moment on, any call dispatched by the Composite Service to a Domain Service will always include the corresponding partial transaction id (as an extra `@PathParam`)
Furthermore, the JPA entity listener responsible for generating Commands (see point #2) needs the name of the topic to which they will be published (after the Command has been properly serialized). How can that plain JPA class obtain a value held inside a `Spring` bean? `ThreadLocal` variables come to the rescue: just before the first call to a `DAO`, the Domain Service stores its partial transaction id in a `ThreadLocal` variable. Because JPA listeners run in the same thread as the `EntityManager` operation, they have access to any `ThreadLocal` variable created by the service and can retrieve the partial transaction id from it. Finally, a `org.springframework.kafka.core.KafkaTemplate` instance is used to send the `JSON` representation of the Command to the appropriate topic.
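The README later calls `ThreadLocalContext.put(...)` without showing that class; a minimal version of such a per-thread key/value helper (an assumption, the real class may well differ) could be as simple as:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of a per-thread key/value store. JPA listeners running
// on the same thread as the EntityManager operation can read whatever
// the Domain Service put here (e.g. the partial transaction id).
public final class ThreadLocalContext {

    private static final ThreadLocal<Map<String, String>> CTX =
            ThreadLocal.withInitial(HashMap::new);

    public static void put(String key, String value) {
        CTX.get().put(key, value);
    }

    public static String get(String key) {
        return CTX.get().get(key);
    }

    // Important on pooled threads (e.g. servlet containers) to avoid leaks.
    public static void clear() {
        CTX.remove();
    }

    private ThreadLocalContext() {
    }
}
```

Calling `clear()` at the end of each request matters in practice, since application servers reuse threads across requests.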
#### 4. From Command to persistence operation: inherited method from `CompositeTransactionParticipantDao`
Because an `EntityCommand` object contains the entity to create/update/delete and the action to apply to it, it's very straightforward to work out which persistence operation a given `EntityManager` must execute; this is as simple as adding a special method to the generic `CompositeTransactionParticipantDao` where the `EntityManager` is injected:
```java
public void apply(List<EntityCommand<?>> transactionOperations) {
    if (null == transactionOperations) {
        return;
    }
    for (EntityCommand<?> command : transactionOperations) {
        switch (command.getAction()) {
            case INSERT:
                save(command.getEntity());
                break;
            case UPDATE:
                saveOrUpdate(command.getEntity());
                break;
            case DELETE:
                remove(command.getEntity());
                break;
        }
    }
}
```
#### 5. Composite Transaction lifecycle
[01] A Composite Service asks the Coordinator (`TccRestCoordinator`) to open a new Composite Transaction. The call arguments include the maximum amount of time (in milliseconds) to complete the transaction and the URL of each participant (Domain Service) to be used when cancelling/confirming its operations (as specified by the TCC protocol).
```java
CompositeTransaction transaction = tccRestCoordinator.open(transactionTimeout,
        featureAbcTccUrl, featureXyzTccUrl);
```
[02] The Coordinator generates the Composite Transaction UUID. Then, for each participant, it computes the partial transaction id and uses a `CompositeTransactionManager` (an instance provided by the Spring container) to initialize transaction persistence/distribution (with the Kafka-based implementation, a persistent topic is created for each Domain Service).
[03] The Composite Service starts calling each Domain Service and processes their responses
[04] When a Domain Service receives a call, it extracts the transaction partial id from the URI
```java
public Response txedOperation(@Context UriInfo uriInfo, @PathParam("txid") String txid, Feature data)
```
[05] Defines a `ThreadLocal` variable and sets its value to the transaction partial id
```java
ThreadLocalContext.put(CURRENT_TRANSACTION_KEY, txId);
```
[06] Asks the Spring container to return a **NEW** instance of a `DAO` with an *unsynchronized* `EntityManager` injected into it, then makes some calls to `DAO` methods.
[07] The `DAO` translates each method call to a set of persistence operations, delegating their execution to its `EntityManager`
[08] For every persistence operation, the JPA container executes the global entity listener (in the same thread as the `EntityManager` operation)
[09] The JPA listener checks whether the service has provided a partial transaction id; if none is available, it does nothing. Otherwise (when a partial id is found) it creates a new `EntityCommand` instance grouping the entity, the type of operation, the partial transaction id and a timestamp. After that, it uses the `CompositeTransactionManager` (an instance provided by the Spring container) to "enlist" the Command.
```java
private void enlist(Object entity, EntityCommand.Action action, String txId) {
    EntityCommand<Object> command = new EntityCommand<>();
    command.setEntity(entity);
    command.setAction(action);
    command.setTransactionId(txId);
    command.setTimestamp(System.currentTimeMillis());
    CompositeTransactionManager txManager =
            SpringContext.getBean(CompositeTransactionManager.class);
    txManager.enlist(txId, command);
}
```
[10] With the Kafka-based implementation of `CompositeTransactionManager`, the `EntityCommand` object is serialized to a `JSON` string prior to storing it in a topic.
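The serialization step itself isn't shown in this README; as a rough, hand-rolled sketch (the real project would presumably use a JSON library such as Jackson before handing the string to `KafkaTemplate.send(topic, json)`, and the class/field names below are invented):

```java
// Hand-rolled sketch only: builds the JSON shape an EntityCommand might
// take on the topic. A real implementation would delegate to a JSON
// mapper rather than concatenate strings.
public class CommandJsonSketch {

    public static String toJson(String entityJson, String action,
                                String txId, long timestamp) {
        return "{\"entity\":" + entityJson
                + ",\"action\":\"" + action + "\""
                + ",\"transactionId\":\"" + txId + "\""
                + ",\"timestamp\":" + timestamp + "}";
    }

    public static void main(String[] args) {
        System.out.println(toJson("{\"id\":1}", "INSERT", "abc-0",
                System.currentTimeMillis()));
    }
}
```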
------
So far, we have completed the *Try* part of the *Try*-*Cancel*/*Confirm* protocol. What about the *Cancel*/*Confirm* part? Let's start with *Confirm*.
[11] Once the Composite Service ends calling Domain Services, it invokes the `commit()` method on the Coordinator (`TccRestCoordinator`)
[12] The coordinator sends a PUT request to the "confirm URI" of the TCC Service, adding the Composite Transaction data as the request content
[13] The TCC Service iterates over the transaction participants list and, for each of them, sends a PUT request to their respective "TCC confirm URI" (computed during the Composite Transaction creation)
[14] When a Domain Service receives the confirm call, it extracts the transaction partial id from the URI
```java
public void confirm(@PathParam("txid") String txid)
```
[15] Uses the `CompositeTransactionManager` instance provided by Spring container to get all the Commands "enlisted" in that (partial) transaction
[16] Asks the Spring container to return a **NEW** instance of a `DAO` with an *unsynchronized* `EntityManager` injected into it.
[17] Invokes the `apply()` method on the `DAO` to translate Commands into persistence operations. Because we're applying already-persisted commands, the global JPA entity listener must be disabled. This is easily done by ensuring that no `ThreadLocal` variable holding the partial id has been defined.
[18] Forces the `EntityManager` to join a `LOCAL/JTA` transaction so that all persistence operations are applied to the underlying repository.
[19] If a Domain Service fails to process the confirm call, a 404 response is returned. When the TCC Service receives it, the confirmation process is stopped and a 409 response is sent back to the Coordinator which in turn propagates that value to the Composite Service.
[20] If all confirm calls succeed (all return 204) the TCC Service also responds with a 204 to the Coordinator which in turn propagates that value to the Composite Service.
------
And finally the *Cancel* branch:
[11] If Composite Service detects some error condition, it can abort the Composite Transaction by invoking the `rollback()` method on the Coordinator (`TccRestCoordinator`)
[12] In that case, the coordinator sends a PUT request to the "cancel URI" of the TCC Service, adding the Composite Transaction data as the request content
[13] The TCC Service iterates over the transaction participants list and, for each of them, sends a DELETE request to their respective "TCC cancel URI" (computed during the Composite Transaction creation)
[14] When a Domain Service receives the cancel call, it extracts the transaction partial id from the URI
```java
public void cancel(@PathParam("txid") String txid)
```
[15] In the current implementation the Domain Service does nothing. Perhaps a valid action could be to "close" the partial transaction (with the Kafka-based implementation of the `CompositeTransactionManager` that could trigger a topic removal)
[16] If a Domain Service fails to process the cancel call, a 404 response is returned. When the TCC Service receives it, a log trace is written and the cancellation process goes on. After the last call finishes, the TCC Service returns a 204 response to the Coordinator which in turn propagates that value to the Composite Service.
[17] If all cancel calls succeed (all return 204) the TCC Service also responds with a 204 to the Coordinator which in turn propagates that value to the Composite Service. | 1 |
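Finally, the guard behind confirm step [17] above (the global listener stays silent whenever no partial transaction id is bound to the current thread) can be sketched in plain Java; all names here are invented for the sketch:

```java
// Self-contained sketch of the listener guard: a command is enlisted only
// when the service bound a partial transaction id to the current thread.
// During confirm, nothing is bound, so replaying persisted commands does
// not generate new ones.
public class ListenerGuardSketch {

    private static final ThreadLocal<String> CURRENT_TX = new ThreadLocal<>();
    static int enlisted = 0;

    static void bind(String partialTxId) {
        CURRENT_TX.set(partialTxId);
    }

    static void clear() {
        CURRENT_TX.remove();
    }

    static void onPrePersist(Object entity) {
        if (CURRENT_TX.get() == null) {
            return; // confirm phase: listener disabled, nothing enlisted
        }
        enlisted++; // "Try" phase: build and store an EntityCommand here
    }

    public static void main(String[] args) {
        onPrePersist(new Object()); // no tx bound -> ignored
        bind("abc-0");
        onPrePersist(new Object()); // tx bound -> enlisted
        System.out.println(enlisted); // 1
    }
}
```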
kbastani/spring-cloud-microservice-example | An example project that demonstrates an end-to-end cloud native application using Spring Cloud for building a practical microservices architecture. | null | # Spring Cloud Example Project
An example project that demonstrates an end-to-end cloud-native platform using Spring Cloud for building a practical microservices architecture.
Tutorial available here: [Building Microservices with Spring Cloud and Docker](http://www.kennybastani.com/2015/07/spring-cloud-docker-microservices.html)
Demonstrated concepts:
* Integration testing using Docker
* Polyglot persistence
* Microservice architecture
* Service discovery
* API gateway
## Docker
Each service is built and deployed using Docker. End-to-end integration testing can be done on a developer's machine using Docker compose.
## Polyglot Persistence
One of the core concepts of this example project is how polyglot persistence can be approached in practice. Microservices in the project use their own database, while integrating with the data from other services through REST or a message bus.
* Neo4j (graph)
* MongoDB (document)
* MySQL (relational)
## Movie Recommendations
This example project focuses on movies and recommendations.
### Data Services

### Domain Data

## Microservice architecture
This example project demonstrates how to build a new application using microservices, as opposed to a monolith-first strategy. Since each microservice in the project is a module of a single parent project, developers have the advantage of being able to run and develop with each microservice running on their local machine. Adding a new microservice is easy, as the discovery microservice will automatically discover new services running on the cluster.
## Service discovery
This project contains two discovery services, one using Netflix Eureka and the other using Consul from HashiCorp. Having multiple discovery services provides the opportunity to use one (Consul) as a DNS provider for the cluster, and the other (Eureka) as a proxy-based API gateway.
## API gateway
Each microservice will coordinate with Eureka to retrieve API routes for the entire cluster. Using this strategy each microservice in a cluster can be load balanced and exposed through one API gateway. Each service will automatically discover and route API requests to the service that owns the route. This proxying technique is equally helpful when developing user interfaces, as the full API of the platform is available through its own host as a proxy.
# License
This project is an open source product licensed under GPLv3.
| 1 |
marcelocf/janusgraph_tutorial | Tutorial with example code on how to get started with JanusGraph | null | > *WARNING:* this is *old*. Very very old! It shouldn't work and information here is vely very wrong as of 2022. Apologies for that.
> Will leave repo live for now for historic reasons, but yeah; please don't expect this to work anymore.
# JanusGraph tutorial
**NOTE:** it goes without saying that you need a properly configured JDK in your environment.
This is a hands on guide for JanusGraph. It is organized in sections (each folder is an independent project with a section) and it is expected you follow each guide in order.
## Starting JanusGraph
All the code here assumes you are running JanusGraph 0.1.0 locally.
### For the lazy
You should be ashamed. BUT, here is a shortcut:
```bash
./start_janus.sh
```
### For the ones that want to really learn stuff
This is fairly simple; just download janus and tell it to start up.
```bash
$ wget https://github.com/JanusGraph/janusgraph/releases/download/v0.1.0/janusgraph-0.1.0-hadoop2.zip
$ unzip janusgraph-0.1.0-hadoop2.zip
$ cd janusgraph-0.1.0-hadoop2/
$ ./bin/janusgraph.sh start
```
The last command should output:
```
Forking Cassandra...
Running `nodetool statusthrift`.. OK (returned exit status 0 and printed string "running").
Forking Elasticsearch...
Connecting to Elasticsearch (127.0.0.1:9300)... OK (connected to 127.0.0.1:9300).
Forking Gremlin-Server...
Connecting to Gremlin-Server (127.0.0.1:8182)..... OK (connected to 127.0.0.1:8182).
Run gremlin.sh to connect.
```
This means you have Cassandra and Elasticsearch listening on the loopback interface, which is important for the examples to work.
If you need to clean your data:
1. stop janus graph
1. `rm -rf db`
1. start janus graph
It is also recommended that you read:
* [GraphDB - diving into JanusGraph part 1](https://medium.com/finc-engineering/graph-db-diving-into-janusgraph-part-1f-199b807697d2) (3 min read)
* [GraphDB - diving into JanusGraph part 2](https://medium.com/finc-engineering/graph-db-diving-into-janusgraph-part-2-f4b9cbd967ac) (4 min read)
## Why
I wrote this guide after trying to find my way through this technology. I had to learn it because the traditional tools were not enough for the kind of data processing required in the task assigned to me.
JanusGraph has proven to be a solid and reliable solution to our project and I hope this guide is useful for you.
This is by no means a complete guide to JanusGraph. But I believe that following this using the [official documentation](http://docs.janusgraph.org/latest/) as a reference is enough framework for you to really dive into this technology.
## Scope
On this tutorial we will build the backend database of a twitter clone. The sections are divided into:
1. basic schema
1. data loading
1. querying
1. hadoop integration
1. indexing for performance
By the end of this tutorial you should be able to design your own (very simple but functional) database backend using JanusGraph.
There is also a last section included with some recommended experiments for after you are done.
## Code
All the Java code depends on the main schema class. This is a design decision to reuse code and keep naming consistent. By doing so, we also avoid hard-coded Strings as much as possible.
To ease your life, there is a simple shell script in each section called `run.sh`. This will build and invoke the example code for you.
### Java
We are using the standard gradle application plugin naming conventions on Java projects; this means that we have the folders:
```
/src/main/
dist
resources
java
```
Inside `dist` you will find the JanusGraph configuration files. Each section has its own files. In `resources` there is the `log4j.properties` file. And `java` contains the implementation.
### Ruby
In our ruby example codes we are relying on:
* [RVM](https://rvm.io/): for Ruby version management (if you use something different, please prepare your env accordingly).
* bundler (`gem install bundler`): for dependency management.
* [gremlin driver gem](https://github.com/marcelocf/gremlin_client): a really simple driver in ruby for JanusGraph.
| 1 |
evrentan/spring-boot-project-example | Spring Boot Project Example by Evren Tan | java maven open-source spring-boot | # A Complete Spring Boot Example Project
A Complete Spring Boot Example Project with Spring Boot 2.6.2, JDK 17 & Maven.
## Table of Contents
1. [How to Contribute](#how-to-contribute)
2. [Requirements](#requirements)
3. [Running the Application Locally](#running-the-application-locally)
4. [Run Actuator](#run-actuator)
5. [Run Swagger UI Documentation](#run-swagger-ui-documentation)
6. [Javadoc](#javadoc)
7. [Copyright](#copyright)
## How to Contribute
For the contributor covenant to this project, please check the Code of Conduct file.
[](CODE_OF_CONDUCT.md)
## Requirements
To build and run the application, the following are required:
- [Spring Boot 2.6.2](https://spring.io/blog/2021/12/21/spring-boot-2-6-2-available-now)
- [JDK 17](https://www.oracle.com/java/technologies/javase/jdk17-archive-downloads.html)
- [Maven 3.8.3](https://maven.apache.org)
- Springfox Boot Starter 3.0.0 for Swagger UI Documentation
- MongoDB
## Running the application locally
The project can be booted with Spring Cloud Config Server or directly within the application. To boot the project on its own, enable the properties in the application.properties file and disable the bootstrap.properties file.
The application can be run with the SpringBootProjectExampleApplication class under the evrentan.examples.springbootprojectexample.spring.config.spring package.
Alternatively you can use the [Spring Boot Maven plugin](https://docs.spring.io/spring-boot/docs/current/reference/html/build-tool-plugins-maven-plugin.html) like so:
```shell
mvn spring-boot:run
```
## Run Actuator
[Spring Boot Actuator](https://spring.io/guides/gs/actuator-service/) can be reached from [local url for Actuator](http://localhost:8081/actuator).
Only the health and caches endpoints are enabled by default. The configuration can be updated within the "actuator" section of the related application.properties file. This file can also live in Spring Cloud Config Server if the application is booted with Spring Cloud Config Server.
## Javadoc
You can create Javadoc with the below command or directly from your IDE.
```shell
mvn javadoc:javadoc
```
## Run Swagger UI Documentation
After running the application, just type the [local url for Swagger UI](http://localhost:8080/swagger-ui/index.html) in your browser.
## Extra Notes
[](https://www.gitkraken.com/invite/eNppBA83)
This repo was made with love using [GitKraken Client](https://www.gitkraken.com/invite/eNppBA83).
## Copyright
GNU General Public License v3.0
Permissions of this strong copyleft license are conditioned on making available complete source code of licensed works and modifications, which include larger works using a licensed work, under the same license. Copyright and license notices must be preserved. Contributors provide an express grant of patent rights.
| 1 |
ivanyu/logical-rules-parser-antlr | A simple example of a parser built with ANTLR | antlr blog-post java parser | logical-rules-parser-antlr
==========================
A simple example of a parser built with ANTLR.
There is [a blog post](http://ivanyu.me/blog/2014/09/13/creating-a-simple-parser-with-antlr/) that describes the parser.
| 1 |
oktadev/auth0-spring-boot-angular-crud-example | Angular and Spring Boot CRUD Example | angular oidc spa spring-boot spring-security | # Angular and Spring Boot CRUD Example
This example app shows how to create a Spring Boot API and CRUD (create, read, update, and delete) its data with a beautiful Angular + Angular Material app.
Please read [Build a Beautiful CRUD App with Spring Boot and Angular](https://auth0.com/blog/spring-boot-angular-crud) to see how it was created or follow [this demo script](demo.adoc).
You can also watch a demo of this example in the screencast below:
[](https://youtu.be/0pnSVdVn_NM)
**Prerequisites:** [Java 17](http://sdkman.io) and [Node.js 18+](https://nodejs.org/)
* [Getting Started](#getting-started)
* [Links](#links)
* [Help](#help)
* [License](#license)
## Getting Started
To install this example application, run the following commands:
```bash
git clone https://github.com/oktadev/auth0-spring-boot-angular-crud-example.git jugtours
cd jugtours
```
This will get a copy of the project installed locally. You'll need to configure the application with a registered OIDC app for it to start. Luckily, Auth0 makes this easy!
### Use Auth0 for OpenID Connect
Install the [Auth0 CLI](https://github.com/auth0/auth0-cli) and run `auth0 login` in a terminal.
Next, run `auth0 apps create`:
```shell
auth0 apps create \
--name "Bootiful Angular" \
--description "Spring Boot + Angular = ❤️" \
--type regular \
--callbacks http://localhost:8080/login/oauth2/code/okta,http://localhost:4200/login/oauth2/code/okta \
--logout-urls http://localhost:8080,http://localhost:4200 \
--reveal-secrets
```
> **TIP**: You can also use your [Auth0 dashboard](https://manage.auth0.com) to register your application. Just make sure to use the same URLs as above.
Copy the results from the CLI into an `.okta.env` file:
```shell
export OKTA_OAUTH2_ISSUER=https://<your-auth0-domain>/
export OKTA_OAUTH2_CLIENT_ID=<your-client-id>
export OKTA_OAUTH2_CLIENT_SECRET=<your-client-secret>
```
If you're on Windows, name the file `.okta.env.bat` and use `set` instead of `export`:
```shell
set OKTA_OAUTH2_ISSUER=https://<your-auth0-domain>/
set OKTA_OAUTH2_CLIENT_ID=<your-client-id>
set OKTA_OAUTH2_CLIENT_SECRET=<your-client-secret>
```
Then, run `source .okta.env` (or run `.okta.env.bat` on Windows) to set the environment variables. Start your app and log in at `http://localhost:8080`:
```shell
source .okta.env
mvn spring-boot:run -Pprod
```
You can prove everything works by running this project's Cypress tests. Add environment variables with your credentials to the `.okta.env` (or `.okta.env.bat`) file you created earlier.
```shell
export CYPRESS_E2E_DOMAIN=<your-auth0-domain> # use the raw value, no https prefix
export CYPRESS_E2E_USERNAME=<your-email>
export CYPRESS_E2E_PASSWORD=<your-password>
```
Then, run the Cypress tests and watch them pass:
```shell
source .okta.env
cd app
ng e2e
```
You can [view this project's CI pipeline](.github/workflows/main.yml) and see that all its [workflows are passing too](https://github.com/oktadev/auth0-spring-boot-angular-crud-example/actions). 😇
## Links
This example uses the following open source libraries:
* [Angular](https://angular.io)
* [Angular Material](https://material.angular.io)
* [Spring Boot](https://spring.io/projects/spring-boot)
* [Spring Security](https://spring.io/projects/spring-security)
## Help
Please post any questions as comments on the [blog post](https://auth0.com/blog/spring-boot-angular-crud), or visit our [Auth0 Community Forums](https://community.auth0.com/).
## License
Apache 2.0, see [LICENSE](LICENSE).
| 1 |
springmonster/netflix-dgs-example-java | Java Examples of Netflix DGS | graphql graphql-java graphql-server java netflix-dgs spring-boot spring-graphql | # DGS
- [DGS](https://netflix.github.io/dgs/)
- [DGS Github](https://github.com/Netflix/dgs-framework)
## module description
| Module | Description |
|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------|
| [✅a-start](./a-start) | Example of multiple `*.graphqls`,@DgsData.List |
| [✅b-codegen](./b-codegen) | Example of codegen,multiple modules,methods in type,support constant in @DgsData,@RequestHeader |
| [✅c-scalar](./c-scalar) | Example of custom scalar |
| [✅d-http](./d-http) | Example of Query,Mutation,Subscription,params validation,Apollo Tracing |
| [✅e-file](./e-file) | Example of file upload |
| [✅f-auth](./f-auth) | Example of authentication and authorization |
| [✅g-error](./g-error) | Example of custom error type |
| [✅h-ut](./h-ut) | Example of uni test, integration test, unit test of supporting custom scalar |
| [✅i-nplusone](./i-nplusone) | Example of `N+1`, support custom tracing |
| [✅j-sample](./j-sample) | Example of splitting Query and Mutation into different configuration files to avoid too many definitions in one file |
| [k-postg](./k-postg) | Example of supporting PostGraphile(Experimental)(TODO) |
| [✅l-interfaceunion](./l-interfaceunion) | Example of interface and union |
| [✅m-dynamicschema](./m-dynamicschema) | Example of dynamic schema |
| [❎n-webflux](./n-webflux) | Example of WebFlux; there are problems with `Spring Security` |
| [✅o-metrics](./o-metrics) | Example of metrics |
| [✅p-apollo-gateway](./p-apollo-gateway)<br/>[✅p-federation-customer](./p-federation-customer)<br/>[✅p-federation-name](./p-federation-name)<br/>[✅p-federation-profile](./p-federation-profile) | Apollo Federation Gateway<br/> |
| [✅x-kotlin](./x-kotlin) | Example of Kotlin |
| [✅y-bff](./y-bff) | Example of client and server,support voyager |
| [✅z-domain](./z-domain) | Example of client and server,support voyager |
## Intellij Idea Plugin
- [GraphQL](https://plugins.jetbrains.com/plugin/8097-graphql)
- [DGS](https://plugins.jetbrains.com/plugin/17852-dgs)
## a-start
- Startup then visit http://localhost:10001/graphiql
- Input
```
{
shows {
title
releaseYear
}
}
------
{
shows(titleFilter: "Ozark") {
title
releaseYear
}
}
------
{
showsWithDgsData {
id
title
releaseYear
actors {
name
}
}
}
------
{
user {
id
name
}
}
```
## b-codegen
- root build.gradle
```
plugins {
id "com.netflix.dgs.codegen" version "5.1.17" apply false
}
```
- module build.gradle
```
plugins {
id "com.netflix.dgs.codegen"
}
```
- module build.gradle
```
generateJava{
schemaPaths = ["${projectDir}/src/main/resources/schema"] // List of directories containing schema files
packageName = 'com.codegen.graphqldgs' // The package name to use to generate sources
generateClient = true // Enable generating the type safe query API
}
```
- check build folder
- Input,visit http://127.0.0.1:10002/graphiql
```
{
shows {
id
title
releaseYear
}
}
------
{
shows(titleFilter: "Ozark") {
id
title
releaseYear
}
}
```
## c-scalar
- Input,visit http://127.0.0.1:10003/graphiql
```
{
shows {
id
title
releaseYear
price
dateTime
bigDecimal
uuid
}
}
```
## d-http
- Startup,visit http://127.0.0.1:10004/graphiql
- Input
```
{
show(people: {name: "zhangsan"}) {
id
name
}
shows(personList: [{name: "zhangsan"}]) {
id
name
}
}
------
{
showWithGood {
id
name
}
}
------
{
showWithGood(good: {name: "Car"}) {
id
name
}
}
------
mutation {
addRating(title: "title", stars: 100) {
avgStars
}
addRatingWithInput(input: {title: "title", stars: 200}) {
avgStars
}
}
------
mutation {
addRating(title: "title", stars: 100) {
avgStars
}
addRatingWithInput(input: {title: "hel", stars: 200}) {
avgStars
}
}
```
Use `Postman` to visit `Subscription`

## e-file
- Startup
- Input with `curl`
```
curl localhost:10005/graphql \
-F operations='{ "query": "mutation upload($file: Upload!) { upload(file: $file) }" , "variables": { "file": null } }' \
-F map='{ "0": ["variables.file"] }' \
-F 0=@1.png
------
curl localhost:10005/graphql \
-F operations='{ "query": "mutation addArtwork($file: Upload!) { addArtwork(file: $file) }" , "variables": { "file": null } }' \
-F map='{ "0": ["variables.file"] }' \
-F 0=@1.png
```
- Output
> Please check the project's `uploaded-images` folder
```
{"data":{"upload":true}}
------
{"data":{"addArtwork":true}}
```
## f-auth
- Startup,visit http://localhost:10006/graphiql
- Input
```
{
salary
}
------
{
salary
}
# In REQUEST HEADERS, input { "Authorization": "Basic aHI6aHI=" } (this is the hr username and password)
------
mutation {
updateSalary(salaryInput: {employeeId: "1", newSalary: "100"}) {
id
employeeId
newSalary
}
}
------
mutation {
updateSalary(salaryInput: {employeeId: "1", newSalary: "100"}) {
id
employeeId
newSalary
}
}
# In REQUEST HEADERS, input { "Authorization": "Basic aHI6aHI=" } (this is the hr username and password)
```
## g-error
- Startup,visit http://localhost:10007/graphiql
- Input
```
{
show(people: {name: "haha"}) {
id
name
}
}
------
{
show(people: {name: "zhangsan"}) {
id
name
}
}
------
{
getRating(id: "1") {
avgStars
}
}
```
## h-ut
> see `test` folder
- Run `DemoControllerTests` and `ShowDataFetcherTest` to check
## i-nplusone
- Startup,visit http://localhost:10009/graphiql
- Input
```
{
shows {
showId
title
reviews {
starRating
}
}
}
------
{
showsN {
id
title
releaseYear
artwork {
url
}
reviewsN {
username
starScore
submittedDate
}
}
}
```
## l-interfaceunion
- Startup then visit http://localhost:10012/graphiql
- interface input
```
{
movies {
__typename
... on ActionMovie {
title
nrOfExplosions
}
... on ScaryMovie {
title
gory
scareFactor
}
}
}
```
- union input
```
{
search {
__typename
... on Actor {
name
}
... on Series {
title
}
}
}
```
## m-dynamicschema
- Startup then visit http://localhost:10013/graphiql
- Input
```
query randomNumber {
randomNumber(bound: 10)
}
------
mutation createUser {
createUser(username: "hello", password: "world") {
id
username
password
}
}
```
## n-webflux
- Startup then visit http://localhost:10014/graphiql
- Input
```
query getUsers {
getUsers {
id
username
password
}
getUserById(id: 1) {
id
username
password
}
}
mutation createUser {
createUser(username: "Trudy", password: "Trudy") {
id
username
password
}
}
```
## o-metrics
visit http://localhost:10015/actuator/metrics to check output
### Step 1
Use docker-compose to start Grafana and Prometheus servers.
- First generate jar in `/build/libs` folder
- In the root folder
```
docker-compose up -d
```
### Step 2
Check the Prometheus server.
- Open http://localhost:9090
- Access status -> Targets, endpoints must be "UP"
### Step 3
Configure the Grafana.
- Open http://localhost:3000, user name and password are all `admin`
- Configure integration with Prometheus
- Access configuration
- Add data source
- Select Prometheus
- Use url "http://host.docker.internal:9090" and access with value "Server(default)"
- Configure dashboard
## p-gateway
### Step 1
Start `customer`,`name`,`profile` services
### Step 2
Start `Apollo Gateway`
```
npm install
node index.js
```
### Step 3
Visit http://localhost:4000
Variables is
```
{
"customerId": "1"
}
```
Query is
```
query Customer($customerId: String!) {
customer(customerId: $customerId) {
age
id
name {
firstName
fullName
lastName
middleName
prefix
}
profile {
email
phone
}
}
}
```
## x-kotlin
- Input
```
{
shows {
title
releaseYear
id
}
}
```
## y-bff
- Start up this module together with module `z-domain`, then visit http://localhost:20000/graphiql
- You can also visit http://localhost:20000/voyager
- Input
```
{
shows {
id
title
releaseYear
}
}
------
mutation {
addShow(input: {title: "title", releaseYear: 2022}) {
id
title
releaseYear
}
}
```
## z-domain
- Startup,visit http://localhost:20001/graphiql
- Startup,visit http://localhost:20001/voyager
- Input
```
{
shows {
id
title
releaseYear
}
}
------
mutation {
addShow(input: {title: "title", releaseYear: 2022}) {
id
title
releaseYear
}
}
```
| 0 |
traex/RetrofitExample | Example for one of my tutorials at http://blog.robinchutaux.com/blog/a-smart-way-to-use-retrofit/ | null | RetrofitExample
===============

Example for one of my tutorials at http://blog.robinchutaux.com/blog/a-smart-way-to-use-retrofit/
The [Retrofit library](http://square.github.io/retrofit/) is a type-safe REST client for Android and Java created by [Square Open Source.](http://square.github.io/) With this library you can call the web services of a REST API with POST, GET and more. This library is awesome and very useful, but you need a good architecture and good practices to use it to its full potential.
| 1 |
dmarczal/java_jdbc_dao_mvc_swing | An example of MVC + JAVA + SWING + DAO | null | null | 1 |
paulmandal/ATAK-Plugin-Example | An example ATAK plugin | null | # ATAK Plugin Example
This is an example plugin for ATAK that contains examples of:
* Listening to outgoing messages from ATAK and Toast their type to the screen
* Sending incoming messages to ATAK (in this case, a fake "Green HQ" map marker at lat: 0, lon: 0)
* A icon overlaid in the lower right corner of the screen that updates every 2 seconds
* A basic plugin UI with a Spinner
* Different icon types (map overlay, menu, settings menu)
| 1 |
jdmg94/react-native-webrtc-example | An example app for `react-native-webrtc` using React 0.60 or newer | null | # Basic React-Native-WebRTC example app
## Motivation
Real-time technologies are back in style. While they are fairly standard on Web platforms, React Native has had a steeper learning curve for WebRTC technologies, especially without Expo support for native modules.
I wanted to provide the most basic codebase possible as a starting point for developers looking into WebRTC technologies for React Native, using `react-native-webrtc` and `react-native@^0.60` with as little overhead as possible.
As on the Web, WebRTC requires a broker to help with the signaling between peers; this has also been included in the form of a thin `express`/`socket.io` server under the `./backend` folder.
>Of course this is only a local instance of the broker, so you will have to expose it using a service like [localTunnel](https://github.com/localtunnel/localtunnel) or [ngrok](https://ngrok.com/) so that two people on different networks can use the client app. Remember to update the socket address in the code if you do this.
## Usage
First you need to start the signaling server so we can handle the peer activity, you can do this by running `cd ./backend && npm install && npm start` on a terminal window located at the project's root.
Once you have the signaling server up, you need to launch the client app; the process is a little different for each platform. For the best DX you should use a real device.
### Android
The setup has been tweaked for `react-native^0.60`. If you want to replicate this in your own project, for Android, you should take a look at the following files to extrapolate the config:
- `./android/settings.gradle`
- `./android/gradle.properties`
- `./android/build.gradle`
- `./android/app/build.gradle`
- `./android/app/src/AndroidManifest.xml`
- `./android/app/src/main/java/com/basicwebrtcexample/MainApplication.java`
To run the client app, have your physical Android device connected and listed under `adb devices`; once the device is connected and trusted, run `npm run android`.
### iOS
For iOS most of the legwork is done with CocoaPods. If you want to extrapolate the config for your project, take a look at the following files:
- `./ios/Podfile`
- `./ios/basicwebrtcexample/Info.plist`
Once you have updated your config files, run `npx pod-install` at the root of the project.
To run the client app, run `npm start` in a separate terminal at the project root and have your iPhone connected and authorized on your Mac. Then open Xcode, select the workspace for your project, give the main project signing capabilities, select your iPhone in the device options, and hit run. After the debugger is installed you can close Xcode.
Copyleft: **Jose Munoz 2020**
| 1 |
pgilad/spring-boot-webflux-swagger-starter | An example project to illustrate how to document Spring Boot Webflux with Swagger2 | api-documentation demo reactive spring-boot swagger swagger2 webflux | # spring-boot-webflux-swagger-starter
> An example project to illustrate how to document Spring Boot Webflux with Swagger2
## Requirements
- Java 11
## Installation
```bash
$ git clone https://github.com/pgilad/spring-boot-webflux-swagger-starter.git
```
## Usage
```bash
$ gradle bootRun
```
Now open your favorite web browser (e.g. Chrome) to `http://localhost:8080/swagger-ui.html`, which is automatically
generated from the `HelloController` web-flux mapping.

## License
MIT © [Gilad Peleg](https://www.giladpeleg.com)
| 1 |
KieronQuinn/AmazfitSpringboardPluginExample | Example for creating custom springboard pages on the Amazfit Pace | null | # Amazfit Springboard Plugin Example
This project is an example for how to create a custom page for the default home screen (called "Springboard") on the Amazfit Pace
## Usage
You don't need to import this project and edit it from there. There are only a couple of important files and pieces of code:
### SpringboardPluginLib.jar (app/libs)
Disassembled code from the HmAlarmClock app, with all but the plugin code removed. [Download it](https://github.com/KieronQuinn/AmazfitSpringboardPluginExample/raw/master/app/libs/SpringboardPluginLib.jar), copy it to the libs folder of your project, and then include it like so:

### SpringboardPage.java (app/src/main/java/com/kieronquinn/app/springboardexample)
Example code for implementing a page. Copy this to your project, and edit it as you like. It's commented, so each method is labelled with what it does.
### AndroidManifest.xml (app/src/main)
**Do not simply copy this to your project**
Only the following section is required:
`<meta-data android:name="com.huami.watch.launcher.springboard.PASSAGER_TARGET" android:resource="@array/spring_depend" />`
Place this inside your application tags, as shown in the example in this project
### arrays.xml (app/src/main/res/values)
Copy this file to your project (or create the file and copy the contents), then edit the contents of the \<item> \</item> tags to point to your SpringboardPage class.
If you rename your SpringboardPage class, you **must** change it here also. Make sure the package name **and** component are correct here, or the page will not work.
### widget_blank.xml (app/src/main/res/layout)
You don't need to copy this file, you can create your own layout file and edit the SpringboardPage class accordingly
## Installation
Run your app as normal. If you created a project without an activity, you may need to use Build > Build APK and install it via adb
Now, the first time you install the app it will not immediately appear in the launcher. Either reboot the watch, or run `adb shell am force-stop com.huami.watch.launcher` to restart the launcher
After this it should appear as the last page. If it has, well done! If not, check you followed every step correctly (particularly the arrays.xml and AndroidManifest.xml ones). Still not working? [Post on the XDA thread](https://forum.xda-developers.com/smartwatch/amazfit/dev-create-custom-home-screen-pages-pace-t3751731)
## Moving the page
There's no built in way to move or disable the page on the watch or the Amazfit app. Luckily, [I've already got a solution for that](https://github.com/KieronQuinn/AmazfitSpringboardSettings)
| 1 |
cronn/cucumber-junit5-example | Example setup for Cucumber and JUnit 5 with Gradle | cucumber gradle java junit5 template-project | # Cucumber with JUnit5
This repository contains an example project that integrates [Cucumber](https://cucumber.io/) with [JUnit5](https://junit.org/junit5/). It is the same setup explained in the [blog post](https://www.blog.cronn.de/en/testing/2020/08/17/cucumber-junit5.html).
## Quick Start
```shell
$ git clone https://github.com/cronn/cucumber-junit5-example your-own-tests
$ cd your-own-tests
$ ./gradlew test
```
Gradle will execute all feature files which are located in the `src/test/resources/features` folder as specified in [RunAllCucumberTests](https://github.com/cronn/cucumber-junit5-example/blob/main/src/test/java/com/example/RunAllCucumberTests.java). In order to filter execution to just a subset of all features, use the `includeTags` property as in the following example. It uses [JUnit5 tag expressions](https://junit.org/junit5/docs/current/user-guide/#running-tests-tag-expressions):
```shell script
$ ./gradlew test --project-prop includeTags="first | awesome"
```
In order to ignore just a subset of features, use the `includeTags` property like this:
```shell script
$ ./gradlew test --project-prop includeTags="!second"
```
[build.gradle.kts](https://github.com/cronn/cucumber-junit5-example/blob/main/build.gradle.kts#L36-L43) uses `cucumber.execution.parallel.enabled` to enable parallel test execution by default. Additionally, it uses the `cucumber.plugin` option to write a reports file to `build/reports/cucumber.ndjson`, an execution timeline to `build/reports/timeline` and an HTML report to `build/reports/cucumber.html`. All Cucumber features/rules/examples/scenarios annotated with `@disabled` are filtered by default and are not executed. This project declares an extra dependency to [picocontainer](http://picocontainer.com/) in order to show dependency injection within tests - remove it in case you don't need it. The Gradle configuration is annotated to help you make changes for your own test setup, thus feel free to modify it!
[<img src="https://www.cronn.de/img/logo_name_rgb_1200x630.png" alt="cronn GmbH" width="200"/>](https://www.cronn.de/)
| 1 |
java-modularity/agenda-example | Building Modular Cloud Applications in Java - getting started example | null | agenda-example
==============
[Building Modular Cloud Applications in Java](http://shop.oreilly.com/product/0636920028086.do) - getting started example (chapter 3.)
To use the example,
- clone this repository,
- point Eclipse to the directory where you cloned the repository,
- import all projects using `File -> Import... -> Existing Projects in to Workspace`
- right click on `agenda -> demo.bndrun`, and choose `Run As -> BND OSGi Run Launcher`
- point your browser to [http://localhost:8080/agendaui/index.html](http://localhost:8080/agendaui/index.html). | 1 |
klevis/DigitRecognizer | Java Convolutional Neural Network example for Hand Writing Digit Recognition | convolutional-neural-networks deep-learning deeplearning4j java java-convolutional-neural-network java-machine-learning machine-learning machine-learning-algorithms mlib neural-network spark | # http://ramok.tech/machine-learning/
Java Digit Recognition Application
Handwritten digit recognition using simple neural networks with Spark MLlib and a deep convolutional neural network with DeepLearning4j.
Accuracy with the simple model is 97%, and 99.2% with the convolutional neural network.
For more please visit below posts:
http://ramok.tech/2017/11/29/digit-recognizer-with-neural-networks/
http://ramok.tech/2017/12/13/java-digit-recognizer-with-convolutional-neural-networks/
<p align="center">
<img src="https://i0.wp.com/ramok.tech/wp-content/uploads/2017/12/2017-12-14_01h00_37.jpg?resize=1024%2C537e" width="600"/>
</p>
Licensed under the EPL: https://www.eclipse.org/legal/epl-v10.html | 1 |
sv3ndk/stormRoomOccupancy | Example of basic Storm topology that updates DB persistent state | null | stormRoomOccupancy
==================
Basic Storm topology example that updates DB persistent state with correct error handling. The code is based on Storm 0.9.0.1, Cassandra 2.0.4 and Java 7.
The [first release](https://github.com/svendx4f/stormRoomOccupancy/releases/tag/v1.0.1) is explained in great detail in my blog post on [scalable real-time state update with Storm](http://svendvanderveken.wordpress.com/2013/07/30/scalable-real-time-state-update-with-storm/)
The current code is an update explained in my blog post on [Storm error handling](http://svendvanderveken.wordpress.com/2014/02/01/notes-on-storm-trident-error-handling)
In order to run this example, an instance of Cassandra with the following key space is required:
```
CREATE KEYSPACE EVENT_POC WITH replication = { 'class': 'SimpleStrategy', 'replication_factor': '1' } ;
```
(tables are re-created every time the topology is re-deployed)
You may need to edit this line in Deployer.java if your Nimbus is not reachable on that IP:
```
config.put("nimbus.host" , "192.168.33.10");
```
Then package the topology:
```
mvn package
```
And deploy it:
```
storm jar target/stormRoomOccupancy-0.0.2-SNAPSHOT-jar-with-dependencies.jar svend.storm.example.conference.Deployer
```
| 1 |
sunnygleason/j4-minimal | Minimal web application example using Embedded Jetty, Jersey, Guice, and Jackson | null | A minimal example of a REST API built with with Jersey, Jackson and Guice running on embedded Jetty
---------------------------------------------------------------------------------------------------
You can either run it as a main class in Eclipse, or build it with Maven and run it as an executable jar.
mvn package
java -jar target/minimal.jar
| 1 |
eljefe6a/UnoExample | MapReduce/Hadoop example that uses regular playing cards to show mapping and reducing. | null | Playing Card Example
==========
Hadoop MapReduce example that uses regular playing cards to explain how mapping and reducing works.
This is the example code for the first and third episodes of the [Hadoop MapReduce screencast](http://pragprog.com/screencasts/v-jamapr/processing-big-data-with-mapreduce).
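The card idea can be sketched with nothing but the JDK (no Hadoop cluster required; the card strings are illustrative): the map step emits a (suit, 1) pair per card, the shuffle groups the pairs by suit, and the reduce step sums the counts per key:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Stdlib-only sketch of map/shuffle/reduce over playing cards.
public class CardCount {

    static Map<String, Long> countBySuit(List<String> cards) {
        return cards.stream()                         // map: one record per card
                .map(card -> card.split(" of ")[1])   // emit the suit as the key
                .collect(Collectors.groupingBy(       // shuffle: group by key
                        suit -> suit,
                        Collectors.counting()));      // reduce: sum per key
    }

    public static void main(String[] args) {
        List<String> hand = List.of(
                "ace of spades", "two of hearts", "king of hearts");
        System.out.println(countBySuit(hand)); // e.g. {spades=1, hearts=2}
    }
}
```

In real Hadoop the same three steps are spread across a Mapper class, the framework's shuffle phase, and a Reducer class, but the data flow is the one shown here.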
Licence
======
Copyright 2013 Jesse Anderson
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
| 1 |
sylleryum/kafka-microservices-with-saga | Java (Spring) microservices example with Kafka and saga pattern | kafka microservices spring-boot | [](https://codecov.io/gh/sylleryum/kafka-microservices-with-saga)

# Microservices with Kafka and Saga pattern using Spring boot
This project simulates a system for processing orders (i.e., a purchase) of items (E.g.: an order of a fridge and a camera) which is constituted of 4 microservices: order, stock, payment and notification service.
## High level architecture:
<p align="center">
<img src="https://raw.githubusercontent.com/sylleryum/kafka-microservices-with-saga/main/resources/readme-images/architecture.png" alt="" width="50%"/>
</p>
**Note:** as the main objective of this project is to demonstrate Kafka, most microservice patterns are ignored, as are some best practices, for simplicity/readability’s sake (E.g.: transactional outbox and coding to the interface).
## Getting started / Installation:
### Option 1: Running locally
- Clone this repo.
- Run the docker compose file inside the “resources/docker files/Run project locally” directory (docker-compose up); if mongo-express fails to initialize, simply re-run it.
- Run all the microservices (any order of initialization is fine).
### Option 2: Running on Docker
- Simply run the docker compose file inside the “resources/docker files/Run project on docker” directory (docker-compose up); if mongo-express fails to initialize, simply re-run it.
## Instructions:
- Send orders through the order service’s endpoint /api/v1/order, specifying the number of orders to send and the number of items inside each order through the query params o (order) and i (item).
- E.g.: localhost:8080/api/v1/order?o=2&i=3 will send 2 orders, each order contains 3 items within itself.
- You can easily check the final result of each order through the notification service’s console, or check each topic through Kafdrop (localhost:9000/).
- You can change the expected order result (success/failure) and other configurations through shared.properties inside common module.
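The o/i fan-out from the instructions above can be sketched in plain Java (the `Order` record and item names here are illustrative, not the project's actual payload classes):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the ?o=2&i=3 fan-out: o orders, each carrying i items.
public class OrderFanOut {

    record Order(int orderId, List<String> items) {}

    static List<Order> buildOrders(int o, int i) {
        List<Order> orders = new ArrayList<>();
        for (int orderId = 1; orderId <= o; orderId++) {
            List<String> items = new ArrayList<>();
            for (int itemId = 1; itemId <= i; itemId++) {
                items.add("item-" + orderId + "-" + itemId);
            }
            orders.add(new Order(orderId, items));
        }
        return orders;
    }

    public static void main(String[] args) {
        // ?o=2&i=3 -> 2 orders with 3 items each
        List<Order> orders = buildOrders(2, 3);
        System.out.println(orders.size() + " orders, "
                + orders.get(0).items().size() + " items each");
    }
}
```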
## How it works:
Once a new order is received, the order service does the initial processing and sends a new event to Kafka:
<p align="center">
<img src="https://raw.githubusercontent.com/sylleryum/kafka-microservices-with-saga/main/resources/readme-images/step1.png" alt="" width="50%"/>
</p>
All microservices involved in the order will perform their corresponding operations and send a confirmation back to Kafka (success/failure):
<p align="center">
<img src="https://raw.githubusercontent.com/sylleryum/kafka-microservices-with-saga/main/resources/readme-images/step2.png" alt="" width="50%"/>
</p>
Order service then uses Kafka Streams to join all the confirmations received (inner join). If all services returned a success event, the order has been fully processed (order completed). If any service returns a failure message, order service then triggers a rollback event which will be processed by all other services.
Order service also sends the final order status to Kafka; notification service then simulates a notification message to the user informing them of the final status of their order:
<p align="center">
<img src="https://raw.githubusercontent.com/sylleryum/kafka-microservices-with-saga/main/resources/readme-images/step3.png" alt="" width="50%"/>
</p>
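The orchestrator's decision step described above can be sketched in plain Java (the project itself implements it with a time-windowed Kafka Streams inner join; the service names here are illustrative):

```java
import java.util.List;
import java.util.Map;

// Once a confirmation from every involved service has arrived, the order
// completes only if all of them succeeded; any failure triggers rollback.
public class SagaDecision {

    enum Status { COMPLETED, ROLLBACK, PENDING }

    static Status decide(Map<String, Boolean> confirmations, List<String> services) {
        if (!confirmations.keySet().containsAll(services)) {
            return Status.PENDING; // join window still open, keep waiting
        }
        boolean allOk = confirmations.values().stream().allMatch(ok -> ok);
        return allOk ? Status.COMPLETED : Status.ROLLBACK;
    }

    public static void main(String[] args) {
        List<String> services = List.of("stock", "payment");
        System.out.println(decide(Map.of("stock", true), services));                  // PENDING
        System.out.println(decide(Map.of("stock", true, "payment", true), services)); // COMPLETED
        System.out.println(decide(Map.of("stock", true, "payment", false), services));// ROLLBACK
    }
}
```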
## Configurations:
- If running locally, configurations of the microservices can be changed through shared.application located at common\src\resources\ (explanations of the relevant configurations are included in this file).
## Considerations regarding this project and best practices:
- Kafka may be tricky when it comes to handling duplications/idempotency proficiently. In this project the approach of [enabling the idempotent producer](https://docs.confluent.io/platform/current/installation/configuration/producer-configs.html#producerconfigs_enable.idempotence) was used. The consumer and its commit offset strategy should be considered as well. E.g.: [Idempotent Kafka Consumer](https://medium.com/techwasti/idempotent-kafka-consumer-442f9aec991e)
- This project uses 2 DBs, Postgres for the Order service and MongoDB for the Stock service; this is only to showcase microservices and Kafka with different DBs.
- The common module should be replaced in a real scenario by a better approach, such as the externalized configuration pattern.
- For simplicity’s sake, the Kafka producer/consumer use a shared entity located in the common module; in a real scenario Avro/schema registry (included in the docker-compose file) is advised.
- A caveat of using Kafka Streams and an inner join to process the results of an order being processed by the services is that it is time-windowed: if for any reason a service takes longer than the window time to answer an order, the orchestrator will never process the confirmation of the order. A few alternatives are to use individual listeners in the orchestrator, or an outer join plus a scheduled task to verify the order after the join window has closed.
- If a rollback is needed, the orchestrator (order service) sends an event (order) that is consumed by all services involved in processing the order, rather than individual rollback events (E.g.: a rollback event to the payment service), since usually at most 1 service will fail; because of this, all other services will have to roll back and therefore consume an event. | 1 |
lohitvijayarenu/netty-protobuf | simple example using netty and protobuf | null | null | 1 |
cloud-native-java/training | A collection of training courses and example code for Cloud Native Java | null | # Cloud Native Java: Training Materials
This repository contains various training materials that accompany the sections and chapters in _O'Reilly's Cloud Native Java: Building Resilient Systems with Spring Boot, Spring Cloud, and Cloud Foundry_.
To find more information on the book, please visit [cloudnativejava.io](http://www.cloudnativejava.io/).
## Getting Started
If you're looking for the source code and materials for _O'Reilly's Live Training_: [Building Microservices with Spring Boot, Spring Cloud, and Cloud Foundry](https://www.safaribooksonline.com/live-training/courses/building-microservices-with-spring-boot-spring-cloud-and-cloud-foundry/0636920151937/), then look no further!
First, clone this repository, and then navigate to the [microservices-online-training](https://github.com/cloud-native-java/training/tree/master/microservices-online-training) module for instructions on getting started with the example projects from the 2-day live online training. If you've somehow found this repository but are looking for the live training videos, [please visit Safari](https://www.safaribooksonline.com/search/?query=cloud%20native%20java&extended_publisher_data=true&highlight=true&is_academic_institution_account=false&source=user&include_assessments=false&include_case_studies=true&include_courses=true&include_orioles=true&include_playlists=true&formats=live%20online%20training&sort=relevance) to sign-up for upcoming live training courses for Cloud Native Java.
## Maintainers
This repository is maintained by the authors of Cloud Native Java. The best way to get at us is usually via Twitter. Please also feel free to use the [issue tracker](https://github.com/cloud-native-java/training/issues) to provide feedback on the materials. But really, it's faster if you yell at us nicely on Twitter (DMs are open).
- [Josh Long (@starbuxman)](https://www.twitter.com/starbuxman)
- [Kenny Bastani (@kennybastani)](https://www.twitter.com/kennybastani)
## License
This project is licensed under Apache License 2.0.
| 1 |
idugalic/reactive-company | Example of reactive web application. Java. Spring 5. Reactive Streams. Docker. | back-pressure elastic java mongodb reactive reactive-streams reactor resilent responsive spring tailable thymeleaf | # [projects](http://idugalic.github.io/projects)/reactive-company  [](https://gitpitch.com/idugalic/reactive-company/master?grs=github&t=white)
This project is intended to demonstrate best practices for building a reactive web application with Spring 5 platform.
## Table of Contents
* [Reactive programming and Reactive systems](#reactive-programming-and-reactive-systems)
* [Why now?](#why-now)
* [Spring WebFlux (web reactive) module](#spring-webflux-web-reactive-module)
* [Server side](#server-side)
* [Annotation based](#annotation-based)
* [Functional](#functional)
* [Client side](#client-side)
* [Spring Reactive data](#spring-reactive-data)
* [CI with Travis](#ci-with-travis)
* [Running instructions](#running-instructions)
* [Run the application by Maven:](#run-the-application-by-maven)
* [Run the application on Cloud Foundry](#run-the-application-on-cloud-foundry)
* [Run the application by Docker](#run-the-application-by-docker)
* [Manage docker swarm with Portainer](#manage-docker-swarm-with-portainer)
* [Manage docker swarm with CLI](#manage-docker-swarm-with-cli)
* [List docker services](#list-docker-services)
* [Scale docker services](#scale-docker-services)
* [Browse docker service logs](#browse-docker-service-logs)
* [Swarm mode load balancer](#swarm-mode-load-balancer)
* [Browse the application:](#browse-the-application)
* [Load testing with Gatling](#load-testing-with-gatling)
* [Log output](#log-output)
* [References and further reading](#references-and-further-reading)
## Reactive programming and Reactive systems
In plain terms reactive programming is about [non-blocking](http://www.reactivemanifesto.org/glossary#Non-Blocking) applications that are [asynchronous](http://www.reactivemanifesto.org/glossary#Asynchronous) and [message-driven](http://www.reactivemanifesto.org/glossary#Message-Driven) and require a small number of threads to [scale](http://www.reactivemanifesto.org/glossary#Scalability) vertically (i.e. within the JVM) rather than horizontally (i.e. through clustering).
A key aspect of reactive applications is the concept of backpressure which is a mechanism to ensure producers don’t overwhelm consumers. For example in a pipeline of reactive components extending from the database to the HTTP response when the HTTP connection is too slow the data repository can also slow down or stop completely until network capacity frees up.
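The backpressure mechanism can be demonstrated with the JDK's own Reactive Streams types (`java.util.concurrent.Flow`, Java 9+); this is a minimal sketch, not the Spring/Reactor pipeline used in this project:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

// The subscriber pulls one item at a time via request(1), so the
// publisher can never overwhelm it: that is backpressure.
public class BackpressureDemo {

    static List<Integer> consumeOneByOne(List<Integer> items) throws InterruptedException {
        List<Integer> received = new ArrayList<>();
        CountDownLatch done = new CountDownLatch(1);
        try (SubmissionPublisher<Integer> publisher = new SubmissionPublisher<>()) {
            publisher.subscribe(new Flow.Subscriber<>() {
                private Flow.Subscription subscription;

                public void onSubscribe(Flow.Subscription s) {
                    subscription = s;
                    s.request(1);               // demand exactly one item
                }

                public void onNext(Integer item) {
                    received.add(item);
                    subscription.request(1);    // ask for the next one
                }

                public void onError(Throwable t) { done.countDown(); }

                public void onComplete() { done.countDown(); }
            });
            items.forEach(publisher::submit);   // submit blocks if demand lags
        }                                       // close() signals onComplete
        done.await();
        return received;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(consumeOneByOne(List.of(1, 2, 3))); // [1, 2, 3]
    }
}
```

Reactor's `Flux` and `Mono` implement the same `request(n)` contract, just with a much richer operator set on top.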
Reactive programming also leads to a major shift from imperative to declarative async composition of logic. It is comparable to writing blocking code vs using the CompletableFuture from Java 8 to compose follow-up actions via lambda expressions.
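As a minimal illustration of that declarative style with plain `CompletableFuture` (the names are illustrative), follow-up steps are attached as lambdas instead of blocking between calls:

```java
import java.util.concurrent.CompletableFuture;

// Declarative async composition: each step is declared up front and runs
// when the previous one completes, without blocking in between.
public class ComposeDemo {

    static CompletableFuture<String> greet(String name) {
        return CompletableFuture.supplyAsync(() -> name)   // async "fetch"
                .thenApply(String::trim)                   // transform
                .thenApply(n -> "Hello, " + n + "!");      // compose follow-up
    }

    public static void main(String[] args) {
        System.out.println(greet("  Reactor ").join()); // Hello, Reactor!
    }
}
```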
For a longer introduction check the blog series [“Notes on Reactive Programming”](https://spring.io/blog/2016/06/07/notes-on-reactive-programming-part-i-the-reactive-landscape) by Dave Syer.
"We look at Reactive Programming as one of the methodologies or pieces of the puzzle for Reactive [Systems] as a broader term." Please read the ['Reactive Manifesto'](http://www.reactivemanifesto.org/) and ['Reactive programming vs. Reactive systems'](https://www.oreilly.com/ideas/reactive-programming-vs-reactive-systems) for more informations.
### Why now?
What is driving the rise of Reactive in Enterprise Java? Well, it’s not (all) just a technology fad — people jumping on the bandwagon with the shiny new toys. The driver is efficient resource utilization, or in other words, spending less money on servers and data centres. The promise of Reactive is that you can do more with less, specifically you can process higher loads with fewer threads. This is where the intersection of Reactive and non-blocking, asynchronous I/O comes to the foreground. For the right problem, the effects are dramatic. For the wrong problem, the effects might go into reverse (you actually make things worse). Also remember, even if you pick the right problem, there is no such thing as a free lunch, and Reactive doesn’t solve the problems for you, it just gives you a toolbox that you can use to implement solutions.
### Spring WebFlux (web reactive) module
Spring Framework 5 includes a new spring-webflux module. The module contains support for reactive HTTP and WebSocket clients as well as for reactive server web applications including REST, HTML browser, and WebSocket style interactions.
#### Server side
On the server-side WebFlux supports 2 distinct programming models:
- Annotation-based with @Controller and the other annotations supported also with Spring MVC
- Functional, Java 8 lambda style routing and handling
##### Annotation based
```java
@RestController
public class BlogPostController {
private final BlogPostRepository blogPostRepository;
public BlogPostController(BlogPostRepository blogPostRepository) {
this.blogPostRepository = blogPostRepository;
}
@PostMapping("/blogposts")
Mono<Void> create(@RequestBody Publisher<BlogPost> blogPostStream) {
return this.blogPostRepository.save(blogPostStream).then();
}
@GetMapping("/blogposts")
Flux<BlogPost> list() {
return this.blogPostRepository.findAll();
}
@GetMapping("/blogposts/{id}")
Mono<BlogPost> findById(@PathVariable String id) {
return this.blogPostRepository.findOne(id);
}
}
```
##### Functional
The functional programming model is not implemented within this application. I am not sure if it is possible to have both models in one application.
Both programming models are executed on the same reactive foundation that adapts non-blocking HTTP runtimes to the Reactive Streams API.
#### Client side
WebFlux includes a functional, reactive WebClient that offers a fully non-blocking and reactive alternative to the RestTemplate. It exposes network input and output as a reactive ClientHttpRequest and ClientHttpRespones where the body of the request and response is a Flux<DataBuffer> rather than an InputStream and OutputStream. In addition it supports the same reactive JSON, XML, and SSE serialization mechanism as on the server side so you can work with typed objects.
```java
@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = WebEnvironment.RANDOM_PORT)
public class ApplicationIntegrationTest {
WebTestClient webTestClient;
List<BlogPost> expectedBlogPosts;
List<Project> expectedProjects;
@Autowired
BlogPostRepository blogPostRepository;
@Autowired
ProjectRepository projectRepository;
@Before
public void setup() {
webTestClient = WebTestClient.bindToController(new BlogPostController(blogPostRepository), new ProjectController(projectRepository)).build();
expectedBlogPosts = blogPostRepository.findAll().collectList().block();
expectedProjects = projectRepository.findAll().collectList().block();
}
@Test
public void listAllBlogPostsIntegrationTest() {
this.webTestClient.get().uri("/blogposts")
.exchange()
.expectStatus().isOk()
.expectHeader().contentType(MediaType.APPLICATION_JSON_UTF8)
.expectBodyList(BlogPost.class).isEqualTo(expectedBlogPosts);
}
@Test
public void listAllProjectsIntegrationTest() {
this.webTestClient.get().uri("/projects")
.exchange()
.expectStatus().isOk()
.expectHeader().contentType(MediaType.APPLICATION_JSON_UTF8)
.expectBodyList(Project.class).isEqualTo(expectedProjects);
}
@Test
public void streamAllBlogPostsIntegrationTest() throws Exception {
FluxExchangeResult<BlogPost> result = this.webTestClient.get()
.uri("/blogposts")
.accept(TEXT_EVENT_STREAM)
.exchange()
.expectStatus().isOk()
.expectHeader().contentType(TEXT_EVENT_STREAM)
.returnResult(BlogPost.class);
StepVerifier.create(result.getResponseBody())
.expectNext(expectedBlogPosts.get(0), expectedBlogPosts.get(1))
.expectNextCount(1)
.consumeNextWith(blogPost -> assertThat(blogPost.getAuthorId(), endsWith("4")))
.thenCancel()
.verify();
}
...
}
```
Please note that webClient is requesting [Server-Sent Events](https://community.oracle.com/docs/DOC-982924) (text/event-stream).
We could stream individual JSON objects (application/stream+json) but that would not be a valid JSON document as a whole and a browser client has no way to consume a stream other than using Server-Sent Events or WebSocket.
### Spring Reactive data
Spring Data Kay M1 is the first release ever that comes with support for reactive data access. Its initial set of supported stores includes MongoDB, Apache Cassandra and Redis.
The repository programming model is the most high-level abstraction Spring Data users usually deal with. Repositories usually comprise a set of CRUD methods defined in a Spring Data provided interface plus domain-specific query methods.
In contrast to the traditional repository interfaces, a reactive repository uses reactive types as return types and can do so for parameter types, too.
```java
public interface BlogPostRepository extends ReactiveSortingRepository<BlogPost, String>{
Flux<BlogPost> findByTitle(Mono<String> title);
}
```
## CI with Travis
The application is build by [Travis](https://travis-ci.org/idugalic/reactive-company). [Pipeline](https://github.com/idugalic/reactive-company/blob/master/.travis.yml) is triggered on every push to master branch.
- Docker image is pushed to [Docker Hub](https://hub.docker.com/r/idugalic/reactive-company/)
## Running instructions
### Run the application by Maven:
This application uses an embedded Mongo database,
so you do not have to install and run a Mongo database before you run the application locally.
You can use a non-embedded version of Mongo by setting the scope of 'de.flapdoodle.embed.mongo' to 'test'.
In this case you have to install a Mongo server locally:
```bash
$ brew install mongodb
$ brew services start mongodb
```
Run it:
```bash
$ cd reactive-company
$ ./mvnw spring-boot:run
```
### Run the application on Cloud Foundry
Run application on local workstation with PCF Dev
- Download and install PCF: https://pivotal.io/platform/pcf-tutorials/getting-started-with-pivotal-cloud-foundry-dev/introduction
- Start the PCF Dev: $ cf dev start -m 8192
- Push the app to PCF Dev: $ ./mvnw cf:push
- Enjoy: http://reactive-company.local.pcfdev.io/
You can adopt any CI pipeline you have to deploy your application on any cloud foundry instance, for example:
```bash
mvn cf:push [-Dcf.appname] [-Dcf.path] [-Dcf.url] [-Dcf.instances] [-Dcf.memory] [-Dcf.no-start] -Dcf.target=https://api.run.pivotal.io
```
### Run the application by Docker
I am running Docker Community Edition, version: 17.05.0-ce-mac11 (Channel: edge).
A [swarm](https://docs.docker.com/engine/swarm/) is a cluster of Docker engines, or nodes, where you deploy services. The Docker Engine CLI and API include commands to manage swarm nodes (e.g., add or remove nodes), and deploy and orchestrate services across the swarm. By running the script below you will initialize a simple swarm with one node, and you will install the services:
- reactive-company
- mongodb (mongo:3.0.4)
```bash
$ cd reactive-company
$ ./docker-swarm.sh
```
#### Manage docker swarm with Portainer
Portainer is a simple management solution for Docker, and is really simple to deploy:
```bash
$ docker service create \
--name portainer \
--publish 9000:9000 \
--constraint 'node.role == manager' \
--mount type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock \
portainer/portainer \
-H unix:///var/run/docker.sock
```
Visit http://localhost:9000
#### Manage docker swarm with CLI
##### List docker services
```bash
$ docker service ls
```
##### Scale docker services
```bash
$ docker service scale stack_reactive-company=2
```
Now you have two tasks/containers running for this service.
##### Browse docker service logs
```bash
$ docker service logs stack_reactive-company -f
```
You will be able to determine what task/container handled the request.
#### Swarm mode load balancer
When using HTTP/1.1, the TCP connections are left open for reuse by default. The Docker swarm load balancer will not work as expected in this case: you will get routed to the same task of the service every time.
You can use the 'curl' command line tool (NOT a browser) to avoid this problem.
The Swarm load balancer is a basic Layer 4 (TCP) load balancer. Many applications require additional features, like these, to name just a few:
- SSL/TLS termination
- Content‑based routing (based, for example, on the URL or a header)
- Access control and authorization
- Rewrites and redirects
### Browse the application:
#### Index page
Open your browser and navigate to http://localhost:8080
The response is resolved by [HomeController.java](https://github.com/idugalic/reactive-company/blob/master/src/main/java/com/idugalic/web/HomeController.java) and home.html.
- Blog posts are fully resolved by the Publisher - Thymeleaf will NOT be executed as a part of the data flow
- Projects are fully resolved by the Publisher - Thymeleaf will NOT be executed as a part of the data flow
#### Server-Sent Events page
Open your browser and navigate to http://localhost:8080/stream
This view is resolved by [StreamController.java](https://github.com/idugalic/reactive-company/blob/master/src/main/java/com/idugalic/web/StreamController.java) and sse.html template.
- *Blog posts* are NOT fully resolved by the Publisher
- Thymeleaf will be executed as a part of the data flow
- These events will be rendered in HTML by Thymeleaf
```java
@GetMapping(value = "/stream/blog")
public String blog(final Model model) {
    final Flux<BlogPost> blogPostStream = this.blogPostRepository.findAll().log();
    model.addAttribute("blogPosts", new ReactiveDataDriverContextVariable(blogPostStream, 1000));
    return "sse :: #blogTableBody";
}
```
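The `sse :: #blogTableBody` result means Thymeleaf renders only the template fragment with that id as the `Flux` emits items. A template sketch (the `title` field is an assumption, not necessarily the project's actual model):

```html
<!-- sse.html (sketch): the controller streams rows into this fragment -->
<table>
  <tbody id="blogTableBody">
    <!-- each emitted BlogPost becomes one row as the Flux produces it -->
    <tr th:each="blogPost : ${blogPosts}">
      <td th:text="${blogPost.title}">title</td>
    </tr>
  </tbody>
</table>
```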
- *Projects* are NOT fully resolved by the Publisher
- Thymeleaf will be executed as a part of the data flow
- These events will be rendered in HTML by Thymeleaf
```java
@GetMapping(value = "/stream/project")
public String project(final Model model) {
    final Flux<Project> projectStream = this.projectRepository.findAll().log();
    model.addAttribute("projects", new ReactiveDataDriverContextVariable(projectStream, 1000));
    return "sse :: #projectTableBody";
}
```
- *Blog posts (tail)* are NOT fully resolved by the Publisher
- Thymeleaf will be executed as a part of the data flow
- These events will be rendered in JSON by Spring WebFlux (using Jackson)
- We are using a [Tailable Cursor](https://docs.mongodb.com/manual/core/tailable-cursors/) that remains open after the client exhausts the results in the initial cursor. Tailable cursors are conceptually equivalent to the tail Unix command with the -f option (i.e. with “follow” mode). After clients insert new additional documents into a capped collection, the tailable cursor will continue to retrieve documents. You may use a Tailable Cursor with [capped collections](https://docs.mongodb.com/manual/core/capped-collections/) only.
- If you add a new blog post to the database, it will be displayed on the page in the HTML table.
```java
@GetMapping("/tail/blogposts")
Flux<BlogPost> tail() {
    LOG.info("Received request: BlogPost - Tail");
    try {
        // Using a tailable cursor
        return this.blogPostRepository.findBy().log();
    } finally {
        LOG.info("Request processed: BlogPost - Tail");
    }
}
```
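Tailable cursors only work against capped collections, so the collection must be created as capped up front. A mongo shell sketch (the collection name and size here are assumptions, not the project's exact setup):

```javascript
// Create a capped, fixed-size collection so a tailable cursor can keep
// following new inserts (size is in bytes; the value is illustrative)
db.createCollection("blogPost", { capped: true, size: 1048576 })
```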
#### Blog posts (REST API):
```bash
$ curl http://localhost:8080/blogposts
```
or
```bash
$ curl -v -H "Accept: text/event-stream" http://localhost:8080/blogposts
```
#### Projects (REST API):
```bash
$ curl http://localhost:8080/projects
```
or
```bash
$ curl -v -H "Accept: text/event-stream" http://localhost:8080/projects
```
#### Blog posts - tail (REST API)
```bash
$ curl -v -H "Accept: text/event-stream" http://localhost:8080/tail/blogposts
```
## Load testing with Gatling
Run the application first (with Maven or Docker)
```bash
$ ./mvnw gatling:execute
```
By default, `src/main/test/scala/com/idugalic/RecordedSimulation.scala` will be run.
The reports will be available in the console and as HTML files in the `target/gatling/results` folder.
## Log output
A possible log output we could see is:

As we can see, the output of the controller method is evaluated after its execution, in a different thread too!
```java
@GetMapping("/blogposts")
Flux<BlogPost> list() {
    LOG.info("Received request: BlogPost - List");
    try {
        return this.blogPostRepository.findAll().log();
    } finally {
        LOG.info("Request processed: BlogPost - List");
    }
}
```
We can no longer think in terms of a linear execution model where one request is handled by one thread. Reactive streams are handled by many threads over their lifecycle. This complicates migration from the old MVC framework: we can no longer rely on thread affinity for things like the security context or transaction handling.
## Slides
<iframe width='770' height='515' src='https://gitpitch.com/idugalic/reactive-company/master?grs=github&t=white' frameborder='0' allowfullscreen></iframe>
## References and further reading
- http://www.reactivemanifesto.org/
- https://www.oreilly.com/ideas/reactive-programming-vs-reactive-systems
- http://www.lightbend.com/blog/the-basics-of-reactive-system-design-for-traditional-java-enterprises
- http://docs.spring.io/spring-framework/docs/5.0.0.BUILD-SNAPSHOT/spring-framework-reference/html/web-reactive.html
- https://spring.io/blog/2016/06/07/notes-on-reactive-programming-part-i-the-reactive-landscape
- https://spring.io/blog/2016/06/13/notes-on-reactive-programming-part-ii-writing-some-code
- http://www.ducons.com/blog/tests-and-thoughts-on-asynchronous-io-vs-multithreading
- https://www.ivankrizsan.se/2016/05/06/introduction-to-load-testing-with-gatling-part-4/
- https://dzone.com/articles/functional-amp-reactive-spring-along-with-netflix
- [asynchronous and non-blocking IO](http://blog.omega-prime.co.uk/?p=155)
- [Functional and Reactive Spring with Reactor and Netflix OSS](https://dzone.com/articles/functional-amp-reactive-spring-along-with-netflix)
- https://www.youtube.com/watch?v=rdgJ8fOxJhc
- https://speakerdeck.com/sdeleuze/functional-web-applications-with-spring-and-kotlin
| 1 |
Rapter1990/springbootmicroservicedailybuffer | Spring Cloud Example (API Gateway, Zipkin, Redis, Authentication, Config Server, Docker, Kubernetes ) | api-gateway config-server docker docker-compose eureka-server java jenkins jenkinsfile junit kubernetes microservice mysql postman-collection redis resillience4j services spring-boot spring-cloud spring-security zipkin | # Spring Boot Microservice Example (Eureka Server, Config Server, API Gateway, Services , Zipkin, Redis, Resilience4j, Docker, Kubernetes)
<img src="screenshots/springbootmicroservice_drawio.png" alt="Main Information" width="800" height="500">
# About the project
<ul style="list-style-type:disc">
    <li>This project is based on Spring Boot microservices with Docker and Kubernetes</li>
    <li>Users can register and log in through the auth service, with a user role (ADMIN or USER), via the API gateway</li>
    <li>Users can send requests to any service through the API gateway with a bearer token</li>
</ul>
The seven services listed below were devised within the scope of this project.
- Config Server
- Eureka Server
- API Gateway
- Auth Service
- Order Service
- Payment Service
- Product Service
### Docker Hub
<a href="https://hub.docker.com/search?q=noyandocker">Link</a>
### Git Backend for Config server
<a href="https://github.com/Rapter1990/springappconfig">Link</a>
### Explore Rest APIs
<table style="width:100%">
<tr>
<th>Method</th>
<th>Url</th>
<th>Description</th>
<th>Valid Request Body</th>
<th>Valid Request Params</th>
<th>Valid Request Params and Body</th>
</tr>
<tr>
<td>POST</td>
<td>authenticate/signup</td>
<td>Signup for User and Admin</td>
<td><a href="README.md#signup">Info</a></td>
<td></td>
<td></td>
</tr>
<tr>
<td>POST</td>
<td>authenticate/login</td>
<td>Login for User and Admin</td>
<td><a href="README.md#login">Info</a></td>
<td></td>
<td></td>
</tr>
<tr>
<td>POST</td>
<td>authenticate/refreshtoken</td>
<td>Refresh Token for User and Admin</td>
<td><a href="README.md#refreshtoken">Info</a></td>
<td></td>
<td></td>
</tr>
<tr>
<td>POST</td>
<td>/product</td>
<td>Add Product</td>
<td><a href="README.md#addproduct">Info</a></td>
<td></td>
<td></td>
</tr>
<tr>
<td>GET</td>
<td>/product/{product_id}</td>
<td>Get Product By Id</td>
<td></td>
<td></td>
<td><a href="README.md#getProductById">Info</a></td>
</tr>
<tr>
<td>PUT</td>
<td>/reduceQuantity/{product_id}?quantity={quantity_value}</td>
<td>Reduce Quantity of Product</td>
<td></td>
<td><a href="README.md#reduceQuantityOfProduct">Info</a></td>
<td></td>
</tr>
<tr>
<td>DELETE</td>
<td>/product/{product_id}</td>
<td>Delete Product By Id</td>
<td></td>
<td></td>
<td><a href="README.md#deleteProductById">Info</a></td>
</tr>
<tr>
<td>POST</td>
<td>/order/placeorder</td>
<td>Place Order</td>
<td><a href="README.md#placeorder">Info</a></td>
<td></td>
<td></td>
</tr>
<tr>
<td>GET</td>
<td>/order/{order_id}</td>
<td>Get Order By Id</td>
<td></td>
<td></td>
<td><a href="README.md#getOrderById">Info</a></td>
</tr>
<tr>
<td>GET</td>
<td>/payment/order/{order_id}</td>
<td>Get Payment Details by Order Id</td>
<td></td>
<td></td>
<td><a href="README.md#getPaymentDetailsByOrderId">Info</a></td>
</tr>
</table>
### Used Dependencies
* Core
* Spring
* Spring Boot
* Spring Boot Test (Junit)
* Spring Security
* Spring Web
* RestTemplate
* FeignClient
* Spring Data
* Spring Data JPA
* Spring Cloud
* Spring Cloud Gateway Server
* Spring Cloud Config Server
* Spring Cloud Config Client
* Netflix
* Eureka Server
* Eureka Client
* Database
* Mysql
* Redis
* Zipkin
* Docker
* Kubernetes
* Jenkins
* Junit
* Log4j2
## Valid Request Body
##### <a id="signup"> Signup for User and Admin
```
http://localhost:9090/authenticate/signup
{
"username" : "User",
"password" : "User",
"email" : "user@refreshtoken.com",
"roles" : [
"ROLE_USER"
]
}
http://localhost:9090/authenticate/signup
{
"username" : "admin1",
"password" : "admin1",
"email" : "admin1@refreshtoken.com",
"roles" : [
"ROLE_ADMIN"
]
}
```
##### <a id="login"> Login for User and Admin
```
http://localhost:9090/authenticate/login
{
"username" : "User",
"password" : "User"
}
http://localhost:9090/authenticate/login
{
"username" : "UserAdmin",
"password" : "UserAdmin"
}
```
##### <a id="refreshtoken"> Refresh Token for User and Admin
```
http://localhost:9090/authenticate/refreshtoken
{
"refreshToken" : ""
}
```
##### <a id="addproduct"> Add Product
```
http://localhost:9090/product
{
"name" : "Product 1",
"price" : 100,
"quantity" : 1
}
Bearer Token : User Token
```
##### <a id="placeorder"> Place Order
```
http://localhost:9090/order/placeorder
{
"productId" : 1,
"totalAmount" : 100,
"quantity" : 1,
"paymentMode" : "CASH"
}
Bearer Token : User Token
```
## Valid Request Params
##### <a id="reduceQuantityOfProduct">Reduce Quantity of Product
```
http://localhost:9090/product/reduceQuantity/1?quantity=1
Bearer Token : User Token
```
## Valid Request Params and Body
##### <a id="getProductById">Get Product By Id
```
http://localhost:9090/product/{productId}
Bearer Token : User Token
```
##### <a id="deleteProductById">Delete Product By Id
```
http://localhost:9090/product/{productId}
Bearer Token : Admin Token
```
##### <a id="getOrderById">Get Order By Id
```
http://localhost:9090/order/{order_id}
Bearer Token : User Token
```
##### <a id="getPaymentDetailsByOrderId">Get Payment Details by Order Id
```
http://localhost:9090/payment/order/{order_id}
Bearer Token : User Token
```
### 🔨 Run the App
<b>Local</b>
<b>1 )</b> Download the project from this link `https://github.com/Rapter1990/springbootmicroservicedailybuffer`
<b>2 )</b> Go to the project's home directory : `cd springbootmicroservicedailybuffer`
<b>3 )</b> Run <b>Service Registry (Eureka Server)</b>
<b>4 )</b> Run <b>config server</b>
<b>5 )</b> Run <b>zipkin</b> and <b>redis</b> with the <b>Docker</b> commands shown below
```
docker run -d -p 9411:9411 openzipkin/zipkin
docker run -d --name redis -p 6379:6379 redis
```
<b>6 )</b> Run <b>api gateway</b>
<b>7 )</b> Run other services (<b>auth-service</b>, <b>orderservice</b>, <b>paymentservice</b> and lastly <b>productservice</b>)
<b>Docker</b>
<b>1 )</b> Install <b>Docker Desktop</b>. Here is the installation <b>link</b> : https://docs.docker.com/docker-for-windows/install/
<b>2 )</b> Build <b>jar</b> file for all services shown below
<table style="width:100%">
<tr>
<th>Service</th>
<th>Command</th>
</tr>
<tr>
<td>service-registry</td>
<td>mvn clean install</td>
</tr>
<tr>
<td>configserver</td>
<td>mvn clean install</td>
</tr>
<tr>
<td>apigateway</td>
<td>mvn clean install -DskipTests</td>
</tr>
<tr>
<td>auth-service</td>
<td>mvn clean install -DskipTests</td>
</tr>
<tr>
<td>orderservice</td>
<td>mvn clean install -DskipTests</td>
</tr>
<tr>
<td>productservice</td>
<td>mvn clean install -DskipTests</td>
</tr>
<tr>
<td>paymentservice</td>
<td>mvn clean install -DskipTests</td>
</tr>
</table>
<b>3 )</b> Build all <b>images</b> and push to <b>Docker Hub</b>
```
1 ) service-registry
- docker build -t microservicedailybuffer/serviceregistry:0.0.1 .
- docker tag microservicedailybuffer/serviceregistry:0.0.1 noyandocker/serviceregistry
- docker push noyandocker/serviceregistry
2 ) configserver
- docker build -t microservicedailybuffer/configserver:0.0.1 .
- docker tag microservicedailybuffer/configserver:0.0.1 noyandocker/configserver
- docker push noyandocker/configserver
3 ) api-gateway
- docker build -t microservicedailybuffer/apigateway:0.0.1 .
- docker tag microservicedailybuffer/apigateway:0.0.1 noyandocker/apigateway
- docker push noyandocker/apigateway
4 ) auth-service
- docker build -t microservicedailybuffer/authservice:0.0.1 .
- docker tag microservicedailybuffer/authservice:0.0.1 noyandocker/authservice
- docker push noyandocker/authservice
5 ) productservice
- docker build -t microservicedailybuffer/productservice:0.0.1 .
- docker tag microservicedailybuffer/productservice:0.0.1 noyandocker/productservice
- docker push noyandocker/productservice
6 ) orderservice
- docker build -t microservicedailybuffer/orderservice:0.0.1 .
- docker tag microservicedailybuffer/orderservice:0.0.1 noyandocker/orderservice
- docker push noyandocker/orderservice
7 ) paymentservice
- docker build -t microservicedailybuffer/paymentservice:0.0.1 .
- docker tag microservicedailybuffer/paymentservice:0.0.1 noyandocker/paymentservice
- docker push noyandocker/paymentservice
```
<b>4 )</b> Run all <b>Containers</b> through this command shown below under main folder
```
docker-compose up -d
```
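The compose file ships with the repository; a sketch of its likely shape, where the exposed ports and `depends_on` ordering are assumptions rather than the project's exact file (the 9090 gateway port matches the endpoints above, and 8761 is Eureka's default):

```yaml
version: "3"
services:
  serviceregistry:
    image: noyandocker/serviceregistry
    ports:
      - "8761:8761"
  configserver:
    image: noyandocker/configserver
    depends_on:
      - serviceregistry
  zipkin:
    image: openzipkin/zipkin
    ports:
      - "9411:9411"
  redis:
    image: redis
    ports:
      - "6379:6379"
  apigateway:
    image: noyandocker/apigateway
    ports:
      - "9090:9090"
    depends_on:
      - configserver
```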
<b>5 )</b> Send request to any service by using request collections under <b>postman_collection</b>
<b>Kubernetes</b>
<b>1 )</b> Install <b>minikube</b> by following this link https://minikube.sigs.k8s.io/docs/start/
<b>2 )</b> Open <b>command prompt</b> and install <b>kubectl</b> through this command shown below
```
minikube kubectl --
```
<b>3 )</b> Start <b>minikube</b> through this command shown below.
```
minikube start
```
<b>4 )</b> Open <b>minikube dashboard</b> through this command shown below.
```
minikube dashboard
```
<b>5 )</b> Run all <b>images</b> coming from Docker hub on Kubernetes through this command shown below.
```
kubectl apply -f k8s
```
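The `k8s` folder holds the manifests applied above. A sketch of what one deployment/service pair in it might look like; the replica count, labels, and ports are assumptions, though `cloud-gateway-svc` matches the service name exposed through minikube later:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cloud-gateway-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cloud-gateway-app
  template:
    metadata:
      labels:
        app: cloud-gateway-app
    spec:
      containers:
        - name: cloud-gateway-app
          image: noyandocker/apigateway
          ports:
            - containerPort: 9090
---
apiVersion: v1
kind: Service
metadata:
  name: cloud-gateway-svc
spec:
  type: LoadBalancer
  selector:
    app: cloud-gateway-app
  ports:
    - port: 80
      targetPort: 9090
```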
<b>6 )</b> Show all information about images running on <b>Kubernetes</b> through this command
```
kubectl get all
```
<b>7 )</b> Show all <b>services</b> running on Kubernetes through this command
```
kubectl get services
```
<b>8 )</b> Show <b>eureka server</b> on Kubernetes through this command
```
minikube service eureka-lb
```
<b>9 )</b> Show <b>api gateway</b> on Kubernetes through this command
```
minikube service cloud-gateway-svc
```
<b>10 )</b> Copy <b>IP address</b> and Replace <b>it</b> with <b>localhost</b> of the <b>endpoints</b> defined in <b>postman collection</b>
<b>Jenkins</b>
<b>1 )</b> Download <b>Jenkins</b> from this link https://hub.docker.com/r/jenkins/jenkins
<b>2 )</b> Run <b>Jenkins</b> through this command shown below
```
docker run -p 8080:8080 -p 50000:50000 --restart=on-failure jenkins/jenkins:lts-jdk11
```
<b>3 )</b> Install <b>Jenkins</b> and define a <b>username</b> and <b>password</b>
<b>4 )</b> Click <i>New Item</i> and create a pipeline to run the Jenkinsfile
<b>5 )</b> Run the <b>pipeline</b>
### Screenshots
<details>
<summary>Click here to show the screenshot of project</summary>
<p> Docker Desktop to show all running containers </p>
<img src ="screenshots/docker_1.PNG">
<p> Docker Hub </p>
<img src ="screenshots/docker_2.PNG">
<p> Kubernetes Dashboard </p>
<img src ="screenshots/kubernetes_screenshot.PNG">
<p> Jenkins Figure 1 </p>
<img src ="screenshots/jenkins_1.PNG">
<p> Jenkins Figure 2 </p>
<img src ="screenshots/jenkins_2.PNG">
</details>
| 1 |
TimTinkers/Palm | A series of examples studying the new game development capabilities ERC-721 objects enable. | null | # Palm
Palm is the continuation of my work on [Galah](https://github.com/TimTinkers/Galah) to explore new and unique game development capabilities that integrating with the Ethereum blockchain can bring. Through the specific use of the [ERC-721](http://erc721.org/) standard for non-fungible assets, Palm considers just what can be accomplished with globally-available trustless game state.
<p float="left">
<img width="285" height="285" src="Media/PalmCockatoo.jpg"/>
<img height="285" src="Media/GalahArchitecture.PNG"/>
</p>
In keeping with the precedent set by Galah, I've named the project after a large Australian parrot.<sup>1</sup>
## Motivation
The multibillion-dollar video game industry is increasingly adopting in-game purchases as an additional revenue source. Such in-game purchases are dubbed "microtransactions" and require that players spend real-world money to unlock in-game content.<sup>2</sup> Microtransactions are often criticized by players and have resulted in public relations disasters for the companies which implement them.<sup>3</sup> Using the Ethereum blockchain, this project demonstrates a revision to the microtransaction model where players can purchase and truly own their in-game content.<sup>4,5</sup>
Ethereum is one of several blockchains popular among developers for its ability to execute useful code. With Ethereum, this is done using developer-defined "smart contracts." A developing standard in Ethereum is the ERC-721 "non-fungible token" to unify how contracts represent unique, individually-owned metadata. This standard enables a common "CryptoObject" where developers can store any information their applications use. Ethereum users can take irrevocable ownership of these objects and freely trade them with one another.
This project demonstrates techniques by which the [Unreal Engine](https://www.unrealengine.com/en-US/blog), a popular tool for developing video games, can interface with Ethereum contracts. Now developers can use the CryptoObject contracts to represent, sell, and interact with their in-game content. The controversial microtransaction model is altered: players no longer pay to just unlock content restricted to a single game. Instead, they attain real ownership. Player-owned content can be exchanged with others or used in multiple games.
**Palm demonstrates how Ethereum empowers the free trade of online game content.**
## Exchange Trust Model
Typically, real-time video games simulate their world using a fixed simulation timestep known as the "tick rate." For example, Valve Corporation's popular multiplayer first-person shooter _Counter-Strike: Source_ supports servers with a tick rate of 66Hz.<sup>6</sup> That is, the game servers update state 66 times per second. Game logic, physics simulations, and player input signals are all processed in frequent, discrete time quanta. These state updates are decoupled from the client-side rendering frame rate, allowing for smooth rendering to be maintained across a variety of tick rates.
Real-time games can update their state more frequently than transaction times for Ethereum can currently support. The all-time peak transaction rate for the Ethereum network was 15.6 transactions per second.<sup>7</sup> Even games with less frequent tick rates like Epic Games' _Fortnite_ outpace this peak transaction rate.<sup>8</sup>
It is currently infeasible for a real-time game to track and update its state directly on Ethereum, even if its tick rate was dramatically reduced. The time it takes to interact with Ethereum and wait for a miner to commit a state update transaction to the chain is variable. A fixed tick rate is not currently possible to maintain, which makes processing logic, physics, and input much more difficult. Lastly, operating the game server would require constantly burning gas for small, short-lived updates and would be very costly.
Clearly, traditional servers are more appropriate than Ethereum for handling the frequent game state updates required to simulate a multiplayer game. What role then, if any, can Ethereum play in video games?
||
|:-:|
|A hybrid trust model similar to a cryptocurrency exchange can overcome many scalability issues.|
The contracts, web server, and interface used in the following demonstration are available in the [GameExchangeContract](https://github.com/TimTinkers/Palm/tree/master/GameExchangeContract) folder of this repository. The interface shown above is a simple page using [web3.js](https://github.com/ethereum/web3.js/) to read state from and interact with a deployed instance of my [GameExchange](https://github.com/TimTinkers/Palm/blob/master/GameExchangeContract/contracts/GameExchange.sol) contract.
The solution that Palm explores is a hybrid trust model where players can opt into and out of object modification from a centralized authority under the control of a game's developers. Instead of a game interacting with a player's on-chain objects in real time, the game can track state changes off-chain on a traditional server. Updates are only committed to the blockchain periodically.
This model is very similar to how large cryptocurrency exchanges operate: when users hold cryptocoins on an exchange, they typically don't own them on-chain. Instead, the off-chain cryptocoin accounting is centralized entirely on the exchange's servers. This model suffers from centralization in that users don't actually fully own their coins until withdrawing from the exchange to another wallet. However, the model benefits from being able to update its off-chain reckoning of state quicker and cheaper than interacting with the blockchain would allow.
When the player opts an object into modification, as they are shown doing above, they are consciously trusting the game authority to manage state updates to that object appropriately. A malicious or faulty game authority could manipulate the metadata of the object such that it destroys whatever value the object might have held. A malicious cryptocurrency exchange could steal coins in much the same fashion. When the player opts an object out of modification, they lock its state such that not even the game authority can manipulate it.
||
|:-:|
|Palm's example game is a simple shooting gallery where the player's high-scoring gun is tokenized.|
To demonstrate this model in action, Palm includes a simple game which tracks a player's high scores per gun as ERC-721 objects. The game includes a client built in the Unreal Engine and a separate Java server for interacting with the exchange contracts. The Unreal Engine client assets are available in the [TargetShootProject](https://github.com/TimTinkers/Palm/tree/master/TargetShootProject) folder of this repository. The Java server is available in the [TargetShootServer](https://github.com/TimTinkers/Palm/tree/master/TargetShootServer) folder.
The player is locked to a small shooting area and given a gun. The gun displays its all-time high score, which is recorded in a corresponding ERC-721 record. After the player shoots the red cube to trigger the start of a match, they have 30 seconds to shoot as many red popup figures as they can while avoiding green figures. The server tracks the player's score and updates the gun's display when new high scores are achieved.
After 30 seconds, the round ends and the server sees if the player has set a new record. Only then is the player's gun object, if they have opted for object modification, updated with the new high score. During the course of a match, all communication is directly between the Unreal Engine and the Java server. The gas and time costs of transacting with the blockchain are avoided until the player is done playing.
<p align="center">
<img src="Media/new_highscore.PNG"/>
</p>
Taking a look back at the web interface, we can see that the server authority has modified the player's gun object to include the new high score. While this simple game trusts the client, in practice the game server would exist [separately from the client](https://gafferongames.com/post/what_every_programmer_needs_to_know_about_game_networking/) as a remote authority. The game authority would modify the player's gun object from a separate machine with separately signed transactions, preventing players from cheating.<sup>9</sup>
||
|:-:|
|The previous high score is retrievable from Ethereum and can be modified by the server when needed.|
The example above shows a player entering the game again for another match. Their previous high score persisted on the blockchain between their play attempts. In this gameplay clip, once the player surpasses their old high score, the gun object begins updating with the score value tracked on the remote centralized server. When the match ends, the newer high score is committed to the blockchain.
||
|:-:|
|Showing the even-higher high score from the second round, as well as the user opting-out.|
The player, satisfied with the high score of 19, chooses to opt out of object modification. This is a precautionary step to prevent the game authority from altering the player's high score on that gun object. In practice, because players must pay a small gas cost in transacting with Ethereum to opt into and out of object modification, likely player behavior will be to just fully trust the game authority to behave and remain opted in at all times.
The player might also be opting out in order to trade their gun object to someone. Exchanging objects actually requires the object to be opted out of modification. Palm has made this design decision in order to prevent sold objects from being modified without the buyer's consent; players can only transfer objects which are opted-out of modification. This model allows the buyer to know for certain that whatever object they purchase will be theirs in exactly the same state it was sold in. The edge-case where a buyer sees their recently-purchased item change because the seller is still playing with it has been handled.
## Using Objects in Multiple Games
One interesting use case of blockchain-based objects in games is the ability to seamlessly share objects between multiple games. One could imagine a situation where a player's [$28,000 of virtual hats](https://www.pcgamesn.com/tf2/28000-team-fortress-2-backpack) could be worn by characters across many different games. Player ownership of these collectable items becomes far more tangible: not only can they be made certifiably unique, but the objects also won't disappear even if the game they come from does. Their value can outlive the reason why they were originally purchased, and objects can find fresh life in newer titles.
Blockchain-backed ERC-721 objects can also be used to represent a player's identity and statistics as they move from one game to another. Maybe a player's skill in _Counter-Strike_ automatically entitles them to a higher rank in the next _Battlefield_ game. This raises an especially important point: the ERC-721 objects can be used by multiple games even if they don't have the same development team or same access to the modification authority. That is, every developer has read-access to a game's objects and can use that in their own game, even if they are unable to write to the object.
||
|:-:|
|An entirely different game without modification authority can read objects from other games.|
In the [GuessingGame](https://github.com/TimTinkers/Palm/tree/master/GuessingGame) folder, Palm includes a simple Java game which communicates with the deployed GameExchange contract. The guessing game generates a random number between 0 and 100 and the player has a limited number of guesses to find the number. The player is given notice if they guess near the actual number.
The guessing game is not a modifying authority to the TargetShootProject's gun objects. The guessing game does, however, use its read access to pull the player's high score from one of their player-specified owned gun objects. The number of guesses a player is given is equal to the high score they were able to achieve with their chosen guessing gun. While this is an extremely simple example, it does show what kind of interesting cross-game interactions can emerge from using this standard.
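The hint logic described above can be sketched as a small pure function. This is a hypothetical reconstruction, not the project's actual code; the class and method names, and the "near" threshold of 5, are assumptions:

```java
// Hypothetical sketch of the guessing game's hint logic (names assumed).
public class GuessHint {

    // Returns a hint for a guess against the secret target in [0, 100].
    // A guess within 5 of the target counts as "near" (threshold assumed).
    public static String hint(int guess, int target) {
        if (guess == target) return "correct";
        if (Math.abs(guess - target) <= 5) return "near";
        return guess < target ? "higher" : "lower";
    }

    public static void main(String[] args) {
        System.out.println(hint(50, 52)); // prints "near"
    }
}
```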
## Trustless Economy
Given how commonplace microtransactions are in modern games, it is clear that game developers want to monetize their in-game economies to produce an additional source of revenue. Developers want to do so in a manner which is guaranteed to be tamper-proof.
In-game currencies are prone to abuse when players exploit unknown bugs. Electronic Arts' failure to secure the in-game currency of their _FIFA_ series allowed it to be freely duplicated by players. One player, Ricky Miller, was actually prosecuted and pleaded guilty to conspiracy to commit fraud after [duplicating $16M worth of FIFA coins](https://www.theregister.co.uk/2017/05/02/video_game_hacker_probation/).<sup>11</sup> In the face of concerted effort by players to exploit bugs in a game's trading and currency system, developers are incentivized to use a blockchain like Ethereum for its proven security.
Another concern for game developers where valuable virtual items are involved is legal liability surrounding what players choose to do with those objects on your platform. Valve Corporation, for example, briefly had to deal with lawsuits regarding [illegal skin gambling](https://esportsobserver.com/class-action-lawsuit-blaming-valve-illegal-skin-gambling-refiled-district-court/) on their platform. In their popular game _Counter-Strike: Global Offensive_, players can decorate their weapons with colorful "skins." Some of these skins are extremely rare and valuable. Players were using the in-game trading functionality to gamble valuable skins on the outcome of professional _Counter-Strike_ matches.<sup>12</sup> Valve's liability concerns were how responsible they were for the illegal activity of players using their trade platform.
For developers using a public blockchain like Ethereum as the platform for executing all transfers of in-game objects between players, the liability concerns seem diminished _(Tim Clancy is not a lawyer)_. If players choose to gamble with the ERC-721 records from your game, you have no way to stop them. They could build out their own infrastructure on Ethereum and the gambling behavior would exist in a format that you provably have no control over.
## Interactions in the Trustless Economy
This section of the project observes the possible interactions between ERC-721 object exchanges for two competing games. It deals specifically with two instances of the GameExchange contract deployed live to the Ropsten test network, ["GameExchange"](https://ropsten.etherscan.io/address/0x5e469871e80474e231af5c252471b6d6817fc990) and ["RivalExchange"](https://ropsten.etherscan.io/address/0x09099905e4f5e8383ee33b843eeea014be4f8037). The destruction of objects from one exchange in facilitating the creation of objects on another is handled by the [deployed "SwapAndBurn"](https://ropsten.etherscan.io/address/0x6e6af08a1fa2fd0837dbdd01448c8ec36f63ec29) contract whose source is available [here](https://github.com/TimTinkers/Palm/blob/master/GameExchangeContract/contracts/SwapAndBurn.sol). The goal is to demonstrate how developers from one exchange can interfere with objects on another exchange.
In this scenario, developers from team A operate "GameExchange" and serve objects for their game. Developers from team B set up "RivalExchange," another ERC-721 object registry, and want to entice users away from team A's game. To that end, team B deploys the "SwapAndBurn" contract. This contract allows a player on team A's "GameExchange" to destroy one of their objects in return for a free object on team B's "RivalExchange." Team B hopes that by encouraging team A's players to destroy their objects, they can disrupt the gameplay or economy surrounding team A's game.
||||
|:-:|:-:|:-:|
|Two exchanges.|Minting object.|Confirming mint using MetaMask: I am locally the exchange authority.|
The stills above show the process of requesting that a new object be minted on the first "GameExchange" exchange. After specifying the desired metadata, issuing the minting transaction is handled by [MetaMask](https://metamask.io/). Currently I am just freely minting an object for myself, but it is conceivable that developers would put this sort of functionality behind a storefront whereby the player is only given a newly-minted object after paying.
|||
|:-:|:-:|
|Object minted.|Requesting approval on the object with a token ID of 2.|
After the object has been successfully minted, we can see the new listing under the "GameExchange" as a token with ID of 2 and metadata of "Second test asset!" Its name is colored black, as opposed to red for the previously-existing token 0, because it has not been approved for the [deployed "SwapAndBurn"](https://ropsten.etherscan.io/address/0x6e6af08a1fa2fd0837dbdd01448c8ec36f63ec29) contract to take ownership of. The user who owns token 2 must specifically request that "SwapAndBurn" be approved to take ownership later.
This explicit approval step is necessary because the "SwapAndBurn" contract must be allowed to take an object away from the player and destroy it before issuing the player a new token on "RivalExchange." The player can verify through inspection of the SwapAndBurn contract that there is no risk to this approval step: there is no way by which the SwapAndBurn contract can take ownership over the player's object without also issuing them their new "RivalExchange" object.
||||
|:-:|:-:|:-:|
|Approval successful.|Requesting a trade on the object with a token ID of 2.|Trade successful.|
The approval step succeeded, as indicated by token 2's listing turning red. Next, the player requests the trade from "SwapAndBurn." This function takes ownership of token 2 as it was approved to do, burns token 2, and then issues a new object to the player. The final still shows that this is successfully the case: "GameExchange" has permanently lost an object while the player has redeemed the newly-minted "RivalExchange" token with ID of 1.
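The three steps of this walkthrough (approve, take ownership and burn, mint on the rival exchange) can be modeled in a few lines of plain code. The following is a hypothetical Java sketch of the bookkeeping only, not the deployed Solidity contracts; the class name, the map-based registries, and the string addresses are all illustrative assumptions.

```java
import java.util.HashMap;
import java.util.Map;

public class SwapAndBurnSketch {
    // Each exchange maps tokenId -> owner; a missing key means the token is burned.
    static Map<Integer, String> gameExchange = new HashMap<>();
    static Map<Integer, String> rivalExchange = new HashMap<>();
    // tokenId -> address approved to take ownership (the swap contract).
    static Map<Integer, String> approvals = new HashMap<>();
    static int nextRivalTokenId = 0;

    // Step 1: the owner explicitly approves the swap contract for one token.
    static void approve(String owner, int tokenId, String operator) {
        if (!owner.equals(gameExchange.get(tokenId)))
            throw new IllegalStateException("only the owner can approve");
        approvals.put(tokenId, operator);
    }

    // Step 2: the swap contract burns the approved token on one exchange and
    // mints a replacement on the other, atomically from the player's view.
    static int swapAndBurn(String operator, int tokenId) {
        if (!operator.equals(approvals.get(tokenId)))
            throw new IllegalStateException("not approved");
        String player = gameExchange.remove(tokenId);  // burn on GameExchange
        approvals.remove(tokenId);
        int newId = nextRivalTokenId++;
        rivalExchange.put(newId, player);              // mint on RivalExchange
        return newId;
    }

    public static void main(String[] args) {
        gameExchange.put(2, "player");
        approve("player", 2, "swapContract");
        int newId = swapAndBurn("swapContract", 2);
        System.out.println("burned token 2, minted rival token " + newId);
    }
}
```

The key property the player relies on is visible in `swapAndBurn`: there is no code path that removes the old token without issuing the new one.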
## Conclusion
There are countless creative ways to apply smart contracts and the Ethereum blockchain to the realm of video game development. Blockchains provide developers with an avenue for persistent data storage, secure trade platforms, and a new way to monetize content. The use of a standardized ERC-721 object enables unprecedented cross-game interactions: games can interact with each other's data even among different development teams. Ethereum is empowering game developers, and Palm has barely scratched the surface.
## References
The following resources are important references for the information presented in this project:
1. The Palm Cockatoo image is the work of [Reg Mckenna](https://www.flickr.com/photos/whiskymac/), released [CC BY 2.0](https://creativecommons.org/licenses/by/2.0/).
2. Davidovici-Nora, M.: Paid and Free Digital Business Models Innovations in the Video Game Industry. Institut Mines-Telecom, 83-102 (2014).
3. [Most Downvoted Comment](https://www.pcgamer.com/the-most-downvoted-comment-in-reddit-history-is-now-a-star-wars-battlefront-2-mod/), [Electronic Arts](https://www.ea.com/) received massive microtransaction backlash in this [reddit thread](https://www.reddit.com/r/StarWarsBattlefront/comments/7cff0b/seriously_i_paid_80_to_have_vader_locked/).
4. Olsson, B., Sidenblom, L.: Business Models for Video Games. Department of Informatics, Lund University, 5-50 (2010).
5. Švelch, J.: Playing with and against Microtransactions. The Evolution and Social Impact of Video Game Economics. p. 102-120 Lexington Books, London (2017).
6. [Valve Networking Guide](https://developer.valvesoftware.com/wiki/Source_Multiplayer_Networking), an excellent primer on multiplayer game networking with specifics for Valve titles.
7. [Ethereum Transaction Rate](https://etherscan.io/chart/tx), as of 4/25/2018 the rate peaked at 1,349,890 transactions on 1/4/2018.
8. [Battle Royale Tick Rates](https://www.youtube.com/watch?v=u0dWDFDUF8s), an analysis of the tick rates in several multiplayer games of the battle royale genre.
9. [Gaffer On Games](https://gafferongames.com/post/what_every_programmer_needs_to_know_about_game_networking/), an authority on the importance and details of authoritative game networking.
10. [Expensive Team Fortress 2 Backpack](https://www.pcgamesn.com/tf2/28000-team-fortress-2-backpack), real money accrues in game objects, like $28k of virtual hats.
11. [FIFA Coin Heist](https://www.theregister.co.uk/2017/05/02/video_game_hacker_probation/), a group reverse-engineered enough of FIFA to exploit a currency-duplication bug.
12. [Valve Skin Gambling](https://esportsobserver.com/class-action-lawsuit-blaming-valve-illegal-skin-gambling-refiled-district-court/), Valve faced some lawsuits regarding players illegally gambling using their in-game objects.
## Supporting Projects
I'd like to thank the following guides, tools, and projects which greatly supported the development of Palm:
- [ERC-721](http://erc721.org/), a good primer on the developing Ethereum standard.
- [OpenZeppelin](https://github.com/OpenZeppelin/openzeppelin-solidity), community-produced Solidity developer resources with a proven history of success.
- [Unreal Engine 4](https://www.unrealengine.com/en-US/blog), the Unreal Engine is a free high-quality game and physics engine.
- [Ethereum JavaScript API](https://github.com/ethereum/web3.js/), web3.js provides the wrappers needed to integrate with smart contracts on the web interface.
- [The Online ABI Encoding Tool by HashEx](https://abi.hashex.org/), to convert constructor parameters to ABI encoding for verification.
- [Etherscan](https://etherscan.io/), for providing an easy interface to validate deployment and contract state.
- [JavaScript Promises in Web3](http://shawntabrizi.com/crypto/making-web3-js-work-asynchronously-javascript-promises-await/), this article provides an overview on converting Web3 calls to Promises seamlessly.
- [MetaMask](https://metamask.io/), a browser add-on which lets one interact with Ethereum without a full node.
- [Truffle](https://github.com/trufflesuite/truffle), a development environment, testing framework and asset pipeline for Ethereum.
- [Infura](https://infura.io/), a gateway for cloud-hosted Ethereum nodes.
- [Web3j](https://web3j.io/), a lightweight, reactive, type-safe Java and Android library for integrating with nodes on Ethereum blockchains.
- [LowEntry Socket Connection](https://www.unrealengine.com/marketplace/low-entry-socket-connection), a useful networking plugin for the Unreal Engine with native Java integration.
- [json-simple](https://github.com/fangyidong/json-simple), a simple and fast JSON parser.
- [solc-js](https://github.com/ethereum/solc-js), JavaScript Solidity compiler bindings used here to create the Java contract wrapper.
- The generous support of the Berkman Fund for Undergraduate Innovation at Penn Engineering.
| 0 |
PauloGaldo/telegram-bot | Spring Boot Java Example for the Telegram Bot API | null | null | 1 |
fuinorg/ddd-cqrs-4-java-example | Example Java DDD/CQRS/Event Sourcing microservices with Quarkus, Spring Boot and EventStore from Greg Young. | null | # ddd-cqrs-4-java-example
Example Java DDD/CQRS/Event Sourcing microservices with [Quarkus](https://quarkus.io/), [Spring Boot](https://spring.io/projects/spring-boot/) and the [EventStore](https://eventstore.org/) from Greg Young. The code uses the lightweight [ddd-4-java](https://github.com/fuinorg/ddd-4-java) and [cqrs-4-java](https://github.com/fuinorg/cqrs-4-java) libraries. No special framework is used except the well-known JEE/Spring standards.
[](https://openjdk.java.net/projects/jdk/17/)
[](https://opensource.org/licenses/Apache-2.0)
[](https://github.com/fuinorg/ddd-cqrs-4-java-example/actions/workflows/maven.yml)
## Background
This application shows how to implement [DDD](https://en.wikipedia.org/wiki/Domain-driven_design), [CQRS](https://en.wikipedia.org/wiki/Command%E2%80%93query_separation) and [Event Sourcing](https://martinfowler.com/eaaDev/EventSourcing.html) without a DDD/CQRS framework. It uses just a few small libraries in addition to standard web application frameworks like [Quarkus](https://quarkus.io/) and [Spring Boot](https://spring.io/projects/spring-boot/).
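To make the event-sourcing part concrete, here is a minimal, framework-free sketch of an event-sourced aggregate. All class and event names are hypothetical and do not come from this repository: commands validate and record events, state is mutated only by applying events, so an aggregate can be rebuilt by replaying its stream.

```java
import java.util.ArrayList;
import java.util.List;

public class PersonAggregate {
    // Recorded domain events; in a real system these would be appended to the EventStore.
    private final List<String> uncommittedEvents = new ArrayList<>();
    private String name;
    private boolean deleted;

    // Command handler: validates the command, then records an event.
    public void create(String personName) {
        if (name != null) throw new IllegalStateException("already created");
        apply("PersonCreatedEvent:" + personName);
    }

    public void delete() {
        if (deleted) throw new IllegalStateException("already deleted");
        apply("PersonDeletedEvent");
    }

    // Event handler: the ONLY place state is mutated, so replaying the
    // stored events reconstructs the aggregate deterministically.
    private void apply(String event) {
        if (event.startsWith("PersonCreatedEvent:")) {
            name = event.substring("PersonCreatedEvent:".length());
        } else if (event.equals("PersonDeletedEvent")) {
            deleted = true;
        }
        uncommittedEvents.add(event);
    }

    // Rebuild an aggregate from its event stream (e.g. loaded from a PERSON-<uuid> stream).
    public static PersonAggregate replay(List<String> history) {
        PersonAggregate aggregate = new PersonAggregate();
        history.forEach(aggregate::apply);
        aggregate.uncommittedEvents.clear(); // replayed events are already committed
        return aggregate;
    }

    public String getName() { return name; }
    public boolean isDeleted() { return deleted; }
    public List<String> getUncommittedEvents() { return uncommittedEvents; }

    public static void main(String[] args) {
        PersonAggregate person = new PersonAggregate();
        person.create("Peter Parker");
        System.out.println(person.getName() + " " + person.getUncommittedEvents());
    }
}
```

The command side appends the uncommitted events to the store; the query side subscribes to them and updates its own read model, which is exactly the split demonstrated by the command and query services below.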
If you are new to the DDD/CQRS topic, you can use these mindmaps to find out more:
- [DDD Mindmap](https://www.mindmeister.com/de/177813182/ddd)
- [CQRS Mindmap](https://www.mindmeister.com/de/177815383/cqrs)
Here is an overview of how such an application looks like:
[](doc/cqrs-overview.png)
## Components
- **[Shared](shared)** - Common code for all demo applications (commands, events, value objects and utilities).
- **[Aggregates](aggregates)** - DDD related code for all demo applications (aggregates, entities and business exceptions).
- **[Quarkus](quarkus)** - Two microservices (Command & Query) based on [Quarkus](https://quarkus.io/).
- **[Spring Boot](spring-boot)** - Two microservices (Command & Query) based on [Spring Boot](https://spring.io/projects/spring-boot/).
## Getting started
The following instructions are tested on Linux (Ubuntu 22).
**CAUTION:** Building and running on Windows will require some (small) changes.
### Prerequisites
Make sure you have the following tools installed/configured:
* [git](https://git-scm.com/) (VCS)
* [Docker CE](https://docs.docker.com/engine/installation/linux/docker-ce/ubuntu/)
* [Docker Compose](https://docs.docker.com/compose/)
* *OPTIONAL* [GraalVM](https://www.graalvm.org/)
* Hostname should be set in /etc/hosts (See [Find and Change Your Hostname in Ubuntu](https://helpdeskgeek.com/linux-tips/find-and-change-your-hostname-in-ubuntu/) for more information)
### Clone and install project
1. Clone the git repository
```
git clone https://github.com/fuinorg/ddd-cqrs-4-java-example.git
```
2. Change into the project's directory and run a Maven build
```
cd ddd-cqrs-4-java-example
./mvnw install
```
Be patient - this may take a while (~5 minutes), as all dependencies and some Docker images must be downloaded, and some integration tests will be executed.
### Start Event Store and Maria DB (Console window 1)
Change into the project's directory and run Docker Compose
```
cd ddd-cqrs-4-java-example
docker-compose up
```
### Start command / query implementations
Start one query service and then one command service.
You can mix Quarkus & Spring Boot if you want to!
#### Quarkus Microservices
##### Quarkus Query Service (Console window 2)
1. Start the Quarkus query service:
```
cd ddd-cqrs-4-java-example/quarkus/query
./mvnw quarkus:dev
```
2. Opening [http://localhost:8080/](http://localhost:8080/) should show the query welcome page
For more details see [quarkus/query](quarkus/query).
##### Quarkus Command Service (Console window 3)
1. Start the Quarkus command service:
```
cd ddd-cqrs-4-java-example/quarkus/command
./mvnw quarkus:dev
```
2. Opening [http://localhost:8081/](http://localhost:8081/) should show the command welcome page
For more details see [quarkus/command](quarkus/command).
#### Spring Boot Microservices
##### Spring Boot Query Service (Console window 2)
1. Start the Spring Boot query service:
```
cd ddd-cqrs-4-java-example/spring-boot/query
./mvnw spring-boot:run
```
2. Opening [http://localhost:8080/](http://localhost:8080/) should show the query welcome page
For more details see [spring-boot/query](spring-boot/query).
##### Spring Boot Command Service (Console window 3)
1. Start the Spring Boot command service:
```
cd ddd-cqrs-4-java-example/spring-boot/command
./mvnw spring-boot:run
```
2. Opening [http://localhost:8081/](http://localhost:8081/) should show the command welcome page
For more details see [spring-boot/command](spring-boot/command).
### Verify projection and query data
1. Open [http://localhost:2113/](http://localhost:2113/) to access the event store UI (User: admin / Password: changeit)
You should see a projection named "qry-person-stream" when you click on "Projections" in the top menu.
2. Opening [http://localhost:8080/persons](http://localhost:8080/persons) should show an empty JSON array
### Execute some create commands (Console window 4)
Change into the demo directory and execute the command using cURL (See [shell script](demo/create-persons.sh) and JSON files with commands in [demo](demo))
```
cd ddd-cqrs-4-java-example/demo
./create-persons.sh
```
Command service (Console window 3) should show something like
```
Update aggregate: id=PERSON 954177c4-aeb7-4d1e-b6d7-3e02fe9432cb, version=-1, nextVersion=0
Update aggregate: id=PERSON 568df38c-fdc3-4f60-81aa-d3cce9ebfd7b, version=-1, nextVersion=0
Update aggregate: id=PERSON 84565d62-115e-4502-b7c9-38ad69c64b05, version=-1, nextVersion=0
```
Query service (Console window 2) should show something like
```
Handle PersonCreatedEvent: Person 'Harry Osborn' (954177c4-aeb7-4d1e-b6d7-3e02fe9432cb) was created
Handle PersonCreatedEvent: Person 'Mary Jane Watson' (568df38c-fdc3-4f60-81aa-d3cce9ebfd7b) was created
Handle PersonCreatedEvent: Person 'Peter Parker' (84565d62-115e-4502-b7c9-38ad69c64b05) was created
```
### Verify the query data was updated
1. Refreshing [http://localhost:8080/persons](http://localhost:8080/persons) should show
```json
[
{
"id": "568df38c-fdc3-4f60-81aa-d3cce9ebfd7b",
"name": "Mary Jane Watson"
},
{
"id": "84565d62-115e-4502-b7c9-38ad69c64b05",
"name": "Peter Parker"
},
{
"id": "954177c4-aeb7-4d1e-b6d7-3e02fe9432cb",
"name": "Harry Osborn"
}
]
```
2. Opening [http://localhost:8080/persons/84565d62-115e-4502-b7c9-38ad69c64b05](http://localhost:8080/persons/84565d62-115e-4502-b7c9-38ad69c64b05) should show
```json
{"id":"84565d62-115e-4502-b7c9-38ad69c64b05","name":"Peter Parker"}
```
3. The event-sourced data of the person aggregate can be found in a stream named [PERSON-84565d62-115e-4502-b7c9-38ad69c64b05](http://localhost:2113/web/index.html#/streams/PERSON-84565d62-115e-4502-b7c9-38ad69c64b05)
### Execute a delete command (Console window 4)
Change into the demo directory and execute the command using cURL (See [shell script](demo/delete-harry-osborn.sh) and JSON files with commands in [demo](demo))
```
cd ddd-cqrs-4-java-example/demo
./delete-harry-osborn.sh
```
### Verify the query data was updated
1. Refreshing [http://localhost:8080/persons](http://localhost:8080/persons) should show
```json
[
{
"id": "568df38c-fdc3-4f60-81aa-d3cce9ebfd7b",
"name": "Mary Jane Watson"
},
{
"id": "84565d62-115e-4502-b7c9-38ad69c64b05",
"name": "Peter Parker"
}
]
```
"Harry Osborn" should no longer be present in the list.
### Stop Event Store and Maria DB and clean up
1. Stop Docker Compose (Ubuntu shortcut = ctrl c)
2. Remove Docker Compose container
```
docker-compose rm
```
| 1 |
alblue/com.packtpub.e4 | Code samples for the "Eclipse Plugin Development by Example: Beginner's Guide" book (ISBN 978-1782160328) | null | Eclipse Plugin Development by Example: Beginner's Guide
=======================================================
This repository contains source code for the Packt Publishing book
"Eclipse Plugin Development by Example: Beginner's Guide". Tags and
branches are available for both editions.
https://www.amazon.co.uk/s/ref=dp_byline_sr_book_1?ie=UTF8&text=Dr+Alex+Blewitt&search-alias=books-uk&field-author=Dr+Alex+Blewitt&sort=relevancerank
Second Edition
--------------
* ISBN-10: 1783980699
* ISBN-13: 978-1-78398-069-7
*Chapters*
Chapter 1: [Creating your first Plug-in](https://github.com/alblue/com.packtpub.e4/tree/edition2/chapter1)
Chapter 2: [Creating Views with SWT](https://github.com/alblue/com.packtpub.e4/tree/edition2/chapter2)
Chapter 3: [Creating JFace Viewers](https://github.com/alblue/com.packtpub.e4/tree/edition2/chapter3)
Chapter 4: [Interacting with the User](https://github.com/alblue/com.packtpub.e4/tree/edition2/chapter4)
Chapter 5: [Working with Preferences](https://github.com/alblue/com.packtpub.e4/tree/edition2/chapter5)
Chapter 6: [Working with Resources](https://github.com/alblue/com.packtpub.e4/tree/edition2/chapter6)
Chapter 7: [Creating Eclipse 4 Applications](https://github.com/alblue/com.packtpub.e4/tree/edition2/chapter7)
Chapter 8: [Migrating to Eclipse 4](https://github.com/alblue/com.packtpub.e4/tree/edition2/chapter8)
Chapter 9: [Styling Eclipse 4 Applications](https://github.com/alblue/com.packtpub.e4/tree/edition2/chapter9)
Chapter 10: [Creating Features, Update Sites, Applications and Products](https://github.com/alblue/com.packtpub.e4/tree/edition2/chapter10)
Chapter 11: [Automated Testing of Plug-ins](https://github.com/alblue/com.packtpub.e4/tree/edition2/chapter11)
Chapter 12: [Automated Builds with Tycho](https://github.com/alblue/com.packtpub.e4/tree/edition2/chapter12)
Chapter 13: [Contributing to Eclipse](https://github.com/alblue/com.packtpub.e4/tree/edition2/chapter13)
First edition
-------------
* ISBN-10: 1782160329
* ISBN-13: 978-1-78216-032-8
Contents
--------
Chapter 1: [Creating your first Plug-in](https://github.com/alblue/com.packtpub.e4/tree/edition1/chapter1)
Chapter 2: [Creating Views with SWT](https://github.com/alblue/com.packtpub.e4/tree/edition1/chapter2)
Chapter 3: [Creating JFace Viewers](https://github.com/alblue/com.packtpub.e4/tree/edition1/chapter3)
Chapter 4: [Interacting with the User](https://github.com/alblue/com.packtpub.e4/tree/edition1/chapter4)
Chapter 5: [Storing Preferences and Settings](https://github.com/alblue/com.packtpub.e4/tree/edition1/chapter5)
Chapter 6: [Working with Resources](https://github.com/alblue/com.packtpub.e4/tree/edition1/chapter6)
Chapter 7: [Understanding the Eclipse 4 Model](https://github.com/alblue/com.packtpub.e4/tree/edition1/chapter7)
Chapter 8: [Creating Features, Update Sites, Applications and Products](https://github.com/alblue/com.packtpub.e4/tree/edition1/chapter8)
Chapter 9: [Automated Testing of Plug-ins](https://github.com/alblue/com.packtpub.e4/tree/edition1/chapter9)
Chapter 10: [Automated Builds with Tycho](https://github.com/alblue/com.packtpub.e4/tree/edition1/chapter10)
Contact
-------
Follow me on Twitter @alblue, or mail alex.blewitt@gmail.com. My blog
is at https://alblue.bandlem.com/
LICENSE
-------
Code examples are licensed under the Eclipse Public License, version 1.0
as contained in the LICENSE.html file
| 0 |
blundell/SimpleInAppPurchaseV3 | Simple In App Purchase V3 - an alternative to the Google Example | null | SimpleInAppPurchaseV3
=====================
Simple In App Purchase V3 - an alternative to the Google Example
This uses the Google Helper Service, but it has been slightly modified after I found what I deemed a bug.
The code is incorporated in this project under the android package.
http://developer.android.com/google/play/billing/billing_integrate.html
*My old tutorial is now DEPRECATED* http://blog.blundell-apps.com/simple-inapp-billing-payment/
| 1 |
berndruecker/camunda-7-springboot-amqp-microservice-cloud-example | Simple example using Camunda and Spring Boot to define a simple microservice communicating via AMQP, fully unit tested and deployable in the cloud | null | [](https://img.shields.io/badge/Compatible%20with-Camunda%20Platform%207-26d07c)
# Camunda Spring Boot example including REST and AMQP, automated tested and deployable in the cloud
This example shows:
- How to setup Camunda, Spring Boot and various test frameworks correctly in order to work. It can be used as a copy & paste template.
- How to use AMQP and REST in your processes the Spring way.
- How to write proper scenario tests that check whether your process impacts the outside world as you expect (and not tests that merely verify that Camunda wrote a proper workflow engine).
- How this type of application can be easily deployed in the cloud (Pivotal Web Services as an example).
The business example is a very simple order fulfillment microservice (motivated by [the flowing retail example](https://blog.bernd-ruecker.com/flowing-retail-demonstrating-aspects-of-microservices-events-and-their-flow-with-concrete-source-7f3abdd40e53)):

- It is triggered via REST (it could just as well have been an AMQP message or any other trigger; REST simply makes demo requests easy)
- Calls a REST service (actually, the REST service is directly implemented here for simplicity, as I did not want to rely on an external URL)
- Sends an AMQP message and waits for a response. For simplicity, the request message will be directly treated as the response
# Embedded engine
With Camunda it is possible to run the engine as part of your application or microservice. This is called an [embedded engine](https://docs.camunda.org/manual/latest/introduction/architecture/#embedded-process-engine). This is especially helpful in microservice architectures: if you develop in Java, the engine is simply a library that helps you define flows with persistent state and handle subsequent requirements like timeout handling, retrying, compensation, and so on. See [Why service collaboration needs choreography AND orchestration](https://blog.bernd-ruecker.com/why-service-collaboration-needs-choreography-and-orchestration-239c4f9700fa) for background reading.

# Walkthrough as screencast
This video gives a very quick walk through the example (15 minutes):
<a href="http://www.youtube.com/watch?feature=player_embedded&v=XimowIZLWD8" target="_blank"><img src="http://img.youtube.com/vi/XimowIZLWD8/0.jpg" alt="Walkthrough" width="240" height="180" border="10" /></a>
# Project setup
The project uses
- Spring Boot
- Camunda
- [camunda-bpm-spring-boot-starter](https://github.com/camunda/camunda-bpm-spring-boot-starter/)
- [camunda-bpm-assert-scenario](https://github.com/camunda/camunda-bpm-assert-scenario/)
- [camunda-bpm-assert](https://github.com/camunda/camunda-bpm-assert/)
- [camunda-bpm-process-test-coverage](https://github.com/camunda/camunda-bpm-process-test-coverage/)
- H2 for testing (in-memory) and running it locally (file based)
- PostgreSQL/ElephantDB as cloud storage on Pivotal Web Services
Please have a look at the [pom.xml](pom.xml) for details. Also note the [Application.java](src/main/java/com/camunda/demo/springboot/Application.java) as Spring Boot starter class.
# Configuration of Camunda
With Spring Boot Camunda gets auto-configured using defaults. You can easily change the configuration by providing classes according to the [docs](https://camunda.github.io/camunda-bpm-spring-boot-starter/docs/2.1.2/index.html#_process_engine_configuration). In this example you can see
- [HistoryConfiguration](src/main/java/com/camunda/demo/springboot/conf/CamundaEngineHistoryConfiguration.java) that tells Camunda to save all historic data and audit logs.
- [IdGenerator](src/main/java/com/camunda/demo/springboot/conf/CamundaIdGeneratorConfiguration.java) so that Camunda uses string UUIDs instead of database-generated ones, which avoids deadlock risks in cluster environments.
- [Plugin to write some events to sysout](src/main/java/com/camunda/demo/springboot/conf/plugin/SendEventListener). This plugin registers a listener to get notified when new workflow instances are started or existing ones are ended. In this codebase it just prints a line on the console, but it would be easy to push the event to some central tracing system. The cool thing: such a plugin could be packaged in its own Maven dependency; as soon as it is on the classpath it will be activated and influence the core engine.
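The UUID-based id generation mentioned in the IdGenerator bullet above is conceptually tiny. Here is a minimal sketch (a hypothetical class, assuming an interface with a single `getNextId()` method, which is the shape of Camunda's `IdGenerator`):

```java
import java.util.UUID;

public class UuidGenerator {
    // Returns a globally unique string id. Unlike a database sequence,
    // no two cluster nodes ever contend for the same counter row, which
    // avoids the deadlock risk mentioned above.
    public String getNextId() {
        return UUID.randomUUID().toString();
    }

    public static void main(String[] args) {
        UuidGenerator generator = new UuidGenerator();
        System.out.println(generator.getNextId());
    }
}
```

In the real configuration class this generator would simply be registered on the process engine configuration instead of the default database id generator.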
# Using Camunda Enterprise Edition
The example uses the community edition to allow for a quick start. It is easy to switch dependencies to use the Enterprise Edition as you can see in [this commit](commit/724e3db5f09f1743445c78d84e28d2fa5c0b6005). Just make sure you can connect to the [Camunda Nexus](https://docs.camunda.org/get-started/apache-maven/#camunda-nexus) using your enterprise credentials.
# Testing
One extremely interesting piece is the JUnit test case, which does a complete run through the process, including all Java code attached to the process, but without sending any real AMQP message or REST request. The timeout of a waiting period in the process is also simulated.
```java
StartingByStarter starter = Scenario.run(orderProcess) //
.startBy(() -> {
return orderRestController.placeOrder(orderId, 547);
});
// expect the charge for retrieving payments to be created correctly and return a dummy transactionId
mockRestServer
.expect(requestTo("http://api.example.org:80/payment/charges")) //
.andExpect(method(HttpMethod.POST))
.andExpect(jsonPath("amount").value("547"))
.andRespond(withSuccess("{\"transactionId\": \"12345\"}", MediaType.APPLICATION_JSON));
when(orderProcess.waitsAtReceiveTask("ReceiveTask_WaitForGoodsShipped")).thenReturn((messageSubscription) -> {
amqpReceiver.handleGoodsShippedEvent(orderId, "0815");
});
when(orderProcess.waitsAtTimerIntermediateEvent(anyString())).thenReturn((processInstance) -> {
processInstance.defer("PT10M", () -> {fail("Timer should have fired in the meanwhile");});
});
// OK - everything prepared - let's go
Scenario scenario = starter.execute();
mockRestServer.verify();
// and verify that some things happened
assertThat(scenario.instance(orderProcess)).variables().containsEntry(ProcessConstants.VARIABLE_paymentTransactionId, "12345");
assertThat(scenario.instance(orderProcess)).variables().containsEntry(ProcessConstants.VAR_NAME_shipmentId, "0815");
{
ArgumentCaptor<Message> argument = ArgumentCaptor.forClass(Message.class);
verify(rabbitTemplate, times(1)).convertAndSend(eq("shipping"), eq("createShipment"), argument.capture());
assertEquals(orderId, argument.getValue());
}
verify(orderProcess).hasFinished("EndEvent_OrderShipped");
```
Refer to the [OrderProcessTest.java](src/main/java/com/camunda/demo/springboot/OrderProcessTest.java) for all details. Note that the test generates a graphical report:

# Get started
In order to get started just
* Clone or download this example
* Maven build (this also runs the test cases)
```shell
mvn clean install
```
* Install [RabbitMQ](http://rabbitmq.com/) and start it up
* Run microservice via Java:
```shell
java -jar target/camunda-spring-boot-amqp-microservice-cloud-example-0.0.1-SNAPSHOT.jar
```
Now you can access:
* [Camunda web applications](http://localhost:8080/)
* [REST API for new orders](http://localhost:8080/order)
```
curl --request POST -F 'orderId=1' -F 'amount=500' http://localhost:8080/order
```
* [RabbitMQ Management Console](http://localhost:15672/)
Of course you can also use your favorite IDE.
# Cloud deployment on Pivotal Web Services
You can easily deploy a Spring Boot application to various cloud providers, as you get a fat jar runnable on every JVM.
And using the [Spring Cloud Connectors](http://cloud.spring.io/spring-cloud-connectors/) the application can be magically wired with cloud resources.
The example I show here is:
* Deployment on [Pivotal Web Services](https://run.pivotal.io/)
* [ElephantSQL](https://www.elephantsql.com/) as hosted PostgreSQL, started as Service named ```camunda-db```
* [CloudAMPQ](https://www.cloudamqp.com/) as hosted RabbitMQ - started as Service ```cloud-amqp```
All metadata for the deployment are described in the [manifest.yml](manifest.yml):
```
---
applications:
- name: camunda-spring-boot-amqp-microservice-cloud-example
memory: 1G
instances: 1
random-route: false
services:
- cloud-amqp
- camunda-db
```
Now you can easily deploy the application using the [CloudFoundry CLI](https://docs.cloudfoundry.org/cf-cli/). After logging in you can simply type:
```
mvn clean install && cf push -p target/camunda-spring-boot-amqp-microservice-cloud-example-0.0.1-SNAPSHOT.jar
```
There it is, now you can start a process:
```shell
curl -X POST -F 'orderId=123' -F 'amount=4990' http://camunda-spring-boot-amqp-microservice-cloud-example.cfapps.io/order
```
And will see it in cockpit:

The URL to access the Camunda web applications and your REST-API depends on various factors, but will be shown via the Pivotal console:


| 1 |
msg-DAVID-GmbH/JUnit-5-Quick-Start-Guide-and-Framework-Support | JUnit 5 Quick Start Guide and collection of examples for frameworks used in conjunction with JUnit 5 | assertion-framework best-practices getting guide java java-8 junit junit-5 junit5 milestones quick quickstart spring-5 start started testing tutorial user user-guide workshop | null | 0 |
Hakky54/mutual-tls-ssl | 🔐 Tutorial of setting up Security for your API with one way authentication with TLS/SSL and mutual authentication for a java based web server and a client with both Spring Boot. Different clients are provided such as Apache HttpClient, OkHttp, Spring RestTemplate, Spring WebFlux WebClient Jetty and Netty, the old and the new JDK HttpClient, the old and the new Jersey Client, Google HttpClient, Unirest, Retrofit, Feign, Methanol, vertx, Scala client Finagle, Featherbed, Dispatch Reboot, AsyncHttpClient, Sttp, Akka, Requests Scala, Http4s Blaze, Kotlin client Fuel, http4k, Kohttp and ktor. Also other server examples are available such as jersey with grizzly. Also gRPC, WebSocket and ElasticSearch examples are included | certificate certificate-authority certificate-signing-request encryption https java keystore keytool kotlin mutual-authentication mutual-tls openssl scala security server spring-boot ssl tls truststore two-way-ssl-authentication | null | 1 |
gregwhitaker/springboot-apikey-example | Example of authenticating with a Spring Boot application using an API key. | api-key caffeine-cache spring-boot spring-security | # springboot-apikey-example

An example of authenticating with a Spring Boot application using an API key.
If you are looking for an example using WebFlux, please check out [springboot-webflux-apikey-example](https://github.com/gregwhitaker/springboot-webflux-apikey-example).
## Prerequisites
This example requires that you have a running [PostgreSQL](https://www.postgresql.org/) database. You can start one as a Docker container using the following commands:
$ docker pull postgres
$ docker run -p 5432:5432 postgres
## Running the Example
Follow the steps below to run the example:
1. Ensure you have a running PostgreSQL instance at `localhost:5432`.
2. Run the following command to start the example application:
./gradlew bootRun
3. Run the following command to send a request to the non-secure endpoint:
curl -v http://localhost:8080/api/v1/nonsecure
If successful, you will receive an `HTTP 200 OK` response.
4. Run the following command to send a request to the secure endpoint:
curl -v http://localhost:8080/api/v1/secure
You will receive an `HTTP 403 Forbidden` response because you have not supplied a valid API key.
5. Run the following command to send a request to the secure endpoint with an API key:
curl -v --header "API_KEY: aec093c2c98144f99a4a365ad1d2f05e" http://localhost:8080/api/v1/secure
If successful, you will now receive an `HTTP 200 OK` response because you have supplied a valid API key.
## Bugs and Feedback
For bugs, questions, and discussions please use the [Github Issues](https://github.com/gregwhitaker/springboot-apikey-example/issues).
## License
Copyright 2019 Greg Whitaker
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
| 1 |
xbtlin/thinking-In-Java | All examples written by the author of the Thinking in Java book (4th edition). They can be run directly in IntelliJ IDEA or Eclipse. Java编程思想 第四版 所有的示例代码,可直接在Intellij或Eclipse上运行。 | null | Example code from the 4th edition of Thinking in Java, ready to be pulled directly into IntelliJ IDEA or Eclipse.
Answers to the end-of-chapter exercises are not included.
| 0 |
islomar/seven-concurrency-models-in-seven-weeks | Repository for the example code of the book Seven concurrency models in seven weeks"." | null | # Seven concurrency models in seven weeks
Repository for the example code of the book ["Seven concurrency models in seven weeks"](https://pragprog.com/book/pb7con/seven-concurrency-models-in-seven-weeks).
## Discussions
https://forums.pragprog.com/forums/291
## Errata
https://pragprog.com/titles/pb7con/errata
## Chapter 1
* A **concurrent program** has multiple logical threads of control. These threads may or may not run in parallel.
* A **parallel program** potentially runs more quickly than a sequential program by executing different parts of the computation simultaneously (in parallel).
It may or may not have more than one logical thread of control.
* **Concurrency** is an aspect of the problem domain—your program needs to handle multiple simultaneous (or near-simultaneous) events.
* **Parallelism**, by contrast, is an aspect of the solution domain—you want to make your program faster by processing different portions of the problem in parallel.
* **Concurrency** is about dealing with lots of things at once. **Parallelism** is about doing lots of things at once.
* Concurrent programs are often nondeterministic — they will give different results depending on the precise timing of events. If you’re working on a genuinely concurrent problem, nondeterminism is natural and to be expected.
* Parallelism, by contrast, doesn’t necessarily imply nondeterminism
Although there’s a tendency to think that parallelism means multiple cores, modern computers are parallel on many different levels. The reason why individual cores have been able to get faster every year, until recently, is that they’ve been using all those extra transistors predicted by Moore’s law in parallel, both at the bit and at the instruction level.
### Levels of parallelism
* Bit-level: i.e. 16, 32, 64-bit architectures.
* Instruction-level
* Data parallelism
* Task-level
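The data-parallelism level above can be sketched with Java's parallel streams (an illustration for these notes, not the book's own example): the same operation is applied to different chunks of the data on different cores, and unlike the concurrent-counter case the result is deterministic.

```java
import java.util.stream.LongStream;

// Data parallelism: one operation (squaring) applied across a range of data,
// with parallel() asking the runtime to split the range over worker threads.
// The sum is deterministic even though the execution order is not.
public class DataParallelSum {
    static long sumOfSquares(long n) {
        return LongStream.rangeClosed(1, n)
                         .parallel()     // split the range across cores
                         .map(x -> x * x)
                         .sum();
    }

    public static void main(String[] args) {
        System.out.println(sumOfSquares(10)); // 385 = 1 + 4 + 9 + ... + 100
    }
}
```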
## Interesting links
### Chapter 1
* Concurrency is not parallelism (it's better): http://concur.rspace.googlecode.com/hg/talk/concur.html#title-slide
| 1 |